commit    50f9189501838af878aa21e9f788ad6436317b09
author    Sergii Golovatiuk <sgolovatiuk@mirantis.com>  Fri Aug 04 18:11:06 2017 +0200
committer Dennis Dmitriev <dis.xcom@gmail.com>          Wed Aug 09 04:35:30 2017 -0400
tree      90360a6250f856eda878e8ae202df585acb6edea
parent    41f0b79f84f4444419ee3eee109ccdf1207a7d07
Refactor reclass cloning

SALT_MODELS_REF_CHANGE and SALT_MODELS_COMMIT are mutually exclusive. If SALT_MODELS_REF_CHANGE is specified, no checkout to a specific commit is needed. If SALT_MODELS_COMMIT is specified but SALT_MODELS_REF_CHANGE is not, then that commit is checked out.

* This change refactors the logic across all templates.
* This change introduces a loop for SALT_MODELS_REF_CHANGE. Several changes can be specified using space as a separator, for example:
  export SALT_MODELS_REF_CHANGE='refs/changes/12/8412/2 refs/changes/58/8458/1'
  The commits must not have any conflicts; in case of conflicts, create dependent commits and use the top one.
* The 'cmd:' for the step "Clone reclass models with submodules" was moved to shared-salt.yaml and is included in the other templates as a jinja2 'macro'.
* SALT_MODELS_SYSTEM_COMMIT now works as expected. If not specified, the 'system' commit specified in the cluster model submodule is used.

Doc-Impact

Change-Id: I0d57b1eea79a7c011231dcf7f46fb3599a62c33f
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
Reviewed-on: https://review.gerrithub.io/372906
Reviewed-by: Sergii Golovatiuk <holser@gmail.com>
Reviewed-by: Dennis Dmitriev <dis.xcom@gmail.com>
Tested-by: Dennis Dmitriev <dis.xcom@gmail.com>
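For illustration only, the per-ref loop described above could look roughly like the following shell sketch. The variable name SALT_MODELS_REPOSITORY and the use of cherry-pick are assumptions; the actual code lives in the jinja2 macro in shared-salt.yaml:

# Hypothetical sketch of the SALT_MODELS_REF_CHANGE loop (not the exact macro code)
for ref in ${SALT_MODELS_REF_CHANGE}; do
    # fetch each Gerrit ref and apply it on top of the cloned model
    git fetch ${SALT_MODELS_REPOSITORY} ${ref} && git cherry-pick FETCH_HEAD
done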
Please send patches using gerrithub.io:
git remote add gerrit ssh://review.gerrithub.io:29418/Mirantis/tcp-qa
git review
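If the git-review tool is not already installed on your workstation (an assumption; it is not part of this repository), it can be installed with pip:

pip install git-review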
git clone https://github.com/Mirantis/tcp-qa
cd ./tcp-qa
pip install -r ./tcp_tests/requirements.txt
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img -O ./xenial-server-cloudimg-amd64.qcow2
The LAB_CONFIG_NAME variable maps the cluster name from the model repository to the set of templates in the ./tcp_tests/templates/ folder.
export LAB_CONFIG_NAME=virtual-mcp-ocata-dvr   # OVS-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp-ocata-ovs   # OVS-NO-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp-ocata-cicd  # Operational Support System Tools
export LAB_CONFIG_NAME=virtual-mcp11-dvr       # OVS-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-ovs       # OVS-NO-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-dpdk      # OVS-DPDK with neutron packages
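As a quick sanity check (assuming each lab name corresponds to a set of templates under ./tcp_tests/templates/, as described above), you can verify that the chosen name is present there:

ls ./tcp_tests/templates/ | grep "${LAB_CONFIG_NAME}"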
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
export REPOSITORY_SUITE=testing
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_default

export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
export REPOSITORY_SUITE=testing
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_run_rally

export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
export REPOSITORY_SUITE=testing
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_oss_install_default
Note: This lab is not finished yet. TBD: configure vsrx node
export ENV_NAME=tcpcloud-mk22        # You can set any env name
export LAB_CONFIG_NAME=mk22-qa-lab01 # Name of the set of templates
export VSRX_PATH=./vSRX.img          # /path/to/vSRX.img, or ./xenial-server-cloudimg-amd64.qcow2 as a temporary workaround
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_default
Alternatively, there is another test that uses the bash deploy scripts from the models repository [2]:
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_with_scripts
Labs with the names mk22-lab-basic and mk22-lab-advanced are deprecated and not recommended for use.
To create VMs using HugePages, configure the server (see below) and then use the following variable:
export DRIVER_USE_HUGEPAGES=true
These are runtime-only steps. To make them persistent, you need to edit some configs (see the sketch after the libvirt configuration below).
service apparmor stop
service apparmor teardown
update-rc.d -f apparmor remove
apt-get remove apparmor
2 MB * 30000 = ~60 GB of RAM will be used for HugePages. This is suitable for CI servers with 64 GB RAM and no other heavy services except libvirt.
WARNING! Setting this value too high will hang your server; be careful and try lower values first.
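As a back-of-the-envelope check (not part of the original guide): nr_hugepages multiplied by the 2 MB page size gives the amount of RAM reserved. The value 28000 used below reserves about 56000 MB, leaving roughly 8 GB free on a 64 GB host:

# 28000 pages * 2 MB per page = 56000 MB (~55 GiB) reserved for HugePages
echo $((28000 * 2 / 1024))   # prints 54: GiB, rounded down by integer division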
echo 28000 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
apt-get install -y hugepages
hugeadm --set-recommended-shmmax
cat /proc/meminfo | grep HugePages
mkdir -p /mnt/hugepages2M
mount -t hugetlbfs hugetlbfs /mnt/hugepages2M
echo "hugetlbfs_mount = '/mnt/hugepages2M'" > /etc/libvirt/qemu.conf service libvirt-bin restart
dos.py create-env ./tcp_tests/templates/underlay/mk22-lab-basic.yaml
dos.py start "${ENV_NAME}"
Then wait until cloud-init is finished and port 22 is open (~3-4 minutes), and log in with root:r00tme.
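For example (a sketch; dos.py is the fuel-devops CLI referenced in [1], and <node-ip> is a placeholder for the address of the booted node):

dos.py list           # list the created fuel-devops environments
ssh root@<node-ip>    # password: r00tme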
[1] https://github.com/openstack/fuel-devops/blob/master/doc/source/install.rst
[2] https://github.com/Mirantis/mk-lab-salt-model/tree/dash/scripts