commit     a5bd16593a80e272883ed98c45d923a8db345aa8
author     Dennis Dmitriev <ddmitriev@mirantis.com>    Thu May 31 20:57:19 2018 +0300
committer  Dennis Dmitriev <ddmitriev@mirantis.com>    Thu May 31 21:34:13 2018 +0300
tree       c76ae4c12786b7b4b45067c915737e20bb71f488
parent     940453e873f0e1a070cb9da0743e7f1d642cf44a
Add waiting for the correct build_id in the JenkinsClient:run_build()

If the job is queued while waiting for free executors, the method was getting the wrong build_id, belonging to the previous job; for instance, it would return build_id=28 in this example:

№29 (pending—Build #28 is already in progress (ETA: 17 min))
№28 [In progress] May 31, 2018 6:07 PM

* Add a 'timeout' parameter (default 600 sec) in run_build() to wait until the queued build is started
* Add polling of the queue until the build is started, getting the build number assigned to the build in the queue object
* Add a 'verbose' parameter (default False) to show the reason from Jenkins why the build has not started yet, for example:

pending the job 'deploy_openstack' : Build #22 is already in progress (ETA: 1 min 48 sec)
pending the job 'deploy_openstack' : Build #22 is already in progress (ETA: 1 min 18 sec)
pending the job 'deploy_openstack' : Build #22 is already in progress (ETA: 48 sec)

* Rename the parameter 'print_job_output' to 'verbose' in the method wait_end_of_build() for consistent parameter naming.

Change-Id: Id49b5b45a8127e769b89860b19424081f37f6f38
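For illustration only (this is not the tcp-qa Python code itself), the same idea can be sketched against the Jenkins REST API with curl and jq: the queue item returned when a job is triggered is polled until Jenkins attaches an 'executable' object carrying the real build number, and its 'why' field holds the pending reason. JENKINS_URL, JENKINS_USER and JENKINS_TOKEN are assumed environment variables here.

# Trigger the job; Jenkins answers with the queue item URL in the Location header
QUEUE_URL=$(curl -s -i -X POST --user "${JENKINS_USER}:${JENKINS_TOKEN}" \
    "${JENKINS_URL}/job/deploy_openstack/build" | awk '/^[Ll]ocation:/ {print $2}' | tr -d '\r')

# Poll the queue item until the build is actually started and has its own build number
while true; do
    ITEM=$(curl -s --user "${JENKINS_USER}:${JENKINS_TOKEN}" "${QUEUE_URL}api/json")
    BUILD_ID=$(echo "${ITEM}" | jq -r '.executable.number // empty')
    [ -n "${BUILD_ID}" ] && break
    echo "pending the job 'deploy_openstack' : $(echo "${ITEM}" | jq -r '.why')"
    sleep 10
done
echo "started build #${BUILD_ID}"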
Please send patches using gerrithub.io:
git remote add gerrit ssh://review.gerrithub.io:29418/Mirantis/tcp-qa
git review
git clone https://github.com/Mirantis/tcp-qa
cd ./tcp-qa
pip install -r ./tcp_tests/requirements.txt
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img -O ./xenial-server-cloudimg-amd64.qcow2
The LAB_CONFIG_NAME variable maps the cluster name from the model repository to the set of templates in the ./tcp_tests/templates/ folder.
export LAB_CONFIG_NAME=virtual-mcp-ocata-dvr   # OVS-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp-ocata-ovs   # OVS-NO-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp-ocata-cicd  # Operational Support System Tools
export LAB_CONFIG_NAME=virtual-mcp11-dvr       # OVS-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-ovs       # OVS-NO-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-dpdk      # OVS-DPDK with neutron packages
Note: The recommended repo is testing. Possible choices: stable, testing, nightly. Nightly contains the latest packages.
export REPOSITORY_SUITE=testing
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_default
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_run_rally
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_oss_install_default
Note: This lab is not finished yet. TBD: configure vsrx node
export ENV_NAME=tcpcloud-mk22        # You can set any env name
export LAB_CONFIG_NAME=mk22-qa-lab01 # Name of the set of templates
export VSRX_PATH=./vSRX.img          # /path/to/vSRX.img, or to ./xenial-server-cloudimg-amd64.qcow2 as a temporary workaround
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_default
Alternatively, there is another test that uses the deploy scripts from the models repository, written in bash [2]:
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_with_scripts
The labs named mk22-lab-basic and mk22-lab-advanced are deprecated and not recommended for use.
To create VMs using HugePages, configure the server (see below) and then use the following variable:
export DRIVER_USE_HUGEPAGES=true
These are runtime-only steps; to make them persistent, you need to edit some configs (see the sketch after these steps for one possible way).
service apparmor stop
service apparmor teardown
update-rc.d -f apparmor remove
apt-get remove apparmor
2 MB * 30000 = ~60 GB of RAM will be used for HugePages. This is suitable for CI servers with 64 GB of RAM and no other heavy services except libvirt.
WARNING! Setting the value too high will hang your server; be careful and try lower values first.
echo 28000 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
apt-get install -y hugepages
hugeadm --set-recommended-shmmax
cat /proc/meminfo | grep HugePages
mkdir -p /mnt/hugepages2M
mount -t hugetlbfs hugetlbfs /mnt/hugepages2M
echo "hugetlbfs_mount = '/mnt/hugepages2M'" > /etc/libvirt/qemu.conf service libvirt-bin restart
dos.py create-env ./tcp_tests/templates/underlay/mk22-lab-basic.yaml
dos.py start "${ENV_NAME}"
Then wait until cloud-init has finished and port 22 is open (~3-4 minutes), and log in with root:r00tme.
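One simple way to wait for SSH from the host, assuming ADMIN_IP is a hypothetical variable holding the address of the node (look the address up, for example, with "virsh net-dhcp-leases <network-name>"):

# Wait until port 22 answers, then log in (password: r00tme)
until nc -z "${ADMIN_IP}" 22; do sleep 10; done
ssh root@"${ADMIN_IP}"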
[1] https://github.com/openstack/fuel-devops/blob/master/doc/source/install.rst
[2] https://github.com/Mirantis/mk-lab-salt-model/tree/dash/scripts