commit b01b90ebc400e6ad25833c3367a266fe5f7d2d5b
author:    Dennis Dmitriev <ddmitriev@mirantis.com>  Thu Jun 07 14:57:53 2018 +0300
committer: Dennis Dmitriev <ddmitriev@mirantis.com>  Fri Jun 08 21:23:50 2018 +0300
tree:      679b865633dcfb4994b4c23ee458f923b0542e0a
parent:    aa53d49d0914619c4f02d84da390ab8ad44d29c7
Bootstrap with opened SSH on nodes

The master_config.sh script for the cfg01-day01 image requires a working SSH service to perform 'ssh-keyscan' to prepare Jenkins.

1. To wait for the end of the bootstrap process with opened SSH, two flags are added:
   /is_cloud_init_started - a file on the node indicating that the bootstrap process should wait for the flag /is_cloud_init_finished
   /is_cloud_init_finished - a file on the node indicating that the bootstrap process can be finished successfully

2. Backward compatibility:
   - if the SSH service is available but /is_cloud_init_started is not found, the bootstrap process is finished as successful.
   - if either of the (AuthenticationException, BadAuthenticationType) exceptions is raised, the bootstrap process is finished as successful.

3. For each node, at least 2 successful (in terms of #1 or #2 above) SSH checks must pass before the bootstrap is finished. The second check avoids treating an SSH service that is only intermittently available during bootstrap as a success; this matters when the preparation process starts the SSH service but it stops again until the cloud-init script finishes or the node is rebooted.

Change-Id: I82fb10efa8a67d080b725a66a3185fc845d2b1a0
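The double-check logic from item 3 can be illustrated with a short sketch. This is not the actual tcp-qa code (which handles the paramiko AuthenticationException/BadAuthenticationType cases from item 2 in Python); it is a minimal shell approximation of the flag-file protocol, assuming root SSH access to the node, and wait_for_bootstrap is a hypothetical helper name:

# Hypothetical helper, for illustration only: poll a node until two
# consecutive SSH checks succeed. A check succeeds when either
# /is_cloud_init_finished exists, or /is_cloud_init_started is absent
# (backward compatibility with images that do not set the flags).
wait_for_bootstrap() {
    local node="$1" passed=0
    while [ "${passed}" -lt 2 ]; do
        if ssh "root@${node}" \
            'test -f /is_cloud_init_finished || ! test -f /is_cloud_init_started'
        then
            passed=$((passed + 1))   # one successful check
        else
            passed=0                 # SSH down or bootstrap still running: start over
        fi
        sleep 10
    done
}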
Please send patches using gerrithub.io:
git remote add gerrit ssh://review.gerrithub.io:29418/Mirantis/tcp-qa
git review
git clone https://github.com/Mirantis/tcp-qa
cd ./tcp-qa
pip install -r ./tcp_tests/requirements.txt
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img -O ./xenial-server-cloudimg-amd64.qcow2
The LAB_CONFIG_NAME variable maps the cluster name from the model repository to the set of templates in the ./tcp_tests/templates/ folder.
export LAB_CONFIG_NAME=virtual-mcp-ocata-dvr   # OVS-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp-ocata-ovs   # OVS-NO-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp-ocata-cicd  # Operational Support System Tools
export LAB_CONFIG_NAME=virtual-mcp11-dvr       # OVS-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-ovs       # OVS-NO-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-dpdk      # OVS-DPDK with neutron packages
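Assuming the standard repository layout (worth verifying in your checkout), each LAB_CONFIG_NAME value corresponds to a set of templates under ./tcp_tests/templates/, so the available choices can be listed directly:

ls ./tcp_tests/templates/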
Note: The recommended repository suite is testing. Possible choices: stable, testing, nightly. Nightly contains the latest packages.
export REPOSITORY_SUITE=testing
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_default
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_run_rally
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_oss_install_default
Note: This lab is not finished yet. TBD: configure vsrx node
export ENV_NAME=tcpcloud-mk22        # You can set any env name
export LAB_CONFIG_NAME=mk22-qa-lab01 # Name of the set of templates
export VSRX_PATH=./vSRX.img          # /path/to/vSRX.img, or ./xenial-server-cloudimg-amd64.qcow2 as a temporary workaround
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_default
Alternatively, there is another test that uses deploy scripts from the models repository, written in bash [2]:
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_with_scripts
Labs with the names mk22-lab-basic and mk22-lab-advanced are deprecated and not recommended for use.
To create VMs using HugePages, configure the server (see below) and then use the following variable:
export DRIVER_USE_HUGEPAGES=true
These are runtime-only steps. To make them persistent, you need to edit some configs; a persistence sketch follows the libvirt configuration below.
service apparmor stop
service apparmor teardown
update-rc.d -f apparmor remove
apt-get remove apparmor
2 MB * 30000 = ~60 GB of RAM will be used for HugePages. This is suitable for CI servers with 64 GB RAM and no heavy services other than libvirt.
WARNING! Setting the value too high will hang your server; be careful and try lower values first.
echo 28000 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
apt-get install -y hugepages
hugeadm --set-recommended-shmmax
cat /proc/meminfo | grep HugePages
mkdir -p /mnt/hugepages2M
mount -t hugetlbfs hugetlbfs /mnt/hugepages2M
echo "hugetlbfs_mount = '/mnt/hugepages2M'" > /etc/libvirt/qemu.conf service libvirt-bin restart
dos.py create-env ./tcp_tests/templates/underlay/mk22-lab-basic.yaml
dos.py start "${ENV_NAME}"
Then wait until cloud-init is finished and port 22 is open (~3-4 minutes), and log in with root:r00tme.
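A few other dos.py commands are useful for managing the created environment; these come from fuel-devops (see [1] for the full reference):

dos.py list                   # list existing environments
dos.py destroy "${ENV_NAME}"  # power off the environment's VMs
dos.py erase "${ENV_NAME}"    # delete the environment and free its resources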
[1] https://github.com/openstack/fuel-devops/blob/master/doc/source/install.rst
[2] https://github.com/Mirantis/mk-lab-salt-model/tree/dash/scripts