commit    4597288fcd52f6bdce5328b964b327b7d12a97d8
author    Dennis Dmitriev <ddmitriev@mirantis.com>  Mon Jun 12 20:06:01 2017 +0300
committer Dennis Dmitriev <dis.xcom@gmail.com>      Mon Jun 12 13:25:16 2017 -0400
tree      5b365bff12346878f481678861a7b6e09610ee04
parent    64126da726698bdf2eac7db6817257a1ece5c2bf
Workaround to keep connectivity to kvm* nodes after the linux state

The linux.network.interface state creates br_mgm with the same address as the DHCP management interface that is included into the bridge. To keep connectivity, remove the IP address from the DHCP interface after the linux.network.interface state, if it still exists.

Change-Id: Id5a4d7c5515219091f3e79acc254986a147eb602
Reviewed-on: https://review.gerrithub.io/365048
Reviewed-by: Dennis Dmitriev <dis.xcom@gmail.com>
Tested-by: Dennis Dmitriev <dis.xcom@gmail.com>
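The idea of the workaround, as a minimal shell sketch (the interface name ens4 is only an illustration and is not taken from the change itself, which is applied through the test templates):

MGMT_IFACE=ens4                                                       # example name of the DHCP management interface
MGMT_IP=$(ip -4 -o addr show dev "${MGMT_IFACE}" | awk '{print $4}')  # current address, if any
if [ -n "${MGMT_IP}" ]; then
    ip addr del "${MGMT_IP}" dev "${MGMT_IFACE}"                      # br_mgm already carries this address, drop the duplicate
fi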
The default template used here requires 20 vCPUs and 52 GB of host RAM.
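Before creating an environment it may help to confirm the host actually has that much to offer; a quick check with standard Linux tools:

nproc               # total vCPUs available on the host
free -h | grep Mem  # total host RAM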
git clone https://github.com/Mirantis/tcp-qa
cd ./tcp-qa
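Optionally, the Python requirements below can be installed into a virtualenv to keep them separate from the system Python (a sketch, assuming virtualenv is installed; the environment name is arbitrary):

virtualenv venv-tcp-qa        # create an isolated Python environment
. venv-tcp-qa/bin/activate    # activate it for the current shell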
pip install -r ./tcp_tests/requirements.txt
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img -O ./xenial-server-cloudimg-amd64.qcow2
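To double-check that the download produced a valid image, qemu-img can inspect it (optional, assuming qemu-utils is installed):

qemu-img info ./xenial-server-cloudimg-amd64.qcow2   # should report: file format: qcow2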
The LAB_CONFIG_NAME variable maps the cluster name from the model repository to the set of templates in the ./tcp_tests/templates/ folder.
export LAB_CONFIG_NAME=virtual-mcp-ocata-dvr   # OVS-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp-ocata-ovs   # OVS-NO-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp11-dvr       # OVS-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-ovs       # OVS-NO-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-dpdk      # OVS-DPDK with neutron packages
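The available configurations can be seen by listing the templates folder (not every entry there is a lab config, but the names above can be found among them):

ls ./tcp_tests/templates/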
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false   # Optional
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_default
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false   # Optional
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_run_rally
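Since SHUTDOWN_ENV_ON_TEARDOWN=false keeps the environment after the test run, it can be listed and removed later with fuel-devops (a sketch, assuming ENV_NAME is still exported and the standard dos.py subcommands from [1]):

dos.py list                  # show existing fuel-devops environments
dos.py erase "${ENV_NAME}"   # destroy the environment and free its resources when done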
Note: This lab is not finished yet. TBD: configure vsrx node
export ENV_NAME=tcpcloud-mk22          # You can set any env name
export LAB_CONFIG_NAME=mk22-qa-lab01   # Name of the set of templates
export VSRX_PATH=./vSRX.img            # /path/to/vSRX.img, or to ./xenial-server-cloudimg-amd64.qcow2 as a temporary workaround
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_default
Alternatively, there is another test that uses the bash deploy scripts from the model repository [2]:
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_with_scripts
The labs named mk22-lab-basic and mk22-lab-advanced are deprecated and not recommended for use.
dos.py create-env ./tcp_tests/templates/underlay/mk22-lab-basic.yaml
dos.py start "${ENV_NAME}"
Then wait until cloud-init has finished and port 22 is open (~3-4 minutes), and log in with root:r00tme
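One way to wait for SSH to come up and then log in (a sketch; <node-ip> stands for the node address in your environment and is not taken from the templates):

until nc -z <node-ip> 22; do sleep 10; done   # poll until port 22 accepts connections
ssh root@<node-ip>                            # password: r00tme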
[1] https://github.com/openstack/fuel-devops/blob/master/doc/source/install.rst
[2] https://github.com/Mirantis/mk-lab-salt-model/tree/dash/scripts