Use generated RSA key while accessing nodes via SSH

Generate an RSA key pair (optionally, it can be provided by the
operator via an environment variable) and use it for SSH
authentication instead of a password.

It is now also possible to provide the key from a local file.
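
A minimal sketch of producing such a key pair by hand, for the
local-file case (the file name is an illustrative assumption,
not part of this change):

# Generate a passphrase-less 2048-bit RSA key pair for the tests
ssh-keygen -t rsa -b 2048 -N '' -f ./tcp_qa_id_rsa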

Change-Id: I5fea4d55337f294cd7829392b91b2cca7b85ead5
Reviewed-on: https://review.gerrithub.io/367254
Reviewed-by: Victor Ryzhenkin <vryzhenkin@mirantis.com>
Reviewed-by: Dennis Dmitriev <dis.xcom@gmail.com>
Tested-by: Dennis Dmitriev <dis.xcom@gmail.com>

README.md

tcp-qa

The default template used here requires 20 vCPUs and 52 GB of host RAM.

Clone the repo

git clone https://github.com/Mirantis/tcp-qa
cd ./tcp-qa

Install requirements

pip install -r ./tcp_tests/requirements.txt
  • Note: If you don't have fuel-devops installed, please read [1] first: some additional packages and configuration are required. A minimal virtualenv sketch follows.
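
For example, installing the requirements into a virtualenv might look like this (the virtualenv name is just an example; see [1] for the full fuel-devops prerequisites):

# System site packages expose the distro libvirt bindings to the venv
virtualenv --system-site-packages venv-tcp-qa
. venv-tcp-qa/bin/activate
pip install -r ./tcp_tests/requirements.txt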

Get cloudinit image

wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img -O ./xenial-server-cloudimg-amd64.qcow2
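
To sanity-check the downloaded image before using it (assuming qemu-utils with the standard qemu-img tool is installed):

qemu-img info ./xenial-server-cloudimg-amd64.qcow2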

Choose the name of the cluster model

The LAB_CONFIG_NAME variable maps the cluster name from the model repository to the set of templates in the ./tcp_tests/templates/ folder.

export LAB_CONFIG_NAME=virtual-mcp-ocata-dvr  # OVS-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp-ocata-ovs  # OVS-NO-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp11-dvr  # OVS-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-ovs  # OVS-NO-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-dpdk  # OVS-DPDK with neutron packages
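
Each name above selects a set of templates under ./tcp_tests/templates/, so a quick way to see what is available locally (assuming the layout described above):

ls ./tcp_tests/templates/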

Run deploy test

export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional

LC_ALL=en_US.UTF-8  py.test -vvv -s -k test_tcp_install_default
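
If SHUTDOWN_ENV_ON_TEARDOWN=false was set, the environment is kept after the test run; a sketch for inspecting and cleaning it up afterwards (assuming the standard dos.py subcommands from the fuel-devops CLI [1]):

dos.py list                   # show created environments
dos.py destroy "${ENV_NAME}"  # power the environment off
dos.py erase "${ENV_NAME}"    # remove it completely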

Run deploy test and rally verify (tempest)

export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional

LC_ALL=en_US.UTF-8  py.test -vvv -s -k test_tcp_install_run_rally

Run deploy test for mk22-qa-lab01 (outdated)

Note: This lab is not finished yet. TBD: configure vsrx node

export ENV_NAME=tcpcloud-mk22  # You can set any env name
export LAB_CONFIG_NAME=mk22-qa-lab01  # Name of set of templates
export VSRX_PATH=./vSRX.img           # /path/to/vSRX.img, or to ./xenial-server-cloudimg-amd64.qcow2 as a temporary workaround

LC_ALL=en_US.UTF-8  py.test -vvv -s -k test_tcp_install_default

Alternatively, there is another test that uses the deploy scripts from the models repository, written in bash [2]:

LC_ALL=en_US.UTF-8  py.test -vvv -s -k test_tcp_install_with_scripts

The labs named mk22-lab-basic and mk22-lab-advanced are deprecated and their use is not recommended.

Create and start the env for manual tests

dos.py create-env ./tcp_tests/templates/underlay/mk22-lab-basic.yaml
dos.py start "${ENV_NAME}"

Then wait until cloud-init finishes and port 22 is open (~3-4 minutes), and log in with root:r00tme.
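
A small wait-and-login sketch (<node-ip> is a placeholder; take the real address from your environment, e.g. from dos.py output):

# Poll until SSH starts answering, then log in
until nc -z <node-ip> 22; do sleep 5; done
ssh root@<node-ip>   # password: r00tme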

[1] https://github.com/openstack/fuel-devops/blob/master/doc/source/install.rst

[2] https://github.com/Mirantis/mk-lab-salt-model/tree/dash/scripts