commit    3ec2e53d4b55ecf267671dcd0ac1745c5f69aaf8
author    Dennis Dmitriev <ddmitriev@mirantis.com>  Fri Jun 08 04:33:34 2018 +0300
committer Dennis Dmitriev <ddmitriev@mirantis.com>  Fri Jun 08 20:58:08 2018 +0300
tree      73797b09f3ee58a7711ed6ebb250d50642ce391c
parent    a397f2649cbda6f976583ec6b145035a848c6243
Add tools to run jenkins jobs and remote commands

- ./tcp_tests/utils/create_devops_env.py
  Creates a fuel-devops environment with VMs in disabled state, to generate
  networks and addresses for the inventory.
  Required parameters:
    export ENV_NAME=test
    export LAB_CONFIG_NAME=<template directory with underlay.yml>
    export MANAGER=devops
  Other parameters may be required for the underlay.yml.
  CLI example:
    export PYTHONPATH=$(pwd)
    python ./tcp_tests/utils/create_devops_env.py

- ./tcp_tests/utils/run_jenkins_job.py
  Run a jenkins job with parameters, wait for completion, and print the
  console output to stdout while waiting.
  Required parameters:
    export JENKINS_URL=http://host:port/
    export JENKINS_USER=admin
    export JENKINS_PASS=admin
  CLI example:
    JOB_PARAMETERS="{ \"SALT_MASTER_URL\": \"${SALTAPI_URL}\", \"STACK_INSTALL\": \"core,cicd\" }"
    JOB_PREFIX="[ {job_name} #{build_number}:cicd {time} ] "
    python ./tcp_tests/utils/run_jenkins_job.py \
        --verbose \
        --job-name=deploy_openstack \
        --job-parameters="$JOB_PARAMETERS" \
        --job-output-prefix="$JOB_PREFIX"

- ./tcp_tests/utils/get_param.py
  Get a single parameter from the salt pillar. Useful to get addresses and
  other scalar values.
  Required parameters are the same as for the 'pepper' CLI:
    export SALTAPI_URL=http://${SALT_MASTER_IP}:6969/
    export SALTAPI_USER='salt'
    export SALTAPI_PASS='icecream12345!'
    export SALTAPI_EAUTH='pam'
  CLI example:
    export JENKINS_HOST=$(./tcp_tests/utils/get_param.py \
        -C 'I@docker:client:stack:jenkins' \
        pillar.get jenkins:client:master:host)

- ./tcp_tests/utils/run_template_commands.py
  Run remote commands from the ./tcp_tests/templates/ directory.
  No environment variables are required, but it may be useful to provide the
  INI config from a completed deployment.
  CLI example:
    export TESTS_CONFIGS=$(pwd)/test_salt_deployed.ini
    ./tcp_tests/utils/run_template_commands.py \
        ./tcp_tests/templates/<lab_name>/common_services.yaml

- Some env files for sourcing, to get access to the different APIs. This
  simplifies using the scripts above:
    . ./tcp_tests/utils/env_salt          # salt-api access
    . ./tcp_tests/utils/env_jenkins_day01 # jenkins on salt-master
    . ./tcp_tests/utils/env_jenkins_cicd  # jenkins on cicd
    . ./tcp_tests/utils/env_k8s           # k8s api access

- Fixed UnderlayManager.sudo_check_call() to remove a deprecation warning.

Improvements to JenkinsClient:

- Add a JenkinsWrapper class to work around the bug
  https://bugs.launchpad.net/python-jenkins/+bug/1775047
  which affects the CICD Jenkins behind haproxy
- Improved waiting for the start of the job in run_build()
- New argument 'interval' in wait_end_of_build(), to set the polling
  interval while waiting for the job
- New argument 'job_output_prefix' in wait_end_of_build(), which allows
  setting a prefix for each line of the job's console output, with some
  pre-defined template keys
- Improved printing of the job output in case of non-unicode characters

Change-Id: Ie7d1324d8247e55ba9c0f0492ca39fc176ff4935
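For example, the env files and helper scripts can be chained together (a sketch based on the descriptions above; it assumes a deployed salt-master and CICD Jenkins reachable with the variables set by env_salt and env_jenkins_cicd):

    # Source salt-api and CICD Jenkins access variables, then start a
    # deployment job and stream its console output to stdout.
    . ./tcp_tests/utils/env_salt
    . ./tcp_tests/utils/env_jenkins_cicd

    python ./tcp_tests/utils/run_jenkins_job.py \
        --verbose \
        --job-name=deploy_openstack \
        --job-parameters="{ \"SALT_MASTER_URL\": \"${SALTAPI_URL}\", \"STACK_INSTALL\": \"core,cicd\" }"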
Please send patches using gerrithub.io:
git remote add gerrit ssh://review.gerrithub.io:29418/Mirantis/tcp-qa
git review
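A typical submission flow with git-review might look like this (a sketch; the branch name and commit message are only examples):

    # one-time setup
    pip install git-review
    git remote add gerrit ssh://review.gerrithub.io:29418/Mirantis/tcp-qa

    # prepare and send a change for review
    git checkout -b fix-some-issue
    git commit -a -m "Describe the change"
    git review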
git clone https://github.com/Mirantis/tcp-qa
cd ./tcp-qa
pip install -r ./tcp_tests/requirements.txt
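Optionally, the requirements can be installed into a virtualenv to keep them isolated from the system Python (an optional step, not required by this README):

    virtualenv venv
    . venv/bin/activate
    pip install -r ./tcp_tests/requirements.txt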
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img -O ./xenial-server-cloudimg-amd64.qcow2
The LAB_CONFIG_NAME variable maps the cluster name from the model repository to the set of templates in the ./tcp_tests/templates/ folder.
export LAB_CONFIG_NAME=virtual-mcp-ocata-dvr   # OVS-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp-ocata-ovs   # OVS-NO-DVR with ocata packages
export LAB_CONFIG_NAME=virtual-mcp-ocata-cicd  # Operational Support System Tools
export LAB_CONFIG_NAME=virtual-mcp11-dvr       # OVS-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-ovs       # OVS-NO-DVR with neutron packages
export LAB_CONFIG_NAME=virtual-mcp11-dpdk      # OVS-DPDK with neutron packages
Note: The recommended repo is testing. Possible choices: stable, testing, nightly. Nightly contains the latest packages.
export REPOSITORY_SUITE=testing
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_default
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_run_rally
export IMAGE_PATH1604=./xenial-server-cloudimg-amd64.qcow2
export SHUTDOWN_ENV_ON_TEARDOWN=false  # Optional
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_oss_install_default
Note: This lab is not finished yet. TBD: configure vsrx node
export ENV_NAME=tcpcloud-mk22        # You can set any env name
export LAB_CONFIG_NAME=mk22-qa-lab01 # Name of the set of templates
export VSRX_PATH=./vSRX.img          # /path/to/vSRX.img, or to ./xenial-server-cloudimg-amd64.qcow2 as a temporary workaround
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_default
Alternatively, there is another test that uses the deploy scripts from the models repository, written in bash [2]:
LC_ALL=en_US.UTF-8 py.test -vvv -s -k test_tcp_install_with_scripts
The labs named mk22-lab-basic and mk22-lab-advanced are deprecated and not recommended for use.
To create VMs using HugePages, configure the server (see below) and then use the following variable:
export DRIVER_USE_HUGEPAGES=true
These are runtime-only steps. To make them persistent, you need to edit some configs (see the persistence sketch after the libvirt configuration below).
service apparmor stop
service apparmor teardown
update-rc.d -f apparmor remove
apt-get remove apparmor
2 MB * 30000 = ~60 GB of RAM will be used for HugePages (the example below reserves 28000 pages, ~56 GB). This is suitable for CI servers with 64 GB of RAM and no other heavy services except libvirt.
WARNING! Setting the value too high will hang your server; be careful and try lower values first.
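To size the pool for a different amount of RAM, the page count is simply the target memory divided by the 2 MB page size (a quick sketch; the 56 GB target is just an example):

    # number of 2 MB pages needed to reserve ~56 GB
    echo $(( 56 * 1024 / 2 ))    # -> 28672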
echo 28000 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
apt-get install -y hugepages
hugeadm --set-recommended-shmmax
cat /proc/meminfo | grep HugePages
mkdir -p /mnt/hugepages2M
mount -t hugetlbfs hugetlbfs /mnt/hugepages2M
echo "hugetlbfs_mount = '/mnt/hugepages2M'" > /etc/libvirt/qemu.conf service libvirt-bin restart
dos.py create-env ./tcp_tests/templates/underlay/mk22-lab-basic.yaml
dos.py start "${ENV_NAME}"
Then, wait until cloud-init is finished and port 22 is open (~3-4 minutes), and log in with root:r00tme
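For example, from the host you can wait for SSH and log in like this (a sketch; the node address is hypothetical and should be taken from your generated inventory):

    NODE_IP=10.70.0.2                               # hypothetical address, replace with the real one
    until nc -z "${NODE_IP}" 22; do sleep 5; done   # wait for sshd to come up
    ssh root@"${NODE_IP}"                           # password: r00tme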
[1] https://github.com/openstack/fuel-devops/blob/master/doc/source/install.rst
[2] https://github.com/Mirantis/mk-lab-salt-model/tree/dash/scripts