Enable DPDK in test preparation for correct flavor creation
Widen the vcpu_pinning set, because 1 CPU is not enough if, for example,
we boot 2 VMs.
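As a rough illustration only (the CPU numbers and counts below are assumptions,
not taken from the templates), widening the pinned set matters because each
pinned guest vCPU needs its own host CPU:

    # Illustrative sketch: a single reserved CPU cannot back two pinned VMs.
    old_pin_set = {2}               # one CPU is not enough for 2 VMs
    new_pin_set = set(range(2, 8))  # widened range, values are hypothetical

    def enough_for(vms, vcpus_per_vm, pin_set):
        """Rough check: each pinned vCPU needs a dedicated host CPU."""
        return vms * vcpus_per_vm <= len(pin_set)

    assert not enough_for(2, 2, old_pin_set)
    assert enough_for(2, 2, new_pin_set)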
Change roles for the gtw node so that it can use the external network and
VXLAN in the DPDK case.
Remove cinder volume from ctl nodes.
Change dpdk_lcore_mask to use vCPUs from both NUMA nodes (with 2 NUMA nodes
enabled on a virtual machine, we cannot determine which NUMA node our
interfaces belong to; the dpdk_lcore_cpus should come exactly from the
NUMA node the interface belongs to).
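A small sketch of the bitmask arithmetic behind this choice (the CPU numbers
and NUMA layout are illustrative assumptions, not values from the templates):

    # Illustrative only: build a DPDK lcore mask that covers one CPU from each
    # NUMA node, since we cannot tell which node the interfaces belong to.
    def cpu_mask(cpus):
        """Hex bitmask where bit i is set for each CPU i in the list."""
        mask = 0
        for cpu in cpus:
            mask |= 1 << cpu
        return hex(mask)

    numa0_cpu = 0   # assumed first CPU of NUMA node 0
    numa1_cpu = 4   # assumed first CPU of NUMA node 1 (layout is hypothetical)
    print(cpu_mask([numa0_cpu, numa1_cpu]))  # -> '0x11'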
Change pmd-cpu-mask to use vCPUs that do not intersect with the Nova pinned
CPUs or the CPUs from the lcore mask.
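A sketch of how such a non-overlapping PMD mask could be derived (all CPU
numbers below are assumptions for illustration):

    # Illustrative only: pick PMD CPUs that do not overlap with the Nova
    # pinned CPUs or the CPUs already used in the lcore mask.
    all_cpus    = set(range(8))     # hypothetical 8-vCPU compute
    pinned_cpus = {2, 3, 6, 7}      # hypothetical Nova pinned set
    lcore_cpus  = {0, 4}            # CPUs from the lcore mask sketch above

    pmd_cpus = all_cpus - pinned_cpus - lcore_cpus   # e.g. {1, 5}
    pmd_mask = 0
    for cpu in pmd_cpus:
        pmd_mask |= 1 << cpu
    print(hex(pmd_mask))            # -> '0x22'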
Change compute_ovs_dpdk_socket_mem to use memory from both NUMA nodes
(because our computes have 2 NUMA nodes).
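For reference, OVS-DPDK socket memory is a comma-separated list of megabytes
per NUMA node; a sketch of the value format (the amounts are illustrative,
not the values used in the templates):

    # Illustrative only: reserve hugepage memory on both NUMA nodes instead
    # of just node 0, since the computes have two NUMA nodes.
    socket_mem_per_numa = [1024, 1024]   # MB per NUMA node, values assumed
    compute_ovs_dpdk_socket_mem = ",".join(str(m) for m in socket_mem_per_numa)
    print(compute_ovs_dpdk_socket_mem)   # -> '1024,1024'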
Change-Id: Ic8e7704473e396f181524571f2b0d8826046610b
diff --git a/tcp_tests/templates/cookied-mcp-pike-dpdk/_context-environment.yaml b/tcp_tests/templates/cookied-mcp-pike-dpdk/_context-environment.yaml
index 53fdd68..0cd60ba 100644
--- a/tcp_tests/templates/cookied-mcp-pike-dpdk/_context-environment.yaml
+++ b/tcp_tests/templates/cookied-mcp-pike-dpdk/_context-environment.yaml
@@ -158,7 +158,10 @@
ens4:
role: single_ctl
ens5:
- role: single_ovs_br_prv
- mtu: 1500
+ role: bond0_ab_ovs_vxlan_mesh_no_tag
+ ens6:
+ role: bond0_ab_ovs_vxlan_mesh_no_tag
ens7:
- role: bond1_ab_ovs_floating
+ role: single_ovs_br_floating
+ external_address: 10.90.0.110
+ external_network_netmask: 255.255.255.0