[mitaka-trusty] Restore ability to deploy mitaka release on trusty

As the maintenance team we are still responsible for one mitaka-trusty cloud.
In case of an incoming bugfix request we need the ability to deploy a mitaka
cluster using trusty-based mitaka packages and environment.

Some time ago (the last successful run was Oct 9) this deployment broke.
The breakage was caused by the removal of the trusty iscsi daemon's name from
the formulas (expected, but still painful).

To make sure we do not hit deployment errors like this again (which is likely,
since the trusty deployment is not even in legacy support), we want to bind our
deployment jobs to the last successful deployment state we can get. After some
investigation such a combination of packages/formulas/model/deployment steps
was found.
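
To make that combination reproducible, the installed formula versions can be
recorded as part of the run. A minimal sketch in the same step format as the
templates below; the step itself is illustrative and not part of this change:

  # Illustrative step: record which salt-formula packages produced a good run.
  - description: Record installed salt-formula package versions
    cmd: salt --hard-crash --state-output=mixed --state-verbose=False 'cfg01*'
      cmd.run "dpkg -l | grep salt-formula"
    node_name: {{ HOSTNAME_CFG01 }}
    retry: {count: 1, delay: 5}
    skip_fail: true  # grep exits non-zero if nothing matches; don't fail the run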

We want to use the 2018.8.0 MCP release with some hacks inside the tcp-qa
deployment process. This SHOULD NOT affect any existing deployment, but it will
give us a chance to verify the shipped packages in the future.
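
As an illustration of the kind of pinning meant here, a step like the one below
could lock the nodes to the 2018.8.0 release mirror. The mirror URL and apt list
file name are assumptions made for this sketch, not taken from this change:

  # Illustrative step: the repository URL below is an assumed 2018.8.0 path.
  - description: Pin system repositories to the 2018.8.0 release
    cmd: salt --hard-crash --state-output=mixed --state-verbose=False '*'
      cmd.run "echo 'deb [arch=amd64] http://mirror.mirantis.com/2018.8.0/openstack-mitaka/trusty trusty main'
      > /etc/apt/sources.list.d/mcp_openstack.list && apt-get update"
    node_name: {{ HOSTNAME_CFG01 }}
    retry: {count: 1, delay: 5}
    skip_fail: false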

Change-Id: I0d067c2c85a61714621aab6233dcfe79d55c4a18
diff --git a/tcp_tests/templates/virtual-mcp-trusty/openstack.yaml b/tcp_tests/templates/virtual-mcp-trusty/openstack.yaml
index c001d39..fff0966 100644
--- a/tcp_tests/templates/virtual-mcp-trusty/openstack.yaml
+++ b/tcp_tests/templates/virtual-mcp-trusty/openstack.yaml
@@ -6,6 +6,10 @@
 {% from 'shared-salt.yaml' import IPV4_NET_EXTERNAL_PREFIX with context %}
 {% from 'shared-salt.yaml' import IPV4_NET_TENANT_PREFIX with context %}
 
+# vkhlyunev: shared steps are constantly updated due to master development, so
+# we can't use them for the old release. For openstack.yaml we can use some
+# shared steps for now, but TODO: bind deployment workflow to the 2018.8.0 state
+
 # Install OpenStack control services
 - description: Sync time
   cmd: salt --hard-crash --state-output=mixed --state-verbose=False -G 'oscodename:trusty' cmd.run "service ntp stop && ntpdate pool.ntp.org && service ntp start"
@@ -23,11 +27,94 @@
   retry: {count: 1, delay: 5}
   skip_fail: true
 
-{{ SHARED_OPENSTACK.MACRO_INSTALL_KEYSTONE() }}
+- description: Install keystone service on primary node
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False
+    -C 'I@keystone:server and *01*' state.sls keystone.server
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 2, delay: 15}
+  skip_fail: false
+
+- description: Install keystone service on other nodes
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False
+    -C 'I@keystone:server' state.sls keystone.server
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 2, delay: 15}
+  skip_fail: false
+
+- description: Restart apache due to PROD-10477
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False 'ctl*'
+    cmd.run "service apache2 restart"
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 1, delay: 15}
+  skip_fail: false
+
+- description: Check apache status due to PROD-10477
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False 'ctl*'
+    cmd.run "service apache2 status"
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 1, delay: 15}
+  skip_fail: false
+
+- description: Mount glusterfs.client volumes (requires the 'keystone' system user to be created)
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False
+    -C 'I@keystone:server' state.sls glusterfs.client -b 1
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 1, delay: 5}
+  skip_fail: false
+
+- description: Update fernet keys for keystone server on the mounted glusterfs volume
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False
+    -C 'I@keystone:server' state.sls keystone.server -b 1
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 1, delay: 5}
+  skip_fail: false
+
+- description: Populate keystone services/tenants/admins
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False
+    -C 'I@keystone:client' state.sls keystone.client
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 2, delay: 5}
+  skip_fail: false
+
+- description: Check keystone service-list
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False
+    -C "I@keystone:server" cmd.run ". /root/keystonercv3;
+    openstack service list"
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 1, delay: 5}
+  skip_fail: false
 
 {{ SHARED_OPENSTACK.MACRO_INSTALL_GLANCE() }}
 
-{{ SHARED_OPENSTACK.MACRO_INSTALL_NOVA() }}
+- description: Install nova service on primary node
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False
+    -C "I@nova:controller and *01*" state.sls nova.controller
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 1, delay: 5}
+  skip_fail: false
+
+- description: Install nova service on other nodes
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False
+    -C "I@nova:controller" state.sls nova.controller
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 1, delay: 5}
+  skip_fail: false
+
+- description: Check nova service-list
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False
+    -C "I@keystone:server" cmd.run ". /root/keystonercv3;
+    openstack compute service list"
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 1, delay: 5}
+  skip_fail: false
+
+- description: Check nova list
+  cmd: salt --hard-crash --state-output=mixed --state-verbose=False
+    -C "I@keystone:server" cmd.run ". /root/keystonercv3;
+    openstack server list"
+  node_name: {{ HOSTNAME_CFG01 }}
+  retry: {count: 1, delay: 5}
+  skip_fail: false
 
 {{ SHARED_OPENSTACK.MACRO_INSTALL_CINDER() }}