Merge Neutron AutoScaling and LoadBalancer tests
The tested use case is an autoscaling group of web app servers
behind a Neutron load balancer.
The test template was rewritten to use an AutoScalingGroup, wait
conditions and outputs, so no client other than the Heat one is needed.
There is no longer any need to check for VM connectivity, as a COMPLETE
stack/resource status now quite reliably means that the listening
servers are running.
This patch should probably also help narrow down the causes of
bug 1437203, since it no longer checks instance connectivity over SSH.
Change-Id: Iec8e8061f9ab3f3841fa221722b7b7805760cdf7
Related-Bug: #1437203
Closes-Bug: #1435285
diff --git a/scenario/templates/test_autoscaling_lb_neutron.yaml b/scenario/templates/test_autoscaling_lb_neutron.yaml
new file mode 100644
index 0000000..d47e787
--- /dev/null
+++ b/scenario/templates/test_autoscaling_lb_neutron.yaml
@@ -0,0 +1,113 @@
+heat_template_version: 2015-04-30
+
+description: |
+  Template which tests Neutron load balancing of requests to members of
+  a Heat AutoScalingGroup.
+  Instances must run a webserver on the given app_port that produces an
+  HTTP response which differs between servers but is stable over time
+  for a given server.
+
+parameters:
+ flavor:
+ type: string
+ image:
+ type: string
+ net:
+ type: string
+ subnet:
+ type: string
+ public_net:
+ type: string
+ app_port:
+ type: number
+ default: 8080
+ lb_port:
+ type: number
+ default: 80
+ timeout:
+ type: number
+ default: 600
+
+resources:
+
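+  # Allow inbound TCP to the app port on the autoscaled app servers.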
+ sec_group:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ rules:
+ - remote_ip_prefix: 0.0.0.0/0
+ protocol: tcp
+ port_range_min: { get_param: app_port }
+ port_range_max: { get_param: app_port }
+
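+  # OS::Test::NeutronAppServer is a custom resource type; the test
+  # environment is expected to map it to a nested template that boots a
+  # web app server, registers it with the pool and signals a wait
+  # condition once the server is listening.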
+ asg:
+ type: OS::Heat::AutoScalingGroup
+ properties:
+ desired_capacity: 1
+ max_size: 2
+ min_size: 1
+ resource:
+ type: OS::Test::NeutronAppServer
+ properties:
+ image: { get_param: image }
+ flavor: { get_param: flavor }
+ net: { get_param: net}
+ sec_group: { get_resource: sec_group }
+ app_port: { get_param: app_port }
+ pool_id: { get_resource: pool }
+ timeout: { get_param: timeout }
+
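+  # Policies the test can signal to grow and shrink the group by one
+  # server at a time.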
+ scale_up:
+ type: OS::Heat::ScalingPolicy
+ properties:
+ adjustment_type: change_in_capacity
+ auto_scaling_group_id: { get_resource: asg }
+ scaling_adjustment: 1
+
+ scale_down:
+ type: OS::Heat::ScalingPolicy
+ properties:
+ adjustment_type: change_in_capacity
+ auto_scaling_group_id: { get_resource: asg }
+ scaling_adjustment: -1
+
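+  # HTTP health checks every 3 seconds (3s timeout, 3 retries) so that
+  # members which stop responding are taken out of rotation.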
+ health_monitor:
+ type: OS::Neutron::HealthMonitor
+ properties:
+ delay: 3
+ type: HTTP
+ timeout: 3
+ max_retries: 3
+
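+  # Round-robin HTTP pool with its VIP listening on lb_port; members are
+  # health-checked by the monitor above.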
+ pool:
+ type: OS::Neutron::Pool
+ properties:
+ lb_method: ROUND_ROBIN
+ protocol: HTTP
+ subnet: { get_param: subnet }
+ monitors:
+ - { get_resource: health_monitor }
+ vip:
+ protocol_port: { get_param: lb_port }
+
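+  # Attach a floating IP to the VIP port so the load-balanced app is
+  # reachable from the public network.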
+ floating_ip:
+ type: OS::Neutron::FloatingIP
+ properties:
+ floating_network: { get_param: public_net }
+ port_id:
+ { get_attr: [pool, vip, 'port_id'] }
+
+ loadbalancer:
+ type: OS::Neutron::LoadBalancer
+ properties:
+ pool_id: { get_resource: pool }
+ protocol_port: { get_param: app_port }
+
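+# The test polls this URL and tells the backend servers apart by their
+# per-server responses.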
+outputs:
+ lburl:
+ description: URL of the loadbalanced app
+ value:
+ str_replace:
+ template: http://IP_ADDRESS:PORT
+ params:
+ IP_ADDRESS: { get_attr: [ floating_ip, floating_ip_address ] }
+ PORT: { get_param: lb_port }
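
OS::Test::NeutronAppServer above is not defined in this patch; the test
environment is expected to map it to a nested app-server template. A minimal
sketch of what such a template could look like follows (the resource names and
the provisioning script are illustrative assumptions, not part of this
change); it shows how a wait condition lets the stack reach COMPLETE only once
the webserver is actually listening, so only the Heat client is needed:

heat_template_version: 2015-04-30

description: |
  Sketch of an app server: serves a per-server HTTP response on app_port,
  joins the given LBaaS pool and signals a wait condition once it is up.

parameters:
  flavor: {type: string}
  image: {type: string}
  net: {type: string}
  sec_group: {type: string}
  app_port: {type: number}
  pool_id: {type: string}
  timeout: {type: number}

resources:
  handle:
    type: OS::Heat::WaitConditionHandle

  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks: [{ network: { get_param: net } }]
      security_groups: [{ get_param: sec_group }]
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/sh
            # Reply to every request with this server's hostname: different
            # between servers, stable over time for a given server.
            while true; do
              echo -e "HTTP/1.0 200 OK\r\n\r\n$(hostname)" | nc -l -p PORT
            done &
            # Tell Heat the webserver is listening.
            WC_NOTIFY --data-binary '{"status": "SUCCESS"}'
          params:
            PORT: { get_param: app_port }
            WC_NOTIFY: { get_attr: [handle, curl_cli] }

  member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: { get_param: pool_id }
      address: { get_attr: [server, first_address] }
      protocol_port: { get_param: app_port }

  wait_condition:
    type: OS::Heat::WaitCondition
    properties:
      handle: { get_resource: handle }
      timeout: { get_param: timeout }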