Exercise: Scaling and Load Balancing your Architecture

Natasha Ong
4 min read

Exercise Overview:

In this hands-on exercise, you will use Elastic Load Balancing (ELB) and Amazon EC2 Auto Scaling to load balance and automatically scale your infrastructure. This heavier exercise could take up to 90 minutes to do - so give yourself time to scan through this activity first and get a sense of what to expect!

Key concepts:

  • AWS CloudFormation is a service that makes it faster to create new resources in AWS. Imagine you're building a bunch of things on AWS that involve multiple EC2 instances or databases. Instead of setting them up one by one, CloudFormation lets you define everything you want to create in a template; the collection of resources it then creates from that template is called a stack. CloudFormation goes through the template and sets everything up for you in one go. This way, you can make sure you're building things in a smart, scalable, and consistent way. There's a small sketch of what a template can look like just after this list.
  • ELB automatically spreads incoming traffic across multiple Amazon EC2 instances. ELB helps to keep your website/application up and running even if one of the instances goes down (fault tolerance).
  • Auto Scaling makes sure your website/application is always available and adapts to changing demand. It automatically adds more servers during high-demand periods to maintain performance and reduces capacity during quiet periods to cut costs. Ideal for applications with variable traffic throughout the day, Auto Scaling keeps your system efficient and cost-effective.
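
If you're curious what 'defining everything in a template' actually looks like, here's a minimal sketch (not the template used in this exercise) of a CloudFormation template body that declares a single EC2 instance, written as JSON inside a short Python script. The AMI ID is a made-up placeholder; the real template you download in Task 1 defines a whole network plus Web Server 1.

    import json

    # A minimal CloudFormation template: one EC2 instance and nothing else.
    # Handing this to CloudFormation (as you'll do in Task 1) would create the
    # instance as a stack; deleting the stack would delete the instance again.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "WebServer1": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI ID
                    "InstanceType": "t2.micro",
                    "KeyName": "WebServerKP",
                },
            }
        },
    }

    print(json.dumps(template, indent=2))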

As a cloud practitioner or a cloud consultant, you'll learn to create solutions using AWS and communicate those solutions using architecture diagrams.

Here is an architecture diagram! You're not expected to fully understand it right now (you'll learn a lot more about its components as you keep going in the course), but take a look at the diagram and notice how your EC2 instance (the orange web server) is nested within layers and layers of rectangles. This is a little taste of how an EC2 instance fits into the bigger picture of a company's AWS infrastructure.

We need to set up this architecture in order for load balancing and auto scaling to work - but no stress! We've created a template for you to download and use in Task 1, so you won't have to manually set up this architecture yourself.

Objectives:

By the end of this hands-on exercise, you should be able to do the following:

  • Create an AMI from an EC2 instance.
  • Create a load balancer.
  • Create a launch template and an Auto Scaling group.
  • Configure an Auto Scaling group to scale new instances within private subnets.
  • Use Amazon CloudWatch alarms to monitor the performance of your infrastructure.

Important:

A reminder that whenever you're using auto scaling, you must keep an eye on the EC2 instances being added. To be safe, once you reach Task 5 (where you'll create an Auto Scaling group), give yourself 60 minutes to complete the rest of the exercise. Otherwise, AWS might start charging your account for the extra EC2 instances that get added.

Task 0: Access the AWS Management Console

  1. Sign in to your IAM user and access your AWS Management Console.
  2. Select your preferred region for doing this exercise. We recommend picking the region that's closest to you.

Task 1: Launch a CloudFormation stack

In this task, you will launch a CloudFormation stack to set up the architecture. Before you start launching your CloudFormation template, create an EC2 key pair first - your CloudFormation stack needs it to work!

  1. To create a key pair, navigate to the EC2 console and choose Key Pairs from the left navigation pane (it's under Network & Security if you can't find it).
  2. Choose Create key pair.
  3. Name: WebServerKP
  4. Key pair type: RSA.
  5. Format: .pem (for use with OpenSSH, e.g. on macOS and Linux) or .ppk (for use with PuTTY on Windows).
  6. Choose Create key pair.
  7. The key pair will start downloading automatically.
  8. Now, you will launch your CloudFormation stack. Navigate to the CloudFormation console.
  9. Choose the orange Create stack button (not the white one at the top right corner).
  10. Configure the following:
  11. Prerequisite - Prepare template: Template is ready.
  12. Specify template: Upload a template file.
  13. Click here to download your CloudFormation template. The download should start instantly.
    This CloudFormation template will help you set up a virtual network in AWS. This virtual network has a few components you'll learn about soon - public and private subnets, an internet gateway and routing tables. You'll also learn what virtual network means later in the course! The most important thing about the file is that it also contains an EC2 instance (i.e. Web Server 1 in the diagram), which we'll use to create an auto scaling group later.

14. Click Choose file. Upload the file you've just downloaded.

15. Choose Next.

16. Stack name: VPCStack

17. Choose Next.

18. Leave all others as default, and choose Next.

19. Finally, choose Submit.

20. Awesome! Now the CloudFormation script is running, which means it's creating AWS resources for you in the background. Wait until you see the status update to CREATE_COMPLETE before moving on.
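
If you'd rather script Task 1 than click through the console, here's a hedged boto3 (Python) sketch of the same steps: create the key pair, launch the stack from the template file you downloaded, and wait for CREATE_COMPLETE. The template filename is an assumption - use the name of the file you actually downloaded, and pass Parameters if the template asks for any.

    import boto3

    ec2 = boto3.client("ec2")
    cloudformation = boto3.client("cloudformation")

    # Steps 1-7: create the RSA key pair and save the private key locally.
    key = ec2.create_key_pair(KeyName="WebServerKP", KeyType="rsa", KeyFormat="pem")
    with open("WebServerKP.pem", "w") as f:
        f.write(key["KeyMaterial"])

    # Steps 8-19: launch the stack from the downloaded template.
    # "vpc-template.yaml" is a placeholder filename.
    with open("vpc-template.yaml") as f:
        template_body = f.read()

    cloudformation.create_stack(StackName="VPCStack", TemplateBody=template_body)

    # Step 20: wait until the stack status reaches CREATE_COMPLETE.
    cloudformation.get_waiter("stack_create_complete").wait(StackName="VPCStack")
    print("Stack created")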

Task 2: Create an AMI for auto-scaling

One of the resources the CloudFormation script launched is an EC2 instance called Web Server 1 (refer to the diagram)! In this task, you'll make an AMI (i.e. a template) of Web Server 1. Creating the AMI means you can launch new instances that are identical to Web Server 1, which will come in handy when we set up an Auto Scaling group.

  1. In the left navigation panel of the EC2 console, under the Instances section, choose Instances.
  2. Select the checkbox next to the Web Server 1 instance, which should appear in a Running state.
  3. From the Actions dropdown list, choose Image and templates > Create image, and then configure the following options:
  • Image name: Web Server AMI
  • Image description - optional: AMI for NextWork's Load Balancing and Auto Scaling exercise
  4. Choose Create image.
  • The confirmation screen displays the AMI ID for your new AMI. You'll use this AMI in the launch template in Task 4, which your Auto Scaling group then uses in Task 5.
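
For reference, the console steps above boil down to one API call. A minimal boto3 sketch, assuming you've already looked up Web Server 1's instance ID (the ID below is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Create an AMI from the running Web Server 1 instance.
    response = ec2.create_image(
        InstanceId="i-0123456789abcdef0",  # placeholder: Web Server 1's instance ID
        Name="Web Server AMI",
        Description="AMI for NextWork's Load Balancing and Auto Scaling exercise",
    )

    # This AMI ID is what the launch template in Task 4 will reference.
    print(response["ImageId"])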

Task 3: Create a load balancer

In this task, you'll create a load balancer that can balance traffic across multiple EC2 instances and Availability Zones.

  1. In the EC2 console, choose Load Balancers from the left navigation panel.
  2. Choose Create load balancer.
  3. In the Load balancer types section, under Application Load Balancer, choose Create.

Curious about the three types of load balancers?

Gateway Load Balancer (GWLB) - Used when you want traffic to pass through third-party virtual appliances (like firewalls or intrusion detection systems) before it reaches your application. GWLB makes it easy to deploy and scale a fleet of those appliances and routes traffic through them transparently.
Application Load Balancer (ALB) - Works at the level of individual HTTP requests. It reads the content of each request (like the URL path) and sends it to the right group of servers. ALB is super helpful for websites/web applications, because it can handle situations where data for different parts of the same website (e.g. images, videos) live on different servers.
Network Load Balancer (NLB) - Works at the connection level, distributing TCP/UDP traffic to the right targets and ports, and is built for very high volumes of traffic with low latency. While NLB is often used for distributing external traffic coming from the internet, it can also be used for internal communication between servers within an AWS infrastructure (which doesn't go through the internet).

4. On the Create Application Load Balancer page, in the Basic configuration section:

5. Load balancer name: LabExELB

6. In the Network mapping section, configure the following options:

7. For VPC, choose Lab VPC.

8. Wondering where this VPC came from? It's the VPC that the CloudFormation script created for us! VPC = Virtual Private Cloud, and you'll learn all about it later in the course.

9. For Mappings, choose both Availability Zones listed.

10. For the first Availability Zone, choose Public Subnet 1.

11. For the second Availability Zone, choose Public Subnet 2.

  • These options set up the load balancer to operate across multiple Availability Zones.

12. In the Security groups section, remove the default security group.

13. From the Security groups dropdown list, choose Web Security Group. The Web Security Group has already been created for you by the CloudFormation script, and it permits HTTP access (i.e. a common way for traffic to access servers through the internet). You'll also learn all about security groups later in the course.

14. In the Listeners and routing section, click on Create target group. This link opens a new browser tab.

  • Target group = a group of resources, such as EC2 instances, microservices, or containers that the load balancer should direct traffic to.

15. In the new tab, under Basic configuration, configure the following:

16. Choose a target type: Instances.

17. Target group name: labex-target-group

18. Leave the other settings as default. At the bottom of the page, choose Next.

19. On the Register targets page, choose Create target group. Wait for this success message to pop up at the top of your console:

20. Return to the Load balancers browser tab. Now that we're back in the Listeners and routing section, hit the Refresh button to the right of the select target group dropdown list.

21. From the dropdown list, choose labex-target-group.

22. At the bottom of the page, choose Create load balancer.

  • You should receive a message similar to the following:

23. To view the LabExELB load balancer that you created, choose View load balancer.

24. Hit the refresh button (next to the Actions dropdown) until you see the status of your load balancer update from provisioning to active. This might take a few minutes - but you're safe to head to the next few steps without waiting.

25. Copy the DNS name of the load balancer and paste it into a text editor. You need this information later in the lab.
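
To see how Task 3 maps onto the API, here's a hedged boto3 sketch that creates the same target group, Application Load Balancer and an HTTP listener. The VPC, subnet and security group IDs are placeholders - in the console they come from the resources the CloudFormation stack created.

    import boto3

    elbv2 = boto3.client("elbv2")

    # Steps 14-19: a target group the load balancer forwards traffic to.
    tg = elbv2.create_target_group(
        Name="labex-target-group",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",  # placeholder: Lab VPC
        TargetType="instance",
    )
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

    # Steps 3-13: an internet-facing Application Load Balancer in both public subnets.
    lb = elbv2.create_load_balancer(
        Name="LabExELB",
        Type="application",
        Scheme="internet-facing",
        Subnets=["subnet-public-1-id", "subnet-public-2-id"],  # placeholders
        SecurityGroups=["sg-websecuritygroup-id"],             # placeholder: Web Security Group
    )
    lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]
    print(lb["LoadBalancers"][0]["DNSName"])  # the DNS name you paste into a text editor

    # Steps 20-22: an HTTP listener that forwards incoming requests to the target group.
    elbv2.create_listener(
        LoadBalancerArn=lb_arn,
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )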

Task 4: Create a launch template

In this task, you will create a launch template for your Auto Scaling group. A launch template tells an Auto Scaling group exactly how to launch its EC2 instances. When you create a launch template, you specify information like the AMI, instance type and key pair.

  1. In the left navigation panel of the EC2 console, choose Launch Templates.
  2. Choose Create launch template.
  3. On the Create launch template page, in the Launch template name and description section, configure the following options:
  4. For Launch template name - required, enter labex-app-launch-template
  5. For Template version description, enter A web server for the load test app
  6. For Auto Scaling guidance, check the checkbox next to Provide guidance to help me set up a template that I can use with EC2 Auto Scaling.
  7. In the Application and OS Images (Amazon Machine Image) - required section, choose the My AMIs tab. Make sure Web Server AMI (i.e. the AMI that you've just created) is chosen.
  8. In the Instance type section, choose the Instance type dropdown list, and choose t2.micro.
  9. In the Key pair (login) section, confirm that the Key pair name dropdown list is set to Don't include in launch template.
  • We created a key pair in Task 1 because the CloudFormation template relies on it to launch the EC2 instance. But we won't need to connect to the instances in this exercise, so we don't need a key pair in the launch template.

10. In the Network settings section, choose the Security groups dropdown list, and choose Web Security Group.

11. Leave the other settings as defaults, and choose Create launch template.

  • Awesome! The launch template will use the AMI, which includes all the necessary configurations, whenever it spins up new instances. This is a big time saver: we won't need to re-enter any user data, since it's all contained in the AMI.
  • You should receive a message similar to the following:

12. Choose View launch templates.
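
Here's the same launch template expressed as a boto3 sketch, assuming the Web Server AMI ID and Web Security Group ID have already been looked up (both IDs below are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # A launch template pointing at the Web Server AMI from Task 2.
    ec2.create_launch_template(
        LaunchTemplateName="labex-app-launch-template",
        VersionDescription="A web server for the load test app",
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",            # placeholder: Web Server AMI
            "InstanceType": "t2.micro",
            "SecurityGroupIds": ["sg-websecuritygroup-id"],  # placeholder: Web Security Group
            # No key pair: we don't need to connect to these instances.
        },
    )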

Task 5: Create an Auto Scaling group

In this task, you will use your launch template to create an Auto Scaling group.

  1. In the left navigation panel of your EC2 console, choose Auto Scaling Groups.
  2. Choose Create Auto Scaling group.
  3. For Auto Scaling group name, enter LabEx Auto Scaling Group
  4. For Launch template, choose the launch template that you've created i.e. labex-app-launch-template.
  5. Choose Next.
  6. On the Choose instance launch options page, under Network, configure the following options:
  7. From the VPC dropdown list, choose Lab VPC
  8. From the Availability Zones and subnets dropdown list, choose Private Subnet 1 (10.0.1.0/24) and Private Subnet 2 (10.0.3.0/24). It should look similar to this:
  • Why are we choosing private subnets here, but public subnets in Task 3, step 5?
  • In Task 3, the subnets chosen were for the load balancer that handles traffic coming from the public internet. In Task 5, you'll auto-scale your resources (EC2 instance) in a private subnet (i.e. where the public internet can't reach) to make it more secure. You'll learn much more about public vs private subnets later in the course!
  9. Choose Next.
  10. On the Configure advanced options – optional page, configure the following options:
  11. In the Load balancing section, choose Attach to an existing load balancer.
  12. In the Attach to an existing load balancer section, configure the following options:
  13. Choose Choose from your load balancer target groups.
  14. From the Existing load balancer target groups dropdown list, choose labex-target-group | HTTP.
  15. In the Health checks section, for Health check type, select Turn on Elastic Load Balancing health checks.
  16. Choose Next.
  17. On the Configure group size and scaling policies – optional page, configure the following options:
  18. In the Group size section, enter the following values:
  19. Desired capacity: 2
  20. Minimum capacity: 2
  21. Maximum capacity: 4
  22. In the Automatic scaling – optional section, configure the following options:
  23. Choose Target tracking scaling policy.
  24. Metric type: Average CPU utilization.
  25. Target value: 50
  • This setting tells Auto Scaling to maintain an average CPU utilisation (i.e. the % of available compute power being used) of 50 percent across all instances. Auto Scaling automatically adds or removes capacity as required to keep the metric at or close to this target value.
  • Curious why we chose 50? 50% is used as a starting point for most apps, although it's not a one-size-fits-all rule!
  • Setting the target value too high = instances running close to their maximum CPU utilisation, which leaves little headroom to handle sudden increases in traffic or workloads.
  • Setting the target too low = underutilisation of resources, which means wasted capacity and paying for more than what you need.

26. Choose Next.

27. On the Add notifications – optional page, choose Next.

28. On the Add tags – optional page, choose Add tag and configure the following options:

29. Key: Enter Name

30. Value - optional: Enter LabEx Instance

This will make sure the EC2 instances launched by your Auto Scaling group have a name (LabEx Instance).

  31. Choose Next.
  32. Choose Create Auto Scaling group.

Woohoo! Now your auto scaling group will be launching EC2 instances in private subnets across both Availability Zones. Your Auto Scaling group initially shows an Instances count of zero, but new instances will be launched to reach your desired capacity of two instances.
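
For the curious, here's a hedged boto3 sketch of what the wizard just did: create the Auto Scaling group across the two private subnets, attach it to the target group, and add the target tracking policy (in the API this is a separate call). The subnet IDs and target group ARN are placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # 2-4 instances, launched from the launch template, spread across both
    # private subnets and attached to the load balancer's target group.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="LabEx Auto Scaling Group",
        LaunchTemplate={"LaunchTemplateName": "labex-app-launch-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=4,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-private-1-id,subnet-private-2-id",  # placeholders
        TargetGroupARNs=["TARGET-GROUP-ARN"],                          # placeholder ARN
        HealthCheckType="ELB",
        Tags=[{"Key": "Name", "Value": "LabEx Instance", "PropagateAtLaunch": True}],
    )

    # Target tracking policy: keep average CPU utilisation close to 50%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="LabEx Auto Scaling Group",
        PolicyName="labex-cpu-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 50.0,
        },
    )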

Remember - start your 60-minute timer now and try to finish the rest of the exercise within it!

Task 6: Verify that load balancing is working

In this task, you'll check that load balancing is working correctly.

  1. In the left navigation panel of the EC2 console, under the Instances section, choose Instances.
  • You should see two new instances named LabEx Instance. If you don't, wait 30 seconds and hit the refresh button. These instances were launched by auto scaling - nice, it's working!

The load balancer will only send traffic to instances that are working properly, so let's make sure our new instances have passed their health check!

  1. In the left navigation panel, under the Load Balancing section, choose Target Groups.
  2. Choose labex-target-group.
  • In the Registered targets section, two LabEx Instance targets should be listed for this target group.
  3. Wait until the Health status of both instances changes to healthy. To check for updates, choose the refresh button.
  4. Open a new web browser tab, paste the DNS name that you copied before, and press Enter.
  • The Load Test application should appear in your browser, which means that the load balancer received the request, sent it to one of the EC2 instances, and then passed back the result. As you keep refreshing the page, you'll see a different InstanceId displayed on the page - which means different instances are being picked by the load balancer.
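
If you'd like to verify the same things from a script, here's a hedged boto3 sketch that prints each target's health and then hits the load balancer a few times. The target group ARN and DNS name are placeholders - use your own values.

    import urllib.request

    import boto3

    elbv2 = boto3.client("elbv2")

    # Check the health of the registered targets.
    health = elbv2.describe_target_health(TargetGroupArn="TARGET-GROUP-ARN")  # placeholder ARN
    for target in health["TargetHealthDescriptions"]:
        print(target["Target"]["Id"], target["TargetHealth"]["State"])  # e.g. "healthy"

    # Hit the load balancer a few times; different instances should answer.
    dns_name = "http://LabExELB-1234567890.REGION.elb.amazonaws.com"  # placeholder DNS name
    for _ in range(5):
        with urllib.request.urlopen(dns_name) as resp:
            print(resp.status)  # 200 means the load balancer forwarded the request successfully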

Task 7: Test auto-scaling

You created an Auto Scaling group with a minimum of two instances and a maximum of four instances. Right now, only two instances are running because that's the minimum size and the group isn't seeing heavy traffic. You will now increase the load to cause auto scaling to add additional instances.

  1. Return to the AWS Management Console, but keep the Load Test application tab open. You'll jump back into this tab soon.
  2. Head to the CloudWatch console.
  3. In the left navigation panel, in the Alarms section, choose All alarms. Two alarms are displayed. The Auto Scaling group automatically created these two alarms, and together they keep the average CPU load close to 50 percent while staying within the limit of two to four EC2 instances.
  4. Choose the alarm that has AlarmHigh in its name. This alarm should show a State of OK.
  • If the state isn't OK, hit the refresh button on the console until the State changes.
  • This alarm adds more instances when the average CPU utilisation goes above 50 percent. Right now, the OK state means it's still below 50 percent.

5. Return to the browser tab with the Load Test application.

6. Next to the AWS logo, click Load Test. Make sure you only click this once! Clicking it multiple times might break the application.

  • This step makes the application create heavy loads. Please don't close this tab!

7. Return to the browser tab with the CloudWatch Management Console.

  • Time for a leg stretch - give yourself three minutes for the heavy load to take effect, then hit the refresh button. You should see the AlarmHigh chart indicating an increasing CPU percentage. Once it stays above the 50% line for more than 3 minutes, the alarm triggers auto scaling to add additional instances.
  • In around 5 minutes, the AlarmHigh alarm status should change to In alarm. Remember, do not press Load Test again as you wait.

8. Wait until the AlarmHigh alarm enters the In alarm state.

  • Woohoo! This means more EC2 instances have been launched.

9. Head back to the EC2 console.

10. In the left navigation panel, under the Instances section, choose Instances. Refresh the page. More than two instances of LabEx Instance should be running. This shows that auto-scaling created the new instances in response to the alarm.
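
You can also watch the scaling happen from a script. A small boto3 sketch that prints the alarm states and the number of instances currently in the group:

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    autoscaling = boto3.client("autoscaling")

    # List the alarms (including the two the Auto Scaling group created) and their states.
    alarms = cloudwatch.describe_alarms()
    for alarm in alarms["MetricAlarms"]:
        print(alarm["AlarmName"], alarm["StateValue"])  # OK, ALARM or INSUFFICIENT_DATA

    # See how many instances the group currently has.
    groups = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["LabEx Auto Scaling Group"]
    )
    print(len(groups["AutoScalingGroups"][0]["Instances"]), "instances in the group")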

Task 8: Delete the resources

In this task, you will delete all the resources that you created - let's avoid any charges from AWS!

Challenge yourself by first testing whether you can delete all of the resources without any guidance. The resources are (in order):

  1. Auto scaling group
  2. Load balancer
  3. Target group
  4. Launch template
  5. Key pair
  6. AMI
  7. CloudFormation stack

Feeling stuck? Here are the steps:

  1. Head back to the Auto Scaling groups page in the EC2 console, select the Auto Scaling group that you've created, and choose Actions > Delete. Type delete to confirm the deletion.
  2. Next, head to Load Balancers, select the load balancer that you've created, and choose Actions > Delete load balancer. Type confirm to agree.
  3. Next, head to Target Groups. Select the target group that you've created, and choose Actions > Delete. Choose Yes, delete.
  4. Next, head to Launch Templates. Select the launch template that you've created, and choose Actions > Delete template.
  • Type Delete to confirm the deletion, and choose Delete.
  5. Next, head to Key Pairs. Select the key pair, and choose Actions > Delete.
  6. Next, head to AMIs. Select the AMI, and choose Actions > Deregister AMI to remove it.
  7. Finally, head back to the CloudFormation console. Under Stacks, select the stack that you launched, and choose Delete, then confirm by choosing Delete.
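
And if you scripted the earlier tasks, here's a hedged boto3 sketch of the same clean-up in the same order. The load balancer ARN, target group ARN and AMI ID are placeholders - look up your own values before running anything like this.

    import boto3

    autoscaling = boto3.client("autoscaling")
    elbv2 = boto3.client("elbv2")
    ec2 = boto3.client("ec2")
    cloudformation = boto3.client("cloudformation")

    # 1. Auto Scaling group (ForceDelete also terminates its instances).
    autoscaling.delete_auto_scaling_group(
        AutoScalingGroupName="LabEx Auto Scaling Group", ForceDelete=True
    )

    # 2-3. Load balancer, then target group.
    elbv2.delete_load_balancer(LoadBalancerArn="LOAD-BALANCER-ARN")  # placeholder ARN
    elbv2.delete_target_group(TargetGroupArn="TARGET-GROUP-ARN")     # placeholder ARN

    # 4-5. Launch template and key pair.
    ec2.delete_launch_template(LaunchTemplateName="labex-app-launch-template")
    ec2.delete_key_pair(KeyName="WebServerKP")

    # 6. Deregister the AMI.
    ec2.deregister_image(ImageId="ami-0123456789abcdef0")  # placeholder: Web Server AMI ID

    # 7. Delete the CloudFormation stack last.
    cloudformation.delete_stack(StackName="VPCStack")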

Congratulations! You have successfully done the following:

  • Created an AMI (Amazon Machine Image) from an EC2 (Elastic Compute Cloud) instance.
  • Set up a load balancer.
  • Developed a launch template and established an Auto Scaling group.
  • Configured the Auto Scaling group to scale new instances within private subnets.
  • Utilized Amazon CloudWatch alarms to monitor the performance of the infrastructure.