AWS Elastic Beanstalk, Lambda and Batch

AWS Elastic Beanstalk

AWS Elastic Beanstalk is a managed AWS service that allows you to upload the code of your web application along with its environment configuration; Elastic Beanstalk then automatically provisions and deploys the appropriate resources within AWS to make the web application operational.

These resources can include other AWS services and features, such as EC2, Auto Scaling, Elastic Load Balancing, and application health monitoring, in addition to capacity provisioning.

This automation and simplification make it an ideal service for engineers who may not have the familiarity or the necessary AWS skills to deploy, provision, monitor, and scale the correct environment themselves to run their applications. Instead, this responsibility is passed on to AWS Elastic Beanstalk to deploy the correct infrastructure to run the uploaded code. This provides a simple, effective, and quick solution for deploying your web application.

Once the application is up and running, you can continue to support and maintain the environment as you would with a custom-built environment, and you can additionally perform some maintenance tasks from the Elastic Beanstalk dashboard itself. Elastic Beanstalk is able to operate with a variety of different platforms and programming languages, making it a very flexible service for your DevOps teams.

One important point to note is that the service itself is free to use: there is no cost associated with Elastic Beanstalk. However, you will be charged for any resources created on your application’s behalf, such as EC2 instances, as per the standard pricing policies in effect at the time of deployment.

AWS Elastic Beanstalk core components

So now that we know at a high level what AWS Elastic Beanstalk is and does, let me run through some of the core components that make up the service.

Application Version

  • Application version: An application version is a specific reference to a section of deployable code. It typically points to a location in Amazon S3 (Simple Storage Service) where the deployable code resides.

Environment

  • Environment: An environment refers to an application version that has been deployed on AWS resources. These resources are configured and provisioned by AWS Elastic Beanstalk. At this stage, the application is deployed as a solution and becomes operational within your environment. The environment comprises all the resources created by Elastic Beanstalk, not just an EC2 instance running your uploaded code.

Environment Configurations

  • Environment configurations: An environment configuration is a collection of parameters and settings that dictate how an environment will have its resources provisioned by Elastic Beanstalk and how these resources will behave.

Environment Tier

  • Environment tier: This component reflects how Elastic Beanstalk provisions resources based on what the application is designed to do.

If the application manages and handles HTTP requests, then the app will be run in a web server environment. If the application does not process HTTP requests, and instead perhaps pulls data from an SQS queue, then it would run in a worker environment. I shall cover more on the differences between the web server and worker environments shortly.

Configuration Template

  • Configuration template: A configuration template provides the baseline for creating a new, unique environment configuration.

Platform

  • Platform: A platform is the combination of components on which you can build your application using Elastic Beanstalk. It comprises the operating system of the instance, the programming language, the server type (web or application), and components of Elastic Beanstalk itself; as a whole, these define the platform.

Applications

  • Applications: Within Elastic Beanstalk, an application is a collection of different elements, such as environments, environment configurations, and application versions. In fact, you can have multiple application versions held within a single application. You can deploy your application to one of two different environment tiers: the web server tier or the worker tier.

These tiers are configured differently depending on the use case of your application. The web server environment is typically used for standard web applications that operate and serve requests over HTTP on port 80. This tier typically uses services and features such as Route 53, Elastic Load Balancing, Auto Scaling, EC2, and security groups. The worker environment is slightly different: it is used by applications that have a back-end processing task that interacts with Amazon SQS (Simple Queue Service). This tier typically uses the following AWS resources: an SQS queue, an IAM service role, Auto Scaling, and EC2.

AWS Elastic Beanstalk workflow

Now that you are aware of some of the terminology and components, we can look at how AWS Elastic Beanstalk operates a very simple workflow for your application deployment and ongoing management, which can be defined in four simple steps:

  • Create an application.
  • Upload an application version of your application to Elastic Beanstalk, along with some additional configuration information about the application itself. This creates the environment configuration.
  • Elastic Beanstalk then creates the environment with the appropriate resources to run your code.
  • Manage your application as needed, such as deploying new versions. If your management of the application alters the environment configuration, the environment is automatically updated, with additional resources provisioned should they be required.
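To make this workflow more concrete, here is a minimal sketch of the same four steps using the AWS SDK for Python (boto3). The application name, bucket, key, version label, and solution stack name are illustrative values only, and the source bundle is assumed to already exist in S3.

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Step 1: create the application (the container for versions and environments)
    eb.create_application(ApplicationName="my-web-app")

    # Step 2: register an application version that points to a source bundle in S3
    eb.create_application_version(
        ApplicationName="my-web-app",
        VersionLabel="v1",
        SourceBundle={"S3Bucket": "my-deployment-bucket", "S3Key": "my-web-app-v1.zip"},
    )

    # Step 3: create the environment; Elastic Beanstalk provisions the resources
    eb.create_environment(
        ApplicationName="my-web-app",
        EnvironmentName="my-web-app-prod",
        SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",  # illustrative platform
        VersionLabel="v1",
    )

    # Step 4: manage the application, e.g. deploy a new version later
    # eb.update_environment(EnvironmentName="my-web-app-prod", VersionLabel="v2")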

AWS Lambda

AWS Lambda is a serverless compute service which has been designed to allow you to run your application code without having to manage and provision your own EC2 instances. This saves you from having to maintain and administer an additional layer of technical responsibility within your solution; instead, that responsibility is passed over to AWS to manage for you.

Essentially, serverless means that you do not need to worry about provisioning and managing your own compute resources to run your code; instead, this is managed and provisioned by AWS. AWS will start, scale, maintain, and stop the compute resources as required, which can be for as little as a few milliseconds. Although it’s named serverless, it does of course require servers, or at least compute power, to carry out your code requests, but because the AWS user does not need to be concerned with what’s managing this compute power, or where it’s provisioned from, it’s considered serverless from the user’s perspective.

If you don’t have to spend time operating, managing, patching, and securing an EC2 instance, then you have more time to focus on the code of your application and its business logic, while at the same time optimizing costs. With AWS Lambda, you only ever pay for the compute power used while your Lambda functions are running.

Firstly, AWS Lambda needs to be aware of the code that you need to run, so you can either upload this code to AWS Lambda or write it within the code editor that Lambda provides. Currently, AWS Lambda supports Node.js (JavaScript), Python, Java (Java 8 compatible), C# (.NET Core), Go, and Ruby. It’s worth mentioning that the code that you write or upload can also include other libraries.

Once your code is within Lambda, you need to configure Lambda functions to execute your code upon specific triggers from supported event sources, such as S3. As an example, a Lambda function can be triggered when an S3 event occurs, such as an object being uploaded to an S3 bucket. Once the specific trigger is initiated during the normal operations of AWS, AWS Lambda will run your code, as per your Lambda function, using only the required compute power as defined. Later in this course I’ll cover more on when and how this compute power is specified. AWS records the compute time in milliseconds and the number of Lambda function invocations to ascertain the cost of the service.
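As a rough illustration of this pattern, here is a minimal Python handler for a function triggered by S3 object-upload events. The event structure shown follows the standard S3 event notification format; the function name and what you do with each object are up to you.

    import urllib.parse

    def lambda_handler(event, context):
        # Each record describes one S3 object that triggered this invocation
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            print(f"New object uploaded: s3://{bucket}/{key}")
        return {"status": "ok"}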

For an AWS Lambda application to operate, it requires a number of different elements. The following form the key constructs of a Lambda application.

  • Lambda function: The Lambda function is composed of your own code that you want Lambda to invoke as per defined triggers.
  • Event source: Event sources are AWS services that can be used to trigger your Lambda functions; put another way, they produce the events that your Lambda function responds to when it is invoked.
  • Trigger: A trigger is essentially an operation from an event source that causes the function to be invoked. For example, an Amazon S3 put request could be used as a trigger.

  • Downstream resources: These are the resources that are required during the execution of your Lambda function. For example, your function might access a specific SNS topic or a particular SQS queue. They are not the source of the trigger; instead, they are the resources used by the code within the function upon invocation.

  • Log streams: To help you identify and troubleshoot issues with your Lambda function, you can add logging statements to your code so you can check that it is operating as expected; these statements are written to a log stream, which is essentially a sequence of events that all come from the same function, recorded in CloudWatch (a brief sketch follows this list). In addition to log streams, Lambda also sends common metrics for your functions to CloudWatch for monitoring and alerting.
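As a simple sketch of the logging point above, the snippet below uses Python’s standard logging module inside a handler. Anything logged this way is written to a CloudWatch Logs log stream for that function, grouped under /aws/lambda/<function-name>.

    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def lambda_handler(event, context):
        # These statements end up in a CloudWatch Logs log stream for this function,
        # making it easier to confirm the code is behaving as expected.
        logger.info("Received event: %s", event)
        record_count = len(event.get("Records", []))
        logger.info("Processing %d record(s)", record_count)
        return {"processed": record_count}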

Creating Lambda Functions

At a high level, the configuration steps for creating a Lambda function via the AWS Management Console consist of the following:

  • Select a blueprint. AWS Lambda provides a large number of blueprint templates, which are preconfigured Lambda functions. To save time writing your own code, you can select one of these blueprints and customize it as necessary. One example is the S3 get-object blueprint, an Amazon S3 trigger that retrieves metadata for the object that is being updated.
  • Configure your triggers. As just explained, a trigger is an operation from an event source that causes the function to be invoked, such as the S3 put request suggested earlier.
  • Finish configuring your function. This step requires you to either upload your code or edit it in-line, and to define the required resources, the maximum execution timeout, the IAM role, and the handler name.
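The same configuration can also be performed programmatically. The sketch below uses boto3’s create_function call to upload a zipped deployment package and set the handler name, timeout, memory, and IAM role; the function name, file names, runtime, and role ARN are placeholder values.

    import boto3

    lambda_client = boto3.client("lambda")

    # A deployment package prepared beforehand: handler.py zipped into function.zip
    with open("function.zip", "rb") as f:
        zipped_code = f.read()

    lambda_client.create_function(
        FunctionName="s3-object-logger",                              # placeholder name
        Runtime="python3.9",                                          # illustrative runtime
        Role="arn:aws:iam::123456789012:role/lambda-execution-role",  # placeholder IAM role ARN
        Handler="handler.lambda_handler",                             # file.function to invoke
        Code={"ZipFile": zipped_code},
        Timeout=30,                                                   # maximum execution timeout in seconds
        MemorySize=128,                                               # memory allocated to the function
    )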

AWS Lambda summary

A key benefit of using AWS Lambda is that it is a highly scalable serverless service, coupled with excellent cost optimization compared to EC2, as you are only charged for compute power while the code is running and for the number of times your functions are called.

AWS Batch

The AWS Batch service is used to manage and run batch computing workloads within AWS. Batch computing is primarily used in specialist use cases that require a vast amount of compute power across a cluster of compute resources to complete batch processing, executing a series of jobs or tasks.

Outside of a cloud environment, it can be very difficult to maintain and manage a batch computing system. It requires specific software and the ability to provision the required resources, which can be very costly. However, with AWS Batch, many of these constraints, administration activities, and maintenance tasks are removed.

You can seamlessly create a cluster of compute resources which is highly scalable, taking advantage of the elasticity of AWS and coping with any level of batch processing while optimizing the distribution of the workloads. All provisioning, monitoring, maintenance, and management of the clusters themselves is taken care of by AWS, meaning there is no software for you to install or maintain.

AWS Batch Components

There are effectively five components that make up the AWS Batch service, and understanding these will help you start using the service:

  • Jobs: A job is classed as a unit of work that is to be run by AWS Batch. For example, this can be a Linux executable file, an application within an ECS cluster, or a shell script. The jobs themselves run on EC2 instances as containerized applications. Each job can at any one time be in a number of different states, for example submitted, pending, running, or failed, among others.

  • Job definitions: These define specific parameters for the jobs themselves. They dictate how the job will run and with what configuration. Some examples are how many vCPUs to use for the container, which data volumes and mount points should be used, and which IAM role should be used to allow AWS Batch to communicate with other AWS services. (A short sketch of registering a job definition and submitting a job follows this list.)

  • Job queues: Jobs that are scheduled are placed into a job queue until they run. It’s also possible to have multiple queues with different priorities if needed. One queue could be used for on-demand EC2 instances, and another for spot instances. Both on-demand and spot instances are supported by AWS Batch, allowing you to optimize cost, and AWS Batch can even bid on your behalf for those spot instances.

  • Job scheduling: The job scheduler takes control of when a job should be run and from which compute environment. Typically it operates on a first-in, first-out basis, looking at the different job queues that you have configured and ensuring that jobs from higher-priority queues are run first, assuming all dependencies of the job have been met.

  • Compute environments: These are the environments containing the compute resources that carry out the job. An environment can be defined as managed or unmanaged. A managed environment means that the service itself will handle the provisioning, scaling, and termination of your compute instances, based on configuration parameters that you enter, such as the instance type and purchase method (on-demand or spot). This environment is then created as an Amazon ECS cluster. Unmanaged environments are provisioned, managed, and maintained by you, which gives greater customization; however, this requires greater administration and maintenance, and also requires you to create the necessary Amazon ECS cluster that a managed environment would have created on your behalf.
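To tie these components together, here is a minimal boto3 sketch that registers a container job definition and submits a job to an existing job queue. The job definition name, container image, queue name, and command are all placeholder values, and a compute environment and job queue are assumed to have been created already.

    import boto3

    batch = boto3.client("batch")

    # Register a job definition describing how each containerized job should run
    batch.register_job_definition(
        jobDefinitionName="media-transcode",    # placeholder name
        type="container",
        containerProperties={
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/transcoder:latest",  # placeholder image
            "vcpus": 2,
            "memory": 2048,
            "command": ["python", "transcode.py"],
        },
    )

    # Submit a job to an existing job queue; the scheduler places it in a compute environment
    batch.submit_job(
        jobName="transcode-video-0001",
        jobQueue="on-demand-queue",             # placeholder queue name
        jobDefinition="media-transcode",
    )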

AWS Batch summary

If you have a requirement to run multiple jobs in parallel using Batch computing, for example, to analyze financial risk models, perform media transcoding or engineering simulations, then AWS Batch would be a perfect solution.