AWS Application Load Balancer – ALB

  • Application Load Balancer operates at layer 7 (application layer) and allows defining routing rules based on content across multiple services or containers running on one or more EC2 instances.
  • automatically scales load balancer capacity as traffic to the application changes over time.
  • can scale automatically to handle the vast majority of workloads.
  • supports health checks, used to monitor the health of registered targets so that the load balancer can send requests only to the healthy targets.

Application Load Balancer Components

  • A load balancer
    • serves as the single point of contact for clients.
    • distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple AZs, which increases the availability of the application.
    • one or more listeners can be added to the load balancer.
  • A listener
    • checks for connection requests from clients, using the configured protocol and port
    • the rules defined for the listener determine how the load balancer routes requests to its registered targets.
    • each rule consists of a priority, one or more actions, and one or more conditions.
    • supported actions are
      • forward,
      • redirect,
      • fixed-response
    • supported rule conditions are
      • host-header
      • http-request-method
      • path-pattern
      • source-ip
      • http-header
      • query-string
    • when the conditions for a rule are met, its actions are performed.
    • a default rule for each listener must be defined, and optionally additional rules can be defined
  • Target group
    • routes requests to one or more registered targets, such as EC2 instances, using the specified protocol and port number
    • a target can be registered with multiple target groups.
    • health checks can be configured on a per target group basis.
    • health checks are performed on all targets registered to a target group that is specified in a listener rule for the load balancer.
    • target group supports
        • EC2 instances (can be managed as a part of the ASG)
        • ECS tasks
        • Lambda functions
        • Private IP Addresses on AWS or On-premises over VPN or DX.
    • supports weighted target group routing
      • enables routing of the traffic forwarded by a rule to multiple target groups.
      • enables use cases like blue-green, canary, and hybrid deployments without the need for multiple load balancers (see the sketch after this list).
      • also enables zero-downtime migration between on-premises and cloud or between different compute types like EC2 and Lambda.
  • When a load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply and then selects a target from the target group for the rule action.
  • Listener rules can be configured to route requests to different target groups based on the content of the application traffic.
  • Routing is performed independently for each target group, even when a target is registered with multiple target groups.
  • Routing algorithm used can be configured at the target group level.
  • Default routing algorithm is round-robin; alternatively, the least outstanding requests routing algorithm can also be specified
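
A minimal boto3 sketch of a weighted forward action for the blue-green/canary use case above; the ARNs and weights are placeholders, not values from this post, and an existing listener and two target groups are assumed:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs - replace with your own listener and target groups.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/demo-alb/xxxx/yyyy"
BLUE_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/blue/aaaa"
GREEN_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green/bbbb"

# Forward 90% of the traffic handled by the default rule to blue and 10% to green.
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[
        {
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": BLUE_TG_ARN, "Weight": 90},
                    {"TargetGroupArn": GREEN_TG_ARN, "Weight": 10},
                ]
            },
        }
    ],
)
```

Gradually raising the green weight (eventually to 100) completes the cut-over without any client or DNS change.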

Classic Load Balancer vs Application Load Balancer vs Network Load Balancer

Refer Blog Post @ Classic Load Balancer vs Application Load Balancer vs Network Load Balancer

Application Load Balancer Benefits

  • Support for Path-based routing, where listener rules can be configured to forward requests based on the URL in the request. This enables structuring applications as smaller services (microservices) and routing requests to the correct service based on the content of the URL.
  • Support for routing requests to multiple services on a single EC2 instance by registering the instance using multiple ports using Dynamic Port mapping.
  • Support for containerized applications. EC2 Container Service (ECS) can select an unused port when scheduling a task and register the task with a target group using this port, enabling efficient use of the clusters.
  • Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level.
  • Attaching a target group to an Auto Scaling group enables scaling each service dynamically based on demand.

Application Load Balancer Features

  • supports load balancing of applications using HTTP and HTTPS (Secure HTTP) protocols
  • supports HTTP/2, which is enabled natively. Clients that support HTTP/2 can connect over TLS
  • supports WebSockets and Secure WebSockets natively
  • supports Request tracing, by default.
    • request tracing can be used to track HTTP requests from clients to targets or other services.
    • Load balancer upon receiving a request from a client, adds or updates the X-Amzn-Trace-Id header before sending the request to the target
    • Any services or applications between the load balancer and the target can also add or update this header.
  • supports Sticky Sessions (Session Affinity) using load balancer generated cookies, to route requests from the same client to the same target
  • supports SSL termination, to decrypt the request on ALB before sending it to the underlying targets.
    • an SSL certificate can be installed on the load balancer.
    • the load balancer uses this certificate to terminate the connection and then decrypt requests from clients before sending them to targets.
  • supports layer 7 specific features like the X-Forwarded-For, X-Forwarded-Port, and X-Forwarded-Proto headers to help determine the actual client IP, port, and protocol
  • automatically scales its request handling capacity in response to incoming application traffic.
  • supports hybrid load balancing,
    • If an application runs on targets distributed between a VPC and an on-premises location, they can be added to the same target group using their IP addresses
  • provides High Availability by allowing more than one AZ to be specified and distributing incoming traffic across multiple AZs.
  • integrates with ACM to provision and bind an SSL/TLS certificate to the load balancer thereby making the entire SSL offload process very easy
  • supports multiple certificates for the same domain on a secure listener
  • supports IPv6 addressing, for an Internet-facing load balancer
  • supports Cross-zone load balancing, by default
  • supports Security Groups to control the traffic allowed to and from the load balancer.
  • provides Access Logs, to record all requests sent to the load balancer, and store the logs in S3 for later analysis in compressed format
  • provides Delete Protection, to prevent the ALB from accidental deletion
  • supports Connection Idle Timeout – ALB maintains two connections for each request: one with the client (front end) and one with the target instance (back end). If no data has been sent or received by the time the idle timeout period elapses, ALB closes the front-end connection (the sketch after this list shows configuring this and other attributes)
  • integrates with CloudWatch to provide metrics, such as request counts, error counts, error types, and request latency
  • integrates with AWS WAF, a web application firewall that helps protect web applications from attacks by allowing rules configuration based on IP addresses, HTTP headers, and custom URI strings
  • integrates with CloudTrail to receive a history of ALB API calls made on the AWS account
  • back-end server authentication (MTLS) is NOT supported
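
A short boto3 sketch showing how several of the features above (sticky sessions, idle timeout, delete protection, access logs) map to target group and load balancer attributes; the ARNs and the S3 bucket name are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/aaaa"             # placeholder
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/demo-alb/xxxx"  # placeholder

# Sticky sessions using load balancer generated (duration-based) cookies.
elbv2.modify_target_group_attributes(
    TargetGroupArn=TG_ARN,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)

# Idle timeout, delete protection, and access logs delivered to S3.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=ALB_ARN,
    Attributes=[
        {"Key": "idle_timeout.timeout_seconds", "Value": "120"},
        {"Key": "deletion_protection.enabled", "Value": "true"},
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-access-logs"},  # assumed bucket name
    ],
)
```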

Application Load Balancer Listeners

  • A listener is a process that checks for connection requests, using the configured protocol and port
  • Listener supports HTTP & HTTPS protocol with Ports from 1-65535
  • ALB supports SSL Termination for HTTPS listeners, which helps to offload the work of encryption and decryption so that the targets can focus on their main work.
  • HTTPS listener must have at least one SSL server certificate (the default certificate) on the listener; additional certificates can be added to support multiple domains using SNI.
  • WebSockets with both HTTP and HTTPS listeners (Secure WebSockets)
  • Supports HTTP/2 with HTTPS listeners
    • 128 requests can be sent in parallel using one HTTP/2 connection.
    • ALB converts these to individual HTTP/1.1 requests and distributes them across the healthy targets in the target group using the round-robin routing algorithm.
    • HTTP/2 uses front-end connections more efficiently resulting in fewer connections between clients and the load balancer.
    • Server-push feature of HTTP/2 is not supported
  • Each listener has a default rule, and can optionally define additional rules.
  • Each rule consists of a priority, action, optional host condition, and optional path condition.
    • Priority – Rules are evaluated in priority order, from the lowest value to the highest value. The default rule has the lowest priority
    • Action – Each rule action has a type (forward, redirect, or fixed-response); a forward action routes requests to the specified target group, which can be changed for a rule at any time.
    • Condition – Rule conditions include host and path (along with the http-header, http-request-method, query-string, and source-ip conditions listed earlier). When the conditions for a rule are met, its action is taken (see the rule-creation sketch after this list)
  • Host Condition or Host-based routing
    • Host conditions can be used to define rules that forward requests to different target groups based on the hostname in the host header
    • This enables support for multiple domains using a single ALB for e.g. orders.example.com, images.example.com, registration.example.com
    • Each host condition has one hostname. If the hostname in the host header matches the hostname in a listener rule, the request is routed using that rule.
  • Path Condition or path-based routing
    • Path conditions can be used to define rules that forward requests to different target groups based on the URL in the request
    • Each path condition has one path pattern for e.g. example.com/orders, example.com/images, example.com/registration
    • If the URL in a request matches the path pattern in a listener rule exactly, the request is routed using that rule.
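
A boto3 sketch (listener and target group ARNs are placeholders) of one host-based and one path-based rule matching the examples above:

```python
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/demo-alb/xxxx/yyyy"  # placeholder
ORDERS_TG = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/orders/aaaa"             # placeholder
IMAGES_TG = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/images/bbbb"             # placeholder

# Host-based routing: orders.example.com -> orders target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "host-header", "Values": ["orders.example.com"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": ORDERS_TG}],
)

# Path-based routing: example.com/images/* -> images target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/images/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": IMAGES_TG}],
)
```

Rules are evaluated in priority order (lowest value first), with the listener's default rule applied when nothing matches.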

Advantages over Classic Load Balancer

  • Support for path-based routing, where rules can be configured for the listener to forward requests based on the content of the URL
  • Support for host-based routing, where rules can be configured for the listener to forward requests based on the host field in the HTTP header.
  • Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP address
  • Support for routing requests to multiple applications on a single EC2 instance. Each instance or IP address can be registered with the same target group using multiple ports
  • Support for registering targets by IP address, including targets outside the VPC for the load balancer.
  • Support for redirecting requests from one URL to another.
  • Support for returning a custom HTTP response.
  • Support for registering Lambda functions as targets.
  • Support for the load balancer to authenticate users of the applications through their corporate or social identities before routing requests.
  • Support containerized applications with ECS using Dynamic port mapping
  • Support monitoring the health of each service independently, as health checks and many CloudWatch metrics are defined at the target group level
  • Attaching the target group to an Auto Scaling group enables scaling of each service dynamically based on demand
  • Access logs contain additional information and are stored in compressed format
  • Improved load balancer performance.

Application Load Balancer Pricing

  • charged for each hour or partial hour that an ALB is running and for the number of Load Balancer Capacity Units (LCU) used per hour.
  • An LCU is the metric for determining ALB pricing.
  • An LCU measures the dimensions (new connections, active connections, bandwidth, and rule evaluations) on which the ALB processes the traffic; billing is based on the dimension with the highest usage for the hour.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You are designing an application which requires WebSockets support, to exchange real-time messages with end-users without the end users having to request (or poll) the server for an update. Which ELB option should you choose?
    1. Use Application Load Balancer and enable comet support
    2. Use Classic Load Balancer which supports WebSockets
    3. Use Application Load Balancer which supports WebSockets
    4. Use Classic Load Balancer and enable comet support
  2. Which of the following Internet protocols does an AWS Application Load Balancer Support? Choose 2 answers
    A. ICMP
    B. UDP
    C. HTTP
    D. SNTP
    E. Websocket
  3. Your organization has configured an application behind ALB. However, Clients are complaining that they cannot connect to an Internet-facing load balancer. What cannot be the issue?
    1. Internet-facing load balancer is attached to a private subnet
    2. ALB Security Groups does not allow the traffic
    3. Subnet NACLs do not allow the traffic
    4. ALB was not assigned an EIP
  4. To protect your ALB from accidental deletion, you should
    1. enable Multi-Factor Authentication (MFA) protected access
    2. enable Delete Protection on the ALB
    3. enabled Termination Protection on the ALB
    4. ALB does not provide any feature to prevent accidental deletion
  5. Your organization is using ALB for servicing requests. One of the API requests is facing consistent performance issues. Upon checking the flow, you find that the request flows through multiple services. How can you track the performance or timing issues in the application stack at the granularity of an individual request?
    1. Track the request using “X-Amzn-Trace-Id” HTTP header
    2. Track the request using “X-Amzn-Track-Id” HTTP header
    3. Track the request using “X-Aws-Track-Id” HTTP header
    4. Track the request using “X-Aws-Trace-Id” HTTP header

References

Refer AWS documentation – ELB_Application_Load_Balancer

AWS Route 53

How Route 53 Routes Traffic

AWS Route 53

  • Route 53 is a highly available and scalable Domain Name System (DNS) web service.
  • Route 53 provides three main functions:
    • Domain registration
      • allows domain names registration
    • Domain Name System (DNS) service
      • translates friendly domain names like www.example.com into IP addresses like 192.0.2.1
      • responds to DNS queries using a global network of authoritative DNS servers, which reduces latency
      • can route Internet traffic to CloudFront, Elastic Beanstalk, ELB, or S3. There’s no charge for DNS queries to these resources
    • Health Checking
      • can monitor the health of resources such as web and email servers.
      • sends automated requests over the Internet to the application to verify that it’s reachable, available, and functional
      • CloudWatch alarms can be configured for the health checks to send notifications when a resource becomes unavailable.
      • can be configured to route Internet traffic away from resources that are unavailable
    • Security
      • supports both DNSSEC for domain registration and DNSSEC signing

How Route 53 Routes Traffic

Supported DNS Resource Record Types

  • A (Address) Format
    • is an IPv4 address in dotted decimal notation for e.g. 192.0.2.1
  • AAAA Format
    • is an IPv6 address in colon-separated hexadecimal format
  • CNAME Format
    • is the same format as a domain name
    • DNS protocol does not allow the creation of a CNAME record for the top node of a DNS namespace, also known as the zone apex; e.g. for the domain example.com, the zone apex is example.com, and a CNAME record for example.com cannot be created, but CNAME records can be created for www.example.com, newproduct.example.com, etc.
    • If a CNAME record is created for a subdomain, any other resource record sets for that subdomain cannot be created; e.g. if a CNAME is created for www.example.com, no other resource record sets whose Name is www.example.com can be created
  • MX (Mail Exchange) Format
    • contains a decimal number that represents the priority of the MX record, and the domain name of an email server
  • NS (Name Server) Format
    • An NS record identifies the name servers for the hosted zone. The value for an NS record is the domain name of a name server.
  • PTR Format
    • A PTR record Value element is the same format as a domain name.
  • SOA (Start of Authority) Format
    • SOA record provides information about a domain and the corresponding Amazon Route 53 hosted zone
  • SPF (Sender Policy Framework) Format
    • SPF records were formerly used to verify the identity of the sender of email messages; however, their use is no longer recommended
    • Instead of an SPF record, a TXT record that contains the applicable value is recommended
  • SRV Format
    • An SRV record Value element consists of four space-separated values. The first three values are decimal numbers representing priority, weight, and port. The fourth value is a domain name for e.g. 10 5 80 hostname.example.com
  • TXT (Text) Format
    • A TXT record contains a space-separated list of double-quoted strings. A single string includes a maximum of 255 characters. In addition to the characters that are permitted unescaped in domain names, space is allowed in TXT strings

Alias Resource Record Sets

  • Route 53 supports alias resource record sets, which enables routing of queries to a CloudFront distribution, Elastic Beanstalk, ELB, an S3 bucket configured as a static website, or another Route 53 resource record set
  • Alias records are not standard for DNS RFC and are a Route 53 extension to DNS functionality
  • Alias record is similar to a CNAME record and is always of type A or AAAA
  • Alias record can be created both for the root domain or apex zone, such as example.com, and for subdomains, such as www.example.com. CNAME records can be used only for subdomains.
  • Route 53 automatically recognizes changes in the resource record sets that the alias resource record set refers to; for e.g. for a site pointing to a load balancer, if the IP of the load balancer changes, Route 53 reflects those changes automatically in DNS answers without any changes to the hosted zone that contains the resource record sets
  • Alias resource record set does not support a configurable TTL (Time to Live) if it points to a CloudFront distribution, an ELB, or an S3 bucket; the underlying CloudFront, load balancer, or S3 TTLs are used.
  • Alias records are free to query and do not incur any charges.
  • Alias record supported targets include CloudFront distributions, Elastic Beanstalk environments, ELB load balancers, S3 buckets configured as static websites, API Gateway APIs, VPC interface endpoints, Global Accelerator accelerators, and other Route 53 records in the same hosted zone (see the sketch after this list).
  • Alias records are not supported for
    • EC2 DNS
    • RDS endpoints.
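
As a rough boto3 sketch (the hosted zone ID, the ALB DNS name, and the ALB's own hosted zone ID are placeholders), an alias A record at the zone apex pointing to an ALB could look like this:

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"       # placeholder: the example.com hosted zone
ALB_DNS_NAME = "demo-alb-1234567890.us-east-1.elb.amazonaws.com"  # placeholder
ALB_ZONE_ID = "ZEXAMPLEALBZONE"             # placeholder: the ELB's canonical hosted zone ID for its region

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Apex alias to the ALB",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com",
                    "Type": "A",
                    # Alias records carry no TTL of their own and are free to query.
                    "AliasTarget": {
                        "HostedZoneId": ALB_ZONE_ID,
                        "DNSName": ALB_DNS_NAME,
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ],
    },
)
```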

Route 53 Alias vs CNAME

Refer to blog post @ Route 53 Alias vs CNAME

Route 53 CNAME vs Alias Records

Route 53 Hosted Zone

  • Hosted Zone is a container for records, which include information about how to route traffic for a domain (such as example.com) and all of its subdomains (such as www.example.com, retail.example.com, and seattle.accounting.example.com).
  • A hosted zone has the same name as the corresponding domain.
  • Routing Traffic to the Resources
    • Create a hosted zone, either a public hosted zone or a private hosted zone (see the sketch after this list):
      • Public Hosted Zone – for routing internet traffic to the resources for a specific domain and its subdomains
      • Private hosted zone – for routing traffic within a VPC
    • Create records in the hosted zone
      • Records define where to route traffic for each domain name or subdomain name.
      • Name of each record in a hosted zone must end with the name of the hosted zone.
  • For public and private Hosted Zones that have overlapping namespaces, the Route 53 Resolver routes traffic based on the most specific match.
  • IAM permissions apply only at the Hosted Zone level
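
A minimal boto3 sketch (domain and VPC values are placeholders) of creating a public hosted zone and a private hosted zone for the same domain, which is also the basis of the split-view DNS setup described later:

```python
import uuid
import boto3

route53 = boto3.client("route53")

# Public hosted zone for routing internet traffic to example.com (placeholder domain).
route53.create_hosted_zone(
    Name="example.com",
    CallerReference=str(uuid.uuid4()),
)

# Private hosted zone for the same domain, resolvable only from the associated VPC.
route53.create_hosted_zone(
    Name="example.com",
    CallerReference=str(uuid.uuid4()),
    HostedZoneConfig={"Comment": "internal view", "PrivateZone": True},
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},  # placeholder VPC
)
```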

Route 53 Health Checks

  • Route 53 health checks monitor the health and performance of the underlying resources.
  • Health check types
    • Health checks that monitor an endpoint, such as a web server.
      • Health checkers are located in locations around the world.
      • The health checker location and interval can be specified.
      • Health checker evaluates the health of the endpoint based on
        • Response time
        • Specified failure threshold – Whether the endpoint responds to a number of consecutive health checks
      • The endpoint is considered healthy if more than 18% of health checkers report that an endpoint is healthy.
      • Health check is considered healthy if
        • HTTP and HTTPS health checks
          • TCP connection can be established within four seconds.
          • Returns 2xx or 3xx within two seconds after connecting.
        • TCP health checks
          • TCP connection can be established within ten seconds.
        • HTTP and HTTPS health checks with string matching
          • TCP connection can be established within four seconds.
          • Returns 2xx or 3xx within two seconds after connecting. 
          • Route 53 searches the response body for the specified string which must appear entirely in the first 5,120 bytes of the response body or the endpoint fails the health check.
    • Calculated health checks – Health checks that monitor the status of other health checks.
      • Health check that does the monitoring is the parent health check, and the health checks that are monitored are child health checks.
      • One parent health check can monitor the health of up to 255 child health checks
    • Health checks that monitor the status of a CloudWatch alarm.
      • Route 53 monitors the data stream for the corresponding alarm instead of monitoring the alarm state.
  • Route 53 checks the health of an endpoint by sending an HTTP, HTTPS, or TCP request to the specified IP address and port.
  • For a health check to succeed, the router and firewall rules must allow inbound traffic from the IP addresses that the health checkers use.
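
A boto3 sketch of an endpoint health check (the domain, path, and thresholds are illustrative); switching the type to HTTPS_STR_MATCH and adding a SearchString gives the string-matching variant described above:

```python
import uuid
import boto3

route53 = boto3.client("route53")

route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTPS",                                 # or HTTP, TCP, HTTPS_STR_MATCH, ...
        "FullyQualifiedDomainName": "www.example.com",   # placeholder endpoint
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,    # seconds between requests from each health checker
        "FailureThreshold": 3,    # consecutive failures before the endpoint is unhealthy
    },
)
```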

Route 53 Routing Policies

Refer Blog post @ Route 53 Routing Policies

Route 53 Resolver

Refer Blog post @ Route 53 Resolver

Route 53 Split-view (Split-horizon) DNS

  • Route 53 Split-view (Split-horizon) DNS helps access an internal version of the website using the same domain name that is used publicly.
  • Both a private and public hosted zone can be maintained with the same domain name for split-view DNS.
  • Ensure that DNS resolution and DNS hostnames are enabled on the source VPC.
  • DNS queries will respond with answers based on the source of the request.
  • From within the VPC, answers will come from the private hosted zone, while public queries will return answers from the public hosted zone.

Route 53 DNSSEC

  • DNSSEC – Domain Name System Security Extensions, a protocol for securing DNS traffic, helps protect a domain from DNS spoofing and man-in-the-middle attacks.
  • DNSSEC works only for public hosted zones.
  • Route 53 supports DNSSEC signing as well as DNSSEC for domain registration.
  • With DNSSEC enabled for a domain, the DNS resolver establishes a chain of trust for responses from intermediate resolvers.
  • The chain of trust begins with the TLD registry for the domain (your domain’s parent zone) and ends with the authoritative name servers at your DNS service provider.
  • With DNSSEC enabled, Route 53 creates a key-signing key (KSK) using a customer managed key in AWS KMS. The customer managed key must meet the following requirements:
    • must be in the US East (N. Virginia) Region
    • must be an asymmetric customer managed key with an ECC_NIST_P256 key spec.

Route 53 Resolver DNS Firewall

  • Route 53 Resolver DNS Firewall provides protection for outbound DNS requests from the VPCs and can monitor and control the domains that the applications can query.
  • DNS Firewall can filter and regulate outbound DNS traffic for the VPC.
  • Reusable collections of filtering rules can be created in DNS Firewall rule groups and be associated with the VPC, with the activity monitored in DNS Firewall logs and metrics.
  • A primary use of DNS Firewall protections is to help prevent DNS exfiltration of the data. DNS exfiltration can happen when a bad actor compromises an application instance in the VPC and then uses DNS lookup to send data out of the VPC to a domain that they control.
  • DNS Firewall can be configured to
    • deny access to the domains that you know to be bad and allow all other queries to pass through OR
    • deny access to all domains except for the ones that you explicitly trust.
  • DNS Firewall is a feature of Route 53 Resolver and doesn’t require any additional Resolver setup to use.
  • Firewall Manager can be used to centrally configure and manage the DNS Firewall rule group associations for the VPCs across the accounts in an Organization. Firewall Manager automatically adds associations for VPCs that come into the scope of the Firewall Manager DNS Firewall policy

Route 53 Logging

  • DNS Query Logging
    • DNS Query logs contain information like
      • Domain or subdomain that was requested
      • Date and time of the request
      • DNS record type (such as A or AAAA)
      • Route 53 edge location that responded to the DNS query
      • DNS response code, such as NoError or ServFail
    • Route 53 will send DNS Query logs to CloudWatch Logs.
    • DNS Query Logging is only available for public hosted zones. Use Resolver Query logging for private hosted zones.
    • Query logs contain only the queries that DNS resolvers forward to Route 53 and do not include the entries cached by DNS resolvers.
  • Resolver Query Logging
    • Resolver Query logging logs the following queries
      • Queries that originate in specified VPCs, as well as the responses to those DNS queries.
      • Queries from on-premises resources that use an inbound Resolver endpoint.
      • Queries that use an outbound Resolver endpoint for recursive DNS resolution.
      • Queries that use Route 53 Resolver DNS Firewall rules to block, allow, or monitor domain lists.
    • Resolver query logging logs only unique queries, not queries that Resolver is able to respond to from the cache.
    • Resolver query logs include values such as the following:
      • AWS Region where the VPC was created
      • The ID of the VPC that the query originated from
      • The IP address of the instance that the query originated from
      • The instance ID of the resource that the query originated from
      • The date and time that the query was first made
      • The DNS name requested (such as prod.example.com)
      • The DNS record type (such as A or AAAA)
      • The DNS response code, such as NoError or ServFail
      • The DNS response data, such as the IP address that is returned in response to the DNS query
      • A response to a DNS Firewall rule action
    • Route 53 will send Resolver Query logs to CloudWatch Logs, S3, or Kinesis Data Firehose.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. What does Amazon Route53 provide?
    1. A global Content Delivery Network.
    2. None of these.
    3. A scalable Domain Name System
    4. An SSH endpoint for Amazon EC2.
  2. Does Amazon Route 53 support NS Records?
    1. Yes, it supports Name Service records.
    2. No
    3. It supports only MX records.
    4. Yes, it supports Name Server records. 
  3. Does Route 53 support MX Records?
    1. Yes
    2. It supports CNAME records, but not MX records.
    3. No
    4. Only Primary MX records. Secondary MX records are not supported.
  4. Which of the following statements are true about Amazon Route 53 resource records? Choose 2 answers
    1. An Alias record can map one DNS name to another Amazon Route 53 DNS name.
    2. A CNAME record can be created for your zone apex.
    3. An Amazon Route 53 CNAME record can point to any DNS record hosted anywhere.
    4. TTL can be set for an Alias record in Amazon Route 53.
    5. An Amazon Route 53 Alias record can point to any DNS record hosted anywhere.
  5. Which statements are true about Amazon Route 53? (Choose 2 answers)
    1. Amazon Route 53 is a region-level service
    2. You can register your domain name
    3. Amazon Route 53 can perform health checks and failovers to a backup site in the event of the primary site failure
    4. Amazon Route 53 only supports Latency-based routing
  6. A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should the customer configure the DNS zone apex record to point to the load balancer?
    1. Create an A record pointing to the IP address of the load balancer
    2. Create a CNAME record pointing to the load balancer DNS name.
    3. Create a CNAME record aliased to the load balancer DNS name.
    4. Create an A record aliased to the load balancer DNS name
  7. A user has configured ELB with three instances. The user wants to achieve High Availability as well as redundancy with ELB. Which of the below mentioned AWS services helps the user achieve this for ELB?
    1. Route 53
    2. AWS Mechanical Turk
    3. Auto Scaling
    4. AWS EMR
  8. How can the domain’s zone apex, for example “myzoneapexdomain.com”, be pointed towards an Elastic Load Balancer?
    1. By using an AAAA record
    2. By using an A record
    3. By using an Amazon Route 53 CNAME record
    4. By using an Amazon Route 53 Alias record
  9. You need to create a simple, holistic check for your system’s general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this with?
    1. Route53 Health Checks (Refer link)
    2. CloudWatch Health Checks
    3. AWS ELB Health Checks
    4. EC2 Health Checks
  10. Your organization’s corporate website must be available on www.acme.com and acme.com. How should you configure Amazon Route 53 to meet this requirement?
    1. Configure acme.com with an ALIAS record targeting the ELB. www.acme.com with an ALIAS record targeting the ELB.
    2. Configure acme.com with an A record targeting the ELB. www.acme.com with a CNAME record targeting the acme.com record.
    3. Configure acme.com with a CNAME record targeting the ELB. www.acme.com with a CNAME record targeting the acme.com record.
    4. Configure acme.com using a second ALIAS record with the ELB target. www.acme.com using a PTR record with the acme.com record target.

Amazon OpenSearch

Amazon OpenSearch

  • Amazon OpenSearch Service is a managed service that makes it easy to deploy, operate, and scale OpenSearch clusters in the AWS Cloud.
  • is the successor to Elasticsearch Service and supports OpenSearch and legacy Elasticsearch OSS.
  • is a fully open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis.
  • OpenSearch provides
    • instance types with numerous configurations of CPU, memory, and storage capacity, including cost-effective Graviton instances
    • Up to 3 PB of attached storage
    • Cost-effective UltraWarm and cold storage for read-only data
    • Integration with AWS IAM, VPC, VPC Security Groups
    • Encryption at Rest and in Transit
    • Authentication with Cognito, HTTP basic, or SAML authentication for OpenSearch Dashboards
    • Index-level, document-level, and field-level security
    • Multi-AZ setup with node allocation across two or three AZs in the same AWS Region
    • Dedicated master nodes to offload cluster management tasks
    • Automated snapshots to back up and restore OpenSearch Service domains
    • Integration with CloudWatch for monitoring, CloudTrail for auditing, S3, Kinesis, and DynamoDB for loading streaming data into OpenSearch Service.
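
A rough boto3 sketch (the domain name, instance types, counts, and engine version are assumptions, not recommendations) of creating a small Multi-AZ OpenSearch Service domain with dedicated masters and encryption enabled:

```python
import boto3

opensearch = boto3.client("opensearch")

opensearch.create_domain(
    DomainName="logs-demo",                       # placeholder name
    EngineVersion="OpenSearch_2.11",              # pick a currently supported version
    ClusterConfig={
        "InstanceType": "r6g.large.search",
        "InstanceCount": 2,
        "ZoneAwarenessEnabled": True,             # spread data nodes across AZs
        "ZoneAwarenessConfig": {"AvailabilityZoneCount": 2},
        "DedicatedMasterEnabled": True,
        "DedicatedMasterType": "m6g.large.search",
        "DedicatedMasterCount": 3,
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp3", "VolumeSize": 100},
    EncryptionAtRestOptions={"Enabled": True},
    NodeToNodeEncryptionOptions={"Enabled": True},
    DomainEndpointOptions={"EnforceHTTPS": True},
)
```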

OpenSearch Service Domain

  • An OpenSearch Service domain is synonymous with an OpenSearch cluster.
  • Domains are clusters with specified settings, instance types, instance counts, and storage resources.
  • automates common administrative tasks, such as performing backups, monitoring instances and patching software once the domain is running.
  • uses a blue/green deployment process when updating domains. Blue/green typically refers to the practice of running two production environments, one live and one idle, and switching the two as software changes are made.
  • All domains configured for multiple AZs have zone awareness enabled to ensure shards are distributed across AZs.

OpenSearch Security

  • OpenSearch Service domains support encryption at rest through AWS Key Management Service (KMS), node-to-node encryption over TLS, and the ability to require clients to communicate with HTTPS.
  • supports only symmetric encryption KMS keys, not asymmetric ones.
  • encrypts all indices, log files, swap files, and automated snapshots.
  • does not encrypt manual snapshots, slow logs, and error logs.
  • can be configured to be accessible with an endpoint within the VPC or a public endpoint accessible to the internet.
  • Network access for VPC endpoints is controlled by security groups and for public endpoints, access can be granted or restricted by IP address.
  • supports integration with Cognito, to allow end-users to log in to OpenSearch Dashboards through enterprise identity providers such as Microsoft Active Directory using SAML 2.0, Cognito User Pools, and more.

OpenSearch Storage Tiers

  • OpenSearch Service supports three integrated storage tiers, Hot, UltraWarm and Cold.
  • Hot tier is powered by data nodes which are used for indexing, updating, and providing the fastest access to data.
  • UltraWarm nodes complement the hot tier by providing a fully managed, low-cost, read-only, warm storage tier for older and less-frequently accessed data.
  • UltraWarm uses S3 for storage and removes the need to configure a replica for the warm data.
  • Cold storage is a fully-managed lowest cost storage tier that makes it easy to securely store and analyze the historical logs on-demand.
  • Cold storage helps fully detach storage from compute when they are not actively performing analysis of their data and keep the data readily available at low cost.

OpenSearch Cross-Cluster Replication

  • Cross-cluster replication helps automate copying and synchronizing indices from one cluster to another at low latency in the same or different AWS Regions.
  • Domains participating in cross-cluster replications need to meet the following criteria:
    • Participating domains should be on Elasticsearch version 7.10
    • Participating domains need to have encryption in transit enabled
    • Participating domains need to have Fine-Grained Access Control (FGAC) enabled
    • Participating domains versions should adhere to the same rules as rolling version upgrade
  • Current implementation of cross-cluster replication does not support Ultrawarm or Cold Storage.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

References

Amazon_OpenSearch

AWS ElastiCache Redis vs Memcached

AWS ElastiCache Redis vs Memcached

ElastiCache supports the Memcached and Redis cache engines. Each engine provides some advantages and ElastiCache Redis vs Memcached provides the key differences between the two engines.

Memcached

  • Need the simplest model possible.
  • Need to run large nodes with multiple cores or threads.
  • Need the ability to scale out and in, adding and removing nodes as demand on the system increases and decreases.
  • Need to cache objects.

Redis

  • Need complex data types, such as strings, hashes, lists, sets, sorted sets, and bitmaps.
  • Need to sort or rank in-memory datasets.
  • Need persistence of the key store.
  • Need to replicate the data from the primary to one or more read replicas for read-intensive applications.
  • Need automatic failover if the primary node fails.
  • Need publish and subscribe (pub/sub) capabilities – to inform clients about events on the server.
  • Need backup and restore capabilities.
  • Need to support multiple databases.
  • Need Encryption at Rest and in Transit.
  • Need the ability to dynamically add or remove shards from the Redis (cluster mode enabled) cluster.
  • Need to authenticate users with role-based access control.
  • Need geospatial indexing (clustered mode or non-clustered mode)
  • Need to meet compliance requirements – HIPAA, FedRAMP, PCI-DSS.
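
A minimal boto3 sketch (IDs, node type, and engine version are placeholders) of a Redis replication group with one read replica, automatic failover, and encryption, covering several of the Redis-only capabilities listed above:

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="demo-redis",
    ReplicationGroupDescription="Redis with one replica and automatic failover",
    Engine="redis",
    EngineVersion="7.0",              # example version
    CacheNodeType="cache.t3.micro",
    NumNodeGroups=1,                  # single shard; cluster mode can add more shards
    ReplicasPerNodeGroup=1,           # one read replica
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
    AtRestEncryptionEnabled=True,
    TransitEncryptionEnabled=True,
)
```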

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company wants to implement best practices on AWS. Which implementation would help eliminate “Single Point of failure”?
    1. ElastiCache Memcached deployment
    2. RDS Single-AZ deployment
    3. EC2 instances on a single AZ
    4. ElastiCache Redis deployment
  2. You are launching your first ElastiCache cache cluster, and start using Memcached. Which of the following is NOT a key feature of Memcached?
    1. You need the ability to scale your Cache horizontally as you grow.
    2. You use more advanced data types, such as lists, hashes, and sets.
    3. You need a caching model that is as simple as possible.
    4. Object caching is your primary goal to offload your database.

References

Amazon_ElastiCache_Selecting_Engine

AWS Transfer Family

AWS Transfer Family

  • AWS Transfer Family is a secure transfer service that helps transfer files into and out of AWS storage services.
  • AWS Transfer Family supports transferring data from or to S3 and EFS.
  • AWS Transfer Family supports transferring data over the following protocols:
    • Secure Shell (SSH) File Transfer Protocol (SFTP)
    • File Transfer Protocol Secure (FTPS)
    • File Transfer Protocol (FTP)
  • Transfer Family provides the following benefits
    • A fully managed service that scales in real-time
    • Compatible with existing applications with no need to modify them or run any file transfer protocol infrastructure.
    • A fully managed, serverless File Transfer Workflow service that makes it easy to set up, run, automate, and monitor the processing of files.
  • Transfer Family supports up to 3 AZs and is backed by an auto-scaling, redundant fleet for your connection and transfer requests.
  • Transfer Family supports three identity provider options:
    • Service Managed, where you store user identities within the service. It is supported for server endpoints that are enabled for SFTP only.
    • Microsoft Active Directory, and
    • Custom Identity Providers, which helps to integrate an identity provider of your choice.
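
A boto3 sketch (the role ARN, bucket, user name, and SSH key are placeholders) of a service-managed SFTP server backed by S3, plus a single user:

```python
import boto3

transfer = boto3.client("transfer")

# SFTP endpoint with service-managed identities, storing files in S3.
server = transfer.create_server(
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="PUBLIC",
    Domain="S3",
)

# A user whose home directory maps to an S3 prefix.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner1",
    Role="arn:aws:iam::111122223333:role/transfer-s3-access",   # placeholder role with S3 access
    HomeDirectory="/demo-transfer-bucket/partner1",             # placeholder bucket/prefix
    SshPublicKeyBody="ssh-ed25519 AAAA...placeholder",
)
```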

File Transfer Workflows – MFTW

  • File Transfer Workflows – MFTW is a fully managed, serverless File Transfer Workflow service that makes it easy to set up, run, automate, and monitor the processing of uploaded files.
  • MFTW can be used to automate various processing steps such as copying, tagging, scanning, filtering, compressing/decompressing, and encrypting/decrypting the data that is transferred using Transfer Family.
  • MFTW provides end-to-end visibility for tracking and auditability.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A solutions architect must provide a fully managed replacement for an on-premises solution that allows employees and partners to exchange files. The solution must be easily accessible to employees connecting from on-premises systems, remote employees, and external partners. Which solution meets these requirements?
    1. Use AWS Transfer for SFTP to transfer files into and out of Amazon S3.
    2. Use AWS Snowball Edge for local storage and large-scale data transfers.
    3. Use Amazon FSx to store and transfer files to make them available remotely.
    4. Use AWS Storage Gateway to create a volume gateway to store and transfer files to Amazon S3.

References

AWS_Transfer_Family

AWS DataSync

AWS DataSync

  • AWS DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between storage systems and services.
  • DataSync provides end-to-end security, including encryption and integrity validation.
  • DataSync automates both the management of data-transfer processes and the infrastructure required for high-performance and secure data transfer.
  • DataSync uses a purpose-built network protocol and a parallel, multi-threaded architecture to accelerate the transfers.
  • A DataSync agent is a VM or EC2 instance that AWS DataSync uses to read from or write to a storage system. Agents are commonly used when copying data from on-premises storage to AWS.
  • DataSync transfer is described by a Task and a Task Execution is an individual run of a DataSync task.
  • A task can be configured for locations (source and destination), schedule and how it treats metadata, deleted files, and permissions.
  • Task scheduling automatically runs tasks on the configured schedule with hourly, daily, or weekly options.
  • Each time a task is started it performs an incremental copy, transferring only the changes from the source to the destination.
  • If a task is interrupted, for instance, if the network connection goes down or the agent is restarted, the next run of the task will transfer missing files, and the data will be complete and consistent at the end of this run.
  • AWS DataSync can be used with the Direct Connect link to access public service endpoints or private VPC endpoints.
  • The amount of network bandwidth that AWS DataSync will use can be controlled by configuring the built-in bandwidth throttle.
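
A rough boto3 sketch (the agent ARN, NFS server, bucket, and role are placeholders) of an hourly NFS-to-S3 DataSync task with a bandwidth throttle:

```python
import boto3

datasync = boto3.client("datasync")

AGENT_ARN = "arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"  # placeholder

# On-premises NFS share, read through the DataSync agent.
src = datasync.create_location_nfs(
    ServerHostname="fileserver.corp.example.com",       # placeholder
    Subdirectory="/export/shared",
    OnPremConfig={"AgentArns": [AGENT_ARN]},
)

# Destination S3 prefix.
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::demo-datasync-bucket",     # placeholder bucket
    Subdirectory="/incoming",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-access"},
)

# Hourly incremental task, throttled to roughly 10 MB/s.
datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="hourly-shared-to-s3",
    Schedule={"ScheduleExpression": "rate(1 hour)"},
    Options={"TransferMode": "CHANGED", "BytesPerSecond": 10 * 1024 * 1024},
)
```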

DataSync Supported Locations

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company is migrating its applications to AWS. Currently, applications that run on-premises generate hundreds of terabytes of data that is stored on a shared file system. The company is running an analytics application in the cloud that runs hourly to generate insights from this data. The company needs a solution to handle the ongoing data transfer between the on-premises shared file system and Amazon S3. The solution also must be able to handle occasional interruptions in internet connectivity. Which solutions should the company use for the data transfer to meet these requirements?
    1. AWS DataSync
    2. AWS Migration Hub
    3. AWS Snowball Edge Storage Optimized
    4. AWS Transfer for SFTP

References

AWS_DataSync

AWS CloudFront vs Global Accelerator

AWS CloudFront vs Global Accelerator

  • Global Accelerator and CloudFront both use the AWS global network and its edge locations around the world.
  • Both services integrate with AWS Shield for DDoS protection.
  • Performance
    • CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery).
    • Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions.
  • Use Cases
    • CloudFront is a good fit for HTTP use cases
    • Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or VoIP, as well as for HTTP use cases that require static IP addresses or deterministic, fast regional failover.
  • Caching
    • CloudFront supports Edge caching
    • Global Accelerator does not support Edge Caching.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company wants to improve the availability and performance of its stateless UDP-based workload. The workload is deployed on Amazon EC2 instances in multiple AWS Regions. What should a solutions architect recommend to accomplish this?
    1. Place the EC2 instances behind Network Load Balancers (NLBs) in each Region. Create an accelerator using AWS Global Accelerator. Use the NLBs as endpoints for the accelerator.
    2. Place the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an accelerator using AWS Global Accelerator. Use the ALBs as endpoints for the accelerator.
    3. Place the EC2 instances behind Network Load Balancers (NLBs) in each Region. Create a CloudFront distribution with an origin that uses  Route 53 latency-based routing to route requests to the NLBs.
    4. Place the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create a CloudFront distribution with an origin that uses Route 53 latency-based routing to route requests to the ALBs.

References

AWS_Global_Accelerator_FAQs

AWS Gateway Load Balancer – GWLB

AWS Gateway Load Balancer – GWLB

  • Gateway Load Balancer helps deploy, scale, and manage virtual appliances, such as firewalls, intrusion detection and prevention systems (IDS/IPS), and deep packet inspection systems.
  • GWLB and its registered virtual appliance instances exchange application traffic using the GENEVE (Generic Network Virtualization Encapsulation) protocol on port 6081.
  • operates at Layer 3 of the OSI model, the network layer.
  • transparently passes all Layer 3 traffic through third-party virtual appliances, and is invisible to the source and destination of the traffic.
  • combines a transparent network gateway (that is, a single entry and exit point for all traffic) and distributes traffic while scaling the virtual appliances with the demand.
  • listens for all IP packets across all ports and forwards traffic to the target group that’s specified in the listener rule.
  • runs within one AZ and is recommended to be deployed in multiple AZs for greater availability. If all appliances fail in one AZ, scripts can be used to either add new appliances or direct traffic to a GWLB in a different AZ.
  • cannot add or remove availability zones after the GWLB is created.
  • is architected to handle millions of requests/second, volatile traffic patterns, and introduces extremely low latency.
  • does not perform TLS termination and does not maintain any application state. These functions are performed by the third-party virtual appliances it directs traffic to and receives traffic from.
  • maintains stickiness of flows to a specific target appliance using 5-tuple (TCP/UDP flows) or 3-tuple (for non-TCP/UDP flows).
  • supports a maximum transmission unit (MTU) size of 8500 bytes.
  • supports cross-zone load balancing, which is disabled by default. You pay charges for inter-AZ data transfer if enabled.

Gateway Load Balancer Endpoint – GWLBE

  • GWLB uses Gateway Load Balancer endpoints – GWLBE to exchange traffic across VPC boundaries securely.
  • A Gateway Load Balancer endpoint is a VPC endpoint that provides private connectivity between virtual appliances in the service provider VPC and application servers in the service consumer VPC.
  • One GWLB can be connected to many GWLBEs.
  • GWLB is deployed in the same VPC as the virtual appliances.
  • Virtual appliances are registered with a target group for the GWLB.
  • Traffic to and from a GWLBE is configured using route tables.
  • Traffic flows from the service consumer VPC over the GWLBE to the GWLB in the service provider VPC, and then returns to the service consumer VPC
  • GWLBE and the application servers must be created in different subnets. This enables you to configure the GWLBE as the next hop in the route table for the application subnet (see the route sketch after this list).
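
A small boto3 sketch (route table IDs, endpoint ID, and CIDRs are placeholders) of the routing glue in the service consumer VPC: the application subnet sends its default route to the GWLB endpoint, and the internet gateway's edge route table sends traffic destined for the application subnet through the same endpoint:

```python
import boto3

ec2 = boto3.client("ec2")

GWLBE_ID = "vpce-0123456789abcdef0"      # placeholder Gateway Load Balancer endpoint
APP_RTB_ID = "rtb-0aaaaaaaaaaaaaaaa"     # placeholder: application subnet route table
EDGE_RTB_ID = "rtb-0bbbbbbbbbbbbbbbb"    # placeholder: route table associated with the internet gateway

# Outbound: application subnet -> GWLBE, so traffic is inspected before reaching the internet.
ec2.create_route(
    RouteTableId=APP_RTB_ID,
    DestinationCidrBlock="0.0.0.0/0",
    VpcEndpointId=GWLBE_ID,
)

# Inbound (ingress routing): traffic arriving from the internet for the application subnet -> GWLBE first.
ec2.create_route(
    RouteTableId=EDGE_RTB_ID,
    DestinationCidrBlock="10.0.1.0/24",  # placeholder application subnet CIDR
    VpcEndpointId=GWLBE_ID,
)
```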

Gateway Load Balancer Flow

AWS Gateway Load Balancer GWLB

Traffic from the internet to the application (blue arrows)

  • Traffic enters the service consumer VPC through the internet gateway.
  • Traffic is sent to the GWLBE, as a result of VPC ingress routing.
  • Traffic is sent to the GWLB for inspection through the security appliance.
  • Traffic is sent back to the GWLBE after inspection.
  • Traffic is sent to the application servers (destination subnet).

Traffic from the application to the internet (orange arrows):

  • Traffic is sent to the Gateway Load Balancer endpoint due to the default route configured on the application server subnet.
  • Traffic is sent to the GWLB for inspection through the security appliance.
  • Traffic is sent back to the GWLBE after inspection.
  • Traffic is sent to the internet gateway based on the route table configuration.
  • Traffic is routed back to the internet.

Gateway Load Balancer High Availability

AWS Gateway Load Balancer HA

AWS Gateway Load Balancer vs Network Firewall

AWS Network Firewall vs Gateway Load Balancer

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

References

AWS_Gateway_Load_Balancer

AWS Shield

AWS Shield

  • AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.
  • AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency
  • AWS Shield detects the following classes of attacks
    • Network Volumetric Attacks (Layer 3)
      • This is a sub category of infrastructure layer attack vectors.
      • These vectors attempt to saturate the capacity of the targeted network or resource, to deny service to legitimate users.
    • Network Protocol Attacks (Layer 4)
      • This is a sub category of infrastructure layer attack vectors.
      • These vectors abuse a protocol to deny service to the targeted resource.
      • A common example of a network protocol attack is a TCP SYN flood, which can exhaust connection state on resources like servers, load balancers, or firewalls.
      • A network protocol attack can also be volumetric for e.g., a larger TCP SYN flood may intend to saturate the capacity of a network while also exhausting the state of the targeted resource or intermediate resources.
    • Application Layer Attacks (Layer 7)
      • This category of attack vector attempts to deny service to legitimate users by flooding an application with queries that are valid for the target, such as web request floods.

AWS Shield Tiers

AWS Shield Standard

  • provides automatic protections to all customers at no additional charge
  • defends against the most common, frequently occurring network and transport layer DDoS attacks that target website or applications.
  • when used with CloudFront and Route 53, provides comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.
  • uses techniques such as deterministic packet filtering and priority-based traffic shaping to automatically mitigate basic network layer attacks.

AWS Shield Advanced

  • is a managed service that helps protect the application against external threats, like DDoS attacks, volumetric bots, and vulnerability exploitation attempts.
  • provides additional detection and mitigation against large and sophisticated DDoS attacks, and near real-time visibility into attacks.
  • provides integration with AWS WAF, a web application firewall, at no additional charge. It can create WAF rules in the WebACLs to automatically mitigate an attack, or activate them in count-only mode.
  • also gives 24×7 access to the AWS Shield Response Team (SRT) and protection against DDoS related spikes in the EC2, ELB, CloudFront, AWS Global Accelerator and Route 53 charges.
  • provides DDoS cost protection to safeguard against scaling charges resulting from DDoS-related usage spikes on protected resources.
  • in addition to the network and transport layer attacks, it also detects application layer (Layer 7) attacks such as HTTP floods or DNS query floods by baselining traffic on the application and identifying anomalies.
  • is available globally on all CloudFront, Global Accelerator, and Route 53 edge locations.
  • includes centralized protection management using Firewall Manager, which can automatically
    • configure policies covering multiple accounts and resources
    • audit accounts to find new or unprotected resources, and ensure that Shield Advanced and AWS WAF protections are universally applied.
  • provides complete visibility into DDoS attacks with near real-time notification through CloudWatch and detailed diagnostics on the AWS WAF and AWS Shield console or APIs.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which AWS service has inbuilt DDoS protection?
    1. AWS Shield
    2. AWS CloudWatch
    3. AWS EC2
    4. AWS Inspector
  2. A media company has monetized their APIs to external third parties. During the last month, the platform has come under DDoS attacks multiple times leading to scaling of underlying instances and cost incurred. Which AWS service would help provide cost protection against such spikes, if such situations do occur in the future?
    1. AWS Systems Manager
    2. AWS WAF
    3. AWS Shield Advanced
    4. AWS Inspector
  3. A company is hosting an important revenue generating application. On the last few occasions, the application has come under large DDoS attacks. As a result of this, a lot of users were complaining about the slowness of the application. You need to now avoid these situations in the future and now require 24×7 support from AWS if such situations do occur in the future. Which of the following service can help in this regard?
    1. AWS Shield Advanced
    2. AWS Inspector
    3. AWS WAF
    4. AWS Systems Manager

References

AWS_Shield

AWS Certificate Manager – ACM

AWS Certificate Manager – ACM

  • AWS Certificate Manager – ACM helps easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and internal connected resources.
  • AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.
  • AWS Certificate Manager can help quickly request a certificate, deploy it on ACM-integrated AWS resources, such as ELB, CloudFront distributions, and APIs on API Gateway, and handle certificate renewals.
  • ACM supports importing third-party certificates into the ACM management system.
  • ACM also supports the creation of private certificates for the internal resources and manages the certificate lifecycle centrally.
  • ACM certificates are regional resources.
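
A minimal boto3 sketch (domain names are placeholders) of requesting a DNS-validated public certificate; once the validation CNAME is created in Route 53, ACM renews the certificate automatically:

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")  # use us-east-1 if the certificate is for CloudFront

response = acm.request_certificate(
    DomainName="example.com",                        # placeholder domain
    SubjectAlternativeNames=["www.example.com"],
    ValidationMethod="DNS",                          # DNS validation enables automatic renewal
)
print(response["CertificateArn"])

# acm.describe_certificate(CertificateArn=...) returns the CNAME record
# that must be created to prove ownership of the domain.
```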

ACM Limitations

  • does not provide certificates for anything other than the SSL/TLS protocols.
  • cannot use certificates for email encryption.
  • cannot request certificates for Amazon-owned domain names such as those ending in amazonaws.com, cloudfront.net, or elasticbeanstalk.com.
  • cannot download the private key for an ACM certificate.
  • cannot install certificates directly on EC2 instances or on self-managed websites or applications
  • are regional resources; a certificate cannot be copied between regions. To use a certificate with ELB for the same FQDN or set of FQDNs in more than one AWS region, a certificate must be requested or imported for each region. For certificates provided by AWS Certificate Manager, each domain name in the certificate must be revalidated for each region
  • with CloudFront, you must request or import the certificate in the US East (N. Virginia) region.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company hosts an online shopping portal in the AWS Cloud. The portal provides HTTPS security by using a TLS certificate on an ELB. Recently, the portal suffered an outage because the TLS certificate expired. A SysOps administrator must create a solution to automatically renew certificates to avoid this issue in the future. What is the MOST operationally efficient solution that meets these requirements?
    1. Request a public certificate by using AWS Certificate Manager. Associate the certificate from ACM with the ELB. Write a scheduled AWS Lambda function to renew the certificate every 18 months.
    2. Request a public certificate by using AWS Certificate Manager. Associate the certificate from ACM with the ELB. ACM will automatically manage the renewal of the certificate.
    3. Register a certificate with a third-party certificate authority (CA). Import this certificate into AWS Certificate Manager. Associate the certificate from ACM with the ELB. ACM will automatically manage the renewal of the certificate.
    4. Register a certificate with a third-party certificate authority (CA). Configure the ELB to import the certificate directly from the CA. Set the certificate refresh cycle on the ELB to refresh when the certificate is within 3 months of the expiration date.

References

AWS_Certificate_Manager