AWS Global Accelerator

AWS Global Accelerator

  • AWS Global Accelerator optimizes the path to the application to keep packet loss, jitter, and latency consistently low.
  • Global Accelerator uses the vast, well-monitored, congestion-free, redundant AWS global network to route TCP and UDP traffic to a healthy application endpoint in the closest AWS Region to the user.
  • Global Accelerator is a global service that supports endpoints in multiple AWS Regions.
  • Global Accelerator provides two global static public IPs that act as a fixed entry point to the application hosted in one or more AWS Regions, improving availability.
  • Global Accelerator anycasts the static IP addresses from the AWS edge network.
  • Global Accelerator’s IP addresses serve as the frontend interface of your applications.
  • Using static IP addresses ensures you don’t need to make any client-facing changes or update DNS records as you modify or replace endpoints.
  • Global Accelerator allows you to bring your own IP addresses (BYOIP) and use them as a fixed entry point to the application endpoints
  • Global Accelerator supports AWS application endpoints, such as ALBs, NLBs, EC2 Instances, and Elastic IPs without making user-facing changes.
  • AWS Global Accelerator continuously monitors the health of your application endpoints by using TCP, HTTP, and HTTPS health checks.
  • Global Accelerator automatically re-routes the traffic to the nearest healthy available endpoint to mitigate endpoint failure.
  • Global Accelerator allocates two static IPv4 addresses serviced by independent network zones, which are isolated units with their own set of physical infrastructure and service IP addresses from a unique IP subnet. If one IP address from a network zone becomes unavailable, due to network disruptions or IP address blocking by certain client networks, client applications can retry using the healthy static IP address from the other isolated network zone.
  • Global Accelerator terminates TCP connections from clients at AWS edge locations and, almost concurrently, establishes a new TCP connection with your endpoints. This gives clients faster response times (lower latency) and increased throughput.
  • Global Accelerator supports Client Affinity which helps build stateful applications.
  • Global Accelerator integrates with AWS Shield Standard, which minimizes application downtime and latency from distributed denial of service (DDoS) attacks by using always-on network flow monitoring and automated in-line mitigation.
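As a rough illustration of how these pieces fit together, the sketch below creates an accelerator, a TCP listener, and an endpoint group pointing at an existing ALB using boto3. The ALB ARN, names, and Regions are placeholder assumptions, not values from any real setup.

```python
import boto3

# The Global Accelerator API is served from the us-west-2 Region, regardless of where endpoints live
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# The accelerator allocates two static anycast IPv4 addresses from independent network zones
accelerator = ga.create_accelerator(Name="my-app-accelerator", IpAddressType="IPV4", Enabled=True)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

# Listener accepts TCP traffic on port 443 at the edge
listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Endpoint group in one Region, pointing at an existing ALB (placeholder ARN)
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
        "Weight": 128,
    }],
)
```

Additional endpoint groups can be added for other Regions, and Global Accelerator then routes users to the closest healthy Region behind the same two static IPs.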

AWS Global Accelerator

Global Accelerator vs CloudFront

  • Global Accelerator and CloudFront both use the AWS global network and its edge locations around the world.
  • Both services integrate with AWS Shield for DDoS protection.
  • Performance
    • CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery).
    • Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions.
  • Use Cases
    • CloudFront is a good fit for HTTP use cases
    • Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or VoIP, as well as for HTTP use cases that require static IP addresses or deterministic, fast regional failover.
  • Caching
    • CloudFront supports Edge caching
    • Global Accelerator does not support Edge Caching.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. What features does AWS Global Accelerator provide? (Select TWO)
    1. Improved security
    2. Improved durability
    3. Improved performance
    4. Improved cost optimization
    5. Improved availability
  2. A company that develops web applications has launched hundreds of Application Load Balancers (ALBs) in multiple Regions. The company wants to create an allow list for the IPs of all the load balancers on its firewall device. A solutions architect is looking for a one-time, highly available solution to address this request, which will also help reduce the number of IPs that need to be allowed by the firewall. What should the solutions architect recommend to meet these requirements?
    1. Create an AWS Lambda function to keep track of the IPs for all the ALBs in different Regions. Keep refreshing this list.
    2. Set up a Network Load Balancer (NLB) with Elastic IPs. Register the private IPs of all the ALBs as targets to this NLB.
    3. Launch AWS Global Accelerator and create endpoints for all the Regions. Register all the ALBs in different Regions to the corresponding endpoints.
    4. Set up an Amazon EC2 instance, assign an Elastic IP to this EC2 instance, and configure the instance as a proxy to forward traffic to all the ALBs.

References

AWS_Global_Accelerator

AWS Route 53 Alias vs CNAME

AWS Route 53 Alias vs CNAME

  • Route 53 Alias records are similar to CNAME records, but there are some important differences.
  • Supported Resources
    • Alias records support selected AWS resources
      • Elastic Load Balancers
      • CloudFront distributions
      • API Gateway
      • Elastic Beanstalk
      • S3 Website
      • Global Accelerator
      • VPC Interface Endpoints
      • Route 53 record in the same hosted zone
    • CNAME record can redirect DNS queries to any DNS record
  • Zone Apex or Root domain like example.com
    • Alias record supports mapping Zone Apex records
    • CNAME record does not support Zone Apex records
  • Charges
    • Route 53 doesn’t charge for alias queries to AWS resources
    • Route 53 charges for CNAME queries.
  • Record Type
    • Alias records only support A or AAAA record types
    • CNAME record redirects DNS queries for a record name regardless of the record type specified in the DNS query, such as A or AAAA.
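To make the difference concrete, here is a minimal boto3 sketch that creates an alias A record at the zone apex pointing to an ALB, something a CNAME cannot do. The hosted zone ID, domain, and ALB values are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Alias A record at the zone apex (example.com) pointing to an ALB; a CNAME is not allowed here
route53.change_resource_record_sets(
    HostedZoneId="Z3EXAMPLEZONEID",  # placeholder hosted zone for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                # Note: alias records have no TTL of their own; Route 53 uses the target's default
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # canonical hosted zone ID of the ALB
                    "DNSName": "my-alb-123456789.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```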

Route 53 Alias vs CNAME

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which of the following statements are true about Amazon Route 53 resource records? Choose 2 answers
    1. An Alias record can map one DNS name to another Amazon Route 53 DNS name.
    2. A CNAME record can be created for your zone apex.
    3. An Amazon Route 53 CNAME record can point to any DNS record hosted anywhere.
    4. TTL can be set for an Alias record in Amazon Route 53.
    5. An Amazon Route 53 Alias record can point to any DNS record hosted anywhere.
  2. How can the domain’s zone apex, for example “myzoneapexdomain.com”, be pointed towards an Elastic Load Balancer?
    1. By using an AAAA record
    2. By using an A record
    3. By using an Amazon Route 53 CNAME record
    4. By using an Amazon Route 53 Alias record

References

AWS_Route_53_Alias_CNAME_Comparision

AWS CloudFront Edge Functions

AWS CloudFront Edge Functions

  • CloudFront edge functions let you write your own code to customize how CloudFront distributions process HTTP requests and responses.
  • The code runs close to the viewers (users) to minimize latency, and without having to manage servers or other infrastructure.
  • Custom code can manipulate the requests and responses that flow through CloudFront, perform basic authentication and authorization, generate HTTP responses at the edge, and more.
  • CloudFront Edge Functions currently supports two types
    • CloudFront Functions
    • Lambda@Edge

 

Architectural diagram.

CloudFront Functions

  • is a CloudFront native feature (code is managed entirely within CloudFront) and visible only on the CloudFront dashboard.
  • supports lightweight functions written in the JavaScript language only
  • runs in Edge Locations
  • has process-based isolation
  • supports Viewer Request, Viewer Response trigger events only
    • Viewer Request: after CloudFront receives the request from the Viewer
    • Viewer Response: before CloudFront forwards the response to the Viewer
  • supports sub-millisecond execution time
  • scales to millions of requests/second
  • as they are built to be more scalable, performant, and cost-effective, they have the following limitations
    • no network access
    • no file system access
    • cannot access the request body
  • use-cases ideal for lightweight processing of web requests like
    • Cache-key manipulations and normalization
    • URL rewrites and redirects
    • HTTP header manipulation
    • Access authorization

Lambda@Edge

  • are Lambda functions and visible on the Lambda dashboard.
  • currently supports the Node.js and Python languages
  • runs in Regional Edge Caches
  • has VM based isolation
  • supports Viewer Request, Viewer Response, Origin Request, and Origin Response trigger events.
    • Viewer Request: after CloudFront receives the request from the Viewer
    • Viewer Response: before CloudFront forwards the response to the Viewer
    • Origin Request: before CloudFront forwards the request to the Origin
    • Origin Response: after CloudFront receives the response from the Origin
  • supports longer execution time, 5 seconds for viewer triggers and 30 seconds for origin triggers
  • scales to 1000s of requests/second
  • has network and file system access
  • can access the request body
  • use-cases
    • Functions that take several milliseconds or more to complete.
    • Functions that require adjustable CPU or memory.
    • Functions that depend on third-party libraries (including the AWS SDK, for integration with other AWS services).
    • Functions that require network access to use external services for processing.
    • Functions that require file system access or access to the body of HTTP requests.
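Since Lambda@Edge supports Python, a minimal origin-response handler might look like the sketch below; it adds a security header before CloudFront caches and returns the response. The specific header is just an example choice.

```python
# A minimal Lambda@Edge handler for the "origin response" trigger:
# it runs after CloudFront receives the response from the origin and
# adds a security header before the response is cached and returned.
def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]

    # Header keys are lowercase; each value is a list of {key, value} dicts
    headers["strict-transport-security"] = [{
        "key": "Strict-Transport-Security",
        "value": "max-age=63072000; includeSubDomains; preload",
    }]
    return response
```

The function must be created in the us-east-1 Region, and a published (numbered) version, not $LATEST, is associated with the distribution's cache behavior for the chosen trigger event.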

CloudFront Functions vs Lambda@Edge

CloudFront Functions vs Lambda@Edge

CloudFront Edge Functions Restrictions

  • Each event type (viewer request, origin request, origin response, and viewer response) can have only one edge function association.
  • CloudFront Functions and Lambda@Edge in viewer events (viewer request and viewer response) cannot be combined
  • CloudFront does not invoke edge functions for viewer response events when the origin returns HTTP status code 400 or higher.
  • Edge functions for viewer response events cannot modify the HTTP status code of the response, regardless of whether the response came from the origin or the CloudFront cache.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You’ve been given the requirement to customize the content which is distributed to users via a CloudFront Distribution. The content origin is an S3 bucket. How could you achieve this?
    1. Add an event to the S3 bucket. Make the event invoke a Lambda function to customize the content before rendering
    2. Add a Step Function. Add a step with a Lambda function just before the content gets delivered to the users.
    3. Use Lambda@Edge
    4. Use a separate application on an EC2 Instance for this purpose.
  2. A company’s packaged application dynamically creates and returns single-use text files in response to user requests. The company is using Amazon CloudFront for distribution but wants to further reduce data transfer costs. The company cannot modify the application’s source code. What should a solutions architect do to reduce costs?
    1. Use Lambda@Edge to compress the files as they are sent to users.
    2. Enable Amazon S3 Transfer Acceleration to reduce the response times.
    3. Enable caching on the CloudFront distribution to store generated files at the edge.
    4. Use Amazon S3 multipart uploads to move the files to Amazon S3 before returning them to users.

References

AWS CloudFront Security

CloudFront Security

  • CloudFront supports both Encryptions at Rest and in Transit.
  • CloudFront provides several options to configure secure access and restrict access to content:
    • Configure HTTPS connections.
    • Use signed URLs or cookies to restrict access for selected users.
    • Restrict access to content in S3 buckets using an origin access identity (OAI) to prevent users from using the direct URL of the file.
    • Set up field-level encryption for specific content fields.
    • Use AWS WAF to create a web access control list (web ACL) to restrict access to your content.
    • Use geo-restriction, also known as geoblocking, to prevent users in specific geographic locations from accessing content served through a CloudFront distribution.

Data Protection

  • CloudFront supports both Encryption at Rest and in Transit
  • CloudFront provides Encryption in Transit and can be configured
    • to require viewers to use HTTPS to request the files so that connections are encrypted when CloudFront communicates with viewers.
    • to use HTTPS to get files from the origin, so that connections are encrypted when CloudFront communicates with the origin.
    • HTTPS can be enforced using Viewer Protocol Policy and Origin Protocol Policy.
  • CloudFront provides Encryption at Rest
    • uses encrypted SSDs for edge location points of presence (POPs) and encrypted EBS volumes for Regional Edge Caches (RECs).
    • Function code and configuration are always stored in an encrypted format on the encrypted SSDs on the edge location POPs, and in other storage locations used by CloudFront.

Restrict Viewer Access

Serving Private Content

  • To securely serve private content using CloudFront
    • Require the users to access the private content by using special CloudFront signed URLs or signed cookies with the following restrictions
      • end date and time, after which the URL is no longer valid
      • start date-time, when the URL becomes valid
      • IP address or range of addresses to access the URLs
    • Require that users access the S3 content only using CloudFront URLs, not S3 URLs. This isn't strictly required, but it is recommended to prevent users from bypassing the restrictions specified in signed URLs or signed cookies.
  • Signed URLs or signed cookies can also be used with CloudFront when an HTTP server is the origin. This requires the content on the origin to be publicly accessible, so care should be taken not to share the direct URL of the content.
  • Restriction for Origin can be applied by
    • For S3, using Origin Access Identity – OAI to grant only CloudFront access using Bucket policies or Object ACL, to the content and removing any other access permissions
    • For an Load balancer OR HTTP server, custom headers can be added by CloudFront which can be used at Origin to verify the request has come from CloudFront.
    • Custom origins can also be configured to allow traffic from CloudFront IPs only. CloudFront managed prefix list can be used to allow inbound traffic to the origin only from CloudFront’s origin-facing servers, preventing any non-CloudFront traffic from reaching your origin
  • Trusted Signer
    • To create signed URLs or signed cookies, at least one AWS account (trusted signer) is needed that has an active CloudFront key pair
    • Once the AWS account is added as a trusted signer to the distribution, CloudFront starts to require that users use signed URLs or signed cookies to access the objects.
    • The private key from the trusted signer's key pair is used to sign a portion of the URL or the cookie. When someone requests a restricted object, CloudFront compares the signed portion of the URL or cookie with the unsigned portion to verify that it hasn't been tampered with. CloudFront also validates that the URL or cookie is still valid, e.g. that the expiration date and time hasn't passed.
    • Each Trusted signer AWS account used to create CloudFront signed URLs or signed cookies must have its own active CloudFront key pair, which should be frequently rotated
    • A maximum of 5 trusted signers can be assigned for each cache behavior or RTMP distribution

Signed URLs vs Signed Cookies

  • CloudFront signed URLs and signed cookies help to secure the content and provide control to decide who can access the content.
  • Use signed URLs in the following cases:
    • for RTMP distribution as signed cookies aren’t supported
    • to restrict access to individual files, for e.g., an installation download for the application.
    • users using a client, for e.g. a custom HTTP client, that doesn’t support cookies
  • Use signed cookies in the following cases:
    • provide access to multiple restricted files, for e.g., all of the video files in HLS format or all of the files in the subscribers’ area of a website.
    • don’t want to change the current URLs.
  • Signed URLs take precedence over signed cookies. If both signed URLs and signed cookies are used to control access to the same files and a viewer uses a signed URL to request a file, CloudFront determines whether to return the file to the viewer based only on the signed URL.
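As a sketch of how a trusted signer generates a signed URL, the snippet below uses botocore's CloudFrontSigner together with the cryptography package. The key pair ID, private key file, and distribution URL are placeholder assumptions.

```python
from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Sign with the private key of the trusted signer's CloudFront key pair (placeholder file)
    with open("cloudfront_private_key.pem", "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


# Public key / key pair ID of the trusted signer (placeholder value)
signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)

# Canned policy: the URL stops working one hour from now
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/training-video.mp4",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(signed_url)
```

Passing a policy document instead of date_less_than would turn this into a custom policy, which additionally supports a start time and an allowed IP range.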

Canned Policy vs Custom Policy

  • A canned policy or a custom policy is a policy statement, used by signed URLs, that helps define the restrictions, e.g. the expiration date and time.
  • CloudFront validates the expiration time at the start of the event.
  • If the user is downloading a large object and the URL expires mid-download, the download still continues; the same applies to RTMP distributions.
  • However, if the user makes range GET requests, or skips to another position while streaming a video (which triggers a new request), the new request fails.

S3 Origin Access Identity – OAI

CloudFront S3 Origin Access Identity - OAI

  • Origin Access Identity (OAI) can be used to prevent users from directly accessing objects from S3.
  • Without an OAI, S3 origin objects must be granted public read permissions, which makes the objects accessible directly from S3 as well as through CloudFront.
  • Even though CloudFront does not expose the underlying S3 URL, it can be known to the user if shared directly or used by applications.
  • For using CloudFront signed URLs or signed cookies to provide access to the objects, it would be necessary to prevent users from having direct access to the S3 objects.
  • Users accessing S3 objects directly would
    • bypass the controls provided by CloudFront signed URLs or signed cookies, e.g. control over the date and time after which a user can no longer access the content and over the IP addresses that can be used to access it
    • make CloudFront access logs less useful because they're incomplete.
  • Origin access identity, which is a special CloudFront user, can be created and associated with the distribution.
  • S3 bucket/object permissions need to be configured to only provide access to the Origin Access Identity.
  • When users access the object from CloudFront, it uses the OAI to fetch the content on the user's behalf, while direct access to the S3 object remains restricted.
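A minimal sketch of wiring this up with boto3 is shown below: it creates an OAI and attaches a bucket policy that grants s3:GetObject to that OAI only. The bucket name and caller reference are placeholders; any existing public-read grants on the objects would then be removed.

```python
import json

import boto3

cloudfront = boto3.client("cloudfront")
s3 = boto3.client("s3")

# Create the special CloudFront user (Origin Access Identity)
oai = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "my-oai-2021",          # placeholder unique reference
        "Comment": "OAI for private-content bucket",
    }
)
oai_id = oai["CloudFrontOriginAccessIdentity"]["Id"]

# Grant read access on the bucket to the OAI only
bucket = "my-private-content-bucket"               # placeholder bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```

The OAI also has to be referenced in the distribution's S3 origin configuration so that CloudFront fetches objects as that identity.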

Custom Headers

  • Custom headers can be added by CloudFront which can be used at Origin to verify the request has come from CloudFront

CloudFront Security - Custom headers

  • A viewer accesses your website or application and requests one or more files, such as an image file and an HTML file.
  • DNS routes the request to the CloudFront edge location that can best serve the request—typically the nearest CloudFront edge location in terms of latency.
  • At the edge location, AWS WAF inspects the incoming request according to configured web ACL rules.
  • At the edge location, CloudFront checks its cache for the requested content.
    • If the content is in the cache, CloudFront returns it to the user.
    • If the content isn’t in the cache, CloudFront adds the custom header, X-Origin-Verify, with the value of the secret from Secrets Manager, and forwards the request to the origin.
  • At the origin Application Load Balancer (ALB), AWS WAF inspects the incoming request header, X-Origin-Verify, and allows the request if the string value is valid. If the header isn’t valid, AWS WAF blocks the request.
  • At the configured interval, Secrets Manager automatically rotates the custom header value and updates the origin AWS WAF and CloudFront configurations.
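A hedged sketch of one step in this flow is shown below: updating the distribution's origin with the current secret value using boto3. The secret name and distribution ID are assumptions; in the reference architecture a rotation Lambda would perform this step and update the origin-side AWS WAF rule as well.

```python
import boto3

secrets = boto3.client("secretsmanager")
cloudfront = boto3.client("cloudfront")

# Current shared secret (secret name is a placeholder)
secret_value = secrets.get_secret_value(SecretId="cloudfront/origin-verify")["SecretString"]

dist_id = "E1EXAMPLEDISTID"                         # placeholder distribution ID
resp = cloudfront.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Add or replace the X-Origin-Verify header CloudFront sends with every origin request
origin = config["Origins"]["Items"][0]
origin["CustomHeaders"] = {
    "Quantity": 1,
    "Items": [{"HeaderName": "X-Origin-Verify", "HeaderValue": secret_value}],
}

cloudfront.update_distribution(Id=dist_id, IfMatch=etag, DistributionConfig=config)
```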

Geo-Restriction – Geoblocking

  • Geo restriction can help allow or prevent users in selected countries from accessing the content.
  • A CloudFront distribution can be configured either to
    • allow users in a whitelist of specified countries to access the content, or to
    • deny users in a blacklist of specified countries from accessing the content
  • Geo restriction can be used to restrict access to all of the files that are associated with a distribution and to restrict access at the country level.
  • CloudFront responds to a request from a viewer in a restricted country with an HTTP status code 403 (Forbidden)
  • Use a third-party geolocation service, if access is to be restricted to a subset of the files that are associated with a distribution or to restrict access at a finer granularity than the country level.

Field Level Encryption Config

  • CloudFront can enforce secure end-to-end connections to origin servers by using HTTPS.
  • Field-level encryption adds an additional layer of security that helps protect specific data throughout system processing so that only certain applications can see it.
  • Field-level encryption can be used to securely upload user-submitted sensitive information. The sensitive information provided by the clients is encrypted at the edge closer to the user and remains encrypted throughout the entire application stack, ensuring that only applications that need the data – and have the credentials to decrypt it – are able to do so.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You are building a system to distribute confidential training videos to employees. Using CloudFront, what method could be used to serve content that is stored in S3, but not publicly accessible from S3 directly?
    1. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
    2. Add the CloudFront account security group “amazon-cf/amazon-cf-sg” to the appropriate S3 bucket policy.
    3. Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User.
    4. Create a S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).
  2. A media production company wants to deliver high-definition raw video for preproduction and dubbing to customers all around the world. They would like to use Amazon CloudFront for their scenario, and they require the ability to limit downloads per customer and video file to a configurable number. A CloudFront download distribution with TTL=0 was already set up to make sure all client HTTP requests hit an authentication backend on Amazon Elastic Compute Cloud (EC2)/Amazon RDS first, which is responsible for restricting the number of downloads. Content is stored in S3 and configured to be accessible only via CloudFront. What else needs to be done to achieve an architecture that meets the requirements? Choose 2 answers
    1. Enable URL parameter forwarding, let the authentication backend count the number of downloads per customer in RDS, and return the content S3 URL unless the download limit is reached.
    2. Enable CloudFront logging into an S3 bucket, leverage EMR to analyze CloudFront logs to determine the number of downloads per customer, and return the content S3 URL unless the download limit is reached. (CloudFront logs are logged periodically and EMR not being real time, hence not suitable)
    3. Enable URL parameter forwarding, let the authentication backend count the number of downloads per customer in RDS, and invalidate the CloudFront distribution as soon as the download limit is reached. (Distribution are not invalidated but Objects)
    4. Enable CloudFront logging into the S3 bucket, let the authentication backend determine the number of downloads per customer by parsing those logs, and return the content S3 URL unless the download limit is reached. (CloudFront logs are logged periodically and EMR not being real time, hence not suitable)
    5. Configure a list of trusted signers, let the authentication backend count the number of download requests per customer in RDS, and return a dynamically signed URL unless the download limit is reached.
  3. To enable end-to-end HTTPS connections from the user‘s browser to the origin via CloudFront, which of the following options are valid? Choose 2 answers
    1. Use self-signed certificate in the origin and CloudFront default certificate in CloudFront. (Origin cannot be self-signed)
    2. Use the CloudFront default certificate in both origin and CloudFront (CloudFront cert cannot be applied to origin)
    3. Use a 3rd-party CA certificate in the origin and CloudFront default certificate in CloudFront
    4. Use 3rd-party CA certificate in both origin and CloudFront
    5. Use a self-signed certificate in both the origin and CloudFront (Origin cannot be self-signed)

References

AWS_CloudFront_Security

AWS CloudFront with S3

AWS CloudFront with S3

  • CloudFront can be used to distribute the content from an S3 bucket
  • For an RTMP distribution, the S3 bucket is the only supported origin, and custom origins cannot be used
  • Using CloudFront over S3 has the following benefits
    • can be more cost-effective if the objects are frequently accessed as at higher usage, the price for CloudFront data transfer is much lower than the price for S3 data transfer.
    • downloads are faster with CloudFront than with S3 alone because the objects are stored closer to the users
  • When using S3 as the origin for distribution and the bucket is moved to a different region, CloudFront can take up to an hour to update its records to include the change of region when both of the following are true:
    • Origin Access Identity (OAI) is used to restrict access to the bucket
    • Bucket is moved to an S3 region that requires Signature Version 4 for authentication

Origin Access Identity

CloudFront S3 Origin Access Identity - OAI

  • Without an OAI, S3 origin objects must be granted public read permissions, which makes the objects accessible directly from S3 as well as through CloudFront.
  • Even though CloudFront does not expose the underlying S3 URL, it can be known to the user if shared directly or used by applications.
  • For using CloudFront signed URLs or signed cookies to provide access to the objects, it would be necessary to prevent users from having direct access to the S3 objects.
  • Users accessing S3 objects directly would
    • bypass the controls provided by CloudFront signed URLs or signed cookies, e.g. control over the date and time after which a user can no longer access the content and over the IP addresses that can be used to access it
    • make CloudFront access logs less useful because they're incomplete.
  • Origin Access Identity (OAI) can be used to prevent users from directly accessing objects from S3.
  • Origin access identity, which is a special CloudFront user, can be created and associated with the distribution.
  • S3 bucket/object permissions need to be configured to only provide access to the Origin Access Identity.
  • When users access the object from CloudFront, it uses the OAI to fetch the content on the user's behalf, while direct access to the S3 object remains restricted.

CloudFront with S3 Objects

  • CloudFront can be configured to include custom headers or modify existing headers whenever it forwards a request to the origin, to
    • validate the user is not accessing the origin directly, bypassing CDN
    • identify the CDN from which the request was forwarded, if more than one CloudFront distribution is configured to use the same origin
    • if users use viewers that don’t support CORS, configure CloudFront to forward the Origin header to the origin. That will cause the origin to return the Access-Control-Allow-Origin header for every request

Adding & Updating Objects

  • Objects just need to be added to the Origin and CloudFront would start distributing them when accessed
  • Objects served by CloudFront from the Origin can be updated either by
    • Overwriting the original object, or
    • Creating a different version and updating the links exposed to the user
  • For updating objects, it's recommended to use versioning, e.g. have files or entire folders with versions, so the links can be changed when the objects are updated, forcing a refresh
  • With versioning,
    • there is no need to wait for an object to expire before CloudFront begins to serve a new version of it
    • there is no difference in consistency in the object served from the edge
    • no cost involved to pay for object invalidation.

Removing/Invalidating Objects

  • Objects, by default, would be removed upon expiry (TTL) and the latest object would be fetched from the Origin
  • Objects can also be removed from the edge cache before they expire by
    • using file or object versioning to serve a different version of the object that has a different name, or by
    • invalidating the object from edge caches; for the next request, CloudFront returns to the Origin to fetch the object
  • Object or File Versioning is recommended over Invalidating objects
    • if the objects need to be updated frequently.
    • enables control over which object a request returns even when the user has a version cached either locally or behind a corporate caching proxy.
    • makes it easier to analyze the results of object changes as CloudFront access logs include the names of the objects
    • provides a way to serve different versions to different users.
    • simplifies rolling forward & back between object revisions.
    • is less expensive, as no charges for invalidating objects.
    • for e.g. change header-v1.jpg to header-v2.jpg
  • Invalidating objects from the cache
    • objects in the cache can be invalidated explicitly before they expire to force a refresh
    • allows invalidating selected objects
    • allows invalidating multiple objects, e.g. objects in a directory or all of the objects whose names begin with the same characters, by including the * wildcard at the end of the invalidation path.
    • users might continue to see the old version until it expires from any local or corporate proxy caches.
    • The first 1,000 invalidation paths submitted per month are free; a fee is charged for each invalidation path over 1,000 in a month.
    • An invalidation path can be for a single object, e.g. /js/ab.js, or for multiple objects, e.g. /js/*, and is counted as a single path even if the * wildcard invalidates thousands of objects.
  • For RTMP distribution, objects served cannot be invalidated
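For reference, an invalidation with a wildcard path can be submitted with boto3 roughly as below (the distribution ID is a placeholder); the /js/* path counts as a single invalidation path no matter how many objects it matches.

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1EXAMPLEDISTID",                    # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/js/*"]},    # one wildcard path, many objects
        "CallerReference": str(time.time()),             # must be unique per invalidation request
    },
)
```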

Partial Requests (Range GETs)

  • Partial requests using Range headers in a GET request helps to download the object in smaller units, improving the efficiency of partial downloads and the recovery from partially failed transfers.
  • For a partial GET range request, CloudFront
    • checks the cache in the edge location for the requested range or the entire object and, if it exists, serves it immediately
    • if the requested range does not exist, it forwards the request to the origin and may request a larger range than the client requested to optimize performance
    • if the origin supports the Range header, it returns the requested object range and CloudFront returns the same to the viewer
    • if the origin does not support the Range header, it returns the complete object and CloudFront serves the entire object and caches it for future requests.
    • CloudFront then uses the cached entire object to serve future range GET requests
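A quick way to see this behaviour is to issue a ranged request yourself; the stdlib sketch below asks CloudFront for the first 1 MiB of an object (the URL is a placeholder) and prints the status and Content-Range returned, which should be 206 Partial Content when ranges are honoured.

```python
import urllib.request

url = "https://d111111abcdef8.cloudfront.net/large/video.mp4"   # placeholder distribution URL

# Ask only for the first 1 MiB of the object
request = urllib.request.Request(url, headers={"Range": "bytes=0-1048575"})
with urllib.request.urlopen(request) as response:
    body = response.read()
    # Expect HTTP 206 Partial Content with a Content-Range header when ranges are honoured
    print(response.status, response.headers.get("Content-Range"), len(body))
```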

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You are building a system to distribute confidential training videos to employees. Using CloudFront, what method could be used to serve content that is stored in S3, but not publicly accessible from S3 directly?
    1. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
    2. Add the CloudFront account security group “amazon-cf/amazon-cf-sg” to the appropriate S3 bucket policy.
    3. Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User.
    4. Create an S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).

AWS Client VPN

AWS Client VPN

  • AWS Client VPN is a managed client-based VPN service that enables secure access to AWS resources and resources in the on-premises network
  • Client VPN allows accessing the resources from any location using an OpenVPN-based VPN client.
  • Client VPN establishes a secure TLS connection from any location using the OpenVPN client.
  • Client VPN automatically scales to the number of users connecting to the AWS resources and on-premises resources.
  • Client VPN supports client authentication using Active Directory, federated authentication, and certificate-based authentication.
  • Client VPN provides manageability: active client connections can be viewed and terminated, and connection logs provide details on client connection attempts.

AWS Client VPN

Client VPN Components

  • Client VPN endpoint
    • is the resource that is created and configured to enable and manage client VPN sessions.
    • is the resource where all client VPN sessions are terminated.
  • Target network
    • is the network associated with a Client VPN endpoint.
    • is a subnet from a VPC that enables establishing VPN sessions.
    • Multiple subnets can be associated with the Client VPN endpoint, however, each subnet must belong to a different Availability Zone.
  • Route
    • describes the available destination network routes.
    • Each route in the route table specifies the path for traffic to specific resources or networks.
  • Authorization rules
    • restrict the users who can access a network.
    • helps configure the AD or IdP group that is allowed access. Only users belonging to this group can access the specified network.
  • Client
    • end-user connecting to the Client VPN endpoint to establish a VPN session.
    • need to download an OpenVPN client and use the Client VPN configuration file to establish a VPN session.

Client VPN Authentication & Authorization

  • Client VPN provides authentication and authorization capabilities.
  • Authentication determines whether clients are allowed to connect to the Client VPN endpoint
  • Client VPN offers the following types of client authentication:
    • Active Directory authentication (user-based)
    • Mutual authentication (certificate-based)
    • Single sign-on (SAML-based federated authentication) (user-based)
  • Client VPN supports two types of authorization:
    • Security groups and
    • Network-based authorization (using authorization rules)
      • allows mapping of the Active Directory group or the SAML-based IdP group to the network they can have access to.

Client VPN Split Tunnel

  • Client VPN endpoint, by default, routes all traffic over the VPN tunnel.
  • Split-tunnel Client VPN endpoint helps when you do not want all user traffic to route through the Client VPN endpoint.
  • Split tunnel ensures only traffic with a destination to the network matching a route from the Client VPN endpoint route table is routed over the Client VPN tunnel.
  • Split-tunnel offers the following benefits:
    • Optimized routing of traffic from clients by having only the AWS destined traffic traverse the VPN tunnel.
    • Reduced volume of outgoing traffic from AWS, therefore reducing the data transfer cost.
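As a rough sketch, a split-tunnel Client VPN endpoint with mutual (certificate) authentication could be created with boto3 as below; all ARNs, IDs, and CIDRs are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

endpoint = ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/22",     # must not overlap the VPC CIDR; block size /22 to /12
    ServerCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/server-cert-id",
    AuthenticationOptions=[{
        "Type": "certificate-authentication",
        "MutualAuthentication": {
            "ClientRootCertificateChainArn": "arn:aws:acm:us-east-1:123456789012:certificate/client-ca-id"
        },
    }],
    ConnectionLogOptions={"Enabled": False},
    SplitTunnel=True,                    # only routes in the endpoint route table traverse the VPN
)
vpn_id = endpoint["ClientVpnEndpointId"]

# Associate a target subnet (one per AZ) and authorize access to the VPC CIDR
ec2.associate_client_vpn_target_network(ClientVpnEndpointId=vpn_id, SubnetId="subnet-0123456789abcdef0")
ec2.authorize_client_vpn_ingress(
    ClientVpnEndpointId=vpn_id,
    TargetNetworkCidr="10.0.0.0/16",
    AuthorizeAllGroups=True,
)
```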

Client VPN Limitations

  • Client CIDR ranges cannot overlap with the local CIDR of the VPC in which the associated subnet is located, or any routes manually added to the Client VPN endpoint’s route table.
  • Client CIDR ranges must have a block size between /22 and /12.
  • Client CIDR range cannot be changed after Client VPN endpoint creation.
  • Subnets associated with a Client VPN endpoint must be in the same VPC.
  • Multiple subnets from the same AZ cannot be associated with a Client VPN endpoint.
  • A Client VPN endpoint does not support subnet associations in a dedicated tenancy VPC.
  • Client VPN supports IPv4 traffic only.
  • Client VPN is not Federal Information Processing Standards (FIPS) compliant.
  • Client VPN is a managed service and the IP address to which the DNS name resolves might change. Hence, it is not recommended to connect to the Client VPN endpoint by using IP addresses. Use the DNS name instead.
  • IP forwarding is currently disabled when using the AWS Client VPN Desktop Application.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company is developing an application on AWS. For analysis, the application transmits log files to an Amazon Elasticsearch Service (Amazon ES) cluster. Each piece of data must be contained inside a VPC. A number of the company’s developers work remotely. Other developers are based at three distinct business locations. The developers must connect to Amazon ES directly from their local development computers in order to study and display logs. Which solution will satisfy these criteria?
    1. Configure and set up an AWS Client VPN endpoint. Associate the Client VPN endpoint with a subnet in the VPC. Configure a Client VPN self-service portal. Instruct the developers to connect by using the client for Client VPN.
    2. Create a transit gateway, and connect it to the VPC. Create an AWS Site-to-Site VPN. Create an attachment to the transit gateway. Instruct the developers to connect by using an OpenVPN client.
    3. Create a transit gateway, and connect it to the VPC. Order an AWS Direct Connect connection. Set up a public VIF on the Direct Connect connection. Associate the public VIF with the transit gateway. Instruct the developers to connect to the Direct Connect connection.
    4. Create and configure a bastion host in a public subnet of the VPC. Configure the bastion host security group to allow SSH access from the company CIDR ranges. Instruct the developers to connect by using SSH.

References

AWS_Client_VPN

AWS Transit Gateway – TGW

AWS Transit Gateway

  • AWS Transit Gateway – TGW is a highly available and scalable service to consolidate the AWS VPC routing configuration for a region with a hub-and-spoke architecture.
  • TGW acts as a Regional virtual router and is a network transit hub that can be used to interconnect VPCs and on-premises networks.
  • TGW traffic always stays on the global AWS backbone, data is automatically encrypted, and never traverses the public internet, thereby reducing threat vectors, such as common exploits and DDoS attacks.
  • TGW is a Regional resource and can connect VPCs within the same AWS Region.
  • Transit Gateways across different regions can peer with each other to enable VPC communications across regions.
  • Each spoke VPC only needs to connect to the Transit Gateway to gain access to other connected VPCs.
  • TGW provides simpler VPC-to-VPC communication management over VPC Peering with a large number of VPCs.
  • TGW scales elastically based on the volume of network traffic.
  • TGW routing operates at layer 3, where the packets are sent to a specific next-hop attachment, based on their destination IP addresses.
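A minimal boto3 sketch of the hub-and-spoke setup might look like the following (VPC and subnet IDs are placeholders): create the transit gateway once per Region, then attach each spoke VPC with one subnet per Availability Zone to enable.

```python
import boto3

ec2 = boto3.client("ec2")

# Regional hub; default route table association/propagation keeps routing simple
tgw = ec2.create_transit_gateway(
    Description="Regional hub for spoke VPCs",
    Options={
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a spoke VPC; specify one subnet per Availability Zone to enable (placeholder IDs)
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0aaaaaaaaaaaaaaa0", "subnet-0bbbbbbbbbbbbbbb1"],
)
```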

Transit Gateway

Transit Gateway High Availability

  • Transit Gateway must be enabled with multiple AZs to ensure availability and to route traffic to the resources in the VPC subnets.
  • AZ can be enabled by specifying exactly one subnet within the AZ
  • TGW places a network interface in that subnet using one IP address from the subnet.
  • TGW can route traffic to all the subnets and not just the specified subnet within the enabled AZ.
  • Resources that reside in AZs where there is no transit gateway attachment cannot reach the transit gateway.

Transit Gateway Attachments

  • Transit Gateway attachment is the connection between resources like VPC, VPN, Direct Connect, and the Transit Gateway.
  • Transit Gateway attachment is both a source and a destination of packets.
  • TGW supports the following attachments
    • One or more VPCs
    • One or more VPN connections
    • One or more AWS Direct Connect gateways
    • One or more Transit Gateway Connect attachments
    • One or more transit gateway peering connections
    • One or more Connect SD-WAN/third-party network appliances

Transit Gateway Routing

  • Transit Gateway routes IPv4 and IPv6 packets between attachments using transit gateway route tables.
  • Route tables can be configured to propagate routes from the route tables for the attached VPCs, VPN connections, and Direct Connect gateways.
  • When a packet comes from one attachment, it is routed to another attachment using the route that matches the destination IP address.
  • For a VPC attached to a transit gateway, a route pointing to the transit gateway must be added to the subnet route table in order for traffic to route through the transit gateway, as sketched below.
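For example, a route like the one below (the route table ID, CIDR, and transit gateway ID are placeholders) sends traffic destined for the other spoke VPCs through the transit gateway.

```python
import boto3

ec2 = boto3.client("ec2")

# In each spoke VPC's subnet route table, point the other VPCs' CIDR range at the transit gateway
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",       # placeholder subnet route table ID
    DestinationCidrBlock="10.0.0.0/8",          # placeholder summary CIDR covering the other VPCs
    TransitGatewayId="tgw-0123456789abcdef0",   # placeholder transit gateway ID
)
```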

Transit Gateway vs Transit VPC vs VPC Peering

VPC Peering vs Transit VPC vs Transit Gateway

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company is using a VPC peering strategy to connect its VPCs in a single Region to allow for cross-communication. A recent increase in account creations and VPCs has made it difficult to maintain the VPC peering strategy, and the company expects to grow to hundreds of VPCs. There are also new requests to create site-to-site VPNs with some of the VPCs.
    A solutions architect has been tasked with creating a centrally managed networking setup for multiple accounts, VPCs, and VPNs.Which networking solution meets these requirements?

    1. Configure shared VPCs and VPNs and share with each other.
    2. Configure a hub-and-spoke VPC and route all traffic through VPC peering.
    3. Configure an AWS Direct Connect connection between all VPCs and VPNs.
    4. Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs
  2. A company hosts its core network services, including directory services and DNS, in its on-premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services. What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead?
    1. Create a DX connection in each new account. Route the network traffic to the on-premises servers.
    2. Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers.
    3. Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers.
    4. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.

References

AWS_Transit_Gateway

AWS Transit VPC

AWS Transit VPC

  • Transit Gateway can be used instead of Transit VPC. AWS Transit Gateway offers the same advantages as a transit VPC, but it is a managed service that scales elastically and is highly available.
  • Transit VPC helps connect multiple, geographically dispersed VPCs and remote networks in order to create a global network transit center.
  • Transit VPC can solve some of the shortcomings of VPC peering by introducing a hub and spoke design for inter-VPC connectivity.
  • A transit VPC simplifies network management and minimizes the number of connections required to connect multiple VPCs and remote networks.
  • Transit VPC allows an easy way to implement shared services or packet inspection/replication in a VPC.
  • Transit VPC can be used to support important use cases
    • Private Networking – build a private network that spans two or more AWS Regions.
    • Shared Connectivity – Multiple VPCs can share connections to data centers, partner networks, and other clouds.
    • Cross-Account AWS Usage – The VPCs and the AWS resources within them can reside in multiple AWS accounts.
  • Transit VPC design helps implement more complex routing rules, such as network address translation between overlapping network ranges, or to add additional network-level packet filtering or inspection

Transit VPC Configuration

  • Transit VPC network consists of a central VPC (the hub VPC) connecting with every other VPC (spoke VPC) through a VPN connection typically leveraging BGP over IPsec.
  • Central VPC contains EC2 instances running software appliances that route incoming traffic to their destinations using the VPN overlay.

Transit VPC Advantages & Disadvantages

  • supports Transitive routing using the overlay VPN network — allowing for a simpler hub and spoke design. Can be used to provide shared services for VPC Endpoints, Direct Connect connection, etc.
  • supports network address translation between overlapping network ranges.
  • supports vendor functionality around advanced security (layer 7 firewall/Intrusion Prevention System (IPS)/Intrusion Detection System (IDS) ) using third-party software on EC2
  • leverages instance-based routing that increases costs while lowering availability and limiting the bandwidth.
  • Customers are responsible for managing the HA and redundancy of EC2 instances running the third-party vendor virtual appliances

Transit VPC High Availability

Transit VPC High Availability

Transit VPC vs VPC Peering vs Transit Gateway

VPC Peering vs Transit VPC vs Transit Gateway

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Under increased cyber security concerns, a company is deploying a near real-time intrusion detection system (IDS) solution. A system must be put in place as soon as possible. The architecture consists of many AWS accounts, and all results must be delivered to a central location. Which solution will meet this requirement, while minimizing downtime and costs?
    1. Deploy a third-party vendor solution to perform deep packet inspection in a transit VPC.
    2. Enable VPC Flow Logs on each VPC. Set up a stream of the flow logs to a central Amazon Elasticsearch cluster.
    3. Enable Amazon Macie on each AWS account and configure central reporting.
    4. Enable Amazon GuardDuty on each account as members of a central account.
  2. Your company has set up a VPN connection between their on-premises infrastructure and AWS. They have multiple VPCs defined. They also need to ensure that all traffic flows through a security VPC from their on-premise infrastructure. How would you architect the solution? (Select TWO)
    1. Create a VPN connection between the On-premise environment and the Security VPC (Transit VPC pattern)
    2. Create a VPN connection between the On-premise environment to all other VPC’s
    3. Create a VPN connection between the Security VPC to all other VPC’s (Transit VPC pattern)
    4. Create a VPC peering connection between the Security VPC and all other VPC’s

References

AWS_Transit_VPC

Let’s Talk About…

Let’s Talk About Cloud Security

Guest post by Dustin Albertson – Manager of Cloud & Applications, Product Management -Veeam.

I want to discuss something that’s important to me, security. Far too often I have discussions with customers and other engineers where they’re discussing an architecture or problem they are running into, and I spot issues with the design or holes in the thought process. One of the best things about the cloud model is also one of its worst traits: it’s “easy.” What I mean by this is that it’s easy to log into AWS and set up an EC2 instance, connect it to the internet and configure basic settings. This usually leads to issues down the road because the basic security or architectural best practices were not followed. Therefore, I want to talk about a few things that everyone should be aware of.

The Well-Architected Framework

AWS Well-Architected Framework

AWS has done a great job at creating a framework for its customers to adhere to when planning and deploying workloads in AWS. This framework is called the AWS Well-Architected Framework. The framework has 6 pillars that help you learn architectural best practices for designing and operating secure, reliable, efficient, cost-effective, and sustainable workloads in the AWS Cloud. The pillars are:

  • Operational Excellence: The ability to support the development and run workloads effectively, gain insight into their operations, and continuously improve supporting processes and procedures to deliver business value.
  • Security: The security pillar describes how to take advantage of cloud technologies to protect data, systems, and assets in a way that can improve your security posture.
  • Reliability: The reliability pillar encompasses the ability of a workload to perform its intended function correctly and consistently when it’s expected to. This includes the ability to operate and test the workload through its total lifecycle. This paper provides in-depth, best practice guidance for implementing reliable workloads on AWS.
  • Performance Efficiency: The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
  • Cost Optimization: The ability to run systems to deliver business value at the lowest price point.
  • Sustainability: The ability to continually improve sustainability impacts by reducing energy consumption and increasing efficiency across all components of a workload by maximizing the benefits from the provisioned resources and minimizing the total resources required.

This framework is important to read and understand for not only a customer but a software vendor or a services provider as well. As a company that provides software in the AWS marketplace, Veeam must go through a few processes prior to listing in the marketplace. Those processes are what’s called a W.A.R (Well-Architected Review) and a T.F.R (Technical Foundation Review).   A W.A.R. is a deep dive into the product and APIs to make sure that the best practices are being used in the way the products not only interact with the APIs in AWS but also how the software is deployed and the architecture it uses.    The T.F.R. is a review to validate that all the appropriate documentation and help guides are in place so that a customer can easily find out how to deploy, protect, secure, and obtain support when using a product deployed via the AWS Marketplace. This can give customers peace of mind when deploying software from the marketplace because they’ll know that it has been rigorously tested and validated.

I have mostly been talking at a high level here and want to break this down into a real-world example. Veeam has a product in the AWS Marketplace called Veeam Backup for AWS. One of the best practices for this product is to deploy it into a separate AWS account than your production account.

Veeam Data Protection

The reason for this is that the software will reach into the production account and back up the instances you wish to protect into an isolated protection account where you can limit the number of people who have access. It's also a best practice to have your backup data stored away from production data. Now here is where the story gets interesting: a lot of people like to use encryption on their EBS volumes. But since it's so easy to enable encryption, most people just turn it on and move on. The root of the issue is that AWS has made it easy to encrypt a volume since they have a default key that you choose when creating an instance.

They have also made it easy to set a policy that every new volume is encrypted and the default choice is the default key.

This is where the problem begins. Now, this may be fine for now or for a lot of users, but what this does is create issues later down the road. Default encryption keys cannot be shared outside of the account that the key resides in. This means that you would not be able to back that instance up to another account; you can't rotate the keys, you can't delete the keys, you can't audit the keys, and more. Customer managed keys (CMK) give you the ability to create, rotate, disable, enable and audit the encryption key used to protect the data. I don't want to go too deep here, but this is an example that I run into a lot, and people don't realize the impact of this setting until it's too late. Changing from a default key to a CMK requires downtime of the instance and is a very manual process; although it can be scripted out, it can still be a very cumbersome task if we are talking about hundreds to thousands of instances.
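A small sketch of getting this right from the start, assuming you want new EBS volumes in a Region encrypted with a customer managed key rather than the default aws/ebs key, might look like this with boto3:

```python
import boto3

kms = boto3.client("kms")
ec2 = boto3.client("ec2")

# Customer managed key (CMK) that can be rotated, audited, shared cross-account, and disabled
key = kms.create_key(Description="CMK for EBS volume encryption")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Encrypt every new EBS volume in this Region by default, and use the CMK instead of aws/ebs
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(KmsKeyId=key_id)
```

Volumes already encrypted with the default key would still need the manual re-encryption process described above.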

Don’t just take my word for it, Trend Micro also lists this as a Medium Risk.

Aqua Vulnerability Database also lists this as a threat.

Conclusion

I am not trying to scare people or shame people for not knowing this information. A lot of the time in the field, we are so busy and just get things working and move on. My goal here is to try to get you to stop for a second and think about whether the choices you are making are the best ones for your security. Take advantage of the resources and help that companies like AWS and Veeam are offering and learn about data protection and security best practices. Take a step back from time to time and evaluate the architecture or design that you are implementing. Get a second set of eyes on the project. It may sound complicated or confusing, but I promise it's not that hard and the best bet is to just ask others. Also, don't forget to check the “Choose Your Cloud Adventure” interactive e-book to learn how to manage your AWS data like a hero.

Thank you for reading.

Black Friday & Cyber Monday Deals

Udemy – 2nd Dec to 3rd Dec

DolfinEd Courses

DolfinEd Cyber Monday

Braincert – Black Friday Sale – ends 26th Nov

Use Coupon Code – BLACK_FRIDAY_2021

AWS Certifications

GCP Certifications

Whizlabs – Black Friday Sale – 19th Nov to 2nd Dec


Coursera

Limited-time offer: Get Coursera Plus Monthly for $1!

CCNF – Cyber Monday