Amazon OpenSearch

  • Amazon OpenSearch Service is a managed service that makes it easy to deploy, operate, and scale OpenSearch clusters in the AWS Cloud.
  • is the successor to Amazon Elasticsearch Service and supports OpenSearch and legacy Elasticsearch OSS versions.
  • OpenSearch itself is a fully open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis.
  • OpenSearch provides
    • instance types with numerous configurations of CPU, memory, and storage capacity, including cost-effective Graviton instances
    • Up to 3 PB of attached storage
    • Cost-effective UltraWarm and cold storage for read-only data
    • Integration with AWS IAM, VPC, VPC Security Groups
    • Encryption at Rest and in Transit
    • Authentication with Cognito, HTTP basic, or SAML authentication for OpenSearch Dashboards
    • Index-level, document-level, and field-level security
    • Multi-AZ setup with node allocation across two or three AZs in the same AWS Region
    • Dedicated master nodes to offload cluster management tasks
    • Automated snapshots to back up and restore OpenSearch Service domains
    • Integration with CloudWatch for monitoring, CloudTrail for auditing, and S3, Kinesis, and DynamoDB for loading streaming data into OpenSearch Service (see the provisioning sketch below).
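
As a concrete illustration of the options above, here is a minimal boto3 sketch of provisioning a domain; the domain name, engine version, and instance sizes are placeholder assumptions, not recommendations.

  import boto3

  opensearch = boto3.client("opensearch")

  # Hypothetical domain name and sizing, shown only to illustrate the settings above.
  opensearch.create_domain(
      DomainName="my-logs-domain",
      EngineVersion="OpenSearch_2.11",
      ClusterConfig={
          "InstanceType": "r6g.large.search",        # Graviton data nodes
          "InstanceCount": 3,
          "DedicatedMasterEnabled": True,            # offload cluster management tasks
          "DedicatedMasterType": "m6g.large.search",
          "DedicatedMasterCount": 3,
          "ZoneAwarenessEnabled": True,              # Multi-AZ shard distribution
          "ZoneAwarenessConfig": {"AvailabilityZoneCount": 3},
      },
      EBSOptions={"EBSEnabled": True, "VolumeType": "gp3", "VolumeSize": 100},
      EncryptionAtRestOptions={"Enabled": True},     # encryption at rest via AWS KMS
      NodeToNodeEncryptionOptions={"Enabled": True}, # TLS between nodes
      DomainEndpointOptions={"EnforceHTTPS": True},  # require HTTPS from clients
  )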

OpenSearch Service Domain

  • An OpenSearch Service domain is synonymous with an OpenSearch cluster.
  • Domains are clusters with specified settings, instance types, instance counts, and storage resources.
  • automates common administrative tasks, such as performing backups, monitoring instances, and patching software, once the domain is running.
  • uses a blue/green deployment process when updating domains. Blue/green typically refers to the practice of running two production environments, one live and one idle, and switching the two as software changes are made.
  • All domains configured for multiple AZs have zone awareness enabled to ensure shards are distributed across AZs.

OpenSearch Security

  • OpenSearch Service domains support encryption at rest through AWS Key Management Service (KMS), node-to-node encryption over TLS, and the ability to require clients to communicate with HTTPS.
  • supports only symmetric encryption KMS keys, not asymmetric ones.
  • encrypts all indices, log files, swap files, and automated snapshots.
  • does not encrypt manual snapshots, slow logs, or error logs.
  • can be configured to be accessible with an endpoint within the VPC or a public endpoint accessible to the internet.
  • Network access for VPC endpoints is controlled by security groups; for public endpoints, access can be granted or restricted by IP address (see the access policy sketch below).
  • supports integration with Cognito to allow end users to log in to OpenSearch Dashboards through Cognito User Pools or enterprise identity providers such as Microsoft Active Directory using SAML 2.0.
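
A hedged sketch of restricting a public domain endpoint by source IP through a resource-based access policy; the account ID, Region, domain name, and CIDR range are placeholders.

  import json
  import boto3

  domain_arn = "arn:aws:es:us-east-1:123456789012:domain/my-logs-domain/*"  # hypothetical

  access_policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Principal": {"AWS": "*"},
          "Action": "es:ESHttp*",
          "Resource": domain_arn,
          # Only requests from this CIDR range are allowed to reach the endpoint.
          "Condition": {"IpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
      }],
  }

  boto3.client("opensearch").update_domain_config(
      DomainName="my-logs-domain",
      AccessPolicies=json.dumps(access_policy),
  )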

OpenSearch Storage Tiers

  • OpenSearch Service supports three integrated storage tiers: Hot, UltraWarm, and Cold.
  • The Hot tier is powered by data nodes, which are used for indexing, updating, and providing the fastest access to data.
  • UltraWarm nodes complement the hot tier by providing a fully managed, low-cost, read-only warm storage tier for older and less frequently accessed data.
  • UltraWarm uses S3 for storage and removes the need to configure a replica for the warm data.
  • Cold storage is a fully managed, lowest-cost storage tier that makes it easy to securely store and analyze historical logs on demand.
  • Cold storage fully detaches storage from compute when data is not being actively analyzed, while keeping it readily available at low cost (a migration sketch follows this list).
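
A minimal sketch of moving an index between tiers using the OpenSearch Service UltraWarm migration API, assuming fine-grained access control with an internal master user; the endpoint, credentials, and index name are placeholders.

  import requests
  from requests.auth import HTTPBasicAuth

  endpoint = "https://search-my-logs-domain.us-east-1.es.amazonaws.com"  # hypothetical
  auth = HTTPBasicAuth("master-user", "master-password")                 # hypothetical

  # Migrate an older, read-only index from the hot tier to UltraWarm.
  requests.post(f"{endpoint}/_ultrawarm/migration/logs-2023.01/_warm", auth=auth)

  # Check the migration status.
  print(requests.get(f"{endpoint}/_ultrawarm/migration/logs-2023.01/_status", auth=auth).json())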

OpenSearch Cross-Cluster Replication

  • Cross-cluster replication helps automate copying and synchronizing indices from one cluster to another at low latency in the same or different AWS Regions.
  • Domains participating in cross-cluster replications need to meet the following criteria:
    • Participating domains must be on Elasticsearch 7.10 or OpenSearch 1.1 or later
    • Participating domains need to have encryption in transit enabled
    • Participating domains need to have Fine-Grained Access Control (FGAC) enabled
    • Participating domain versions should adhere to the same compatibility rules as a rolling version upgrade
  • Current implementation of cross-cluster replication does not support UltraWarm or Cold storage.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.

References

Amazon_OpenSearch

AWS ElastiCache Redis vs Memcached

ElastiCache supports the Memcached and Redis cache engines. Each engine provides some advantages, and the comparison below covers the key differences between the two.

Memcached

  • Need the simplest model possible.
  • Need to run large nodes with multiple cores or threads.
  • Need the ability to scale out and in, adding and removing nodes as demand on the system increases and decreases.
  • Need to cache objects.

Redis

  • Need complex data types, such as strings, hashes, lists, sets, sorted sets, and bitmaps.
  • Need to sort or rank in-memory datasets.
  • Need persistence of the key store.
  • Need to replicate the data from the primary to one or more read replicas for read-intensive applications.
  • Need automatic failover if the primary node fails.
  • Need publish and subscribe (pub/sub) capabilities – to inform clients about events on the server.
  • Need backup and restore capabilities.
  • Need to support multiple databases.
  • Need Encryption at Rest and in Transit.
  • Need the ability to dynamically add or remove shards from the Redis (cluster mode enabled) cluster.
  • Need to authenticate users with role-based access control.
  • Need geospatial indexing (clustered mode or non-clustered mode).
  • Need to meet compliance requirements – HIPAA, FedRAMP, PCI DSS.
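
A minimal boto3 sketch of provisioning a Redis replication group that exercises several of the capabilities above (replicas, automatic failover, encryption, backups); the identifiers and node type are placeholders.

  import boto3

  elasticache = boto3.client("elasticache")

  elasticache.create_replication_group(
      ReplicationGroupId="my-redis",                        # hypothetical name
      ReplicationGroupDescription="Redis with replicas, failover, and encryption",
      Engine="redis",
      CacheNodeType="cache.r6g.large",
      NumCacheClusters=3,                 # 1 primary + 2 read replicas
      AutomaticFailoverEnabled=True,      # promote a replica if the primary fails
      MultiAZEnabled=True,
      AtRestEncryptionEnabled=True,
      TransitEncryptionEnabled=True,
      SnapshotRetentionLimit=7,           # automatic backups retained for 7 days
  )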

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company wants to implement best practices on AWS. Which implementation would help eliminate “Single Point of failure”?
    1. ElastiCache Memcached deployment
    2. RDS Single-AZ deployment
    3. EC2 instances on a single AZ
    4. ElastiCache Redis deployment
  2. You are launching your first ElastiCache cache cluster, and start using Memcached. Which of the following is NOT a key feature of Memcached?
    1. You need the ability to scale your Cache horizontally as you grow.
    2. You use more advanced data types, such as lists, hashes, and sets.
    3. You need as simple a caching model as possible.
    4. Object caching is your primary goal to offload your database.

References

Amazon_ElastiCache_Selecting_Engine

AWS Transfer Family

  • AWS Transfer Family is a secure transfer service that helps transfer files into and out of AWS storage services.
  • AWS Transfer Family supports transferring data from or to S3 and EFS.
  • AWS Transfer Family supports transferring data over the following protocols:
    • Secure Shell (SSH) File Transfer Protocol (SFTP)
    • File Transfer Protocol Secure (FTPS)
    • File Transfer Protocol (FTP)
  • Transfer Family provides the following benefits
    • A fully managed service that scales in real time
    • Compatible with existing applications with no need to modify them or run any file transfer protocol infrastructure.
    • A fully managed, serverless File Transfer Workflow service that makes it easy to set up, run, automate, and monitor the processing of files.
  • Transfer Family supports up to 3 AZs and is backed by an auto-scaling, redundant fleet for your connection and transfer requests.
  • Transfer Family supports three identity provider options:
    • Service Managed, where you store user identities within the service. It is supported for server endpoints that are enabled for SFTP only.
    • AWS Directory Service for Microsoft Active Directory, and
    • Custom Identity Providers, which helps to integrate an identity provider of your choice.
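
A minimal boto3 sketch of an SFTP-enabled server using the service-managed identity provider, plus one user mapped to an S3 home directory; the role ARN, bucket, and key material are placeholders.

  import boto3

  transfer = boto3.client("transfer")

  server = transfer.create_server(
      Protocols=["SFTP"],
      IdentityProviderType="SERVICE_MANAGED",   # user identities stored in the service
      Domain="S3",                              # transferred files land in S3
      EndpointType="PUBLIC",
  )

  transfer.create_user(
      ServerId=server["ServerId"],
      UserName="partner-upload",
      Role="arn:aws:iam::123456789012:role/transfer-s3-access",   # hypothetical role granting S3 access
      HomeDirectory="/my-transfer-bucket/partner-upload",         # hypothetical bucket/prefix
      SshPublicKeyBody="ssh-rsa AAAA...",                         # placeholder SFTP public key
  )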

Managed File Transfer Workflows – MFTW

  • Managed File Transfer Workflows – MFTW is a fully managed, serverless file transfer workflow service that makes it easy to set up, run, automate, and monitor the processing of uploaded files.
  • MFTW can be used to automate various processing steps such as copying, tagging, scanning, filtering, compressing/decompressing, and encrypting/decrypting the data that is transferred using Transfer Family.
  • MFTW provides end-to-end visibility for tracking and auditability.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A solutions architect must provide a fully managed replacement for an on-premises solution that allows employees and partners to exchange files. The solution must be easily accessible to employees connecting from on-premises systems, remote employees, and external partners. Which solution meets these requirements?
    1. Use AWS Transfer for SFTP to transfer files into and out of Amazon S3.
    2. Use AWS Snowball Edge for local storage and large-scale data transfers.
    3. Use Amazon FSx to store and transfer files to make them available remotely.
    4. Use AWS Storage Gateway to create a volume gateway to store and transfer files to Amazon S3.

References

AWS_Transfer_Family

AWS DataSync

  • AWS DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between storage systems and services.
  • DataSync provides end-to-end security, including encryption and integrity validation.
  • DataSync automates both the management of data-transfer processes and the infrastructure required for high-performance and secure data transfer.
  • DataSync uses a purpose-built network protocol and a parallel, multi-threaded architecture to accelerate the transfers.
  • A DataSync agent is a VM or EC2 instance that AWS DataSync uses to read from or write to a storage system. Agents are commonly used when copying data from on-premises storage to AWS.
  • A DataSync transfer is described by a Task, and a Task Execution is an individual run of that task.
  • A task can be configured with locations (source and destination), a schedule, and options for how it treats metadata, deleted files, and permissions (see the task sketch below).
  • Task scheduling automatically runs tasks on the configured schedule with hourly, daily, or weekly options.
  • Each time a task is started it performs an incremental copy, transferring only the changes from the source to the destination.
  • If a task is interrupted, for instance, if the network connection goes down or the agent is restarted, the next run of the task will transfer missing files, and the data will be complete and consistent at the end of this run.
  • AWS DataSync can be used with the Direct Connect link to access public service endpoints or private VPC endpoints.
  • The amount of network bandwidth that AWS DataSync will use can be controlled by configuring the built-in bandwidth throttle.
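
A minimal boto3 sketch of a scheduled DataSync task from an on-premises NFS share (via an agent) to S3, with the built-in bandwidth throttle; all ARNs, hostnames, and bucket names are placeholders.

  import boto3

  datasync = boto3.client("datasync")

  source = datasync.create_location_nfs(
      ServerHostname="nfs.example.internal",
      Subdirectory="/exports/data",
      OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0123456789abcdef0"]},
  )
  destination = datasync.create_location_s3(
      S3BucketArn="arn:aws:s3:::my-analytics-bucket",
      S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3-access"},
  )

  task = datasync.create_task(
      SourceLocationArn=source["LocationArn"],
      DestinationLocationArn=destination["LocationArn"],
      Schedule={"ScheduleExpression": "rate(1 hour)"},        # hourly incremental copies
      Options={
          "BytesPerSecond": 50 * 1024 * 1024,                 # built-in bandwidth throttle
          "VerifyMode": "ONLY_FILES_TRANSFERRED",             # integrity validation
      },
  )

  datasync.start_task_execution(TaskArn=task["TaskArn"])      # an individual run of the task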

DataSync Supported Locations

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company is migrating its applications to AWS. Currently, applications that run on-premises generate hundreds of terabytes of data that is stored on a shared file system. The company is running an analytics application in the cloud that runs hourly to generate insights from this data. The company needs a solution to handle the ongoing data transfer between the on-premises shared file system and Amazon S3. The solution also must be able to handle occasional interruptions in internet connectivity. Which solutions should the company use for the data transfer to meet these requirements?
    1. AWS DataSync
    2. AWS Migration Hub
    3. AWS Snowball Edge Storage Optimized
    4. AWS Transfer for SFTP

References

AWS_DataSync

AWS CloudFront vs Global Accelerator

  • Global Accelerator and CloudFront both use the AWS global network and its edge locations around the world.
  • Both services integrate with AWS Shield for DDoS protection.
  • Performance
    • CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery).
    • Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions.
  • Use Cases
    • CloudFront is a good fit for HTTP use cases
    • Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or VoIP, as well as for HTTP use cases that require static IP addresses or deterministic, fast regional failover.
  • Caching
    • CloudFront supports Edge caching
    • Global Accelerator does not support Edge Caching.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company wants to improve the availability and performance of its stateless UDP-based workload. The workload is deployed on Amazon EC2 instances in multiple AWS Regions. What should a solutions architect recommend to accomplish this?
    1. Place the EC2 instances behind Network Load Balancers (NLBs) in each Region. Create an accelerator using AWS Global Accelerator. Use the NLBs as endpoints for the accelerator.
    2. Place the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an accelerator using AWS Global Accelerator. Use the ALBs as endpoints for the accelerator.
    3. Place the EC2 instances behind Network Load Balancers (NLBs) in each Region. Create a CloudFront distribution with an origin that uses Route 53 latency-based routing to route requests to the NLBs.
    4. Place the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create a CloudFront distribution with an origin that uses Route 53 latency-based routing to route requests to the ALBs.

References

AWS_Global_Accelerator_FAQs

AWS Gateway Load Balancer – GWLB

  • Gateway Load Balancer helps deploy, scale, and manage virtual appliances, such as firewalls, intrusion detection and prevention systems (IDS/IPS), and deep packet inspection systems.
  • GWLB and its registered virtual appliance instances exchange application traffic using the GENEVE (Generic Network Virtualization Encapsulation) protocol on port 6081.
  • operates at Layer 3 of the OSI model, the network layer.
  • transparently passes all Layer 3 traffic through third-party virtual appliances, and is invisible to the source and destination of the traffic.
  • combines a transparent network gateway (that is, a single entry and exit point for all traffic) and distributes traffic while scaling the virtual appliances with the demand.
  • listens for all IP packets across all ports and forwards traffic to the target group that’s specified in the listener rule.
  • runs within one AZ and is recommended to be deployed in multiple AZs for greater availability. If all appliances fail in one AZ, scripts can be used to either add new appliances or direct traffic to a GWLB in a different AZ.
  • cannot add or remove availability zones after the GWLB is created.
  • is architected to handle millions of requests/second, volatile traffic patterns, and introduces extremely low latency.
  • does not perform TLS termination and does not maintain any application state. These functions are performed by the third-party virtual appliances it directs traffic to and receives traffic from.
  • maintains stickiness of flows to a specific target appliance using 5-tuple (TCP/UDP flows) or 3-tuple (for non-TCP/UDP flows).
  • supports a maximum transmission unit (MTU) size of 8500 bytes.
  • supports cross-zone load balancing, which is disabled by default. You pay charges for inter-AZ data transfer if enabled.

Gateway Load Balancer Endpoint – GWLBE

  • GWLB uses Gateway Load Balancer endpoints – GWLBE to exchange traffic across VPC boundaries securely.
  • A Gateway Load Balancer endpoint is a VPC endpoint that provides private connectivity between virtual appliances in the service provider VPC and application servers in the service consumer VPC.
  • One GWLB can be connected to many GWLBEs.
  • GWLB is deployed in the same VPC as the virtual appliances.
  • Virtual appliances are registered with a target group for the GWLB.
  • Traffic to and from a GWLBE is configured using route tables.
  • Traffic flows from the service consumer VPC over the GWLBE to the GWLB in the service provider VPC, and then returns to the service consumer VPC (the sketch below walks through setting this up).
  • GWLBE and the application servers must be created in different subnets. This enables you to configure the GWLBE as the next hop in the route table for the application subnet.
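
A minimal boto3 sketch of the pieces described above: a Gateway Load Balancer with a GENEVE target group in the service provider VPC, exposed through an endpoint service, and a GWLBE in the service consumer VPC; all IDs are placeholders.

  import boto3

  elbv2 = boto3.client("elbv2")
  ec2 = boto3.client("ec2")

  # GWLB lives in the appliance (service provider) VPC.
  gwlb = elbv2.create_load_balancer(
      Name="appliance-gwlb",
      Type="gateway",
      Subnets=["subnet-appliance-az1", "subnet-appliance-az2"],   # hypothetical subnet IDs
  )["LoadBalancers"][0]

  # Virtual appliances register behind a GENEVE target group on port 6081.
  tg = elbv2.create_target_group(
      Name="appliance-targets",
      Protocol="GENEVE",
      Port=6081,
      VpcId="vpc-provider",
      TargetType="instance",
  )["TargetGroups"][0]
  elbv2.create_listener(
      LoadBalancerArn=gwlb["LoadBalancerArn"],
      DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
  )

  # Publish the GWLB as an endpoint service, then create a GWLBE in the consumer VPC.
  svc = ec2.create_vpc_endpoint_service_configuration(
      GatewayLoadBalancerArns=[gwlb["LoadBalancerArn"]],
      AcceptanceRequired=False,
  )["ServiceConfiguration"]
  ec2.create_vpc_endpoint(
      VpcEndpointType="GatewayLoadBalancer",
      ServiceName=svc["ServiceName"],
      VpcId="vpc-consumer",
      SubnetIds=["subnet-gwlbe-az1"],   # separate subnet from the application servers
  )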

Gateway Load Balancer Flow

Traffic from the internet to the application (inbound):

  • Traffic enters the service consumer VPC through the internet gateway.
  • Traffic is sent to the GWLBE, as a result of VPC ingress routing.
  • Traffic is sent to the GWLB for inspection through the security appliance.
  • Traffic is sent back to the GWLBE after inspection.
  • Traffic is sent to the application servers (destination subnet).

Traffic from the application to the internet (outbound):

  • Traffic is sent to the Gateway Load Balancer endpoint due to the default route configured on the application server subnet.
  • Traffic is sent to the GWLB for inspection through the security appliance.
  • Traffic is sent back to the GWLBE after inspection.
  • Traffic is sent to the internet gateway based on the route table configuration.
  • Traffic is routed back to the internet.
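
A hedged sketch of the corresponding route table entries; the route table IDs, GWLBE ID, and CIDR range are placeholders.

  import boto3

  ec2 = boto3.client("ec2")
  GWLBE_ID = "vpce-0123456789abcdef0"   # hypothetical Gateway Load Balancer endpoint ID

  # Application subnet: default route sends outbound traffic to the GWLBE for inspection.
  ec2.create_route(
      RouteTableId="rtb-app-subnet",
      DestinationCidrBlock="0.0.0.0/0",
      VpcEndpointId=GWLBE_ID,
  )

  # Ingress routing: an edge route table associated with the internet gateway sends
  # traffic destined for the application subnet to the GWLBE first.
  ec2.create_route(
      RouteTableId="rtb-igw-edge",
      DestinationCidrBlock="10.0.1.0/24",   # application subnet CIDR
      VpcEndpointId=GWLBE_ID,
  )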

Gateway Load Balancer High Availability

AWS Gateway Load Balancer vs Network Firewall

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.

References

AWS_Gateway_Load_Balancer

AWS Shield

  • AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.
  • AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency.
  • AWS Shield detects the following classes of attacks
    • Network Volumetric Attacks (Layer 3)
      • This is a subcategory of infrastructure layer attack vectors.
      • These vectors attempt to saturate the capacity of the targeted network or resource, to deny service to legitimate users.
    • Network Protocol Attacks (Layer 4)
      • This is a subcategory of infrastructure layer attack vectors.
      • These vectors abuse a protocol to deny service to the targeted resource.
      • A common example of a network protocol attack is a TCP SYN flood, which can exhaust connection state on resources like servers, load balancers, or firewalls.
      • A network protocol attack can also be volumetric; for example, a larger TCP SYN flood may intend to saturate the capacity of a network while also exhausting the state of the targeted resource or intermediate resources.
    • Application Layer Attacks (Layer 7)
      • This category of attack vector attempts to deny service to legitimate users by flooding an application with queries that are valid for the target, such as web request floods.

AWS Shield Tiers

AWS Shield Standard

  • provides automatic protections to all customers at no additional charge
  • defends against the most common, frequently occurring network and transport layer DDoS attacks that target websites or applications.
  • with CloudFront and Route 53, comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks is provided.
  • uses techniques such as deterministic packet filtering and priority-based traffic shaping to automatically mitigate basic network layer attacks.

AWS Shield Advanced

  • is a managed service that helps protect the application against external threats, like DDoS attacks, volumetric bots, and vulnerability exploitation attempts.
  • provides additional detection and mitigation against large and sophisticated DDoS attacks, as well as near real-time visibility into attacks.
  • provides integration with AWS WAF, a web application firewall, at no additional charge. It can create WAF rules in the web ACLs to automatically mitigate an attack, or activate them in count-only mode.
  • also gives 24×7 access to the AWS Shield Response Team (SRT) and protection against DDoS-related spikes in EC2, ELB, CloudFront, Global Accelerator, and Route 53 charges.
  • provides DDoS cost protection to safeguard against scaling charges resulting from DDoS-related usage spikes on protected resources.
  • in addition to the network and transport layer attacks, it also detects application layer (Layer 7) attacks such as HTTP floods or DNS query floods by baselining traffic on the application and identifying anomalies.
  • is available globally on all CloudFront, Global Accelerator, and Route 53 edge locations.
  • includes centralized protection management using Firewall Manager, which can automatically
    • configure policies covering multiple accounts and resources
    • audit accounts to find new or unprotected resources, and ensure that Shield Advanced and AWS WAF protections are universally applied.
  • provides complete visibility into DDoS attacks with near real-time notification through CloudWatch and detailed diagnostics on the AWS WAF and AWS Shield console or APIs.
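
A minimal boto3 sketch of enrolling a resource with Shield Advanced; the load balancer ARN is a placeholder, and the account must already have an active Shield Advanced subscription.

  import boto3

  shield = boto3.client("shield")

  # shield.create_subscription()   # enables Shield Advanced on the account (one time)

  shield.create_protection(
      Name="prod-alb-protection",
      ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/prod/abc123",
  )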

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which AWS service has inbuilt DDoS protection?
    1. AWS Shield
    2. AWS CloudWatch
    3. AWS EC2
    4. AWS Inspector
  2. A media company has monetized their APIs to external third parties. During the last month, the platform has come under DDoS attacks multiple times leading to scaling of underlying instances and cost incurred. Which AWS service would help provide cost protection against such spikes, if such situations do occur in the future?
    1. AWS Systems Manager
    2. AWS WAF
    3. AWS Shield Advanced
    4. AWS Inspector
  3. A company is hosting an important revenue generating application. On the last few occasions, the application has come under large DDoS attacks. As a result of this, a lot of users were complaining about the slowness of the application. You need to now avoid these situations in the future and now require 24×7 support from AWS if such situations do occur in the future. Which of the following service can help in this regard?
    1. AWS Shield Advanced
    2. AWS Inspector
    3. AWS WAF
    4. AWS Systems Manager

References

AWS_Shield

AWS Certificate Manager – ACM

  • AWS Certificate Manager – ACM helps easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and internal connected resources.
  • AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.
  • AWS Certificate Manager can help quickly request a certificate, deploy it on ACM-integrated AWS resources, such as ELB, CloudFront distributions, and APIs on API Gateway, and handle certificate renewals.
  • ACM supports importing third-party certificates into the ACM management system.
  • ACM also supports the creation of private certificates for the internal resources and manages the certificate lifecycle centrally.
  • ACM certificates are regional resources.


ACM Limitations

  • does not provide certificates for anything other than the SSL/TLS protocols.
  • cannot use certificates for email encryption.
  • cannot request certificates for Amazon-owned domain names such as those ending in amazonaws.com, cloudfront.net, or elasticbeanstalk.com.
  • cannot download the private key for an ACM certificate.
  • cannot directly install ACM public certificates on an EC2 instance, website, or application; they can only be deployed to ACM-integrated services.
  • certificates are regional resources and cannot be copied between regions. To use a certificate with ELB for the same FQDN or set of FQDNs in more than one AWS Region, you must request or import a certificate for each Region. For certificates provided by AWS Certificate Manager, you must revalidate each domain name in the certificate for each Region.
  • with CloudFront, you must request or import the certificate in the US East (N. Virginia) region.
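
A minimal boto3 sketch of requesting a DNS-validated public certificate; the domain names are placeholders, and us-east-1 is used because that is the Region required for CloudFront.

  import boto3

  acm = boto3.client("acm", region_name="us-east-1")

  response = acm.request_certificate(
      DomainName="www.example.com",                # hypothetical domain
      SubjectAlternativeNames=["example.com"],
      ValidationMethod="DNS",                      # DNS validation enables automatic renewal
  )
  print(response["CertificateArn"])                # associate this ARN with ELB, CloudFront, etc.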

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company hosts an online shopping portal in the AWS Cloud. The portal provides HTTPS security by using a TLS certificate on an ELB. Recently, the portal suffered an outage because the TLS certificate expired. A SysOps administrator must create a solution to automatically renew certificates to avoid this issue in the future. What is the MOST operationally efficient solution that meets these requirements?
    1. Request a public certificate by using AWS Certificate Manager. Associate the certificate from ACM with the ELB. Write a scheduled AWS Lambda function to renew the certificate every 18 months.
    2. Request a public certificate by using AWS Certificate Manager. Associate the certificate from ACM with the ELB. ACM will automatically manage the renewal of the certificate.
    3. Register a certificate with a third-party certificate authority (CA). Import this certificate into AWS Certificate Manager. Associate the certificate from ACM with the ELB. ACM will automatically manage the renewal of the certificate.
    4. Register a certificate with a third-party certificate authority (CA). Configure the ELB to import the certificate directly from the CA. Set the certificate refresh cycle on the ELB to refresh when the certificate is within 3 months of the expiration date.

References

AWS_Certificate_Manager

AWS Secrets Manager

  • AWS Secrets Manager helps protect secrets needed to access applications, services, and IT resources.
  • enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
  • secures secrets by encrypting them with encryption keys managed using AWS KMS.
  • offers native secret rotation with built-in integration for RDS, Redshift, and DocumentDB.
  • supports Lambda functions to extend secret rotation to other types of secrets, including API keys and OAuth tokens.
  • supports IAM and resource-based policies for fine-grained access control to secrets and centralized secret rotation audit for resources in the AWS Cloud, third-party services, and on-premises.
  • enables secret replication in multiple AWS regions to support multi-region applications and disaster recovery scenarios.
  • supports private access using VPC interface endpoints (see the sketch below).
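
A minimal boto3 sketch of creating and retrieving a secret; the secret name and values are placeholders.

  import json
  import boto3

  secretsmanager = boto3.client("secretsmanager")

  secretsmanager.create_secret(
      Name="prod/app/db-credentials",                  # hypothetical secret name
      SecretString=json.dumps({"username": "app_user", "password": "initial-password"}),
      # KmsKeyId can point to a customer managed KMS key; the AWS managed key is used by default.
  )

  # Applications retrieve the secret at runtime instead of hard-coding credentials.
  secret = json.loads(
      secretsmanager.get_secret_value(SecretId="prod/app/db-credentials")["SecretString"]
  )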

Secrets Manager with KMS

  • Encryption
    • encrypts a new version of the protected secret data by requesting AWS KMS to generate a new data key from the KMS key.
    • uses this data key for envelope encryption.
    • stores the encrypted data key with the protected secret data.
  • Decryption
    • requests AWS KMS to decrypt the encrypted data key
    • uses the plain text data key to decrypt the protected secret data.
    • never stores the data key in unencrypted form, and always disposes of the data key immediately after use.

Secrets Manager Rotation

  • AWS Secrets Manager enables database credential rotation on a schedule.
  • When Secrets Manager initiates a rotation
    • it uses the provided superuser database credentials to create a clone user with the same privileges, but with a different password.
    • it then makes the clone user's credentials available to applications that retrieve the database credentials from the secret.
    • integrates with CloudWatch Events to send a notification when it rotates a secret.
  • Credentials rotation does not impact the already open connections as they are not re-authenticated. Authentication happens when a connection is established.
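
A hedged sketch of enabling rotation on a secret with a rotation Lambda function; the ARNs are placeholders (for RDS, Redshift, and DocumentDB the function can be created from an AWS-provided template).

  import boto3

  secretsmanager = boto3.client("secretsmanager")

  secretsmanager.rotate_secret(
      SecretId="prod/app/db-credentials",
      RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret",
      RotationRules={"AutomaticallyAfterDays": 30},   # rotation schedule
  )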

Secrets Manager vs Systems Manager Parameter Store

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which AWS service makes it easy for you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle?
    1. AWS WAF
    2. AWS Secrets Manager
    3. AWS Systems Manager
    4. AWS Shield
  2. A company uses Amazon RDS for PostgreSQL databases for its data tier. The company must implement password rotation for the databases. Which solution meets this requirement with the LEAST operational overhead?
    1. Store the password in AWS Secrets Manager. Enable automatic rotation on the secret.
    2. Store the password in AWS Systems Manager Parameter Store. Enable automatic rotation on the parameter.
    3. Store the password in AWS Systems Manager Parameter Store. Write an AWS Lambda function that rotates the password.
    4. Store the password in AWS Key Management Service (AWS KMS). Enable automatic rotation on the customer master key (CMK).

References

AWS_Secrets_Manager

Envelope Encryption

  • AWS KMS and Google Cloud KMS use a method called envelope encryption to protect the data.
  • Envelope encryption is an optimized method for encrypting data that uses two different keys (a master key and a data key).
  • Master key is also known as Key Encryption Key – KEK and Data key is also known as Data Encryption Key – DEK.
  • Envelope encryption reduces the network load for the application or the cloud service as only the request and fulfillment of the much smaller data key through KMS must go over the network.
  • The data key is used locally by the encrypting service, avoiding the need to send the entire block of data to KMS and suffer network latency.

Root Key

  • A root key is an encryption key that is used to encrypt other encryption keys, such as data keys and key encryption keys. Unlike data keys and key encryption keys, root keys must be kept in plaintext so they can be used to decrypt the keys that they encrypted.
  • Key Management Service (KMS) generates and protects the root keys.

Key Encryption Key – KEK

  • A key encryption key is an encryption key that is used to encrypt a data key or another key encryption key.
  • To protect the key encryption key, it is encrypted by using a root key.

Data Encryption Key – DEK

  • A data key or data encryption key is an encryption key that is used to protect data.
  • Data keys differ from root keys and key encryption keys, which are typically used to encrypt other encryption keys.

Envelope Encryption Process

  • Encryption
    • Create a Master Key
    • Generate data key.
    • KMS returns the plaintext and ciphertext of the data key.
    • The plaintext data key is used to encrypt each piece of data or resource.
    • The data key is encrypted under a master key defined in KMS.
    • The encrypted data key and encrypted data are then stored by the service.
  • Decryption
    • Retrieve the ciphertext data key and encrypted files from the persistent storage device or service.
    • Decrypt the ciphertext data key using the Master key. The plaintext data key is returned.
    • Use the plaintext data key to decrypt the files.
  • When the master key changes, re-encrypting only the data key is more efficient than re-encrypting the raw data under the new key, because it is quicker and produces a much smaller ciphertext (the sketch below illustrates the flow with AWS KMS).
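
A minimal sketch of the envelope flow with AWS KMS and a locally applied AES-GCM cipher; the key alias and plaintext are placeholders, and the cryptography library is assumed for the local encryption step.

  import os
  import boto3
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  kms = boto3.client("kms")
  KEY_ID = "alias/my-master-key"   # hypothetical KMS key (the master key)

  # Encryption: KMS returns the data key in both plaintext and encrypted form.
  data_key = kms.generate_data_key(KeyId=KEY_ID, KeySpec="AES_256")
  nonce = os.urandom(12)
  ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, b"my secret data", None)
  # Store ciphertext, nonce, and data_key["CiphertextBlob"] together; discard the plaintext key.

  # Decryption: ask KMS to decrypt the stored data key, then decrypt the data locally.
  plaintext_key = kms.decrypt(CiphertextBlob=data_key["CiphertextBlob"])["Plaintext"]
  plaintext = AESGCM(plaintext_key).decrypt(nonce, ciphertext, None)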

Envelope Encryption Benefits

  • Protecting data keys
    • When you encrypt a data key, you don’t have to worry about storing the encrypted data key, because the data key is inherently protected by encryption. You can safely store the encrypted data key alongside the encrypted data.
  • Encrypting the same data under multiple keys
    • Encryption operations can be time-consuming, particularly when the data being encrypted are large objects.
    • Instead of re-encrypting raw data multiple times with different keys, you can re-encrypt only the data keys that protect the raw data.
  • Combining the strengths of multiple algorithms
    • In general, symmetric key algorithms are faster and produce smaller ciphertexts than public key algorithms.
    • But public key algorithms provide inherent separation of roles and easier key management.
    • Envelope encryption lets you combine the strengths of each strategy.