AWS Secrets Manager vs Systems Manager Parameter Store


  • AWS Secrets Manager helps protect secrets needed to access applications, services, and IT resources and can easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
  • AWS Systems Manager Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management and can store data such as passwords, database strings, etc.

AWS Secrets Manager vs Systems Manager Parameter Store

  • Storage (Limits keep on upgrading)
    • AWS Systems Manager Parameter Store allows us to store up to
      • Standard tier – 10,000 parameters, each of which can be up to 4KB
      • Advanced tier – 100,000 parameters, each of which can be up to 8KB
    • AWS Secrets Manager enables us to store up to 40,000 secrets, each of which can be up to 64 KB.
  • Encryption
    • Encryption is optional for Systems Manager Parameter Store
    • Encryption is mandatory for Secrets Manager and you cannot opt out.
  • Automated Secret Rotation
    • Systems Manager Parameter Store does not support out-of-the-box secrets rotation.
    • AWS Secrets Manager enables database credential rotation on a schedule.
  • Cross-account Access
    • Systems Manager Parameter Store does not support cross-account access
    • AWS Secrets Manager supports resource-based IAM policies that grant cross-account access.
  • Cost (keeps on changing)
    • Secrets Manager is comparatively costlier than Systems Manager Parameter Store.
    • AWS Systems Manager Parameter Store comes with no additional cost for the Standard tier.
    • AWS Secrets Manager costs $0.40 per secret per month, and data retrieval costs $0.05 per 10,000 API calls.
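
As a rough illustration of the pricing difference, the Secrets Manager figures quoted above can be turned into a small estimator. The prices are hardcoded from this section and do change, so treat this as a sketch, not current AWS pricing:

```python
# Rough monthly-cost estimator for AWS Secrets Manager, using the
# prices quoted above: $0.40 per secret per month and $0.05 per
# 10,000 API calls. (Standard-tier Parameter Store storage is free.)
def secrets_manager_monthly_cost(num_secrets: int, api_calls: int) -> float:
    storage_cost = num_secrets * 0.40
    api_cost = (api_calls / 10_000) * 0.05
    return round(storage_cost + api_cost, 2)

# e.g. 10 secrets retrieved 100,000 times in a month
print(secrets_manager_monthly_cost(10, 100_000))  # 4.5
```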

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, so both the answers and questions might soon be outdated; research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. A company uses Amazon RDS for PostgreSQL databases for its data tier. The company must implement password rotation for the databases. Which solution meets this requirement with the LEAST operational overhead?
    1. Store the password in AWS Secrets Manager. Enable automatic rotation on the secret.
    2. Store the password in AWS Systems Manager Parameter Store. Enable automatic rotation on the parameter.
    3. Store the password in AWS Systems Manager Parameter Store. Write an AWS Lambda function that rotates the password.
    4. Store the password in AWS Key Management Service (AWS KMS). Enable automatic rotation on the customer master key (CMK).

AWS EC2 Image Builder


  • EC2 Image Builder is a fully managed AWS service that makes it easier to automate the creation, management, and deployment of customized, secure, and up-to-date server images that are pre-installed and pre-configured with software and settings to meet specific IT standards
  • EC2 Image Builder simplifies the building, testing, and deployment of Virtual Machine and container images for use on AWS or on-premises.
  • Image Builder significantly reduces the effort of keeping images up-to-date and secure by providing a simple graphical interface, built-in automation, and AWS-provided security settings.
  • Image Builder removes manual steps for updating an image, without the need to build your own automation pipeline.
  • Image Builder provides a one-stop-shop to build, secure, and test up-to-date Virtual Machine and container images using common workflows.
  • Image Builder allows image validation for functionality, compatibility, and security compliance with AWS-provided tests and your own tests before using them in production.
  • Image Builder is offered at no cost, other than the cost of the underlying AWS resources used to create, store, and share the images.


AWS Certification Exam Practice Questions

  1. A company is running a website on Amazon EC2 instances that are in an Auto Scaling group. When the website traffic increases, additional instances take several minutes to become available because of a long-running user data script that installs software. An AWS engineer must decrease the time that is required for new instances to become available. Which action should the engineer take to meet this requirement?
    1. Reduce the scaling thresholds so that instances are added before traffic increases.
    2. Purchase Reserved Instances to cover 100% of the maximum capacity of the Auto Scaling group.
    3. Update the Auto Scaling group to launch instances that have a storage optimized instance type.
    4. Use EC2 Image Builder to prepare an Amazon Machine Image (AMI) that has pre-installed software.


AWS RDS Aurora Serverless

Aurora Serverless

  • Amazon Aurora Serverless is an on-demand, autoscaling configuration for the MySQL-compatible and PostgreSQL-compatible editions of Aurora.
  • An Aurora Serverless DB cluster automatically starts up, shuts down, and scales capacity up or down based on the application’s needs.
  • enables running a database in the cloud without managing any database instances.
  • provides a relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads.
  • use Cases include
    • Infrequently-Used Applications
    • New Applications – where the needs and instance size is yet to be determined.
    • Variable and Unpredictable Workloads – scale as per the needs
    • Development and Test Databases
    • Multi-tenant Applications
  • DB cluster does not have a public IP address and can be accessed only from within a VPC, based on the Amazon VPC service.

Aurora Architecture

 Aurora Serverless Architecture

  • Aurora Serverless separates Storage and Compute, so it can scale down to zero processing and you pay only for storage.
  • A database endpoint is created without specifying the DB instance class size.
  • Minimum and maximum capacity is set in terms of Aurora capacity units (ACUs). Each ACU is a combination of processing and memory capacity.
  • Database storage automatically scales from 10 GiB to 64 TiB, the same as storage in a standard Aurora DB cluster.
  • The minimum Aurora capacity unit is the lowest ACU to which the DB cluster can scale down. The maximum Aurora capacity unit is the highest ACU to which the DB cluster can scale up. Based on the settings, Aurora Serverless automatically creates scaling rules for thresholds for CPU utilization, connections, and available memory.
  • Database endpoint connects to a proxy fleet that routes the workload to a fleet of resources that are automatically scaled.
  • Aurora Serverless manages the connections automatically.
  • Proxy fleet enables continuous connections as Aurora Serverless scales the resources automatically based on the minimum and maximum capacity specifications.
  • Database client applications don’t need to change to use the proxy fleet.
  • Scaling is rapid because it uses a pool of “warm” resources that are always ready to service requests.
  • Aurora Serverless supports Automatic Pause where the DB cluster can be paused after a given amount of time with no activity. The default inactivity timeout is five minutes. Pausing the DB cluster can be disabled.
  • Automatic Pause reduces the compute charges to zero and only storage is charged. If database connections are requested when an Aurora Serverless DB cluster is paused, the DB cluster automatically resumes and services the connection requests.
  • When the DB cluster resumes activity, it has the same capacity as it had when Aurora paused the cluster. The number of ACUs depends on how much Aurora scaled the cluster up or down before pausing it.

Aurora Serverless and Failover

  • Aurora Serverless compute layer is placed in a Single AZ
  • separates computation capacity and storage, and the storage volume for the cluster is spread across multiple AZs. The data remains available even if outages affect the DB instance or the associated AZ.
  • supports automatic multi-AZ failover where if the DB instance for a DB cluster becomes unavailable or the Availability Zone (AZ) it is in fails, Aurora recreates the DB instance in a different AZ.
  • failover mechanism takes longer than for an Aurora Provisioned cluster.
  • failover time is currently undefined because it depends on demand and capacity available in other AZs within the given AWS Region

Aurora Serverless Auto Scaling

  • Aurora Serverless automatically scales based on the active database workload (CPU or connections); in some cases, capacity might not scale fast enough to meet a sudden workload change, such as a large number of new transactions.
  • Once a scaling operation is initiated, Aurora Serverless attempts to find a scaling point, which is a point in time at which the database can safely complete scaling.
  • might not be able to find a scaling point and will not scale if there are
    • long-running queries or transactions in progress, or
    • temporary tables or table locks in use.
  • Supports cooldown period
    • After a scale-up, there is a 15-minute cooldown period before a subsequent scale-down
    • After a scale-down, there is a 310-second cooldown period before a subsequent scale-down
  • has no cooldown period for scaling up activities and scales as and when necessary
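
The cooldown rules above can be sketched as a small decision function. The names and structure here are illustrative, not an AWS API; Aurora applies these rules internally:

```python
# Sketch of the Aurora Serverless cooldown rules described above:
# - scaling up is always allowed (no cooldown)
# - scaling down is blocked for 15 minutes after a scale-up
#   and for 310 seconds after a previous scale-down
SCALE_UP_COOLDOWN_S = 15 * 60   # wait after scale-up before scaling down
SCALE_DOWN_COOLDOWN_S = 310     # wait after scale-down before scaling down again

def can_scale(direction: str, now: float,
              last_scale_up: float, last_scale_down: float) -> bool:
    if direction == "up":
        return True  # no cooldown for scaling up
    since_up = now - last_scale_up
    since_down = now - last_scale_down
    return since_up >= SCALE_UP_COOLDOWN_S and since_down >= SCALE_DOWN_COOLDOWN_S

# 5 minutes after a scale-up, a scale-down is still blocked:
print(can_scale("down", now=300, last_scale_up=0, last_scale_down=-10_000))  # False
print(can_scale("up", now=300, last_scale_up=0, last_scale_down=-10_000))    # True
```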



AWS RDS Monitoring & Notification


  • RDS integrates with CloudWatch and provides metrics for monitoring
  • CloudWatch alarms can be created over a single metric that sends an SNS message when the alarm changes state
  • RDS also provides SNS notification whenever any RDS event occurs
  • RDS Performance Insights is a database performance tuning and monitoring feature that helps illustrate the database’s performance and help analyze any issues that affect it
  • RDS Recommendations provides automated recommendations for database resources.

RDS CloudWatch Monitoring

  • RDS DB instance can be monitored using CloudWatch, which collects and processes raw data from RDS into readable, near real-time metrics.
  • Statistics are recorded so that you can access historical information and gain a better perspective on how the service is performing.
  • By default, RDS metric data is automatically sent to CloudWatch in 1-minute periods
  • CloudWatch RDS Metrics
    • BinLogDiskUsage – Amount of disk space occupied by binary logs on the master. Applies to MySQL read replicas.
    • CPUUtilization – Percentage of CPU utilization.
    • DatabaseConnections – Number of database connections in use.
    • DiskQueueDepth – The number of outstanding IOs (read/write requests) waiting to access the disk.
    • FreeableMemory – Amount of available random access memory.
    • FreeStorageSpace – Amount of available storage space.
    • ReplicaLag – Amount of time a Read Replica DB instance lags behind the source DB instance.
    • SwapUsage – Amount of swap space used on the DB instance.
    • ReadIOPS – Average number of disk read I/O operations per second.
    • WriteIOPS – Average number of disk write I/O operations per second.
    • ReadLatency – Average amount of time taken per disk read I/O operation.
    • WriteLatency – Average amount of time taken per disk write I/O operation.
    • ReadThroughput – Average number of bytes read from disk per second.
    • WriteThroughput – Average number of bytes written to disk per second.
    • NetworkReceiveThroughput – Incoming (Receive) network traffic on the DB instance, including both customer database traffic and Amazon RDS traffic used for monitoring and replication.
    • NetworkTransmitThroughput – Outgoing (Transmit) network traffic on the DB instance, including both customer database traffic and Amazon RDS traffic used for monitoring and replication.
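
A CloudWatch alarm over one of these metrics moves to the ALARM state when the metric breaches the threshold for a number of consecutive evaluation periods. A minimal pure-Python sketch of that evaluation logic (no AWS calls; the datapoint list stands in for the metric stream):

```python
# Minimal sketch of how a CloudWatch alarm on an RDS metric behaves:
# the alarm fires when the metric breaches the threshold for N
# consecutive evaluation periods.
def evaluate_alarm(datapoints, threshold, evaluation_periods):
    """Return True (ALARM) if the last `evaluation_periods`
    datapoints all exceed `threshold`."""
    if len(datapoints) < evaluation_periods:
        return False
    recent = datapoints[-evaluation_periods:]
    return all(v > threshold for v in recent)

# CPUUtilization samples (percent), alarming at > 80% for 3 periods:
cpu = [55, 70, 85, 90, 88]
print(evaluate_alarm(cpu, threshold=80, evaluation_periods=3))  # True
```

In RDS terms, the alarm would then publish an SNS message on the state change, as described above.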

RDS Enhanced Monitoring

  • RDS provides metrics in real-time for the operating system (OS) that the DB instance runs on.
  • By default, Enhanced Monitoring metrics are stored for 30 days in the CloudWatch Logs, which are different from typical CloudWatch metrics.

CloudWatch vs Enhanced Monitoring Metrics

  • CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance.
  • Enhanced Monitoring metrics are useful to understand how different processes or threads on a DB instance use the CPU.
  • There might be differences between the measurements because the hypervisor layer performs a small amount of work. The differences can be greater if the DB instances use smaller instance classes because then there are likely more virtual machines (VMs) that are managed by the hypervisor layer on a single physical instance.

RDS Performance Insights

  • Performance Insights is a database performance tuning and monitoring feature that helps check the database’s performance and helps analyze any issues that affect it.
  • Database load is measured using a metric called Average Active Sessions or AAS which is calculated by sampling memory to determine the state of each active database connection.
  • AAS is the total number of sessions divided by the total number of samples for a specific time period.
  • Performance Insights help visualize the database load and filter the load by waits, SQL statements, hosts, or users.
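
The AAS definition above can be computed directly from sampled session counts (illustrative; Performance Insights does this sampling internally):

```python
# Average Active Sessions (AAS) as defined above: the total number
# of active sessions observed across all samples, divided by the
# number of samples taken in the time period.
def average_active_sessions(samples: list) -> float:
    """`samples` holds the count of active sessions at each sample point."""
    return sum(samples) / len(samples)

# 4 samples of active database connections over a period:
print(average_active_sessions([2, 4, 6, 4]))  # 4.0
```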

RDS CloudTrail Logs

  • CloudTrail provides a record of actions taken by a user, role, or an AWS service in RDS.
  • CloudTrail captures all API calls for RDS as events, including calls from the console and from code calls to RDS API operations.
  • CloudTrail can help determine the request that was made to RDS, the IP address from which the request was made, who made the request, when it was made, and additional details.

RDS Recommendations

  • RDS provides automated recommendations for database resources.
  • The recommendations provide best practice guidance by analyzing DB instance configuration, usage, and performance data.

RDS Event Notification

  • RDS uses SNS to provide notifications when an RDS event occurs
  • RDS groups the events into categories, which can be subscribed to so that a notification is sent when an event in that category occurs.
  • Event categories for a DB instance, DB cluster, DB snapshot, DB cluster snapshot, DB security group, or DB parameter group can be subscribed to
  • Event notifications are sent to the email addresses provided during subscription creation
  • Subscriptions can be easily turned off without deleting a subscription by setting the Enabled radio button to No in the RDS console or by setting the Enabled parameter to false using the CLI or RDS API.

RDS Trusted Advisor

  • Trusted Advisor inspects the AWS environment and then makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps.
  • Trusted Advisor has the following RDS-related checks:
    • RDS Idle DB Instances
    • RDS Security Group Access Risk
    • RDS Backups
    • RDS Multi-AZ

AWS Certification Exam Practice Questions

  1. You run a web application with the following components Elastic Load Balancer (ELB), 3 Web/Application servers, 1 MySQL RDS database with read replicas, and Amazon Simple Storage Service (Amazon S3) for static content. Average response time for users is increasing slowly. What three CloudWatch RDS metrics will allow you to identify if the database is the bottleneck? Choose 3 answers
    1. The number of outstanding IOs waiting to access the disk
    2. The amount of write latency
    3. The amount of disk space occupied by binary logs on the master.
    4. The amount of time a Read Replica DB Instance lags behind the source DB Instance
    5. The average number of disk I/O operations per second.
  2. Typically, you want your application to check whether a request generated an error before you spend any time processing results. The easiest way to find out if an error occurred is to look for an __________ node in the response from the Amazon RDS API.
    1. Incorrect
    2. Error
    3. FALSE
  3. In the Amazon CloudWatch, which metric should I be checking to ensure that your DB Instance has enough free storage space?
    1. FreeStorage
    2. FreeStorageSpace
    3. FreeStorageVolume
    4. FreeDBStorageSpace
  4. A user is receiving a notification from the RDS DB whenever there is a change in the DB security group. The user does not want to receive these notifications for only a month. Thus, he does not want to delete the notification. How can the user configure this?
    1. Change the Disable button for notification to “Yes” in the RDS console
    2. Set the send mail flag to false in the DB event notification console
    3. The only option is to delete the notification from the console
    4. Change the Enable button for notification to “No” in the RDS console
  5. A sys admin is planning to subscribe to the RDS event notifications. For which of the below mentioned source categories the subscription cannot be configured?
    1. DB security group
    2. DB snapshot
    3. DB options group
    4. DB parameter group
  6. A user is planning to setup notifications on the RDS DB for a snapshot. Which of the below mentioned event categories is not supported by RDS for this snapshot source type?
    1. Backup (Refer link)
    2. Creation
    3. Deletion
    4. Restoration
  7. A system admin is planning to setup event notifications on RDS. Which of the below mentioned services will help the admin setup notifications?
    1. AWS SES
    2. AWS Cloudtrail
    3. AWS CloudWatch
    4. AWS SNS
  8. A user has setup an RDS DB with Oracle. The user wants to get notifications when someone modifies the security group of that DB. How can the user configure that?
    1. It is not possible to get the notifications on a change in the security group
    2. Configure SNS to monitor security group changes
    3. Configure event notification on the DB security group
    4. Configure the CloudWatch alarm on the DB for a change in the security group
  9. It is advised that you watch the Amazon CloudWatch “_____” metric (available via the AWS Management Console or Amazon Cloud Watch APIs) carefully and recreate the Read Replica should it fall behind due to replication errors.
    1. Write Lag
    2. Read Replica
    3. Replica Lag
    4. Single Replica

AWS ElastiCache


  • AWS ElastiCache is a managed web service that helps deploy and run Memcached or Redis protocol-compliant cache clusters in the cloud easily.
  • ElastiCache is available in two flavors: Memcached and Redis.
  • ElastiCache helps
    • simplify and offload the management, monitoring, and operation of in-memory cache environments, enabling the engineering resources to focus on developing applications.
    • automate common administrative tasks required to operate a distributed cache environment.
    • improve the performance of web applications by allowing retrieval of information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases.
    • improve load & response times to user actions and queries, and reduce the cost associated with scaling web applications.
    • automatically detect and replace failed cache nodes, providing a resilient system that mitigates the risk of overloaded databases, which can slow website and application load times.
    • provide enhanced visibility into key performance metrics associated with the cache nodes through integration with CloudWatch.
    • let code, applications, and popular tools already using Memcached or Redis environments work seamlessly, as the service is protocol-compliant with both.
  • ElastiCache provides in-memory caching which can
    • significantly lower latency and improve throughput for many
      • read-heavy application workloads e.g. social networking, gaming, media sharing, and Q&A portals.
      • compute-intensive workloads such as a recommendation engine.
    • improve application performance by storing critical pieces of data in memory for low-latency access.
    • be used to cache the results of I/O-intensive database queries or the results of computationally-intensive calculations.
  • ElastiCache currently allows access only from the EC2 network and cannot be accessed from outside networks like on-premises servers.
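
The read-path benefit described above is typically implemented as the "cache-aside" pattern. A minimal sketch, where a plain dict stands in for the ElastiCache cluster and `load_from_db` is a hypothetical stand-in for a slow disk-based database query:

```python
# Cache-aside sketch: try the in-memory cache first and fall back
# to the (slower) database on a miss, populating the cache so the
# next read is served from memory.
cache = {}  # stands in for a Memcached/Redis client

def load_from_db(key: str) -> str:
    # pretend this is an expensive, disk-based database query
    return f"value-for-{key}"

def get(key: str) -> str:
    if key in cache:             # cache hit: fast in-memory read
        return cache[key]
    value = load_from_db(key)    # cache miss: slow path
    cache[key] = value           # populate cache for next time
    return value

print(get("user:42"))            # value-for-user:42 (miss, loaded from DB)
print("user:42" in cache)        # True (subsequent reads hit the cache)
```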

ElastiCache Redis vs Memcached


Redis

  • Redis is an open source, BSD licensed, advanced key-value cache & store.
  • ElastiCache enables the management, monitoring, and operation of a Redis node; creation, deletion, and modification of the node.
  • ElastiCache for Redis can be used as a primary in-memory key-value data store, providing fast, sub-millisecond data performance, high availability and scalability up to 16 nodes plus up to 5 read replicas, each of up to 3.55 TiB of in-memory data.
  • ElastiCache for Redis supports (similar to RDS features)
    • Redis Master/Slave replication.
    • Multi-AZ operation by creating read replicas in another AZ
    • Backup and Restore feature for persistence using snapshots
  • ElastiCache for Redis can be scaled vertically by selecting a larger node type, or horizontally by adding shards (with cluster mode enabled).
  • Parameter group can be specified for Redis during installation, which acts as a “container” for Redis configuration values that can be applied to one or more Redis primary clusters.
  • Append Only File – AOF
    • provides persistence and can be enabled for recovery scenarios.
    • if a node restarts or service crashes, Redis will replay the updates from an AOF file, thereby recovering the data lost due to the restart or crash.
    • cannot protect against all failure scenarios, because if the underlying hardware fails, a new server would be provisioned and the AOF file would no longer be available to recover the data.
  • Newer ElastiCache for Redis engine versions don’t support the AOF feature; persistence can instead be achieved by snapshotting the Redis data using the Backup and Restore feature.
  • Enabling Redis Multi-AZ is a better approach to fault tolerance, as failing over to a read replica is much faster than rebuilding the primary from an AOF file.

Redis Features

  • High Availability, Fault Tolerance & Auto Recovery
    • Multi-AZ – Automatic failover of a failed primary cluster to a read replica, in Redis clusters that support replication.
    • Fault Tolerance – Flexible AZ placement of nodes and clusters
    • High Availability – Primary instance and a synchronous secondary instance to fail over when problems occur. You can also use read replicas to increase read scaling.
    • Auto-Recovery – Automatic detection of and recovery from cache node failures.
    • Backup & Restore – Automated backups or manual snapshots can be performed. Redis restore process works reliably and efficiently.
  • Performance
    • Data Partitioning – Redis (cluster mode enabled) supports partitioning the data across up to 500 shards.
    • Data Tiering – Provides a price-performance option for Redis workloads by utilizing lower-cost solid state drives (SSDs) in each cluster node in addition to storing data in memory. It is ideal for workloads that access up to 20% of their overall dataset regularly, and for applications that can tolerate additional latency when accessing data on SSD.
  • Security
    • Encryption – Supports encryption in transit and encryption at rest, along with authentication. This support helps you build HIPAA-compliant applications.
    • Access Control – Control access to the ElastiCache for Redis clusters by using AWS IAM to define users and permissions.
    • Supports Redis AUTH or Managed Role-Based Access Control (RBAC).
  • Administration
    • Low Administration – ElastiCache for Redis manages backups, software patching, automatic failure detection, and recovery.
    • Integration with other AWS services such as EC2, CloudWatch, CloudTrail, and SNS.
    • Global Datastore for Redis feature provides a fully managed, fast, reliable, and secure replication across AWS Regions. Cross-Region read replica clusters for ElastiCache for Redis can be created to enable low-latency reads and disaster recovery across AWS Regions.

Redis Read Replica

  • Read Replicas help provide Read scaling and handling failures
  • Read Replicas are kept in sync with the Primary node using Redis’s asynchronous replication technology
  • Redis Read Replicas provides
    • Horizontal scaling beyond the compute or I/O capacity of a single primary node for read-heavy workloads.
    • Serving read traffic while the primary is unavailable either being down due to failure or maintenance
    • Data protection scenarios to promote a Read Replica as the primary node, in case the primary node or the AZ of the primary node fails.
  • ElastiCache supports initiated or forced failover where it flips the DNS record for the primary node to point at the read replica, which is in turn promoted to become the new primary.
  • Read replicas cannot span across regions and may only be provisioned in the same or a different AZ within the same Region as the primary cache node.

Redis Multi-AZ

  • ElastiCache for Redis shard consists of a primary and up to 5 read replicas
  • Redis asynchronously replicates the data from the primary node to the read replicas
  • ElastiCache for Redis Multi-AZ mode
    • provides enhanced availability and a smaller need for administration as the node failover is automatic.
    • impact on the ability to read/write to the primary is limited to the time it takes for automatic failover to complete.
    • no longer needs monitoring of Redis nodes and manually initiating a recovery in the event of a primary node disruption.
  • During certain types of planned maintenance, or in the unlikely event of ElastiCache node failure or AZ failure,
    • it automatically detects the failure,
    • selects a replica, depending upon the read replica with the smallest asynchronous replication lag to the primary, and promotes it to become the new primary node
    • it will also propagate the DNS changes so that the primary endpoint remains the same
  • If Multi-AZ is not enabled,
    • ElastiCache monitors the primary node.
    • in case the node becomes unavailable or unresponsive, it will repair the node by acquiring new service resources.
    • it propagates the DNS endpoint changes to redirect the node’s existing DNS name to point to the new service resources.
    • If the primary node cannot be healed, you will have the choice to promote one of the read replicas to be the new primary.
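
The promotion choice in the Multi-AZ failover described above, picking the replica with the smallest asynchronous replication lag, can be sketched as (illustrative data structures; ElastiCache does this internally):

```python
# Sketch of the failover choice: promote the read replica with the
# smallest asynchronous replication lag to be the new primary.
def pick_replica_to_promote(replica_lags: dict) -> str:
    """`replica_lags` maps replica id -> replication lag in seconds;
    return the id of the replica with the least lag."""
    return min(replica_lags, key=replica_lags.get)

lags = {"replica-1": 0.8, "replica-2": 0.2, "replica-3": 1.5}
print(pick_replica_to_promote(lags))  # replica-2
```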

Redis Backup & Restore

  • Backup and Restore allow users to create snapshots of the Redis clusters.
  • Snapshots can be used for recovery, restoration, archiving purposes, or warm start an ElastiCache for Redis cluster with preloaded data
  • Snapshots can be created on a cluster basis and use Redis’ native mechanism to create and store an RDB file as the snapshot.
  • Increased latencies might be encountered for a brief period at the node while taking a snapshot; it is recommended to take snapshots from a read replica to minimize the performance impact
  • Snapshots can be created either automatically (if configured) or manually
  • ElastiCache for Redis cluster when deleted removes the automatic snapshots. However, manual snapshots are retained.

Redis Cluster Mode

ElastiCache Redis provides the ability to create distinct types of Redis clusters

  • A Redis (cluster mode disabled) cluster
    • always has a single shard with up to 5 read replica nodes.
  • A Redis (cluster mode enabled) cluster
    • has up to 500 shards with 1 to 5 read replica nodes in each.

ElastiCache Redis Cluster Mode

  • Scaling vs Partitioning
    • Redis (cluster mode disabled) supports Horizontal scaling for read capacity by adding or deleting replica nodes, or vertical scaling by scaling up to a larger node type.
    • Redis (cluster mode enabled) supports partitioning the data across up to 500 node groups. The number of shards can be changed dynamically as the demand changes. It also helps spread the load over a greater number of endpoints, which reduces access bottlenecks during peak demand.
  • Node Size vs Number of Nodes
    • Redis (cluster mode disabled) cluster has only one shard and the node type must be large enough to accommodate all the cluster’s data plus necessary overhead.
    • Redis (cluster mode enabled) cluster can have smaller node types as the data can be spread across partitions.
  • Reads vs Writes
    • Redis (cluster mode disabled) cluster can be scaled for reads by adding more read replicas (5 max)
    • Redis (cluster mode enabled) cluster can be scaled for both reads and writes by adding read replicas and multiple shards.
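
The partitioning described above works by hashing each key with CRC16 into one of Redis's 16384 hash slots; each shard owns a range of slots. A pure-Python sketch (the CRC16-XMODEM variant Redis Cluster uses; the equal-range slot-to-shard assignment here is illustrative, real clusters can assign arbitrary slot ranges):

```python
# How Redis (cluster mode enabled) maps a key to a shard:
# slot = CRC16(key) mod 16384, then look up which shard owns that slot.
def crc16(data: bytes) -> int:
    """CRC16-XMODEM (poly 0x1021, init 0), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    return crc16(key.encode()) % 16384

def shard_for(key: str, num_shards: int) -> int:
    # illustrative: each shard owns an equal contiguous slot range
    slots_per_shard = 16384 // num_shards
    return min(key_slot(key) // slots_per_shard, num_shards - 1)

print(0 <= key_slot("user:1001") < 16384)  # True
```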

Memcached

  • Memcached is an in-memory key-value store for small chunks of arbitrary data.
  • ElastiCache for Memcached can be used to cache a variety of objects
    • from the content in persistent data stores such as RDS, DynamoDB, or self-managed databases hosted on EC2
    • dynamically generated web pages e.g. with Nginx
    • transient session data that may not require a persistent backing store
  • ElastiCache for Memcached
    • can be scaled Vertically by increasing the node type size
    • can be scaled Horizontally by adding and removing nodes
    • does not support the persistence of data
  • ElastiCache for Memcached cluster can have
    • nodes that can span across multiple AZs within the same region
    • maximum of 20 nodes per cluster with a maximum of 100 nodes per region (soft limit and can be extended).
  • ElastiCache for Memcached supports auto-discovery, which enables the automatic discovery of cache nodes by clients when they are added to or removed from an ElastiCache cluster.

ElastiCache Mitigating Failures

  • ElastiCache deployments should be designed so that failures have a minimal impact on the application and data.
  • Mitigating Failures when Running Memcached
    • Mitigating Node Failures
      • spread the cached data over more nodes
      • as Memcached does not support replication, a node failure will always result in some data loss from the cluster
      • having more nodes will reduce the proportion of cache data lost
    • Mitigating Availability Zone Failures
      • locate the nodes in as many availability zones as possible, only the data cached in that AZ is lost, not the data cached in the other AZs
  • Mitigating Failures when Running Redis
    • Mitigating Cluster Failures
      • Redis Append Only Files (AOF)
        • enable AOF so whenever data is written to the Redis cluster, a corresponding transaction record is written to a Redis AOF.
        • when the Redis process restarts, ElastiCache creates a replacement cluster, provisions it, and repopulates it with data from the AOF.
        • recovery from the AOF is time-consuming, and the AOF file can grow large.
        • AOF cannot protect against all failure scenarios for e.g. it does not help if the underlying hardware hosting the node fails.
      • Redis Replication Groups
        • A Redis replication group is comprised of a single primary cluster which the application can both read from and write to, and from 1 to 5 read-only replica clusters.
        • Data written to the primary cluster is also asynchronously updated on the read replica clusters.
        • When a Read Replica fails, ElastiCache detects the failure, replaces the instance in the same AZ, and synchronizes with the Primary Cluster.
        • With Multi-AZ and Automatic Failover enabled, ElastiCache detects the Primary cluster failure and promotes the read replica with the least replication lag to primary.
        • With Multi-AZ and Auto Failover disabled, ElastiCache detects the Primary cluster failure, creates a new Primary, and syncs the new Primary with one of the existing replicas.
    • Mitigating Availability Zone Failures
      • locate the clusters in as many availability zones as possible
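Because Memcached has no replication, the main lever for limiting node-failure damage is simply more nodes. A minimal sketch (naive modulo hashing, not the actual ElastiCache client's consistent hashing; names are illustrative) of why losing one node out of N loses roughly 1/N of the cache:

```python
import hashlib

def node_for_key(key: str, num_nodes: int) -> int:
    """Map a cache key to one of num_nodes nodes (naive modulo hashing)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % num_nodes

def fraction_lost_if_node_fails(keys, num_nodes: int, failed_node: int) -> float:
    """Fraction of cached keys lost when one node fails (Memcached has no replication)."""
    lost = sum(1 for k in keys if node_for_key(k, num_nodes) == failed_node)
    return lost / len(keys)

keys = [f"session:{i}" for i in range(10_000)]
# With 2 nodes a single failure loses roughly half the cache;
# with 10 nodes it loses roughly a tenth.
print(fraction_lost_if_node_fails(keys, 2, 0))
print(fraction_lost_if_node_fails(keys, 10, 0))
```

Production clients typically use consistent hashing (which ElastiCache auto-discovery clients support) so that adding or removing a node remaps only a fraction of the keys rather than nearly all of them.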

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. What does Amazon ElastiCache provide?
    1. A service by this name doesn’t exist. Perhaps you mean Amazon CloudCache.
    2. A virtual server with a huge amount of memory.
    3. A managed In-memory cache service
    4. An Amazon EC2 instance with the Memcached software already pre-installed.
  2. You are developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? Choose 3 answers.
    1. Elastic Load Balancing
    2. Amazon Relational Database Service (RDS)
    3. Amazon CloudWatch
    4. Amazon ElastiCache
    5. Amazon DynamoDB
    6. AWS Storage Gateway
  3. Which statement best describes ElastiCache?
    1. Reduces the latency by splitting the workload across multiple AZs
    2. A simple web services interface to create and store multiple data sets, query your data easily, and return the results
    3. Offload the read traffic from your database in order to reduce latency caused by read-heavy workload
    4. Managed service that makes it easy to set up, operate and scale a relational database in the cloud
  4. Our company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)
    1. Deploy ElastiCache in-memory cache running in each availability zone
    2. Implement sharding to distribute load to multiple RDS MySQL instances
    3. Increase the RDS MySQL Instance size and Implement provisioned IOPS
    4. Add an RDS MySQL read replica in each availability zone
  5. You are using ElastiCache Memcached to store session state and cache database queries in your infrastructure. You notice in CloudWatch that Evictions and Get Misses are both very high. What two actions could you take to rectify this? Choose 2 answers
    1. Increase the number of nodes in your cluster
    2. Tweak the max_item_size parameter
    3. Shrink the number of nodes in your cluster
    4. Increase the size of the nodes in the cluster
  6. You have been tasked with moving an ecommerce web application from a customer’s datacenter into a VPC. The application must be fault tolerant as well as highly scalable. Moreover, the customer is adamant that service interruptions not affect the user experience. As you near launch, you discover that the application currently uses multicast to share session state between web servers. In order to handle session state within the VPC, you choose to:
    1. Store session state in Amazon ElastiCache for Redis (scalable and makes the web applications stateless)
    2. Create a mesh VPN between instances and allow multicast on it
    3. Store session state in Amazon Relational Database Service (RDS solution not highly scalable)
    4. Enable session stickiness via Elastic Load Balancing (affects user experience if the instance goes down)
  7. When you are designing to support a 24-hour flash sale, which one of the following methods best describes a strategy to lower the latency while keeping up with unusually heavy traffic?
    1. Launch enhanced networking instances in a placement group to support the heavy traffic (only improves internal communication)
    2. Apply Service Oriented Architecture (SOA) principles instead of a 3-tier architecture (just simplifies architecture)
    3. Use Elastic Beanstalk to enable blue-green deployment (only minimizes download for applications and ease of rollback)
    4. Use ElastiCache as in-memory storage on top of DynamoDB to store user sessions (scalable, faster read/writes and in memory storage)
  8. You are configuring your company’s application to use Auto Scaling and need to move user state information. Which of the following AWS services provides a shared data store with durability and low latency?
    1. AWS ElastiCache Memcached (does not provide durability as if the node is gone the data is gone)
    2. Amazon Simple Storage Service
    3. Amazon EC2 instance storage
    4. Amazon DynamoDB
  9. Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS Instance for data persistence. The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache Cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain the high availability for the application with the anticipated additional load and Why?
    1. You should deploy two Memcached ElastiCache Clusters in different AZs because the RDS Instance will not be able to handle the load if the cache node fails.
    2. If the cache node fails the automated ElastiCache node recovery feature will prevent any availability impact. (does not provide high availability, as data is lost if the node is lost)
    3. Yes you should deploy the Memcached ElastiCache Cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails. (Single AZ affects availability as DB is Multi AZ and would be overloaded if the AZ goes down)
    4. No if the cache node fails you can always get the same data from the DB without having any availability impact. (Will overload the database affecting availability)
  10. A read-only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations automatically. What AWS services should be used to meet these requirements?
    1. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch and RDS with read replicas.
    2. Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch and RDS with read replicas (Stateful instances will not allow for scaling)
    3. Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch and multi-AZ RDS (Stateful instances will not allow for scaling & multi-AZ is for high availability and not scaling)
    4. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch and multi-AZ RDS (multi-AZ is for high availability and not scaling)
  11. You have written an application that uses the Elastic Load Balancing service to spread traffic to several web servers. Your users complain that they are sometimes forced to login again in the middle of using your application, after they have already logged in. This is not behavior you have designed. What is a possible solution to prevent this happening?
    1. Use instance memory to save session state.
    2. Use instance storage to save session state.
    3. Use EBS to save session state.
    4. Use ElastiCache to save session state.
    5. Use Glacier to save session state.

AWS DynamoDB Secondary Indexes

DynamoDB Secondary Indexes - GSI vs LSI

AWS DynamoDB Secondary Indexes

  • DynamoDB provides fast access to items in a table by specifying primary key values
  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • DynamoDB Secondary indexes
    • is a data structure that contains a subset of attributes from a table.
    • is associated with exactly one table, from which it obtains its data.
    • requires an alternate key for the index partition key and sort key.
    • additionally can define projected attributes that are copied from the base table into the index along with the primary key attributes.
    • is automatically maintained by DynamoDB.
    • indexes on that table are also updated for any addition, modification, or deletion of items in the base table.
    • helps reduce the size of the data as compared to the main table, depending upon the projected attributes, and hence helps improve provisioned throughput performance
    • are automatically maintained as sparse objects. Items will only appear in an index if they exist in the table on which the index is defined, making queries against an index very efficient
  • DynamoDB Secondary indexes support two types
    • Global secondary index – an index with a partition key and a sort key that can be different from those on the base table.
    • Local secondary index – an index that has the same partition key as the base table, but a different sort key.
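As a sketch, the two index types above might be declared together in a single CreateTable request (shown here in boto3-style parameter form; the GameScores table, index names, and attributes are hypothetical):

```python
# Hypothetical "GameScores" table: partition key UserId, sort key GameTitle.
# The LSI reuses the table's partition key with a different sort key; the GSI
# uses entirely different keys and carries its own provisioned throughput.
create_table_params = {
    "TableName": "GameScores",
    "KeySchema": [
        {"AttributeName": "UserId", "KeyType": "HASH"},
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},
    ],
    "AttributeDefinitions": [
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
        {"AttributeName": "Timestamp", "AttributeType": "N"},
        {"AttributeName": "TopScore", "AttributeType": "N"},
    ],
    "LocalSecondaryIndexes": [{
        "IndexName": "ByTimestamp",
        "KeySchema": [
            {"AttributeName": "UserId", "KeyType": "HASH"},      # same as table
            {"AttributeName": "Timestamp", "KeyType": "RANGE"},  # different sort key
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    "GlobalSecondaryIndexes": [{
        "IndexName": "ByGameTopScore",
        "KeySchema": [
            {"AttributeName": "GameTitle", "KeyType": "HASH"},   # different partition key
            {"AttributeName": "TopScore", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
        # GSIs manage throughput independently of the base table.
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}
```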

Global Secondary Indexes – GSI

  • DynamoDB creates and maintains indexes for the primary key attributes for efficient access to data in the table, which allows applications to quickly retrieve data by specifying primary key values.
  • Global Secondary Indexes – GSI are indexes that contain partition or composite partition-and-sort keys that can be different from the keys in the table on which the index is based.
  • Global secondary index is considered “global” because queries on the index can span all items in a table, across all partitions.
  • Multiple secondary indexes can be created on a table, and queries issued against these indexes.
  • Applications benefit from having one or more secondary keys available to allow efficient access to data with attributes other than the primary key.
  • GSIs support non-unique attributes, which increases query flexibility by enabling queries against any non-key attribute in the table
  • GSIs support eventual consistency only. DynamoDB asynchronously and automatically handles item additions, updates, and deletes in a GSI when corresponding changes are made to the table
  • Data in a secondary index consists of GSI alternate key, primary key and attributes that are projected, or copied, from the table into the index.
  • Attributes that are part of an item in a table, but not part of the GSI key, the primary key of the table, or projected attributes are not returned on querying the GSI index.
  • GSIs manage throughput independently of the table they are based on and the provisioned throughput for the table and each associated GSI needs to be specified at the creation time.
    • Read provisioned throughput
      • provides one Read Capacity Unit with two eventually consistent reads per second for items < 4KB in size.
    • Write provisioned throughput
      • provides one Write Capacity Unit with one write per second for items < 1KB in size.
      • consumes 1 write capacity unit if,
        • a new item is inserted into the table
        • existing item is deleted from the table
        • existing items are updated for projected attributes
      • consumes 2 write capacity units if
        • existing item is updated for key attributes, which results in deletion and addition of the new item into the index
  • Throttling on a GSI affects the base table depending on whether the throttling is for read or write activity:
    • When a GSI has insufficient read capacity, the base table isn’t affected.
    • When a GSI has insufficient write capacity, write operations won’t succeed on the base table or any of its GSIs.
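The GSI write-capacity rules above can be sketched as a small helper (a simplification for items under 1KB; the function and its names are illustrative, not an AWS API):

```python
def gsi_write_units(operation: str, key_attrs_changed: bool = False) -> int:
    """WCUs consumed on a GSI per item operation (items < 1KB), per the rules above."""
    if operation in ("insert", "delete"):
        return 1
    if operation == "update":
        # Changing an indexed key attribute means deleting the old index entry
        # and inserting a new one, so it costs two write units.
        return 2 if key_attrs_changed else 1
    raise ValueError(f"unknown operation: {operation}")
```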

Local Secondary Indexes (LSI)

  • Local secondary indexes are indexes that have the same partition key as the table, but a different sort key.
  • Local secondary index is “local” because every partition of a local secondary index is scoped to a table partition that has the same partition key.
  • LSI allows search using a secondary index in place of the sort key, thus expanding the number of attributes that can be used for queries that can be conducted efficiently
  • LSI is updated automatically when the primary index is updated and reads support strong, eventual, and transactional consistency options.
  • LSIs can only be queried via the Query API
  • LSIs cannot be added to existing tables at this time
  • LSIs cannot be modified once created at this time
  • LSIs cannot be removed from a table once created at this time
  • LSI consumes provisioned throughput capacity as part of the table with which it is associated
    • Read Provisioned throughput
      • if the data read consists only of index and projected attributes
        • provides one Read Capacity Unit with one strongly consistent read (or two eventually consistent reads) per second for items < 4KB
        • data size includes the index and projected attributes only
      • if the data read includes a non-projected attribute
        • consumes double the read capacity: one read from the index and one read from the table to fetch the entire item, not just the non-projected attribute
    • Write provisioned throughput
      • consumes 1 write capacity unit if,
        • a new item is inserted into the table
        • existing item is deleted from the table
        • existing items are updated for projected attributes
      • consumes 2 write capacity units if
        • existing item is updated for key attributes, which results in deletion and addition of the new item into the index
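The LSI read-cost rules above can be sketched similarly (a simplification; the doubling for non-projected attributes follows the rule stated above, and the function is illustrative, not an AWS API):

```python
import math

def lsi_read_units(item_kb: float, strongly_consistent: bool, needs_fetch: bool) -> int:
    """RCUs consumed by reading one item via an LSI.

    needs_fetch: True when the query requests attributes not projected into the
    index, forcing DynamoDB to also fetch the entire item from the base table.
    """
    units = math.ceil(item_kb / 4)        # 1 RCU per 4KB, strongly consistent
    if needs_fetch:
        units *= 2                        # one read from the index + one from the table
    if not strongly_consistent:
        units = math.ceil(units / 2)      # eventually consistent reads cost half
    return units
```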

Global Secondary Index vs Local Secondary Index

DynamoDB Secondary Indexes - GSI vs LSI

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. In DynamoDB, a secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key to support ____ operations.
    1. None of the above
    2. Both
    3. Query
    4. Scan
  2. In regard to DynamoDB, what is the Global secondary index?
    1. An index with a partition and sort key that can be different from those on the table
    2. An index that has the same sort key as the table, but a different partition key
    3. An index that has the same partition key and sort key as the table
    4. An index that has the same partition key as the table, but a different sort key
  3. In regard to DynamoDB, can I modify the index once it is created?
    1. Yes, if it is a primary hash key index
    2. Yes, if it is a Global secondary index (AWS now allows you to modify global secondary indexes after creation)
    3. No
    4. Yes, if it is a local secondary index
  4. When thinking of DynamoDB, what is true of Global Secondary Key properties?
    1. Both the partition key and sort key can be different from the table.
    2. Only the partition key can be different from the table.
    3. Either the partition key or the sort key can be different from the table, but not both.
    4. Only the sort key can be different from the table.

References

AWS IAM Access Management

IAM Access Policies

IAM Access Management

  • IAM Access Management is all about Permissions and Policies.
  • Permissions help define who has access & what actions they can perform.
  • IAM Policy helps to fine-tune the permissions granted to the policy owner
  • IAM Policy is a document that formally states one or more permissions.
  • Most restrictive Policy always wins
  • IAM Policy is defined in the JSON (JavaScript Object Notation) format

IAM policy basically states “Principal A is allowed or denied (Effect) to perform Action B on Resource C given Conditions D are satisfied”
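As an illustration, that sentence maps onto the JSON policy grammar like this (the account ID, user, and bucket are hypothetical; the Principal element appears here because the example is written as a resource-based policy):

```python
import json

# "Principal A is allowed (Effect) to perform Action B on Resource C
#  given Condition D is satisfied."
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",                                             # allowed or denied
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},  # Principal A
        "Action": "s3:GetObject",                                      # Action B
        "Resource": "arn:aws:s3:::example-bucket/*",                   # Resource C
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},# Condition D
    }],
}
print(json.dumps(policy, indent=2))
```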

IAM Access Policies

  • An Entity can be associated with Multiple Policies and a Policy can have multiple statements where each statement in a policy refers to a single permission.
  • If the policy includes multiple statements, a logical OR is applied across the statements at evaluation time. Similarly, if multiple policies are applicable to a request, a logical OR is applied across the policies at evaluation time.
  • Principal can either be specified within the Policy for Resource based policies while for Identity based policies the principal is the user, group, or role to which the policy is attached.

Identity-Based vs Resource-Based Permissions

Identity-based, or IAM permissions

  • Identity-based or IAM permissions are attached to an IAM user, group, or role and specify what the user, group, or role can do.
  • User, group, or the role itself acts as a Principal.
  • IAM permissions can be applied to almost all the AWS services.
  • IAM Policies can either be inline or managed (AWS or Customer).
  • IAM Policy’s current version is 2012-10-17.

Resource-based permissions

  • Resource-based permissions are attached to a resource for e.g. S3, SNS 
  • Resource-based permissions specify both who has access to the resource (Principal) and what actions they can perform on it (Actions)
  • Resource-based policies are inline only, not managed.
  • Resource-based permissions are supported only by some AWS services
  • Resource-based policies can be defined with version 2012-10-17 or 2008-10-17

Managed Policies and Inline Policies

  • Managed policies
    • Managed policies are Standalone policies that can be attached to multiple users, groups, and roles in an AWS account.
    • Managed policies apply only to identities (users, groups, and roles) but not to resources.
    • Managed policies allow reusability
    • Managed policy changes are implemented as versions (limited to 5); a new change to an existing policy creates a new version, which is useful to compare the changes and revert, if needed
    • Managed policies have their own ARN
    • Two types of managed policies:
      • AWS managed policies
        • Managed policies that are created and managed by AWS.
        • AWS maintains and can upgrade these policies for e.g. if a new service is introduced, the changes automatically affect all the existing principals attached to the policy
        • AWS takes care not to break the policies for e.g. by adding a restriction or removing a permission that existing principals depend on
        • AWS managed policies cannot be modified by you
      • Customer managed policies
        • Customer managed policies are standalone, custom policies created and administered by you.
        • Customer managed policies allow more precise control over the policies than when using AWS managed policies.
  • Inline policies
    • Inline policies are created and managed by you, and are embedded directly into a single user, group, or role.
    • Deletion of the Entity (User, Group or Role) or Resource deletes the In-Line policy as well

ABAC – Attribute-Based Access Control

  • ABAC – Attribute-based access control is an authorization strategy that defines permissions based on attributes called tags.
  • ABAC policies can be designed to allow operations when the principal’s tag matches the resource tag.
  • ABAC is helpful in environments that are growing rapidly and help with situations where policy management becomes cumbersome.
  • ABAC policies are easier to manage as different policies for different job functions need not be created.
  • ABAC complements RBAC for granular permissions: RBAC allows access only to specific resources, while ABAC can allow actions on all resources, but only if the resource tag matches the principal’s tag.
  • ABAC can help use employee attributes from the corporate directory with federation where attributes are applied to their resulting principal.
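A sketch of what such a tag-matching ABAC policy might look like (the project tag key is a hypothetical example):

```python
# Allow start/stop on any EC2 instance, but only when the instance's
# "project" tag matches the calling principal's "project" tag.
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "*",   # all resources -- the tag condition does the scoping
        "Condition": {
            "StringEquals": {
                "ec2:ResourceTag/project": "${aws:PrincipalTag/project}"
            }
        },
    }],
}
```

Because the condition compares tags rather than naming resources, the same policy keeps working as new projects and instances are added, which is what makes ABAC easier to manage in fast-growing environments.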

IAM Permissions Boundaries

  • Permissions boundary allows using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity.
  • An entity with a permissions boundary can perform only the actions that are allowed by both its identity-based policies and its permissions boundary.
  • Permissions boundary supports both the AWS-managed policy and the customer-managed policy to set the boundary for an IAM entity.
  • Permissions boundary can be applied to an IAM entity (user or role) but is not supported for IAM Group.
  • Permissions boundary does not grant permissions on its own.
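The "allowed by both" rule can be modeled as a set intersection (an illustrative simplification that ignores explicit denies, conditions, and wildcards; the action names are examples):

```python
def effective_permissions(identity_policy_allows: set, boundary_allows: set) -> set:
    """Effective permissions = intersection of the identity-based policy
    and the permissions boundary (absent any explicit deny)."""
    return identity_policy_allows & boundary_allows

identity = {"s3:GetObject", "s3:PutObject", "iam:CreateUser"}
boundary = {"s3:GetObject", "s3:PutObject", "s3:ListBucket"}

# iam:CreateUser is allowed by the identity policy but falls outside the
# boundary, so it is not effectively granted. s3:ListBucket is inside the
# boundary but never granted by the identity policy -- showing that the
# boundary grants nothing on its own.
print(effective_permissions(identity, boundary))
```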

IAM Policy Simulator

  • IAM Policy Simulator helps test and troubleshoot IAM and resource-based policies
  • IAM Policy Simulator can help test in the following ways:
    • Test IAM based policies. If multiple policies are attached, you can test all the policies or select individual policies to test. You can test which actions are allowed or denied by the selected policies for specific resources.
    • Test Resource based policies. However, Resource-based policies cannot be tested standalone and have to be attached to the Resource
    • Test new IAM policies that are not yet attached to a user, group, or role by typing or copying them into the simulator. These are used only in the simulation and are not saved.
    • Test the policies with selected services, actions, and resources
    • Simulate real-world scenarios by providing context keys, such as an IP address or date, that are included in Condition elements in the policies being tested.
    • Identify which specific statement in a policy results in allowing or denying access to a particular resource or action.
  • IAM Policy Simulator does not make an actual AWS service request and hence does not make unwanted changes to the AWS live environment
  • IAM Policy Simulator just reports the result Allowed or Denied
  • IAM Policy Simulator allows you to modify the policy and test. These changes are not propagated to the actual policies attached to the entities
  • Introductory Video for Policy Simulator

IAM Policy Evaluation

When determining if a permission is allowed, the following hierarchy is applied

IAM Permission Policy Evaluation

  1. The decision always starts with a default Deny.
  2. IAM combines and evaluates all the policies.
  3. Explicit Deny
    • First IAM checks for an explicit denial policy.
    • Explicit Deny overrides everything and if something is explicitly denied it can never be allowed.
  4. Explicit Allow
    • If one does not exist, it then checks for an explicit allow policy.
    • For granting the User any permission, the permission must be explicitly allowed
  5. Implicit Deny
    • If neither an explicit deny nor explicit allow policy exists, it reverts to the default: implicit deny.
    • All permissions are implicitly denied by default
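The hierarchy above can be sketched as a toy evaluator (exact-string matching only; real IAM supports wildcards, conditions, NotAction, and more):

```python
def evaluate(policies, action: str, resource: str) -> str:
    """Simplified IAM evaluation: explicit Deny wins, then explicit Allow,
    otherwise the implicit default Deny."""
    decision = "ImplicitDeny"                     # 1. start with the default deny
    for policy in policies:                       # 2. combine all applicable policies
        for stmt in policy["Statement"]:
            if stmt["Action"] == action and stmt["Resource"] == resource:
                if stmt["Effect"] == "Deny":
                    return "ExplicitDeny"         # 3. explicit deny overrides everything
                if stmt["Effect"] == "Allow":
                    decision = "Allow"            # 4. remember an explicit allow
    return decision                               # 5. otherwise implicit deny

policies = [
    {"Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                    "Resource": "arn:aws:s3:::b/*"}]},
    {"Statement": [{"Effect": "Deny", "Action": "s3:PutObject",
                    "Resource": "arn:aws:s3:::b/*"}]},
]
```

With these two policies, a GetObject request is allowed, a PutObject request is explicitly denied, and any other action falls through to the implicit deny.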

IAM Policy Variables

  • Policy variables provide a feature to specify placeholders in a policy.
  • When the policy is evaluated, the policy variables are replaced with values that come from the request itself
  • Policy variables allow a single policy to be applied to a group of users to control access for e.g. each user having access only to the S3 bucket folder with their own name
  • Policy variable is marked using a $ prefix followed by a pair of curly braces ({ }), with the name of the value from the request that you want to use placed inside the ${ } characters
  • Policy variables work only with policies defined with Version 2012-10-17
  • Policy variables can only be used in the Resource element and in string comparisons in the Condition element
  • Policy variables are case sensitive and include variables like aws:username, aws:userid, aws:SourceIp, aws:CurrentTime etc.
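A sketch of how a ${aws:username} placeholder gets resolved at evaluation time (the bucket name is hypothetical and the resolve helper is illustrative, not an AWS API; IAM performs this substitution internally):

```python
# One policy Resource serves every user in a group: each user can only
# reach the folder matching their own username.
policy_resource = "arn:aws:s3:::example-bucket/home/${aws:username}/*"

def resolve(resource: str, request_context: dict) -> str:
    """Replace ${key} placeholders with values taken from the request context."""
    out = resource
    for key, value in request_context.items():
        out = out.replace("${" + key + "}", value)
    return out

print(resolve(policy_resource, {"aws:username": "alice"}))
```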

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. IAM’s Policy Evaluation Logic always starts with a default ____________ for every request, except for those that use the AWS account’s root security credentials
    1. Permit
    2. Deny
    3. Cancel
  2. An organization has created 10 IAM users. The organization wants each of the IAM users to have access to a separate DynamoDB table. All the users are added to the same group and the organization wants to setup a group level policy for this. How can the organization achieve this?
    1. Define the group policy and add a condition which allows the access based on the IAM name
    2. Create a DynamoDB table with the same name as the IAM user name and define the policy rule which grants access based on the DynamoDB ARN using a variable
    3. Create a separate DynamoDB database for each user and configure a policy in the group based on the DB variable
    4. It is not possible to have a group level policy which allows different IAM users to different DynamoDB Tables
  3. An organization has setup multiple IAM users. The organization wants that each IAM user accesses the IAM console only within the organization and not from outside. How can it achieve this?
    1. Create an IAM policy with the security group and use that security group for AWS console login
    2. Create an IAM policy with a condition which denies access when the IP address range is not from the organization
    3. Configure the EC2 instance security group which allows traffic only from the organization’s IP range
    4. Create an IAM policy with VPC and allow a secure gateway between the organization and AWS Console
  4. Can I attach more than one policy to a particular entity?
    1. Yes always
    2. Only if within GovCloud
    3. No
    4. Only if within VPC
  5. A __________ is a document that provides a formal statement of one or more permissions.
    1. policy
    2. permission
    3. Role
    4. resource
  6. A __________ is the concept of allowing (or disallowing) an entity such as a user, group, or role some type of access to one or more resources.
    1. user
    2. AWS Account
    3. resource
    4. permission
  7. True or False: When using IAM to control access to your RDS resources, the key names that can be used are case sensitive. For example, aws:CurrentTime is NOT equivalent to AWS:currenttime.
    1. TRUE
    2. FALSE (Refer link)
  8. A user has set an IAM policy where it allows all requests if a request from IP 10.10.10.1/32. Another policy allows all the requests between 5 PM to 7 PM. What will happen when a user is requesting access from IP 10.10.10.1/32 at 6 PM?
    1. IAM will throw an error for policy conflict
    2. It is not possible to set a policy based on the time or IP
    3. It will deny access
    4. It will allow access
  9. Which of the following are correct statements with policy evaluation logic in AWS Identity and Access Management? Choose 2 answers.
    1. By default, all requests are denied
    2. An explicit allow overrides an explicit deny
    3. An explicit allow overrides default deny
    4. An explicit deny does not override an explicit allow
    5. By default, all request are allowed
  10. A web design company currently runs several FTP servers that their 250 customers use to upload and download large graphic files. They wish to move this system to AWS to make it more scalable, but they wish to maintain customer privacy and keep costs to a minimum. What AWS architecture would you recommend? [PROFESSIONAL]
    1. Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM Users in a Group that has an IAM policy that permits access to subdirectories within the bucket via use of the ‘username’ Policy variable.
    2. Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a Bucket Policy that permits access only to that one customer. (Creating a bucket for each customer is not a scalable model; also, there was earlier a 100-bucket limit per account, which has since been raised)
    3. Create an auto-scaling group of FTP servers with a scaling policy to automatically scale-in when minimum network traffic on the auto-scaling group is below a given threshold. Load a central list of FTP users from S3 as part of the User Data startup script on each Instance (Expensive)
    4. Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a Bucket Policy that permits access only to that one customer. (Creating a bucket for each customer is not a scalable model; also, there was earlier a 100-bucket limit per account, which has since been raised)

AWS DynamoDB Advanced Features

AWS DynamoDB Advanced Features

  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • DynamoDB Time to Live – TTL enables a per-item timestamp to determine when an item is no longer needed.
  • DynamoDB cross-region replication allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
  • DynamoDB Global Tables is a new multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table.
  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • DynamoDB Accelerator – DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from ms to µs – even at millions of requests per second.
  • VPC Gateway Endpoints provide private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.

DynamoDB Secondary Indexes

  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • Global secondary index – an index with a partition key and a sort key that can be different from those on the base table.
  • Local secondary index – an index that has the same partition key as the base table, but a different sort key.
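The two index types above can be illustrated with a sketch of boto3 `create_table` parameters. The table and attribute names (GameScores, UserId, GameTitle, TopScore) are made up for this example:

```python
# Illustrative create_table parameters declaring one LSI and one GSI.
# All table/attribute/index names here are hypothetical.
create_table_params = {
    "TableName": "GameScores",
    "KeySchema": [
        {"AttributeName": "UserId", "KeyType": "HASH"},      # partition key
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},  # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
        {"AttributeName": "TopScore", "AttributeType": "N"},
    ],
    # LSI: same partition key as the base table, different sort key
    "LocalSecondaryIndexes": [{
        "IndexName": "UserTopScoreIndex",
        "KeySchema": [
            {"AttributeName": "UserId", "KeyType": "HASH"},
            {"AttributeName": "TopScore", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    # GSI: partition and sort keys can both differ from the base table
    "GlobalSecondaryIndexes": [{
        "IndexName": "GameTopScoreIndex",
        "KeySchema": [
            {"AttributeName": "GameTitle", "KeyType": "HASH"},
            {"AttributeName": "TopScore", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    "BillingMode": "PAY_PER_REQUEST",
}
# boto3.client("dynamodb").create_table(**create_table_params) would create it.
```

Note how the LSI reuses the table's partition key (UserId) while the GSI keys on a completely different attribute (GameTitle).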

DynamoDB TTL

  • DynamoDB Time to Live (TTL) enables a per-item timestamp to determine when an item is no longer needed.
  • After the date and time of the specified timestamp, DynamoDB deletes the item from the table without consuming any write throughput.
  • DynamoDB TTL is provided at no extra cost and can help reduce data storage by retaining only required data.
  • Items that are deleted from the table are also removed from any local secondary index and global secondary index in the same way as a DeleteItem operation.
  • Expired items get removed from the table and indexes within about 48 hours.
  • DynamoDB Stream tracks the delete operation as a system delete and not a regular delete.
  • TTL is useful if the stored items lose relevance after a specific time, e.g.
    • Remove user or sensor data after a year of inactivity in an application
    • Archive expired items to an S3 data lake via DynamoDB Streams and AWS Lambda.
    • Retain sensitive data for a certain amount of time according to contractual or regulatory obligations.
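The "remove after a year" use case above can be sketched as follows. The attribute name `expireAt` and the item key are hypothetical; the TTL attribute just needs to hold an epoch-seconds Number that DynamoDB compares against the current time:

```python
import time

# Retain items for one year; after this epoch timestamp passes, DynamoDB
# deletes the item without consuming write throughput.
ONE_YEAR_SECONDS = 365 * 24 * 60 * 60

def ttl_epoch(now_epoch: int, retain_seconds: int = ONE_YEAR_SECONDS) -> int:
    """Return the epoch-seconds value to store in the TTL attribute."""
    return now_epoch + retain_seconds

# Illustrative item for put_item: the TTL attribute ("expireAt" here is a
# made-up name, configured on the table) must be a Number in epoch seconds.
item = {
    "UserId": {"S": "user-123"},  # hypothetical key
    "expireAt": {"N": str(ttl_epoch(int(time.time())))},
}
```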

DynamoDB Cross-region Replication

  • DynamoDB cross-region replication allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
  • Writes to the table will be automatically propagated to all replicas.
  • Cross-region replication currently supports a single master mode. A single master has one master table and one or more replica tables.
  • Replica tables are updated asynchronously: DynamoDB acknowledges a write operation as successful once it has been accepted by the master table, and the write is then propagated to each replica with a slight delay.
  • Cross-region replication can be helpful in scenarios such as
    • Efficient disaster recovery, in case a data center failure occurs.
    • Faster reads, for customers in multiple regions by delivering data faster by reading a DynamoDB table from the closest AWS data center.
    • Easier traffic management, to distribute the read workload across tables and thereby consume less read capacity in the master table.
    • Easy regional migration, by promoting a read replica to master
    • Live data migration, to replicate data and when the tables are in sync, switch the application to write to the destination region
  • Cross-region replication costing depends on
    • Provisioned throughput (Writes and Reads)
    • Storage for the replica tables.
    • Data Transfer across regions
    • Reading data from DynamoDB Streams to keep the tables in sync.
    • Cost of EC2 instances provisioned, depending upon the instance types and region, to host the replication process.
  • NOTE: Before DynamoDB Streams and out-of-the-box cross-region replication support, cross-region replication was performed by defining an AWS Data Pipeline job that used EMR internally to transfer the data.

DynamoDB Global Tables

  • DynamoDB Global Tables is a multi-master, active-active, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • Applications can now perform reads and writes to DynamoDB in AWS regions around the world, with changes in any region propagated to every region where a table is replicated.
  • Global Tables help in building applications that take advantage of data locality to reduce overall latency.
  • Global Tables supports eventual consistency & strong consistency for same region reads, but only eventual consistency for cross-region reads.
  • Global Tables replicates data among regions within a single AWS account and currently does not support cross-account access.
  • Global Tables uses the Last Write Wins approach for conflict resolution.
  • Global Tables requires DynamoDB Streams enabled with the New and Old Images setting.
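Because Global Tables require a stream carrying both the new and old item images, enabling it might look like this boto3 `update_table` parameter sketch (the table name is illustrative):

```python
# Illustrative update_table parameters enabling a DynamoDB stream with both
# new and old item images, as Global Tables requires. Table name is made up.
enable_stream_params = {
    "TableName": "GameScores",
    "StreamSpecification": {
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
}
# boto3.client("dynamodb").update_table(**enable_stream_params) would apply it.
```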

DynamoDB Streams

  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table.
  • DynamoDB Streams stores the data for the last 24 hours, after which they are erased.
  • DynamoDB Streams maintains an ordered sequence of events per item; however, the sequence across items is not maintained.
  • Example
    • Suppose, for example, that you have a DynamoDB table tracking high scores for a game and that each item in the table represents an individual player. If you make the following three updates in this order:
      • Update 1: Change Player 1’s high score to 100 points
      • Update 2: Change Player 2’s high score to 50 points
      • Update 3: Change Player 1’s high score to 125 points
    • DynamoDB Streams will maintain the order of Player 1's score events. However, it does not maintain order across players, so the position of the Player 2 event relative to the two Player 1 events is not guaranteed.
  • DynamoDB Streams APIs help developers consume updates and receive the item-level data before and after items are changed.
  • DynamoDB Streams allow reads at up to twice the rate of the provisioned write capacity of the DynamoDB table.
  • DynamoDB Streams have to be enabled on a per-table basis.
  • DynamoDB Streams supports encryption at rest to encrypt the data.
  • DynamoDB Streams is designed for No Duplicates so that every update made to the table will be represented exactly once in the stream.
  • DynamoDB Streams writes stream records in near-real time so that applications can consume these streams and take action based on the contents.
  • DynamoDB streams can be used for multi-region replication to keep other data stores up-to-date with the latest changes to DynamoDB or to take actions based on the changes made to the table
  • DynamoDB stream records can be processed using Kinesis Data Streams, Lambda, or a KCL application.
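The per-item ordering guarantee can be sketched by replaying stream records into a local dict: for each key, the last record wins, while ordering across different keys is not guaranteed. The record shape below follows the DynamoDB Streams record structure; key and attribute names (PlayerId, Score) are made up:

```python
# Sketch: replay DynamoDB Stream records into a local copy of table state.
def apply_stream_records(records):
    state = {}
    for rec in records:  # records for the same item arrive in order
        key = rec["dynamodb"]["Keys"]["PlayerId"]["S"]
        if rec["eventName"] in ("INSERT", "MODIFY"):
            state[key] = rec["dynamodb"]["NewImage"]
        elif rec["eventName"] == "REMOVE":
            state.pop(key, None)
    return state

# The high-score example above, expressed as three MODIFY records:
records = [
    {"eventName": "MODIFY", "dynamodb": {"Keys": {"PlayerId": {"S": "player1"}},
                                         "NewImage": {"Score": {"N": "100"}}}},
    {"eventName": "MODIFY", "dynamodb": {"Keys": {"PlayerId": {"S": "player2"}},
                                         "NewImage": {"Score": {"N": "50"}}}},
    {"eventName": "MODIFY", "dynamodb": {"Keys": {"PlayerId": {"S": "player1"}},
                                         "NewImage": {"Score": {"N": "125"}}}},
]
final_state = apply_stream_records(records)  # player1 ends at 125, player2 at 50
```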

DynamoDB Triggers

  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • DynamoDB triggers can be used in scenarios like sending notifications, updating an aggregate table, and connecting DynamoDB tables to other data sources.
  • DynamoDB Trigger flow
    • Custom logic for a DynamoDB trigger is stored in an AWS Lambda function as code.
    • A trigger for a given table can be created by associating an AWS Lambda function to the stream (via DynamoDB Streams) on a table.
    • When the table is updated, the updates are published to DynamoDB Streams.
    • In turn, AWS Lambda reads the updates from the associated stream and executes the code in the function.
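The trigger flow above can be sketched as a minimal Lambda handler. The event shape follows the DynamoDB Streams event source mapping; the notification action is stubbed out (no real SNS or email call is made in this sketch):

```python
# Minimal sketch of a Lambda function acting as a DynamoDB trigger: it scans
# the batch of stream records and "notifies" (here, just collects a message)
# for every item deletion.
def handler(event, context=None):
    notifications = []
    for record in event.get("Records", []):
        if record["eventName"] == "REMOVE":
            notifications.append(f"Item deleted: {record['dynamodb']['Keys']}")
    # A real trigger would publish notifications via SNS/SES here.
    return {"notified": len(notifications)}
```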

DynamoDB Backup and Restore

  • DynamoDB on-demand backup helps create full backups of the tables for long-term retention, and archiving for regulatory compliance needs.
  • Backup and restore actions run with no impact on table performance or availability.
  • Backups are preserved regardless of table deletion and retained until they are explicitly deleted.
  • On-demand backups are cataloged, and discoverable.
  • On-demand backups can be created using
    • DynamoDB
      • DynamoDB on-demand backups cannot be copied to a different account or Region.
    • AWS Backup (Recommended)
      • is a fully managed data protection service that makes it easy to centralize and automate backups across AWS services, in the cloud, and on-premises
      • provides enhanced backup features
      • can configure backup schedule, policies and monitor activity for the AWS resources and on-premises workloads in one place.
      • can copy the on-demand backups across AWS accounts and Regions
      • encryption using an AWS KMS key that is independent of the DynamoDB table encryption key.
      • apply write-once-read-many (WORM) setting for the backups using the AWS Backup Vault Lock policy.
      • add cost allocation tags to on-demand backups, and
      • transition on-demand backups to cold storage for lower costs.

DynamoDB PITR – Point-In-Time Recovery

  • DynamoDB point-in-time recovery – PITR enables automatic, continuous, incremental backup of the table with per-second granularity.
  • PITR-enabled tables that were deleted can be recovered within the preceding 35 days and restored to their state just before they were deleted.
  • PITR helps protect against accidental writes and deletes.
  • PITR can back up tables with hundreds of terabytes of data with no impact on the performance or availability of the production applications.
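A PITR restore might be sketched as the following boto3 `restore_table_to_point_in_time` parameters (table names and timestamp are illustrative; the restore always creates a new table):

```python
from datetime import datetime, timezone

# Illustrative restore parameters; source/target names and the restore time
# are made up. RestoreDateTime can be any second within the 35-day window.
restore_params = {
    "SourceTableName": "GameScores",
    "TargetTableName": "GameScores-restored",
    "RestoreDateTime": datetime(2024, 1, 15, 12, 0, tzinfo=timezone.utc),
}
# boto3.client("dynamodb").restore_table_to_point_in_time(**restore_params)
# would start the restore into the new target table.
```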

DynamoDB Accelerator – DAX

  • DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second.
  • DAX is intended for high-performance read applications. As a write-through cache, DAX writes through to DynamoDB so that writes are immediately reflected in the item cache.
  • DAX, as a managed service, handles cache invalidation, data population, and cluster management.
  • DAX provides an API compatible with DynamoDB, so it requires only minimal functional changes to use with an existing application.
  • DAX saves costs by reducing the read load (RCU) on DynamoDB.
  • DAX helps prevent hot partitions.
  • DAX only supports eventual consistency; strongly consistent requests are passed through to DynamoDB.
  • DAX is fault-tolerant and scalable.
  • DAX cluster has a primary node and zero or more read-replica nodes. Upon a failure for a primary node, DAX will automatically failover and elect a new primary. For scaling, add or remove read replicas.
  • DAX supports server-side encryption.
  • DAX also supports encryption in transit, ensuring that all requests and responses between the application and the cluster are encrypted by TLS, and connections to the cluster can be authenticated by verification of a cluster x509 certificate


VPC Endpoints

  • VPC endpoints for DynamoDB improve privacy and security, especially those dealing with sensitive workloads with compliance and audit requirements, by enabling private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.
  • VPC endpoints for DynamoDB support IAM policies to simplify DynamoDB access control, where access can be restricted to a specific VPC endpoint.
  • VPC endpoints can be created only for Amazon DynamoDB tables in the same AWS Region as the VPC
  • DynamoDB Streams cannot be accessed using VPC endpoints for DynamoDB.
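An endpoint policy attached to the gateway endpoint can restrict what traffic through the endpoint may do. A hedged sketch (account ID, region, and table name are placeholders):

```python
import json

# Illustrative VPC gateway endpoint policy allowing only read actions on a
# single DynamoDB table. All identifiers below are placeholders.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:BatchGetItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/GameScores",
    }],
}
policy_json = json.dumps(endpoint_policy)  # what would be attached to the endpoint
```

IAM policies on the calling principals can additionally require that requests arrive via a specific endpoint (e.g. with an `aws:sourceVpce` condition), tightening access in the other direction.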


AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. What are the services supported by VPC endpoints, using Gateway endpoint type? Choose 2 answers
    1. Amazon S3
    2. Amazon EFS
    3. Amazon DynamoDB
    4. Amazon Glacier
    5. Amazon SQS
  2. A company has setup an application in AWS that interacts with DynamoDB. DynamoDB is currently responding in milliseconds, but the application response guidelines require it to respond within microseconds. How can the performance of DynamoDB be further improved? [SAA-C01]
    1. Use ElastiCache in front of DynamoDB
    2. Use DynamoDB inbuilt caching
    3. Use DynamoDB Accelerator
    4. Use RDS with ElastiCache instead

References

AWS RDS DB Maintenance & Upgrades

RDS DB Maintenance and Upgrades

  • Changes to a DB instance can occur when a DB instance is manually modified (e.g., the DB engine version is upgraded) or when RDS performs maintenance on an instance

RDS Maintenance

  • RDS performs periodic maintenance on RDS resources, such as DB instances, and most often involves updates to the DB instance’s operating system (OS).
  • Maintenance items can either
    • be applied manually on a DB instance at one’s convenience
    • or wait for the automatic maintenance process initiated by RDS during the defined weekly maintenance window.
  • Maintenance window only determines when pending operations start but does not limit the total execution time of these operations.
  • Maintenance operations are not guaranteed to finish before the maintenance window ends and can continue beyond the specified end time.
  • Maintenance update availability can be checked both on the RDS console and by using the RDS API. If an update is available, one can
    • Defer the maintenance items.
    • Apply the maintenance items immediately.
    • Schedule them to start during the next defined maintenance window
  • Maintenance items marked as
    • Required cannot be deferred indefinitely; if deferred, AWS will send a notification stating when the update will next be performed.
    • Available can be deferred indefinitely, and the update will not be applied to the DB instance.
  • Required patching is automatically scheduled only for patches that are related to security and instance reliability. Such patching occurs infrequently (typically once every few months) and seldom requires more than a fraction of your maintenance window.
  • Maintenance items require that RDS take the DB instance offline for a short time. Maintenance that requires DB instances to be offline includes scale compute operations, which generally take only a few minutes from start to finish, and required operating system or database patching.
  • Multi-AZ deployment for the DB instance reduces the impact of a maintenance event by following these steps:
    • Perform maintenance on standby.
    • Promote the standby to primary.
    • Perform maintenance on the old primary, which becomes the new standby.
  • When the database engine for the DB instance is modified in a Multi-AZ deployment, RDS upgrades both the primary and secondary DB instances at the same time. In this case, the database engine for the entire Multi-AZ deployment is shut down during the upgrade.

Operating System Updates

  • Upgrades to the operating system are most often for security issues and should be done as soon as possible.
  • OS updates on a DB instance can be applied at one’s convenience or can wait for the maintenance process initiated by RDS to apply the update during the defined maintenance window
  • DB instance is not automatically backed up when an OS update is applied, so it should be backed up before the update is applied

Database Engine Version Upgrade

  • DB instance engine version can be upgraded when a new DB engine version is supported by RDS.
  • Database version upgrades consist of major and minor version upgrades.
    • Major database version upgrades
      • can contain changes that are not backward-compatible
      • RDS doesn’t apply major version upgrades automatically
      • DB instance should be manually modified and thoroughly tested before applying it to the production instances.
    • Minor version upgrades
      • Each DB engine handles minor version upgrades slightly differently;
        e.g., RDS automatically applies minor version upgrades to a DB instance running PostgreSQL, but they must be manually applied to a DB instance running Oracle.
  • Amazon posts an announcement to the forums announcement page and sends a customer e-mail notification before upgrading a DB instance
  • Amazon schedules the upgrades at specific times throughout the year, to help plan around them, because downtime is required to upgrade a DB engine version, even for Multi-AZ instances.
  • RDS takes two DB snapshots during the upgrade process.
    • First DB snapshot is of the DB instance before any upgrade changes have been made. If the upgrade fails, it can be restored from the snapshot to create a DB instance running the old version.
    • Second DB snapshot is taken when the upgrade completes. After the upgrade is complete, database engine can’t be reverted to the previous version. For returning to the previous version, restore the first DB snapshot taken to create a new DB instance.
  • If the DB instance is using read replication, all of the Read Replicas must be upgraded before upgrading the source instance.
  • If the DB instance is in a Multi-AZ deployment, both the primary and standby replicas are upgraded at the same time and would result in an outage. The time for the outage varies based on your database engine, version, and the size of your DB instance.

RDS Maintenance Window

  • Every DB instance has a weekly maintenance window defined during which any system changes are applied.
  • Maintenance window is an opportunity to control when DB instance modifications and software patching occur, in the event either are requested or required.
  • If a maintenance event is scheduled for a given week, it will be initiated during the 30-minute maintenance window as defined
  • Maintenance events mostly complete during the 30-minute maintenance window, although larger maintenance events may take more time
  • 30-minute maintenance window is selected at random from an 8-hour block of time per region. If you don’t specify a preferred maintenance window when you create the DB instance, Amazon RDS assigns a 30-minute maintenance window on a randomly selected day of the week.
  • RDS will consume some of the resources on the DB instance while maintenance is being applied, minimally affecting performance.
  • For some maintenance events, a Multi-AZ failover may be required for a maintenance update to be complete.
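Setting a preferred window uses the `ddd:hh24:mi-ddd:hh24:mi` format (UTC). A sketch of the boto3 `modify_db_instance` parameters, with a helper checking the 30-minute duration (the instance identifier is a placeholder):

```python
# Illustrative modify_db_instance parameters pinning the weekly maintenance
# window to Monday 03:00-03:30 UTC. "mydb" is a placeholder identifier.
modify_params = {
    "DBInstanceIdentifier": "mydb",
    "PreferredMaintenanceWindow": "Mon:03:00-Mon:03:30",
}

def window_minutes(window: str) -> int:
    """Duration of a same-day maintenance window in minutes (sketch only;
    windows spanning midnight are not handled here)."""
    start, end = window.split("-")
    def to_min(t: str) -> int:
        _, hh, mm = t.split(":")
        return int(hh) * 60 + int(mm)
    return to_min(end) - to_min(start)
```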

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A user has launched an RDS MySQL DB with the Multi AZ feature. The user has scheduled the scaling of instance storage during maintenance window. What is the correct order of events during maintenance window? 1. Perform maintenance on standby 2. Promote standby to primary 3. Perform maintenance on original primary 4. Promote original master back as primary
    1. 1, 2, 3, 4
    2. 1, 2, 3
    3. 2, 3, 4, 1
  2. Can I control if and when MySQL based RDS Instance is upgraded to new supported versions?
    1. No
    2. Only in VPC
    3. Yes
  3. A user has scheduled the maintenance window of an RDS DB on Monday at 3 AM. Which of the below mentioned events may force to take the DB instance offline during the maintenance window?
    1. Enabling Read Replica
    2. Making the DB Multi AZ
    3. DB password change
    4. Security patching
  4. A user has launched an RDS postgreSQL DB with AWS. The user did not specify the maintenance window during creation. The user has configured RDS to update the DB instance type from micro to large. If the user wants to have it during the maintenance window, what will AWS do?
    1. AWS will not allow to update the DB until the maintenance window is configured
    2. AWS will select the default maintenance window if the user has not provided it
    3. AWS will ask the user to specify the maintenance window during the update
    4. It is not possible to change the DB size from micro to large with RDS
  5. Can I test my DB Instance against a new version before upgrading?
    1. No
    2. Yes
    3. Only in VPC

References

AWS RDS Storage

AWS RDS Storage

  • RDS storage uses Elastic Block Store – EBS volumes for database and log storage.
  • RDS automatically stripes across multiple EBS volumes to enhance IOPS performance, depending on the amount of storage requested

RDS Storage Types

  • RDS storage provides three storage types: General Purpose (SSD), Provisioned IOPS (input/output operations per second), and Magnetic.
  • These storage types differ in performance characteristics and price, which allows tailoring of storage performance and cost to the database needs
  • MySQL, MariaDB, PostgreSQL, and Oracle RDS DB instances can be created with up to 64TB of storage, and SQL Server RDS DB instances with up to 16TB of storage when using the Provisioned IOPS and General Purpose (SSD) storage types.
  • Existing MySQL, PostgreSQL, and Oracle RDS database instances can be scaled to these new database storage limits without any downtime.

Magnetic (Standard)

  • Magnetic storage, also called standard storage, offers cost-effective storage that is ideal for applications with light or burst I/O requirements.
  • They deliver approximately 100 IOPS on average, with burst capability of up to hundreds of IOPS, and they can range in size from 5 GB to 3 TB, depending on the DB instance engine.
  • Magnetic storage is not reserved for a single DB instance, so performance can vary greatly depending on the demands placed on shared resources by other customers.

General Purpose (SSD)

  • General purpose, SSD-backed storage, also called gp2, can provide faster access than disk-based storage.
  • They can deliver single-digit millisecond latencies, with a base performance of 3 IOPS per gigabyte (GB) and the ability to burst to 3,000 IOPS for extended periods of time, up to a maximum of 10,000 IOPS.
  • General Purpose volumes can range in size from 5 GB to 6 TB for MySQL, MariaDB, PostgreSQL, and Oracle DB instances, and from 20 GB to 4 TB for SQL Server DB instances.
  • General Purpose is excellent for small to medium-sized databases.
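The gp2 baseline rule above (3 IOPS per GB, burst to 3,000 IOPS) can be expressed as a small sketch; the thresholds simply restate the figures in the text:

```python
# gp2 baseline: 3 IOPS per GB of allocated storage.
def gp2_baseline_iops(size_gb: int) -> int:
    return 3 * size_gb

# Volumes whose baseline is below 3,000 IOPS can burst up to 3,000 IOPS;
# larger volumes already meet or exceed that baseline, so bursting is moot.
def gp2_can_burst(size_gb: int) -> bool:
    return gp2_baseline_iops(size_gb) < 3000
```

So a 100 GB volume has a 300 IOPS baseline but can burst to 3,000, while a 2,000 GB volume sits at a 6,000 IOPS baseline.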

Provisioned IOPS

  • Provisioned IOPS storage is designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency in random access I/O throughput.
  • Provisioned IOPS storage is a storage type that delivers fast, predictable, and consistent throughput performance.
  • For any production application that requires fast and consistent I/O performance, Amazon recommends Provisioned IOPS (input/output operations per second) storage.
  • Provisioned IOPS storage is optimized for I/O intensive, online transaction processing (OLTP) workloads that have consistent performance requirements.
  • Provisioned IOPS helps with performance tuning.
  • The dedicated IOPS rate and storage space allocation are specified when a DB instance is created. RDS provisions that IOPS rate and storage for the lifetime of the DB instance or until it is changed.
  • RDS delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year.

Adding Storage and Changing Storage Type

  • DB instance can be modified to use additional storage and converted to a different storage type.
  • However, storage allocated for a DB instance cannot be decreased
  • MySQL, MariaDB, PostgreSQL, and Oracle DB instances can be scaled up for storage, which helps improve I/O capacity.
  • Neither the storage capacity nor the storage type of a SQL Server DB instance can be changed, due to the extensibility limitations of striped storage attached to a Windows Server environment.
  • During the scaling process, the DB instance will be available for reads and writes, but may experience performance degradation
  • Adding storage may take several hours; the duration of the process depends on several factors such as load, storage size, storage type, amount of IOPS provisioned (if any), and number of prior scale storage operations.
  • While storage is being added, nightly backups are suspended and no other RDS operations can take place, including modify, reboot, delete, create Read Replica, and create DB Snapshot

Performance Metrics

  • Amazon RDS provides several metrics that can be used to determine how the DB instance is performing.
    • IOPS
      • the number of I/O operations completed per second.
      • it is reported as the average IOPS for a given time interval.
      • RDS reports read and write IOPS separately on one minute intervals.
      • Total IOPS is the sum of the read and write IOPS.
      • Typical values for IOPS range from zero to tens of thousands per second.
    • Latency
      • the elapsed time between the submission of an I/O request and its completion
      • it is reported as the average latency for a given time interval.
      • RDS reports read and write latency separately on one minute intervals in units of seconds.
      • Typical values for latency are in the millisecond (ms)
    • Throughput
      • the number of bytes per second transferred to or from disk
      • it is reported as the average throughput for a given time interval.
      • RDS reports read and write throughput separately on one minute intervals using units of megabytes per second (MB/s).
      • Typical values for throughput range from zero to the I/O channel’s maximum bandwidth.
    • Queue Depth
      • the number of I/O requests in the queue waiting to be serviced.
      • these are I/O requests that have been submitted by the application but have not been sent to the device because the device is busy servicing other I/O requests.
      • it is reported as the average queue depth for a given time interval.
      • RDS reports queue depth in one minute intervals. Typical values for queue depth range from zero to several hundred.
      • Time spent waiting in the queue is a component of latency, along with service time (which is not available as a metric).

RDS Storage Facts

  • The first time a DB instance accesses an area of disk, the access can take longer than all subsequent accesses to the same disk area. This is known as the “first touch penalty”.
    • Once an area of disk has incurred the first touch penalty, it does not incur the penalty again for the life of the instance, even if the DB instance is rebooted, restarted, or the DB instance class changes.
    • A DB instance created from a snapshot, a point-in-time restore, or a read replica is a new instance and does incur the first touch penalty.
  • RDS manages the DB instance and it reserves overhead space on the instance. While the amount of reserved storage varies by DB instance class and other factors, this reserved space can be as much as one or two percent of the total storage
  • Provisioned IOPS provides a way to reserve I/O capacity by specifying IOPS. Like any other system capacity attribute, maximum throughput under load will be constrained by the resource that is consumed first, which could be IOPS, channel bandwidth, CPU, memory, or database internal resources.
  • Current maximum channel bandwidth available is 4000 megabits per second (Mbps) full duplex. In terms of the read and write throughput metrics, this equates to about 210 megabytes per second (MB/s) in each direction. A perfectly balanced workload of 50% reads and 50% writes may attain a maximum combined throughput of 420 MB/s, which includes protocol overhead, so the actual data throughput may be less.
  • Provisioned IOPS works with an I/O request size of 32 KB. Provisioned IOPS consumption is a linear function of I/O request size above 32 KB. An I/O request smaller than 32 KB is handled as one I/O; e.g., 1,000 16 KB I/O requests are treated the same as 1,000 32 KB requests. I/O requests larger than 32 KB consume more than one I/O request; for example, a 48 KB I/O request consumes 1.5 I/O requests of storage capacity, and a 64 KB I/O request consumes 2 I/O requests
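The 32 KB accounting rule above reduces to a one-line function, shown here as a sketch of the rule as stated in the text:

```python
# Provisioned IOPS consumed per request: requests up to 32 KB count as one
# I/O; larger requests scale linearly (48 KB -> 1.5 I/Os, 64 KB -> 2 I/Os).
def piops_consumed(request_kb: float) -> float:
    return max(1.0, request_kb / 32.0)
```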

Factors That Impact RDS Storage Performance

  • Several factors can affect the performance of a DB instance, such as instance configuration, I/O characteristics, and workload demand.
  • System related activities also consume I/O capacity and may reduce database instance performance while in progress:
    • DB snapshot creation
    • Nightly backups
    • Multi-AZ peer creation
    • Read replica creation
    • Scaling storage
  • System resources can constrain the throughput of a DB instance, but there can be other reasons for a bottleneck. The database could be the issue if:
    • Channel throughput limit is not reached
    • Queue depths are consistently low
    • CPU utilization is under 80%
    • Free memory available
    • No swap activity
    • Plenty of free disk space
    • Application has dozens of threads all submitting transactions as fast as the database will take them, but there is clearly unused I/O capacity

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. When should I choose Provisioned IOPS over Standard RDS storage?
    1. If you have batch-oriented workloads
    2. If you use production online transaction processing (OLTP) workloads
    3. If you have workloads that are not sensitive to consistent performance
  2. Is decreasing the storage size of a DB Instance permitted?
    1. Depends on the RDMS used
    2. Yes
    3. No
  3. Because of the extensibility limitations of striped storage attached to Windows Server, Amazon RDS does not currently support increasing storage on a _____ DB Instance.
    1. SQL Server
    2. MySQL
    3. Oracle
  4. If I want to run a database in an Amazon instance, which is the most recommended Amazon storage option?
    1. Amazon Instance Storage
    2. Amazon EBS
    3. You can’t run a database inside an Amazon instance.
    4. Amazon S3
  5. For each DB Instance class, what is the maximum size of associated storage capacity?
    1. 1TiB
    2. 2TiB
    3. 8TiB
    4. 16TiB (The limit keeps on changing so please check the latest always)

References