AWS Certified Alexa Skill Builder – Specialty (AXS-C01) Exam Learning Path

AWS Certified Alexa Skill Builder - Specialty Certificate

Finally All Down for AWS (for now) …

Continuing on my AWS journey with the last AWS certification, I took another step by clearing the AWS Certified Alexa Skill Builder – Specialty (AXS-C01) certification. It is amazing to learn how voice-first experiences are making an impact and changing how we think about technology and use cases.

AWS Certified Alexa Skill Builder – Specialty (AXS-C01) exam basically validates your ability to build, test, publish and certify Alexa skills.

AWS Certified Alexa Skill Builder – Specialty (AXS-C01) Exam Summary

  • AWS Certified Alexa Skill Builder – Specialty exam focuses only on Alexa and how to build skills.
  • AWS Certified Alexa Skill Builder – Specialty exam has 65 questions with a time limit of 170 minutes
  • Compared to the other professional and specialty exams, the questions and answers are not long and are similar to the associate exams. If you are well prepared, you should not need the full 170 minutes.
  • As the exam was online from home, there was no access to pen and paper, but the trick remains the same: read the question, sketch a rough architecture, and focus on the areas that you need to improve. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Compare the remaining 2 answers to spot where they differ; that would help you reach the right answer, or at least have a 50% chance of getting it right.

AWS Certified Alexa Skill Builder – Specialty (AXS-C01) Exam Topic Summary

Refer to the AWS Alexa Cheat Sheet

Domain 1: Voice-First Design Practices and Capabilities

1.1 Describe how users interact with skills

1.2 Map features and capabilities to use cases

  • Alexa supports display cards to display text (Simple card) and text with an image (Standard card)
  • Alexa Skills Kit provides several APIs
    • Alexa Settings API allows developers to retrieve customer preferences for settings like time zone, distance measuring unit, and temperature measurement unit
    • Device Address service – a skill can request the customer’s permission to access their address information, which is static data entered by the customer and includes the country/region, postal code, and full address
    • Customer Profile service – a skill can request the customer’s permission to access their contact information, which includes name, email address, and phone number
    • With Location services, a skill can ask a user’s permission to obtain the real-time location of their Alexa-enabled device, specifically at the time of the user’s request to Alexa, so that the skill can provide enhanced services.
  • ASK APIs need the apiAccessToken and deviceId from the incoming request (see the sketch after this list)
  • Progressive Response API allows you to keep the user engaged while the skill prepares a full response to the user’s request.
  • Personalization can be provided using the userId and state persistence
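
A minimal sketch of calling the Alexa Settings API with the apiAccessToken and deviceId from the request envelope; the envelope fields and endpoint path follow the documented request format, while the helper itself is illustrative (production skills would typically use the ASK SDK service clients):

```python
import requests

def get_time_zone(request_envelope: dict) -> str:
    # Pull the API endpoint, token, and device id from the incoming request
    system = request_envelope["context"]["System"]
    api_endpoint = system["apiEndpoint"]      # e.g. https://api.amazonalexa.com
    token = system["apiAccessToken"]
    device_id = system["device"]["deviceId"]
    resp = requests.get(
        f"{api_endpoint}/v2/devices/{device_id}/settings/System.timeZone",
        headers={"Authorization": f"Bearer {token}"},
        timeout=3,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. "America/Los_Angeles"
```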

Domain 2: Skill Design

2.1 Design and develop an interaction model

  • Alexa interaction model includes the invocation name, intents, utterances, and slots
  • A skill is ‘an app for Alexa’; however, skills are not downloaded but simply enabled.
  • Wakeword – Amazon offers a choice of wake words like ‘Alexa’, ‘Amazon’, ‘Echo’, or ‘Computer’, with the default being ‘Alexa’.
  • Launch phrases include “run,” “start,” “play,” “resume,” “use,” “launch,” “ask,” “open,” “tell,” “load,” “begin,” and “enable.”
  • Connecting words include “to,” “from,” “in,” “using,” “with,” “about,” “for,” “that,” “by,” “if,” “and,” “whether.”
  • Invocation name
    • is the word or phrase used to trigger a custom skill, and should adhere to the following requirements
    • must not infringe upon the intellectual property rights of an entity or person
    • must be composed of two or more words.
    • One-word invocation names are allowed only if the invocation name is unique to the brand/intellectual property.
    • must not include names of people or places
    • for two-word invocation names, one of the words cannot be a definite article (“the”), indefinite article (“a”, “an”) or preposition (“for”, “to”, “of,” “about,” “up,” “by,” “at,” “off,” “with”).
    • must not contain any of the Alexa skill launch phrases, connecting words, wake words, or the words “skill” or “app”
    • must contain only lower-case alphabetic characters, spaces between words, and possessive apostrophes
    • must spell out numbers, e.g., twenty one instead of 21
    • can have periods in invocation names containing acronyms or abbreviations that are pronounced as a series of individual letters, e.g., NASA as n. a. s. a.
    • cannot spell out phonemes, e.g., a skill titled “AWS Facts” would need “AWS” represented as “a. w. s.” and NOT “ay double u ess.”
    • must not create confusion with existing Alexa features.
    • must be written in each supported language
  • An intent is what a user is trying to accomplish.
    • Amazon provides standard built-in intents which can be extended
    • Intents need to have unique utterances
  • Utterances are the specific phrases that people will use when making a request to Alexa.
  • A slot is a variable that relates to an intent, allowing Alexa to understand information about the request
    • Amazon provides standard built-in slots which can be extended
  • Entity resolution improves the way Alexa matches possible slot values in a user’s utterance with the slots defined in the interaction model (a handler sketch showing intents and slots follows this list)
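
A minimal intent handler sketch using the ASK SDK for Python (ask-sdk-core); the PlanMyTrip intent and its city slot are hypothetical names used for illustration:

```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response

class PlanMyTripHandler(AbstractRequestHandler):
    """Handles the hypothetical PlanMyTrip intent with a 'city' slot."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("PlanMyTrip")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        slots = handler_input.request_envelope.request.intent.slots or {}
        city = slots["city"].value if "city" in slots else None
        speech = f"Planning a trip to {city}." if city else "Which city?"
        # speak() renders the speech; ask() keeps the session open with a reprompt
        return handler_input.response_builder.speak(speech).ask(speech).response
```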

2.2 Design a multi-turn conversation

  • Alexa Dialog management model identifies the prompts and utterances to collect, validate, and confirm the slot values and intents.
  • Alexa supports
    • Auto delegation, where Alexa completes all of the dialog steps based on the dialog model.
    • Manual delegation using Dialog.Delegate, where Alexa sends the skill an IntentRequest for each turn of the conversation and provides more flexibility (see the sketch after this list).
  • AMAZON.FallbackIntent will not be triggered in the middle of a dialog
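
A sketch of manual delegation with the ASK SDK for Python, handing each turn back to Alexa until the dialog model has collected all required slots; the BookFlight intent is hypothetical:

```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import DialogState
from ask_sdk_model.dialog import DelegateDirective

class BookFlightHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("BookFlight")(handler_input)

    def handle(self, handler_input):
        request = handler_input.request_envelope.request
        if request.dialog_state != DialogState.COMPLETED:
            # Delegate the turn back to Alexa to elicit/confirm remaining slots
            return handler_input.response_builder.add_directive(
                DelegateDirective(updated_intent=request.intent)
            ).response
        # All required slots are collected and confirmed
        return handler_input.response_builder.speak("Your flight is booked.").response
```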

2.3 Use built-in intents and slots

  • Standard built-in intents cannot include any slots. If slots are needed, create a custom intent and write your own sample utterances.
  • Alexa recommends using and extending standard built-in intents like AMAZON.HelpIntent and AMAZON.YesIntent with additional utterances as per the skill requirements
  • Alexa provides AMAZON.FallbackIntent for handling any unmatched utterances, which can be used to improve the interaction model accuracy.
  • Slots help capture variables and can either be an Amazon predefined slot type such as dates, numbers, durations, time, etc. or a custom one specific to the skill
  • Predefined slots can be extended to add additional values

2.4 Handle unexpected conversational requests or responses

  • Alexa provides AMAZON.FallbackIntent for handling any unmatched utterances, which can be used to improve the interaction model accuracy (see the sketch after this list).
  • Alexa also provides Intent History, which provides a consolidated view of aggregated, anonymized frequent utterances and the resolved intents. These can be used to map the utterances to the correct intents
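
A minimal AMAZON.FallbackIntent handler sketch (ASK SDK for Python) that redirects the user to what the skill can actually do; the help text is illustrative:

```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class FallbackIntentHandler(AbstractRequestHandler):
    """Catches utterances that did not match any other intent."""

    def can_handle(self, handler_input):
        return is_intent_name("AMAZON.FallbackIntent")(handler_input)

    def handle(self, handler_input):
        speech = "Sorry, I can't help with that. Try asking for a fact."
        return handler_input.response_builder.speak(speech).ask(speech).response
```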

2.5 Design multi-modal skills using one or more service interfaces (for example, audio, video, and gadgets)

  • Alexa-enabled devices with a screen handle the Page and Scroll intents; do not handle Next and Previous.
  • Alexa skill with the AudioPlayer interface
    • must handle AMAZON.ResumeIntent and AMAZON.PauseIntent
    • PlaybackController events track AudioPlayer status changes initiated from the device buttons (see the sketch after this list)
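
A sketch of the required pause/resume handlers using the AudioPlayer directives from the ASK SDK for Python; the stream URL and token are hypothetical placeholders:

```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model.interfaces.audioplayer import (
    AudioItem, PlayBehavior, PlayDirective, StopDirective, Stream)

STREAM_URL = "https://example.com/podcast.mp3"  # hypothetical HTTPS stream

class PauseIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("AMAZON.PauseIntent")(handler_input)

    def handle(self, handler_input):
        # Stop playback; the device reports progress via AudioPlayer events
        return handler_input.response_builder.add_directive(StopDirective()).response

class ResumeIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("AMAZON.ResumeIntent")(handler_input)

    def handle(self, handler_input):
        directive = PlayDirective(
            play_behavior=PlayBehavior.REPLACE_ALL,
            audio_item=AudioItem(
                stream=Stream(token="episode-1", url=STREAM_URL,
                              offset_in_milliseconds=0)))
        return handler_input.response_builder.add_directive(directive).response
```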

Domain 3: Skill Architecture

3.1 Identify AWS services for extending Alexa skill functionality (Amazon CloudFront, Amazon S3, Amazon CloudWatch, and Amazon DynamoDB)

  • Focus on the standard skill architecture using Lambda for the backend, DynamoDB for persistence, S3 for serving static assets, and CloudWatch for monitoring and logs (a skill setup sketch follows this list).
  • Lambda provides serverless handling for the Alexa requests, but remember the following limits
    • default concurrency soft limit of 1000, which can be increased by raising a support request
    • default timeout of 3 secs, which should be increased to at least 7 secs to be in line with the Alexa service timeout of 8 secs
    • default memory of 128 MB; increase it to improve performance
  • S3 performance can be improved by exposing it through CloudFront, especially for images, audio, and video files
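
A sketch of this standard setup, assuming the ask-sdk-core and ask-sdk-dynamodb-persistence-adapter packages; the table name is a hypothetical placeholder:

```python
from ask_sdk_core.skill_builder import CustomSkillBuilder
from ask_sdk_dynamodb.adapter import DynamoDbAdapter

# Persist user state across sessions in DynamoDB (keyed by userId by default)
sb = CustomSkillBuilder(
    persistence_adapter=DynamoDbAdapter(
        table_name="my-skill-state",  # hypothetical table name
        create_table=True,
    )
)

# sb.add_request_handler(LaunchRequestHandler())  # register handlers here

# Entry point referenced in the Lambda function's handler setting
lambda_handler = sb.lambda_handler()
```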

3.2 Use AWS Lambda to build Alexa skills

  • Lambda integrates with CloudWatch to provide logs, which should be the first thing to check in case of any issues or errors.
  • Alexa allows any HTTPS endpoint to act as a backend, but it needs to meet the following requirements
    • must be accessible over the internet.
    • must accept HTTP requests on port 443.
    • must support HTTP over SSL/TLS, using an Amazon-trusted certificate.

3.3 Follow AWS and Alexa security and privacy best practices

  • Alexa requires the backend to verify that incoming requests come from Alexa using Skill ID verification (see the sketch after this list)
  • Child-directed skills cannot use personal and location information
  • Skills cannot be used to capture health information
  • Alexa Skills Kit uses the OAuth 2.0 authentication framework for account linking, which defines a means by which the service can allow Alexa, with the user’s permission, to access information from the account that the user has set up with you.
  • Alexa smart home skills must use the OAuth authorization code grant, while custom skills can use either the authorization code grant or the implicit grant.
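
A minimal sketch of verifying the skill ID manually against the request envelope (the ASK SDK can also perform this check when a skill ID is configured on the skill builder); the expected ID is a hypothetical placeholder:

```python
EXPECTED_SKILL_ID = "amzn1.ask.skill.REPLACE-WITH-YOUR-SKILL-ID"  # hypothetical

def verify_skill_id(request_envelope: dict) -> None:
    """Reject requests that were not generated for this specific skill."""
    app_id = request_envelope["context"]["System"]["application"]["applicationId"]
    if app_id != EXPECTED_SKILL_ID:
        raise PermissionError("Request did not come from the expected Alexa skill")
```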

Domain 4: Skill Development

4.1 Implement in-skill purchasing and Amazon Pay for Alexa Skills

  • In-skill purchasing enables selling premium content such as game features and interactive stories in skills with a custom interaction model.
  • In-skill purchasing is handled by Alexa when the skill sends an Upsell directive (see the sketch after this list). As the skill session ends when an Upsell directive is sent, be sure to save any relevant user data in a persistent data store so that the skill can continue where the user left off after the purchase flow is completed and the endpoint is back in control of the user experience.
  • The skill can handle the Connections.Response request that indicates the result of a purchase flow and resume the skill
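
A sketch of sending the Upsell directive via Connections.SendRequest (ASK SDK for Python); the product ID and upsell message are hypothetical:

```python
from ask_sdk_model.interfaces.connections import SendRequestDirective

def build_upsell_response(handler_input, product_id: str):
    # Sending this directive ends the current session; Alexa runs the purchase
    # flow and then sends the skill a Connections.Response request.
    directive = SendRequestDirective(
        name="Upsell",
        payload={
            "InSkillProduct": {"productId": product_id},  # hypothetical product
            "upsellMessage": "Would you like to hear about the premium pack?",
        },
        token="upsell-correlation-token",
    )
    return handler_input.response_builder.add_directive(directive).response
```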

4.2 Use Speech Synthesis Markup Language (SSML) for expression and MP3 audio

  • SSML is a markup language that provides a standard way to mark up text for the generation of synthetic speech (a short example follows this list).
  • Alexa supports a subset of SSML tags including
    • say-as to interpret text as telephone, date, time, etc.
    • phoneme provides a phonemic/phonetic pronunciation
    • prosody modifies the volume, pitch, and rate of the tagged speech.
    • audio allows playing MP3 audio while rendering a response
      • must be a valid MP3 file (MPEG version 2)
      • must be hosted at an Internet-accessible HTTPS endpoint.
      • For a speech response, an audio file cannot be longer than 240 seconds.
        • combined total time for all audio files in the outputSpeech property of the response cannot be more than 240 seconds.
        • combined total time for all audio files in the reprompt property of the response cannot be more than 90 seconds.
      • bit rate must be 48 kbps.
      • sample rate must be 22050Hz, 24000Hz, or 16000Hz.
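
A short SSML example combining the tags above; the audio URL is a hypothetical placeholder that must meet the MP3 requirements listed:

```xml
<speak>
    You have <say-as interpret-as="cardinal">3</say-as> new messages.
    <audio src="https://example.com/chime.mp3"/>
    The word is spelled <say-as interpret-as="spell-out">aws</say-as>.
    <prosody rate="slow" pitch="low">Goodbye.</prosody>
</speak>
```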

4.3 Implement state management

  • Alexa Skill state persistence can be handled using session attributes during the session, and externally across sessions using services like DynamoDB or RDS (see the sketch below).
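
A sketch of both levels of state using the ASK SDK attributes manager (persistent attributes require a persistence adapter, such as the DynamoDB adapter shown earlier); the attribute names are hypothetical:

```python
def handle(self, handler_input):
    attrs = handler_input.attributes_manager

    # Session attributes: live only for the current session
    session = attrs.session_attributes
    session["last_intent"] = "GameIntent"

    # Persistent attributes: loaded from / saved to the configured store
    persistent = attrs.persistent_attributes
    persistent["games_played"] = persistent.get("games_played", 0) + 1
    attrs.save_persistent_attributes()

    return handler_input.response_builder.speak("Progress saved.").response
```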

4.4 Implement Alexa service interfaces (audio player, video player, and screens)

4.5 Parse Alexa JSON requests and provide responses

  • All requests include the session (optional), context, and request objects at the top level (abridged examples follow this list).
    • session object provides additional context associated with the request.
      • session attributes can be used to store data
      • application provides the applicationId
      • user contains a userId to uniquely identify a user and an accessToken to access other services
    • context object provides the current state of the Alexa service and the device.
      • System object provides the apiAccessToken, and its device object provides the deviceId needed to access ASK APIs
      • device object also provides supportedInterfaces to list each interface that the device supports
    • request object provides the details of the user’s request.
  • Response includes
    • outputSpeech contains the speech to render to the user.
    • reprompt contains the outputSpeech to use if a re-prompt is necessary.
    • shouldEndSession provides a boolean value that indicates what should happen after Alexa speaks the response.
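
Abridged request and response shapes, trimmed to the fields called out above (the "..." values are elided placeholders):

```json
{
  "session": {
    "attributes": {},
    "application": { "applicationId": "..." },
    "user": { "userId": "...", "accessToken": "..." }
  },
  "context": {
    "System": {
      "apiAccessToken": "...",
      "device": { "deviceId": "...", "supportedInterfaces": {} }
    }
  },
  "request": { "type": "IntentRequest", "intent": { "name": "...", "slots": {} } }
}
```

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": { "type": "SSML", "ssml": "<speak>Hello</speak>" },
    "reprompt": { "outputSpeech": { "type": "PlainText", "text": "Still there?" } },
    "shouldEndSession": false
  }
}
```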

Domain 5: Test, Validate, and Troubleshoot

5.1 Debug and troubleshoot using Amazon CloudWatch or other tools

  • Lambda integrates with CloudWatch for metrics and logs, which should be the first place to check for any errors.

5.2 Use the Alexa developer testing tools

  • Utterance profiles – test utterances to know what intent they resolve to
  • Alexa Skill simulator
    • provides the ability to interact with Alexa with either voice or text, without an actual device.
    • maintains the skill session, so the interaction model and dialog flow can be tested.
    • supports testing in multiple languages by selecting the locale
    • has limitations in testing audio, video, Alexa settings and the Device API
  • Manual JSON
    • enter a JSON request directly and see the JSON response returned by the skill
    • does not maintain the skill session and is similar to testing a JSON request in the Lambda console.
  • Voice & Tone – enter plain text or SSML and hear how Alexa speaks the text in a selected language
  • Alexa device – test with an Alexa-enabled device.
  • Alexa app – test the skill with the Alexa app for Android/iOS
  • Lambda Test console – to test Lambda functions

5.3 Perform beta testing

  • The skill beta testing tool can be used to test the Alexa skill in beta before releasing it to production
  • Beta testing allows testing changes to an existing skill, while still keeping the currently live version of the skill available to the general public.
  • Members can be invited using their Alexa email address. The Alexa device used by a beta tester must be associated with the email address in the tester’s invitation.

5.4 Troubleshoot errors in the interaction model

Domain 6: Publishing, Operations, and Lifecycle Management

6.1 Describe the skill publishing process

  • An Alexa skill needs to go through the certification process before the skill is live and made available to users
  • Once the skill becomes live, Alexa creates a new ‘in development’ version of the skill
  • The live version of a skill cannot be edited; it is recommended to edit the in-development version, test it, and then re-certify it for publishing.
  • Backend changes, like changes in Lambda functions or the response output from the function, can however be made on the live version and do not require re-certification. It is still recommended to use Lambda versioning or aliases for such changes.
  • Alexa for Business allows a skill to be made private and available to select users within the company

6.2 Add and remove users in the developer console

  • Alexa Skill Developer console access can be shared across multiple users for collaboration
  • Administrator and Analyst roles will also have access to the Earnings and Payments sections.
  • Administrator and Marketer roles will also have access to edit the content associated with apps (i.e. Descriptions, Images & Multimedia) and IAPs
  • Administrator and Developer roles will have access to create, modify and delete Alexa skills using ASK CLI and SMAPI.
  • Administrator, Analyst and Marketer roles have access to sales report

6.3 Perform analysis of skill analytics in the developer console

  • Intent History – View aggregated, anonymized frequent utterances and the resolved intents. You cannot track a specific user’s intent history as the data is anonymized.
  • Actions – Unique customers per action, total actions, and total utterances per action.
  • Customers – Total number of unique customers who accessed the skill.
  • Intents – Unique customers per intent, total utterances per intent, total intents, and failed intents.
  • Interaction Path – Paths users take when interacting with the skill.
  • Plays – Total number of times that a user played the skill content.
  • Retention (live skills only) – Usage of the skill over time by groups of customers or cohorts. View the number or percentage of customers who returned to your skill over a 12-week period.
  • Sessions – Total sessions, successful session types (sessions that didn’t end due to an error), average sessions per customer. Includes a breakdown of successful, failed, and no-response sessions as a percentage of total sessions.
  • Utterances – Metrics for utterances depend on the skill category.

6.4 Differentiate among the statuses/versions of skills (for example, In Development, In Certification, and Live)

  • In Development – skill available for development, testing
  • In Review – A certification review is in progress and the skill cannot be edited
  • Certified – Skill passed certification review, and is not yet available to users
  • Live – skill has been published and is available to users. You cannot edit the configuration for live skills
  • Hidden – skill was previously published, but has since been hidden. Existing users can access the skill. New users cannot discover the skill.
  • Removed – skill was previously published, but has since been removed. Users cannot enable or use the skill.

AWS Certified Alexa Skill Builder – Specialty (AXS-C01) Exam Resources

AWS Certified Database – Specialty (DBS-C01) Exam Learning Path

AWS Certified Database Specialty Certificate

11 Down !!! Continuing on my AWS journey, which has lasted for over 3 years now, validating and re-validating the certs multiple times, I took another step and passed the AWS Certified Database – Specialty (DBS-C01) certification.

AWS Certified Database – Specialty (DBS-C01) exam basically validates

  • Understand and differentiate the key features of AWS database services.
  • Analyze needs and requirements to design and recommend appropriate database solutions using AWS services

AWS Certified Database – Specialty (DBS-C01) Exam Summary

  • AWS Certified Database – Specialty exam focuses on data services from relational, non-relational, graph, caching, and data warehousing. It also focuses on data migration.
  • AWS Certified Database – Specialty exam has 65 questions with a time limit of 170 minutes
  • Questions and answer options are pretty long, so you need time to read through each of them to make sense of the requirements and filter out the answers
  • As the exam was online from home, there was no access to pen and paper, but the trick remains the same: read the question, sketch a rough architecture, and focus on the areas that you need to improve. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Compare the remaining 2 answers to spot where they differ; that would help you reach the right answer, or at least have a 50% chance of getting it right.
  • Be sure to cover the following topics
    • Whitepapers and articles
    • Database
      • Make sure you know and cover all the services in depth, as 80% of the exam is focused on topics like Aurora, RDS, DynamoDB
      • Aurora
        • Understand Aurora in depth
        • Know Aurora DR & HA using Read Replicas and failover priority tiers
          • Aurora promotes the read replica as per the priority tier (tier 0 is the highest), and the largest size if the tiers match
        • Know Aurora Global Database
          • Aurora provides Global Database with cross-region read replicas for low-latency reads. Remember, it is not multi-master and would not provide low-latency writes.
        • Know Aurora Connection endpoints
          • cluster for primary read/write
          • reader for read replicas
          • custom for specific group of instances
          • instance for a specific single instance – not recommended
        • Know Aurora Fast Failover techniques
          • set TCP keepalives low
          • set Java DNS caching timeouts low
          • Set the timeout variables used in the JDBC connection string as low
          • Use the provided read and write Aurora endpoints
          • Use cluster cache management for Aurora PostgreSQL. Cluster cache management ensures that application performance is maintained if there’s a failover.
        • Know Aurora Serverless
        • Know Aurora Backtrack feature which rewinds the DB cluster to the specified time. It is not a replacement for backups.
        • Supports server auditing events for different activities covering logins, DML, permission changes (DCL), schema changes (DDL), etc.
        • Know Aurora Cluster Cache management feature which helps fast failover
        • Know Aurora Clone feature which allows you to create quick and cost-effective clones
        • Aurora supports fault injection queries to simulate various failover like node down, primary failover etc.
        • RDS PostgreSQL and MySQL can be migrated to Aurora, by creating an Aurora Read Replica from the instance. Once the replica lag is zero, switch the DNS with no data loss
        • Supports Database Activity Streams to stream audit logs to external services like Kinesis
        • Supports stored procedures calling Lambda functions
      • DynamoDB
      • RDS
        • Know Relational Database Service (RDS) in depth
        • Understand RDS Snapshots, Backups and Restore
          • restoring a DB from snapshot does not retain the parameter group and security group
          • automated snapshots cannot be shared.
        • Understand RDS Read Replicas
        • Understand RDS Multi-AZ
        • Understand RDS Multi-AZ vs Read Replicas (hint: cross region replication and availability of data)
          • Multi-AZ failover can be simulated using the Reboot with failover option
          • Read Replicas require automated backups enabled
        • Understand DB components esp. DB parameter group, DB options groups
          • Dynamic parameters are applied immediately
          • Static parameters need a manual reboot to take effect.
          • Default parameter group cannot be modified. Create a custom parameter group and associate it with the RDS instance
          • Know that max connections also depends on the DB instance size
        • Understand RDS Security
          • RDS supports security groups to control who can access RDS instances
          • RDS supports data at rest encryption and SSL for data in transit encryption
          • RDS also support IAM database authentication
          • Existing RDS instance cannot be encrypted, create a snapshot -> encrypt it -> restore as encrypted DB
          • RDS PostgreSQL requires rds.force_ssl=1 and sslmode=ca/verify-full to enable SSL encryption
          • Know RDS Encrypted Database limitations
        • Understand RDS Monitoring and Notification
          • Know RDS supports notification events through SNS for events like database creation, deletion, snapshot creation etc.
          • CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance.
          • Enhanced Monitoring metrics are useful to understand how different processes or threads on a DB instance use the CPU.
          • RDS Performance Insights is a database performance tuning and monitoring feature that helps illustrate the database’s performance and help analyze any issues that affect it
        • An RDS instance cannot be stopped if it has read replicas
      • Neptune
        • provides the Neptune loader to quickly import data from S3
        • supports VPC endpoints
      • Redshift
        • Understand Redshift at a high level; the exam does not cover Redshift in depth.
        • Know Redshift best practices w.r.t. the selection of distribution style, sort key, and importing/exporting data
          • A single COPY command loads data in parallel and performs better than multiple COPY commands
          • COPY command can use manifest files to load data
          • COPY command handles encrypted data
        • Know Redshift cross region encrypted snapshot copy
          • Create a new key in destination region
          • Use CreateSnapshotCopyGrant to allow Amazon Redshift to use the KMS key from the destination region.
          • In the source region, enable cross-region replication and specify the name of the copy grant created.
        • Know Redshift supports Audit logging which covers authentication attempts, connections and disconnections usually for compliance reasons.
      • Data Migration Service (DMS)
        • Understand Data Migration Service in depth for migrating homogeneous and heterogeneous databases
        • DMS with Full load plus CDC migration capability can be used to migrate databases with zero downtime and no data loss.
        • DMS with SCT (Schema Conversion Tool) can be used to migrate heterogeneous databases.
        • DMS supports validation after the migration to ensure data was migrated correctly
        • DMS supports LOB migration as a 2-step process. It can do a full or limited LOB migration
          • In full LOB mode AWS DMS migrates all LOBs from source to target regardless of size. Full LOB mode can be quite slow.
          • In limited LOB mode, a maximum LOB size can be set that AWS DMS should accept. Doing so allows AWS DMS to pre-allocate memory and load the LOB data in bulk. LOBs that exceed the maximum LOB size are truncated and a warning is issued to the log file. In limited LOB mode, you get significant performance gains over full LOB mode.
          • Recommended to use limited LOB mode whenever possible.
    • Security, Identity & Compliance
      • Data security is a key concept controlled in the Database – Specialty exam
      • Identity and Access Management (IAM)
      • Trusted Advisor checks for idle RDS DB instances
    • Management & Governance Tools
      • Understand AWS CloudWatch for Logs and Metrics.
        • CloudWatch Events provides more real-time alerts as compared to CloudTrail
        • CloudWatch can be used to store RDS logs with a custom retention period, which is indefinite by default.
        • CloudWatch Application Insights supports .NET and SQL Server monitoring
      • Know CloudFormation for provisioning, in terms of
        • Stack drifts – to detect differences between the stack’s expected configuration and the actual environment caused by manual changes
        • Change Set – allows you to verify the changes before being propagated
        • parameters – allows you to configure variables or environment specific values
        • Stack policy defines the update actions that can be performed on designated resources.
        • Deletion policy for RDS allows you to configure whether the resource is retained, snapshotted, or deleted when the stack is deleted
        • Supports Secrets Manager for DB credentials generation, storage, and easy rotation
        • Systems Manager Parameter Store for environment-specific parameters

AWS Certified Database – Specialty (DBS-C01) Exam Resources

AWS DynamoDB Accelerator – DAX

AWS DynamoDB Accelerator DAX

 Workflow diagram showing interaction of application, DAX client, and DAX cluster in a VPC.

  • DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second (see the client sketch after this list).
  • DAX is intended for read-intensive, high-performance applications. As a write-through cache, DAX writes through to DynamoDB so that writes are immediately reflected in the item cache.
  • DAX as a managed service handles cache invalidation, data population, and cluster management.
  • DAX saves cost by reducing the read load (RCUs) on DynamoDB
  • DAX helps prevent hot partitions
  • DAX only supports eventual consistency, and strong consistency requests are passed-through to DynamoDB.
  • DAX is fault-tolerant and scalable.
  • DAX cluster has a primary node and zero or more read-replica nodes. Upon a failure for a primary node, DAX will automatically fail over and elect a new primary. For scaling, add or remove read replicas.
  • DAX supports server-side encryption.
  • DAX does not support Transport Layer Security (TLS).
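
A sketch assuming the amazondax Python client package; the constructor shown follows the AWS TryDax sample and may differ across client versions. Because DAX is API-compatible with DynamoDB, the calls look like regular low-level DynamoDB client calls. The cluster endpoint and table are hypothetical:

```python
import botocore.session
from amazondax import AmazonDaxClient

session = botocore.session.get_session()
dax = AmazonDaxClient(
    session,
    region_name="us-east-1",
    endpoints=["my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111"],
)

# Cache hit -> served from DAX in microseconds;
# cache miss -> passed through to DynamoDB and written to the item cache.
item = dax.get_item(TableName="Books", Key={"Id": {"N": "101"}})

# Write-through: written to DynamoDB first, then to the DAX item cache.
dax.put_item(TableName="Books", Item={"Id": {"N": "101"}, "Title": {"S": "DAX"}})
```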

DynamoDB Accelerator Operations

  • Eventually consistent read operations
    • If DAX has the item available (a cache hit), DAX returns the item without accessing DynamoDB.
    • If DAX does not have the item available (a cache miss), DAX passes the request through to DynamoDB. When it receives the response from DynamoDB, DAX returns the results to the application, but it also writes the results to the cache on the primary node.
  • Strongly consistent read operations
    • DAX passes the request through to DynamoDB. The results from DynamoDB are not cached in DAX but simply returned.
  • Write operations
    • Data is first written to the DynamoDB table, and then to the DAX cluster.
    • Operation is successful only if the data is successfully written to both the table and to DAX.

DynamoDB Accelerator Caches

DAX cluster has two distinct caches – Item cache and Query cache

  • Item cache
    • Item remains in the DAX item cache, subject to the Time to Live (TTL) setting and the least recently used (LRU) algorithm for the cache
    • DAX provides write-through cache, keeping the DAX item cache consistent with the underlying DynamoDB tables.
  • Query cache
    • DAX caches the results from Query and Scan requests in its query cache
    • Query and Scan results don’t affect the item cache at all, as the result set is saved in the query cache – not in the item cache.
    • Writes to Item cache don’t affect the Query cache
  • Item and query caches have a default TTL setting of 5 minutes.
  • DAX assigns a timestamp to every entry it writes to the cache. An entry expires if it has remained in the cache for longer than the TTL setting
  • DAX maintains an LRU list for both the item and query caches. The LRU list tracks when an item was added and when it was last read. If the cache becomes full, DAX evicts older items (even if they haven’t expired yet) to make room for new entries
  • LRU algorithm is always enabled for both the item and query cache and is not user-configurable.

DynamoDB Accelerator Write Strategies

Write-Through

  • DAX item cache implements a write-through policy
  • For write operations, DAX ensures that the cached item is synchronized with the item as it exists in DynamoDB.

Write-Around

  • Write-around strategy writes directly to DynamoDB, bypassing DAX, which reduces write latency
  • Ideal for bulk uploads or writing large quantities of data
  • Item cache doesn’t remain in sync with the data in DynamoDB.

DynamoDB Accelerator Scenarios

  • As an in-memory cache, DAX increases performance and reduces the response times of eventually consistent read workloads by an order of magnitude from single-digit milliseconds to microseconds.
  • DAX reduces operational and application complexity by providing a managed service that is API-compatible with DynamoDB. It requires only minimal functional changes to use with an existing application.
  • For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to overprovision read capacity units.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company has setup an application in AWS that interacts with DynamoDB. DynamoDB is currently responding in milliseconds, but the application response guidelines require it to respond within microseconds. How can the performance of DynamoDB be further improved?
    1. Use ElastiCache in front of DynamoDB
    2. Use DynamoDB inbuilt caching
    3. Use DynamoDB Accelerator
    4. Use RDS with ElastiCache instead

AWS DynamoDB Best Practices

AWS DynamoDB Best Practices

Primary Key Design

  • Primary key uniquely identifies each item in a DynamoDB table and can be simple (a partition key only) or composite (a partition key combined with a sort key).
  • Partition key portion of a table’s primary key determines the logical partitions in which a table’s data is stored, which in turn affects the underlying physical partitions.
  • Avoid hot keys and hot partitions – a partition key design that doesn’t distribute I/O requests evenly can create “hot” partitions that result in throttling and use the provisioned I/O capacity inefficiently.
  • Partition key should have many unique values
  • Distribute reads / writes uniformly across partitions to avoid hot partitions
  • Store hot and cold data in separate tables
  • Consider all possible query patterns to eliminate use of scans and filters
  • Choose the sort key depending on the application’s needs (a table-creation sketch follows this list)
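
A sketch of creating a table with a composite primary key using boto3; the table and attribute names are hypothetical. A high-cardinality partition key (here, one value per customer) helps spread I/O evenly across partitions:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Orders",  # hypothetical table
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},  # partition key
        {"AttributeName": "OrderDate", "KeyType": "RANGE"},  # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```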

Secondary Indexes

  • Use indexes based on the application’s query patterns
  • LSIs
    • Use primary key or LSIs when strong consistency is desired
    • Watch for expanding item collections (10 GB size limit!)
  • GSIs
    • Use GSIs for finer control over throughput or when your application needs to query using a different partition key
    • Can be used for eventually consistent read replicas – set up a global secondary index that has the same key schema as the parent table, with some or all of the non-key attributes projected into it.
  • Project fewer attributes – as secondary indexes consume storage and provisioned throughput, keep the size of the index as small as possible, as it provides greater performance.
  • Keep the number of indexes to a minimum – don’t create secondary indexes on attributes that aren’t queried often. Indexes that are seldom used contribute to increased storage and I/O costs without improving application performance.

Large Items and Attributes

  • DynamoDB currently limits the size of each item stored in a table to 400 KB
  • Use shorter (yet intuitive!) attribute names
  • Keep item size small
  • Use compression (GZIP)
  • Split large attributes across multiple items
  • Store metadata in DynamoDB and large BLOBs or attributes in S3

Querying and Scanning Data

  • Avoid scans and filters – Scan operations are less efficient than other operations in DynamoDB. A Scan operation always scans the entire table or secondary index. It then filters out values to provide the result, essentially adding the extra step of removing data from the result set.
  • Use eventual consistency for reads

Time Series Data

  • Use a table per day, week, month, etc. for storing time-series data – create one table per period, provisioned with the required read and write capacity and the required indexes.
  • Before the end of each period, prebuild the table for the next period. Just as the current period ends, direct event traffic to the new table. Assign names to the tables that specify the periods they have recorded.
  • As soon as a table is no longer being written to, reduce its provisioned write capacity to a lower value (for example, 1 WCU), and provision whatever read capacity is appropriate. Reduce the provisioned read capacity of earlier tables as they age. Archive or delete the tables whose contents are rarely or never needed.

Reference

DynamoDB Best Practices

AWS RDS Aurora Serverless

Aurora Serverless

  • Amazon Aurora Serverless is an on-demand, autoscaling configuration for the MySQL-compatible and PostgreSQL-compatible editions of Aurora.
  • An Aurora Serverless DB cluster automatically starts up, shuts down, and scales capacity up or down based on the application’s needs.
  • Aurora Serverless enables running database in the cloud without managing any database instances.
  • Aurora Serverless provides a relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads.
  • Aurora Serverless Use Cases include
    • Infrequently-Used Applications
    • New Applications – where the needs and instance size are yet to be determined.
    • Variable and Unpredictable Workloads – scale as per the needs
    • Development and Test Databases
    • Multitenant Applications
  • Aurora Serverless DB cluster does not have a public IP address and can be accessed only from within a VPC based on the Amazon VPC service.

Aurora Serverless Architecture

 Aurora Serverless Architecture

  • Aurora Serverless separates Storage and Compute, so it can scale down to zero processing and you pay only for storage.
  • With Aurora Serverless, a database endpoint is created without specifying the DB instance class size.
  • Minimum and maximum capacity is set in terms of Aurora capacity units (ACUs). Each ACU is a combination of processing and memory capacity.
  • Database storage automatically scales from 10 GiB to 64 TiB, the same as storage in a standard Aurora DB cluster.
  • The minimum Aurora capacity unit is the lowest ACU to which the DB cluster can scale down. The maximum Aurora capacity unit is the highest ACU to which the DB cluster can scale up. Based on the settings, Aurora Serverless automatically creates scaling rules for thresholds for CPU utilization, connections, and available memory.
  • Database endpoint connects to a proxy fleet that routes the workload to a fleet of resources that are automatically scaled.
  • Aurora Serverless manages the connections automatically.
  • Proxy fleet enables continuous connections as Aurora Serverless scales the resources automatically based on the minimum and maximum capacity specifications.
  • Database client applications don’t need to change to use the proxy fleet.
  • Scaling is rapid because it uses a pool of “warm” resources that are always ready to service requests.
  • Aurora Serverless supports Automatic Pause, where the DB cluster can be paused after a given amount of time with no activity. The default inactivity timeout is five minutes. Pausing the DB cluster can be disabled (see the cluster-creation sketch after this list).
  • Automatic Pause reduces the compute charges to zero and only storage is charged. If database connections are requested when an Aurora Serverless DB cluster is paused, the DB cluster automatically resumes and services the connection requests.
  • When the DB cluster resumes activity, it has the same capacity as it had when Aurora paused the cluster. The number of ACUs depends on how much Aurora scaled the cluster up or down before pausing it.
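
A sketch of creating an Aurora Serverless (v1) cluster with boto3, setting the ACU range and auto-pause described above; the identifiers and credentials are hypothetical:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-cluster",  # hypothetical
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",  # prefer Secrets Manager in practice
    ScalingConfiguration={
        "MinCapacity": 1,              # ACUs
        "MaxCapacity": 8,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,  # pause after 5 idle minutes
    },
)
```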

Aurora Serverless and Failover

  • Aurora Serverless compute layer is placed in a single AZ
  • As Aurora separates compute capacity and storage, the storage volume for the cluster is spread across multiple AZs. The data remains available even if outages affect the DB instance or the associated AZ.
  • Aurora Serverless supports automatic multi-AZ failover, where if the DB instance for a DB cluster becomes unavailable or the Availability Zone (AZ) it is in fails, Aurora recreates the DB instance in a different AZ.
  • Aurora Serverless failover mechanism takes longer than for an Aurora Provisioned cluster.
  • Aurora Serverless failover time is currently undefined because it depends on demand and capacity availability in other AZs within the given AWS Region

Aurora Serverless Auto Scaling

  • Aurora Serverless automatically scales based on the active database workload (CPU or connections); in some cases, capacity might not scale fast enough to meet a sudden workload change, such as a large number of new transactions.
  • Once a scaling operation is initiated, Aurora Serverless attempts to find a scaling point, which is a point in time at which the database can safely complete scaling.
  • Aurora Serverless might not be able to find a scaling point and will not scale if there are
    • long-running queries or transactions in progress, or
    • temporary tables or table locks in use.
  • Supports cooldown periods
    • After a scale-up, Aurora Serverless has a 15-minute cooldown period for a subsequent scale-down
    • After a scale-down, Aurora Serverless has a 310-second cooldown period for a subsequent scale-down
  • Aurora Serverless has no cooldown period for scale-up activities and scales up as and when necessary

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated
  • Open to further feedback, discussion and correction.

AWS RDS Multi-AZ Deployment

RDS Multi-AZ Deployment Overview

  • Multi-AZ deployments provide high availability and automatic failover support for DB instances
  • Multi-AZ helps improve the durability and availability of a critical system, enhancing availability during planned system maintenance, DB instance failure, and Availability Zone disruption.
  • Multi-AZ is a high-availability feature, not a scaling solution for read-only scenarios; the standby replica can’t be used to serve read traffic. To serve read-only traffic, use a Read Replica.
  • Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring.
  • In a Multi-AZ deployment,
    • RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone.
    • Copies of data are stored in different Availability Zones for greater levels of data durability.
    • Primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide
      • data redundancy,
      • eliminate I/O freezes during snapshots and backups
      • and minimize latency spikes during system backups.
    • DB instances may have increased write and commit latency compared to a Single AZ deployment, due to the synchronous data replication
    • Transaction success is returned only if the commit is successful both on the primary and the standby DB
    • There might be a change in latency if the deployment fails over to the standby replica, although AWS is engineered with low-latency network connectivity between Availability Zones.
  • When using the BYOL licensing model, a license for both the primary instance and the standby replica is required
  • For production workloads, it is recommended to use Multi-AZ deployment with Provisioned IOPS and DB instance classes (m1.large and larger), optimized for Provisioned IOPS for fast, consistent performance.
  • When Single-AZ deployment is modified to a Multi-AZ deployment (for engines other than SQL Server or Amazon Aurora)
    • RDS takes a snapshot of the primary DB instance from the deployment and restores the snapshot into another Availability Zone.
    • RDS then sets up synchronous replication between the primary DB instance and the new instance.
    • This avoids downtime during conversion from Single-AZ to Multi-AZ

RDS Multi-AZ Failover Process

  • In the event of a planned or unplanned outage of the DB instance,
    • RDS automatically switches to a standby replica in another AZ, if enabled for Multi-AZ.
    • Time it takes for the failover to complete depends on the database activity and other conditions at the time the primary DB instance became unavailable.
    • Failover times are typically 60-120 secs. However, large transactions or a lengthy recovery process can increase failover time.
    • Failover mechanism automatically changes the DNS record of the DB instance to point to the standby DB instance.
    • Multi-AZ switch is seamless to the applications, as there is no change in the endpoint URL; applications just need to re-establish any existing connections to the DB instance.
  • RDS handles failover automatically so that database operations can be resumed as quickly as possible without administrative intervention.
  • Primary DB instance switches over automatically to the standby replica if any of the following conditions occur:
    • An Availability Zone outage
    • Primary DB instance fails
    • DB instance’s server type is changed
    • Operating system of the DB instance is undergoing software patching
    • A manual failover of the DB instance was initiated using Reboot with failover, also referred to as a forced failover (see the sketch after this list)
  • Whether the Multi-AZ DB instance has failed over can be determined in the following ways:
    • DB event subscriptions can be setup to notify you via email or SMS that a failover has been initiated.
    • DB events can be viewed via the Amazon RDS console or APIs.
    • Current state of your Multi-AZ deployment can be viewed via the RDS console and APIs.
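
A one-call sketch of simulating a Multi-AZ failover with boto3; the instance identifier is hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Reboot with forced failover: promotes the standby in the other AZ
rds.reboot_db_instance(
    DBInstanceIdentifier="mydb",  # hypothetical Multi-AZ instance
    ForceFailover=True,
)
```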

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company’s requirements?
    1. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone (does not provide High Availability out of the box)
    2. Amazon RDS for MySQL with Multi-AZ
    3. Amazon ElastiCache (Just a caching solution)
    4. Amazon DynamoDB (Not suitable for complex queries and joins)
  2. What would happen to an RDS (Relational Database Service) multi-Availability Zone deployment if the primary DB instance fails?
    1. IP of the primary DB Instance is switched to the standby DB Instance.
    2. A new DB instance is created in the standby availability zone.
    3. The canonical name record (CNAME) is changed from primary to standby.
    4. The RDS (Relational Database Service) DB instance reboots.
  3. Will my standby RDS instance be in the same Availability Zone as my primary?
    1. Only for Oracle RDS types
    2. Yes
    3. Only if configured at launch
    4. No
  4. Is creating a Read Replica of another Read Replica supported?
    1. Only in certain regions
    2. Only with MySQL based RDS
    3. Only for Oracle RDS types
    4. No
  5. A user is planning to set up the Multi-AZ feature of RDS. Which of the below mentioned conditions won’t take advantage of the Multi-AZ feature?
    1. Availability zone outage
    2. A manual failover of the DB instance using Reboot with failover option
    3. Region outage
    4. When the user changes the DB instance’s server type
  6. When you run a DB Instance as a Multi-AZ deployment, the “_____” serves database writes and reads
    1. secondary
    2. backup
    3. stand by
    4. primary
  7. When running my DB Instance as a Multi-AZ deployment, can I use the standby for read or write operations?
    1. Yes
    2. Only with MSSQL based RDS
    3. Only for Oracle RDS instances
    4. No
  8. Read Replicas require a transactional storage engine and are only supported for the _________ storage engine
    1. OracleISAM
    2. MSSQLDB
    3. InnoDB
    4. MyISAM
  9. A user is configuring the Multi-AZ feature of an RDS DB. The user came to know that this RDS DB does not use the AWS technology, but uses server mirroring to achieve replication. Which DB is the user using right now?
    1. MySQL
    2. Oracle
    3. MS SQL
    4. PostgreSQL
  10. If you have chosen Multi-AZ deployment, in the event of a planned or unplanned outage of your primary DB Instance, Amazon RDS automatically switches to the standby replica. The automatic failover mechanism simply changes the ______ record of the main DB Instance to point to the standby DB Instance.
    1. DNAME
    2. CNAME
    3. TXT
    4. MX
  11. When automatic failover occurs, Amazon RDS will emit a DB Instance event to inform you that automatic failover occurred. You can use the _____ to return information about events related to your DB Instance
    1. FetchFailure
    2. DescriveFailure
    3. DescribeEvents
    4. FetchEvents
  12. The new DB Instance that is created when you promote a Read Replica retains the backup window period.
    1. TRUE
    2. FALSE
  13. Will I be alerted when automatic failover occurs?
    1. Only if SNS configured
    2. No
    3. Yes
    4. Only if Cloudwatch configured
  14. Can I initiate a “forced failover” for my MySQL Multi-AZ DB Instance deployment?
    1. Only in certain regions
    2. Only in VPC
    3. Yes
    4. No
  15. A user is accessing RDS from an application. The user has enabled the Multi-AZ feature with the MS SQL RDS DB. During a planned outage how will AWS ensure that a switch from DB to a standby replica will not affect access to the application?
    1. RDS will have an internal IP which will redirect all requests to the new DB
    2. RDS uses DNS to switch over to standby replica for seamless transition
    3. The switch over changes Hardware so RDS does not need to worry about access
    4. RDS will have both the DBs running independently and the user has to manually switch over
  16. Which of the following is part of the failover process for a Multi-AZ Amazon Relational Database Service (RDS) instance?
    1. The failed RDS DB instance reboots.
    2. The IP of the primary DB instance is switched to the standby DB instance.
    3. The DNS record for the RDS endpoint is changed from primary to standby.
    4. A new DB instance is created in the standby availability zone.
  17. Which of these is not a reason a Multi-AZ RDS instance will failover?
    1. An Availability Zone outage
    2. A manual failover of the DB instance was initiated using Reboot with failover
    3. To autoscale to a higher instance class (Refer link)
    4. Master database corruption occurs
    5. The primary DB instance fails
  18. How does Amazon RDS multi Availability Zone model work?
    1. A second, standby database is deployed and maintained in a different availability zone from master, using synchronous replication. (Refer link)
    2. A second, standby database is deployed and maintained in a different availability zone from master using asynchronous replication.
    3. A second, standby database is deployed and maintained in a different region from master using asynchronous replication.
    4. A second, standby database is deployed and maintained in a different region from master using synchronous replication.
  19. A user is using a small MySQL RDS DB. The user is experiencing high latency due to the Multi AZ feature. Which of the below mentioned options may not help the user in this situation?
    1. Schedule the automated back up in non-working hours
    2. Use a large or higher size instance
    3. Use PIOPS
    4. Take a snapshot from standby Replica
  20. What is the charge for the data transfer incurred in replicating data between your primary and standby?
    1. No charge. It is free.
    2. Double the standard data transfer charge
    3. Same as the standard data transfer charge
    4. Half of the standard data transfer charge
  21. A user has enabled the Multi AZ feature with the MS SQL RDS database server. Which of the below mentioned statements will help the user understand the Multi AZ feature better?
    1. In a Multi AZ, AWS runs two DBs in parallel and copies the data asynchronously to the replica copy
    2. In a Multi AZ, AWS runs two DBs in parallel and copies the data synchronously to the replica copy
    3. In a Multi AZ, AWS runs just one DB but copies the data synchronously to the standby replica
    4. AWS MS SQL does not support the Multi AZ feature

AWS RDS Read Replicas

RDS Read Replicas Overview

  • Read replicas enable increased scalability and database availability in the case of an AZ failure.
  • Read Replicas allow elastic scaling beyond the capacity constraints of a single DB instance for read-heavy database workloads
  • RDS read replicas can be Multi-AZ i.e. set up with their own standby instances in a different AZ.
  • Load on the source DB instance can be reduced by routing read queries from applications to the Read Replica.
  • One or more replicas of a given source DB instance can serve high-volume application read traffic from multiple copies of the data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances.
  • RDS uses DB engines’ built-in replication functionality to create a special type of DB instance called a Read Replica from a source DB instance. It uses the engines’ native asynchronous replication to update the read replica whenever there is a change to the source DB instance.
  • Read replicas are available in RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server as well as Aurora.
  • RDS supports replication between an RDS MySQL or MariaDB DB instance and a MySQL or MariaDB instance that is external to RDS using Binary Log File Position or Global Transaction Identifiers (GTIDs) replication.

Read Replicas creation

  • Up to five Read Replicas can be created from one source DB instance.
  • Creation process (a boto3 sketch follows this list)
    • Automatic backups must be enabled on the source DB instance by setting the backup retention period to a value other than 0
    • Existing DB instance needs to be specified as the source.
    • RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot.
    • RDS then uses the asynchronous replication method for the DB engine to update the Read Replica for any changes to the source DB instance.
  • RDS replicates all databases in the source DB instance.
  • RDS sets up a secure communications channel between the source DB instance and the Read Replica, if that Read Replica is in a different AWS region from the DB instance.
  • RDS establishes any AWS security configurations, such as adding security group entries, needed to enable the secure channel.
  • During the Read Replica creation, a brief I/O suspension on the source DB instance can be experienced as the DB snapshot occurs.
  • I/O suspension typically lasts about one minute and can be avoided if the source DB instance is a Multi-AZ deployment (in the case of Multi-AZ deployments, DB snapshots are taken from the standby).
  • Read Replica creation can be slow if any long-running transactions are executing on the source, so wait for them to complete
  • For multiple Read Replicas created in parallel from the same source DB instance, only one snapshot is taken at the start of the first create action.
  • A Read Replica can be promoted to a new independent source DB, in which case the replication link is broken between the Read Replica and the source DB. However, replication continues for the other replicas using the original source DB as the replication source
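
A sketch of creating a Read Replica with boto3; the identifiers are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Source must have automated backups enabled (retention period > 0)
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",  # hypothetical replica name
    SourceDBInstanceIdentifier="mydb",      # use the source ARN for cross-Region
)
```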

Read Replica Deletion & DB Failover

  • Read Replicas must be explicitly deleted, using the same mechanisms for deleting a DB instance.
  • If the source DB instance is deleted without deleting the replicas, each replica is promoted to a stand-alone, single-AZ DB instance.
  • If the source instance of a Multi-AZ deployment fails over to the standby, any associated Read Replicas are switched to use the secondary as their replication source.

Read Replica Storage & Compute requirements

  • A Read Replica, by default, is created with the same storage type as the source DB instance.
  • For replication to operate effectively, each Read Replica should have the same amount of compute & storage resources as the source DB instance.
  • If the source DB instance is scaled, Read Replicas should be scaled accordingly

Cross-Region Read Replicas

  • Supported for MySQL, PostgreSQL, MariaDB and Oracle
  • Not supported for SQL Server
  • helps to improve
    • disaster recovery capabilities (reduces RTO and RPO),
    • scale read operations into a region closer to end users,
    • migration from a data center in one region to another region
  • A source DB instance can have cross-Region read replicas in multiple AWS Regions.
  • A cross-Region RDS read replica can be created from a source RDS DB instance that is not itself a read replica of another RDS DB instance (a boto3 sketch follows this list).
  • Replica lag is higher for cross-Region replicas; the extra lag comes from the longer network paths between regional data centers.
  • RDS can’t guarantee more than five cross-Region read replica instances, due to the limit on the number of access control list (ACL) entries for a VPC
  • Read replica uses the default DB parameter group and default security group for the specified DB engine.
  • Deleting the source for a cross-Region read replica will result in
    • read replica promotion for MariaDB, MySQL, and Oracle DB instances
    • no read replica promotion for PostgreSQL DB instances and the replication status of the read replica is set to terminated.
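
For a cross-Region replica, the client is created in the destination Region and the source is referenced by its ARN. A minimal boto3 sketch, assuming hypothetical Regions, account ID, and identifiers:

    import boto3

    # Create the client in the destination Region for the replica.
    rds = boto3.client("rds", region_name="eu-west-1")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="mydbinstance-replica-eu",            # hypothetical replica name
        SourceDBInstanceIdentifier=(
            "arn:aws:rds:us-east-1:123456789012:db:mydbinstance"   # hypothetical source ARN
        ),
        SourceRegion="us-east-1",  # lets boto3 generate the required pre-signed URL
    )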

Read Replica Features & Limitations

  • RDS does not support circular replication.
  • A DB instance cannot be configured to serve as a replication source for an existing DB instance; a new Read Replica can be created only from an existing DB instance. For e.g., if MyDBInstance replicates to ReadReplica1, ReadReplica1 can’t be configured to replicate back to MyDBInstance; from ReadReplica1, only a new Read Replica, such as ReadRep2, can be created.
  • Read Replica can be created from other Read replicas as well. However, the replica lag is higher for these instances and there cannot be more than four instances involved in a replication chain.

Read Replica Comparison

Read Replica Use Cases

  • Scaling beyond the compute or I/O capacity of a single DB instance for read-heavy database workloads, directing excess read traffic to Read Replica(s)
  • Serving read traffic while the source DB instance is unavailable, e.g., if the source DB instance cannot take I/O requests due to backup I/O suspension or scheduled maintenance, the read traffic can be directed to the Read Replica(s). However, the data might be stale.
  • Business reporting or data warehousing scenarios where business reporting queries can be executed against a Read Replica, rather than the primary, production DB instance.
  • Implementing disaster recovery: the Read Replica can be promoted to a standalone instance if the primary DB instance fails.
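
To direct read or reporting traffic at a replica, applications connect to the replica’s own endpoint rather than the primary’s. A small boto3 sketch that looks up a replica endpoint (the identifier is hypothetical):

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Fetch the replica's endpoint; reporting tools connect here
    # instead of to the primary instance's endpoint.
    reply = rds.describe_db_instances(DBInstanceIdentifier="mydbinstance-replica1")
    endpoint = reply["DBInstances"][0]["Endpoint"]
    print(f"{endpoint['Address']}:{endpoint['Port']}")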


AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You are running a successful multi-tier web application on AWS and your marketing department has asked you to add a reporting tier to the application. The reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application’s database. You are currently running a Multi-AZ RDS MySQL instance for the database tier. You also have implemented ElastiCache as a database caching layer between the application tier and database tier. Please select the answer that will allow you to successfully implement the reporting tier with as little impact as possible to your database.
    1. Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte range requests.
    2. Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ (Standby instance cannot be used as a scaling solution)
    3. Launch a RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica.
    4. Generate the reports by querying the ElastiCache database caching tier. (ElastiCache does not maintain full data and is simply a caching solution)
  2. Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)
    1. Deploy ElastiCache in-memory cache running in each availability zone
    2. Implement sharding to distribute load to multiple RDS MySQL instances (this is only a read contention, the writes work fine)
    3. Increase the RDS MySQL Instance size and Implement provisioned IOPS (not scalable, this is only a read contention, the writes work fine)
    4. Add an RDS MySQL read replica in each availability zone
  3. Your company has HQ in Tokyo and branch offices all over the world and is using logistics software with a multi-regional deployment on AWS in Japan, Europe and US. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database. In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices; this batch process must be completed as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements?
    1. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
    2. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
    3. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
    4. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region
    5. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process
  4. Your business is building a new application that will store its entire customer database on a RDS MySQL database, and will have various applications and users that will query that data for different purposes. Large analytics jobs on the database are likely to cause other applications to not be able to get the query results they need to, before time out. Also, as your data grows, these analytics jobs will start to take more time, increasing the negative effect on the other applications. How do you solve the contention issues between these different workloads on the same data?
    1. Enable Multi-AZ mode on the RDS instance
    2. Use ElastiCache to offload the analytics job data
    3. Create RDS Read-Replicas for the analytics work
    4. Run the RDS instance on the largest size possible
  5. If I have multiple Read Replicas for my master DB Instance and I promote one of them, what happens to the rest of the Read Replicas?
    1. The remaining Read Replicas will still replicate from the older master DB Instance
    2. The remaining Read Replicas will be deleted
    3. The remaining Read Replicas will be combined to one read replica
  6. You need to scale an RDS deployment. You are operating at 10% writes and 90% reads, based on your logging. How best can you scale this in a simple way?
    1. Create a second master RDS instance and peer the RDS groups.
    2. Cache all the database responses on the read side with CloudFront.
    3. Create read replicas for RDS since the load is mostly reads.
    4. Create a Multi-AZ RDS install and route read traffic to the standby.
  7. A customer is running an application in US-West (Northern California) region and wants to setup disaster recovery failover to the Asian Pacific (Singapore) region. The customer is interested in achieving a low Recovery Point Objective (RPO) for an Amazon RDS multi-AZ MySQL database instance. Which approach is best suited to this need?
    1. Synchronous replication
    2. Asynchronous replication
    3. Route53 health checks
    4. Copying of RDS incremental snapshots
  8. A user is using a small MySQL RDS DB. The user is experiencing high latency due to the Multi-AZ feature. Which of the below mentioned options may not help the user in this situation?
    1. Schedule the automated back up in non-working hours
    2. Use a large or higher size instance
    3. Use PIOPS
    4. Take a snapshot from standby Replica
  9. My Read Replica appears “stuck” after a Multi-AZ failover and is unable to obtain or apply updates from the source DB Instance. What do I do?
    1. You will need to delete the Read Replica and create a new one to replace it.
    2. You will need to disassociate the DB Engine and re-associate it.
    3. The instance should be deployed to Single AZ and then moved to Multi- AZ once again
    4. You will need to delete the DB Instance and create a new one to replace it.
  10. A company is running a batch analysis every hour on their main transactional DB running on an RDS MySQL instance to populate their central Data Warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible?
    1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
    2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
    3. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard
    4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.

AWS Certified Data Analytics – Specialty (DAS-C01) Exam Learning Path

  • Recently validated myself with the AWS Certified Data Analytics – Specialty (DAS-C01).
  • Data Analytics – Specialty (DAS-C01) has replaced the previous Big Data – Specialty (BDS-C00) exam.
  • Big Data in itself is a very vast topic and, with AWS services, there is a lot to cover and know for the exam.
  • If you have worked on Big Data technologies including a bit of Visualization, it would be a great asset to pass this exam.

AWS Certified Data Analytics – Specialty (DAS-C01) exam basically validates your ability to

  • Define AWS data analytics services and understand how they integrate with each other.
  • Explain how AWS data analytics services fit in the data lifecycle of collection, storage, processing, and visualization.

Refer AWS Certified Data Analytics – Specialty Exam Guide for details

AWS Certified Data Analytics - Specialty DAS-C01 Domains

AWS Certified Data Analytics – Specialty (DAS-C01) Exam Summary

  • AWS Certified Data Analytics – Specialty exam, as its name suggests, covers a lot of Big Data concepts, right from data transfer and collection techniques, storage, and pre- and post-processing, to analytics and visualization, with the added concept of data security at each layer.
  • AWS Certified Data Analytics – Specialty exam has 65 questions to be solved within a time limit of 170 minutes
  • Questions and answer options are pretty long, so you need time to read through them to make sense of the requirements and filter out the answers.
  • As the exam was online from home, there was no access to paper and pen, but the trick remains the same: read the question, draw a rough architecture, and focus on the areas that you need to improve. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check the difference area and that would help you reach the right answer or at least have a 50% chance of getting it right.
  • Be sure to cover the following topics
    • Whitepapers and articles
    • Analytics
      • Make sure you know and cover all the services in depth, as about 80% of the exam is focused on topics like Glue, Kinesis and Redshift.
      • Glue
        • DAS-C01 covers Glue in detail. This is one of the newly added services as compared to the Big Data – Specialty exam
        • Understand Glue as a fully-managed, extract, transform, and load (ETL) service
        • Glue natively supports RDS, Redshift, S3 and databases on EC2 instances.
        • Glue provides Glue crawlers to crawl data and help discover and create schemas in the Glue Data Catalog (see the crawler sketch after this list)
        • Glue supports Job Bookmark that helps track data that has already been processed during a previous run of an ETL job by persisting state information from the job run. Job bookmarks help AWS Glue maintain state information and prevent the reprocessing of old data or duplicate records.
      • Elastic Map Reduce
        • Understand EMR in depth
        • Understand EMRFS (hint: use Consistent View to make sure S3 objects referred to by different applications are in sync)
        • Know EMR Best Practices (hint: start with many small nodes instead of a few large nodes)
        • Know EMR Encryption options
          • supports SSE-S3, SSE-KMS, CSE-KMS and CSE-Custom encryption for EMRFS
          • doesn’t support SSE-C encryption
          • supports LUKS encryption for local disks
          • supports TLS for data in transit encryption
        • Know that the Hive metastore can be externally hosted using RDS, Aurora and the AWS Glue Data Catalog
        • Also know the different technologies
          • Presto is a fast SQL query engine designed for interactive analytic queries over large datasets from multiple sources
          • Spark is a distributed processing framework and programming model that helps do machine learning, stream processing, or graph analytics using Amazon EMR clusters
          • Zeppelin/Jupyter serve as notebooks for interactive data exploration, providing open-source web applications that can be used to create and share documents that contain live code, equations, visualizations, and narrative text
          • Phoenix is used for OLTP and operational analytics, allowing you to use standard SQL queries and JDBC APIs to work with an Apache HBase backing store
      • Kinesis
        • Understand Kinesis Data Streams and Kinesis Data Firehose in depth
        • Know Kinesis Data Streams vs Kinesis Firehose
          • Know Kinesis Data Streams is open-ended on both the producer and consumer side. It supports KCL and works with Spark.
          • Know Kinesis Firehose is open-ended for the producer only. Data is stored in S3, Redshift and Elasticsearch.
          • Kinesis Firehose works in batches with a minimum 60-second interval and is near-real-time.
          • Kinesis Firehose supports data transformation, including custom transformation using Lambda
        • Understand Kinesis Encryption (hint: use server side encryption or encrypt in producer for data streams)
        • Know the difference between KPL vs SDK (hint: PutRecords is synchronous, while KPL supports asynchronous batching)
        • Kinesis Best Practices (hint: increase performance by increasing the number of shards)
      • Elasticsearch
        • Know Elasticsearch is a search service which supports indexing, full-text search, faceting, etc.
        • Elasticsearch can be used for analysis and supports visualization using Kibana, which can be near real time.
      • Redshift
        • Understand Redshift in depth
        • Understand Redshift Advanced topics like Workload Management, Distribution Style, Sort key
        • Understand Redshift Spectrum, which allows querying data in S3 without loading it into the existing Redshift cluster. It also helps join S3 data with Redshift data.
        • Know Redshift Best Practices w.r.t. the selection of Distribution Style and Sort Key, and importing/exporting data
          • a single COPY command allows parallelism and performs better than multiple COPY commands
          • COPY command can use manifest files to load data
          • COPY command handles encrypted data
        • Understand Redshift Resizing cluster options (elastic resize did not support node type changes before, but does now)
        • Know Redshift views to control access to data.
      • Athena
        • serverless, interactive query service to analyze data in S3 using standard SQL
      • QuickSight
        • Understand QuickSight
        • Know Visual Types (hint: esp. plotting line, bar and story-based visualizations)
        • Know Supported Data Sources (hint: supports files)
        • QuickSight provides direct integration with Microsoft AD
        • QuickSight supports Row level security using dataset rules
        • QuickSight supports ML insights as well
      • Know Data Pipeline for data transfer
    • Security, Identity & Compliance
    • Management & Governance Tools
      • Understand AWS CloudWatch for Logs and Metrics. Also, CloudWatch Events provides more real-time alerts as compared to CloudTrail
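
For the Glue crawler flow mentioned in the list above, here is a minimal boto3 sketch that crawls an S3 prefix and registers the discovered schema in the Glue Data Catalog; the role ARN, bucket and names are hypothetical placeholders.

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    # Create a crawler that scans an S3 prefix and writes the inferred
    # table definitions into a Glue Data Catalog database.
    glue.create_crawler(
        Name="sales-data-crawler",                               # hypothetical crawler name
        Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",   # hypothetical IAM role
        DatabaseName="sales_db",                                 # target catalog database
        Targets={"S3Targets": [{"Path": "s3://my-data-lake/sales/"}]},
    )
    glue.start_crawler(Name="sales-data-crawler")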

AWS Certified Data Analytics – Specialty (DAS-C01) Exam Resources

AWS QuickSight

  • QuickSight is a very fast, easy-to-use, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data, anytime, on any device.
  • QuickSight enables organizations to scale their business analytics capabilities to hundreds of thousands of users, and delivers fast and responsive query performance by using a robust in-memory engine (SPICE).
  • QuickSight supports
    • Excel files and flat files like CSV, TSV, CLF, ELF
    • on-premises databases like PostgreSQL, SQL Server and MySQL
    • SaaS applications like Salesforce
    • and AWS data sources such as Redshift, RDS, Aurora, Athena, and S3
  • QuickSight supports various functions to format and transform the data.
    • alias data fields and change data types.
    • subset the data using built-in filters and perform database join operations using drag and drop.
    • create calculated fields using mathematical operations and built-in functions such as conditional statements, and string, numerical and date functions
  • QuickSight supports assorted visualizations that facilitate different analytical approaches:
    • Comparison and distribution – Bar charts (several assorted variants)
    • Changes over time – Line graphs, Area line charts
    • Correlation – Scatter plots, Heat maps
    • Aggregation – Pie graphs, Tree maps
    • Tabular – Pivot tables
  • QuickSight comes with a built-in suggestion engine that provides suggested visualizations based on the properties of the underlying datasets
  • QuickSight supports Stories, which provide guided tours through specific views of an analysis. They are used to convey key points, a thought process, or the evolution of an analysis for collaboration.
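
As a rough illustration of wiring up one of the AWS data sources listed above, the following boto3 sketch registers an Athena data source in QuickSight; the account ID and names are hypothetical.

    import boto3

    qs = boto3.client("quicksight", region_name="us-east-1")

    # Register Athena as a QuickSight data source; datasets can then be
    # built on top of it (and optionally imported into SPICE).
    qs.create_data_source(
        AwsAccountId="123456789012",        # hypothetical account ID
        DataSourceId="athena-source-1",     # hypothetical data source ID
        Name="Athena",
        Type="ATHENA",
        DataSourceParameters={"AthenaParameters": {"WorkGroup": "primary"}},
    )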

Super-fast, Parallel, In-memory Calculation Engine – SPICE

  • QuickSight is built with “SPICE” – a Super-fast, Parallel, In-memory Calculation Engine
  • SPICE uses a combination of columnar storage, in-memory technologies enabled through the latest hardware innovations and machine code generation to run interactive queries on large datasets and get rapid responses.
  • SPICE supports rich data discovery, business analytics, and calculation capabilities to help customers derive valuable insights from their data without worrying about provisioning or managing infrastructure.
  • Data in SPICE is persisted until it is explicitly deleted by the user.
  • QuickSight can also be configured to keep the data in SPICE up-to-date as the data in the underlying sources changes (a refresh sketch follows this list).
  • SPICE automatically replicates data for high availability and enables QuickSight to scale to hundreds of thousands of users who can all simultaneously perform fast interactive analysis across a wide variety of AWS data sources.
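
Keeping SPICE data up to date can be done on a schedule from the console, or programmatically by triggering an ingestion. A minimal boto3 sketch with hypothetical IDs:

    import boto3

    qs = boto3.client("quicksight", region_name="us-east-1")

    # Trigger a SPICE refresh (ingestion) for an existing dataset.
    qs.create_ingestion(
        AwsAccountId="123456789012",        # hypothetical account ID
        DataSetId="my-dataset-id",          # hypothetical dataset ID
        IngestionId="manual-refresh-001",   # unique ID for this refresh
    )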

Authors and Readers

  • QuickSight Author is a user who
    • can connect to data sources (within AWS or outside), create visuals and analyze data.
    • can create interactive dashboards using advanced QuickSight capabilities such as parameters and calculated fields, and publish and share dashboards with other users in the account.
  • QuickSight Reader is a user who
    • consumes interactive dashboards.
    • can log in via their organization’s preferred authentication mechanism (QuickSight username/password, SAML portal or AD auth), view shared dashboards, filter data, drill down to details or export data as a CSV file, using a web browser or mobile app.
    • Readers do not have any allocated SPICE capacity.
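
A hedged boto3 sketch of registering a user into the Reader role (use "AUTHOR" for users who build analyses and dashboards); the account ID, IAM ARN and email are hypothetical.

    import boto3

    qs = boto3.client("quicksight", region_name="us-east-1")

    # Register an IAM identity as a QuickSight Reader.
    qs.register_user(
        AwsAccountId="123456789012",                     # hypothetical account ID
        Namespace="default",
        IdentityType="IAM",
        IamArn="arn:aws:iam::123456789012:user/jane",    # hypothetical IAM user
        Email="jane@example.com",                        # hypothetical email
        UserRole="READER",
    )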

Security and access

  • QuickSight supports multi-factor authentication (MFA) for the AWS account via the AWS Management console.
  • For a VPC with public connectivity, QuickSight’s IP address range can be added to the database instances’ security group rules to enable traffic flow into the VPC and database instances (see the sketch after this list).
  • QuickSight supports Row-level security (RLS), which enables dataset owners to control access to data at row granularity based on permissions associated with the user interacting with the data. With RLS, QuickSight users only need to manage a single set of data and apply appropriate row-level dataset rules to it. All associated dashboards and analyses will enforce these rules, simplifying dataset management and removing the need to maintain multiple datasets for users with different data access privileges.
  • QuickSight supports Private VPC (Virtual Private Cloud) Access, which uses an Elastic Network Interface (ENI) for secure, private communication with data sources in a VPC. It also allows the use of AWS Direct Connect to create a secure, private link with on-premises resources.
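
For the public-connectivity case above, adding QuickSight’s regional IP range to a database security group might look like the sketch below; the CIDR shown should be verified against the current QuickSight documentation, and the security group ID and port are hypothetical.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Allow QuickSight's regional IP range into the database port.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",   # hypothetical security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,             # e.g. PostgreSQL; use your DB engine's port
            "ToPort": 5432,
            "IpRanges": [{
                "CidrIp": "52.23.63.224/27",  # us-east-1 range; verify in current docs
                "Description": "QuickSight access",
            }],
        }],
    )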


AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You are using QuickSight to identify demand trends over multiple months for your top five product lines. Which type of visualization do you choose?
    1. Scatter Plot
    2. Pie Chart
    3. Pivot Table
    4. Line Chart
  2. You need to provide customers with rich visualizations that allow you to easily connect multiple disparate data sources in S3, Redshift, and several CSV files. Which tool should you use that requires the least setup?
    1. Hue on EMR
    2. Redshift
    3. QuickSight
    4. Elasticsearch