AWS Certified Database – Specialty (DBS-C01) Exam Learning Path

11 down! Continuing my AWS journey, which has lasted over 3 years now with multiple rounds of validating and re-validating certs, I took another step and passed the AWS Certified Database – Specialty (DBS-C01) certification.

The AWS Certified Database – Specialty (DBS-C01) exam basically validates your ability to

  • Understand and differentiate the key features of AWS database services
  • Analyze needs and requirements to design and recommend appropriate database solutions using AWS services

AWS Certified Database – Specialty (DBS-C01) Exam Resources

AWS Certified Database – Specialty (DBS-C01) Exam Summary

  • AWS Certified Database – Specialty exam focuses on data services across the relational, non-relational, graph, caching, and data warehousing categories. It also focuses on data migration.
  • AWS Certified Database – Specialty exam has 65 questions with a time limit of 170 minutes
  • Questions and answer options are pretty long, so you need time to read through each of them to make sense of the requirements and filter out the answers
  • As the exam was online from home, there was no access to paper and pen, but the trick remains the same: read the question, draw a rough architecture, and focus on the areas you need to improve. Trust me, you will be able to eliminate two answers for sure and then need to focus only on the other two. Compare the remaining two answers to see where they differ; that will help you reach the right answer, or at least give you a 50% chance of getting it right.
  • Be sure to cover the following topics
    • Whitepapers and articles
    • Database
      • Make sure you know and cover all the services in depth, as 80% of the exam focuses on services like Aurora, RDS, and DynamoDB
      • Aurora
        • Understand Aurora in depth
        • Know Aurora DR & HA using Read Replicas and failover priorities
          • Aurora promotes read replicas per priority tier (tier 0 is the highest); if multiple replicas share the same tier, the largest one is promoted
        • Know Aurora Global Database
          • Aurora Global Database provides cross-region read replicas for low-latency reads. Remember it is not multi-master and does not provide low-latency writes.
        • Know Aurora Connection endpoints
          • cluster for primary read/write
          • reader for read replicas
          • custom for a specific group of instances
          • instance for a specific single instance – not recommended
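          A minimal sketch of how the endpoints are used (the endpoint names below are hypothetical, following Aurora's naming pattern):

          ```shell
          # Writes always go through the cluster (writer) endpoint, which tracks failovers
          mysql -h mydb.cluster-c1abcdef.us-east-1.rds.amazonaws.com -u admin -p mydb
          # Reads can use the reader endpoint, which load-balances across read replicas
          mysql -h mydb.cluster-ro-c1abcdef.us-east-1.rds.amazonaws.com -u admin -p mydb
          ```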
        • Know Aurora Fast Failover techniques
          • set TCP keepalives low
          • set Java DNS caching timeouts low
          • Set the timeout variables used in the JDBC connection string low
          • Use the provided read and write Aurora endpoints
          • Use cluster cache management for Aurora PostgreSQL. Cluster cache management ensures that application performance is maintained if there’s a failover.
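          As a sketch of the client-side settings above (endpoint and values are hypothetical, assuming the MySQL Connector/J driver):

          ```shell
          # Lower the JVM's DNS cache TTL so a failover's DNS change is picked up quickly,
          # and keep the JDBC connect/socket timeouts low so dead connections fail fast
          java -Dsun.net.inetaddr.ttl=1 -jar app.jar \
            "jdbc:mysql://mydb.cluster-c1abcdef.us-east-1.rds.amazonaws.com:3306/mydb?connectTimeout=3000&socketTimeout=60000"
          ```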
        • Know Aurora Serverless
        • Know Aurora Backtrack feature which rewinds the DB cluster to the specified time. It is not a replacement for backups.
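          A sketch of rewinding a cluster with Backtrack (cluster name and timestamp are illustrative; Backtrack must have been enabled with a backtrack window when the cluster was created):

          ```shell
          # Rewind the cluster in place to a point in time within the backtrack window
          aws rds backtrack-db-cluster \
            --db-cluster-identifier mydb-cluster \
            --backtrack-to 2021-03-01T12:00:00Z
          ```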
        • Supports Server Auditing Events for different activities covering logins, DML, permission changes (DCL), schema changes (DDL), etc.
        • Know Aurora Cluster Cache management feature which helps fast failover
        • Know Aurora Clone feature which allows you to create quick and cost-effective clones
        • Aurora supports fault injection queries to simulate various failures like a node going down, primary failover, etc.
        • RDS PostgreSQL and MySQL can be migrated to Aurora by creating an Aurora Read Replica from the instance. Once the replica lag is zero, switch the DNS with no data loss
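        The replica-based migration can be started from the CLI as well; a rough sketch, with identifiers and ARN made up for illustration:

        ```shell
        # Create an Aurora MySQL cluster that replicates from an existing RDS MySQL instance
        aws rds create-db-cluster \
          --db-cluster-identifier aurora-target \
          --engine aurora-mysql \
          --replication-source-identifier arn:aws:rds:us-east-1:123456789012:db:source-mysql
        ```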
        • Supports Database Activity Streams to stream audit logs to external services like Kinesis
        • Supports stored procedures calling lambda functions
      • DynamoDB
      • RDS
        • Know Relational Database Service (RDS) in depth
        • Understand RDS Snapshots, Backups and Restore
          • restoring a DB from snapshot does not retain the parameter group and security group
          • automated snapshots cannot be shared directly; copy them to a manual snapshot first.
        • Understand RDS Read Replicas
        • Understand RDS Multi-AZ
        • Understand RDS Multi-AZ vs Read Replicas (hint: cross region replication and availability of data)
          • Multi-AZ failover can be simulated using the Reboot with failover option
          • Read Replicas require automated backups enabled
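          The failover test mentioned above maps to a single CLI call (instance name is hypothetical):

          ```shell
          # Reboot the instance and force a failover to the Multi-AZ standby
          aws rds reboot-db-instance \
            --db-instance-identifier mydb \
            --force-failover
          ```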
        • Understand DB components esp. DB parameter group, DB options groups
          • Dynamic parameters are applied immediately
          • Static parameters require a manual reboot to take effect.
          • The default parameter group cannot be modified. Create a custom parameter group and associate it with the RDS instance
          • Know max connections also depends on DB instance size
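          A sketch of the custom-parameter-group workflow described above (names and values are illustrative; max_connections is a dynamic parameter in MySQL, so it can be applied immediately):

          ```shell
          # 1. Create a custom parameter group for the engine family
          aws rds create-db-parameter-group \
            --db-parameter-group-name custom-mysql8 \
            --db-parameter-group-family mysql8.0 \
            --description "Custom MySQL 8.0 parameters"
          # 2. Change a parameter in the custom group
          aws rds modify-db-parameter-group \
            --db-parameter-group-name custom-mysql8 \
            --parameters "ParameterName=max_connections,ParameterValue=500,ApplyMethod=immediate"
          # 3. Associate the group with the instance (static parameters still need a reboot)
          aws rds modify-db-instance \
            --db-instance-identifier mydb \
            --db-parameter-group-name custom-mysql8
          ```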
        • Understand RDS Security
          • RDS supports security groups to control who can access RDS instances
          • RDS supports data at rest encryption and SSL for data in transit encryption
          • RDS also supports IAM database authentication
          • An existing RDS instance cannot be encrypted in place; create a snapshot -> encrypt it -> restore as an encrypted DB
          • RDS PostgreSQL requires rds.force_ssl=1 and sslmode=ca/verify-full to enable SSL encryption
          • Know RDS Encrypted Database limitations
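          The snapshot-based encryption path above can be sketched with the CLI (identifiers are hypothetical):

          ```shell
          # 1. Snapshot the unencrypted instance
          aws rds create-db-snapshot \
            --db-instance-identifier mydb \
            --db-snapshot-identifier mydb-snap
          # 2. Copy the snapshot with a KMS key to produce an encrypted copy
          aws rds copy-db-snapshot \
            --source-db-snapshot-identifier mydb-snap \
            --target-db-snapshot-identifier mydb-snap-encrypted \
            --kms-key-id alias/aws/rds
          # 3. Restore the encrypted snapshot as a new, encrypted instance
          aws rds restore-db-instance-from-db-snapshot \
            --db-instance-identifier mydb-encrypted \
            --db-snapshot-identifier mydb-snap-encrypted
          ```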
        • Understand RDS Monitoring and Notification
          • Know RDS supports notification events through SNS for events like database creation, deletion, snapshot creation etc.
          • CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance.
          • Enhanced Monitoring metrics are useful to understand how different processes or threads on a DB instance use the CPU.
          • RDS Performance Insights is a database performance tuning and monitoring feature that helps illustrate the database’s performance and help analyze any issues that affect it
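          The SNS-based event notifications mentioned above are configured through an event subscription; a sketch with a made-up topic ARN:

          ```shell
          # Subscribe an SNS topic to selected RDS instance events
          aws rds create-event-subscription \
            --subscription-name rds-events \
            --sns-topic-arn arn:aws:sns:us-east-1:123456789012:rds-alerts \
            --source-type db-instance \
            --event-categories "creation" "deletion" "failover"
          ```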
        • An RDS instance cannot be stopped if it has read replicas
      • Neptune
        • provides the Neptune loader to quickly import data from S3
        • supports VPC endpoints
      • Redshift
        • Understand Redshift at a high level. The exam does not cover Redshift in depth.
        • Know Redshift Best Practices w.r.t selection of Distribution style, Sort key, importing/exporting data
          • A single COPY command loads data in parallel and performs better than multiple COPY commands
          • COPY command can use manifest files to load data
          • COPY command handles encrypted data
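          A sketch of a manifest-based COPY, run through a psql client (cluster endpoint, table, bucket, and role ARN are all made up):

          ```shell
          # Load only the files listed in the manifest, in parallel, using an IAM role
          psql "host=mycluster.c1abcdef.us-east-1.redshift.amazonaws.com port=5439 dbname=dev user=awsuser" -c "
            COPY sales
            FROM 's3://my-bucket/sales.manifest'
            IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
            MANIFEST;"
          ```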
        • Know Redshift cross region encrypted snapshot copy
          • Create a new key in destination region
          • Use CreateSnapshotCopyGrant to allow Amazon Redshift to use the KMS key from the destination region.
          • In the source region, enable cross-region replication and specify the name of the copy grant created.
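          The three steps above can be sketched with the CLI (grant name, cluster, regions, and KMS key ID are illustrative):

          ```shell
          # In the destination region: create a grant so Redshift can use the KMS key there
          aws redshift create-snapshot-copy-grant --region us-west-2 \
            --snapshot-copy-grant-name my-copy-grant \
            --kms-key-id arn:aws:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab
          # In the source region: enable cross-region snapshot copy using that grant
          aws redshift enable-snapshot-copy --region us-east-1 \
            --cluster-identifier my-cluster \
            --destination-region us-west-2 \
            --snapshot-copy-grant-name my-copy-grant
          ```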
        • Know Redshift supports audit logging, which covers authentication attempts, connections, and disconnections, usually for compliance reasons.
      • Data Migration Service (DMS)
        • Understand Data Migration Service in depth for migrating homogeneous and heterogeneous databases
        • DMS with Full Load plus CDC migration capability can be used to migrate databases with zero downtime and no data loss.
        • DMS with SCT (Schema Conversion Tool) can be used to migrate heterogeneous databases.
        • DMS supports validation after the migration to ensure data was migrated correctly
        • DMS supports LOB migration as a 2-step process. It can do a full or limited LOB migration
          • In full LOB mode AWS DMS migrates all LOBs from source to target regardless of size. Full LOB mode can be quite slow.
          • In limited LOB mode, a maximum LOB size can be set that AWS DMS should accept. Doing so allows AWS DMS to pre-allocate memory and load the LOB data in bulk. LOBs that exceed the maximum LOB size are truncated and a warning is issued to the log file. In limited LOB mode, you get significant performance gains over full LOB mode.
          • Recommended to use limited LOB mode whenever possible.
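          The LOB modes above correspond to the TargetMetadata section of a DMS replication task's settings; a config fragment with illustrative values (LobMaxSize is in KB):

          ```json
          {
            "TargetMetadata": {
              "FullLobMode": false,
              "LimitedSizeLobMode": true,
              "LobMaxSize": 32
            }
          }
          ```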
    • Security, Identity & Compliance
      • Data security is a key concept controlled in the Database – Specialty exam
      • Identity and Access Management (IAM)
      • Trusted Advisor flags idle RDS DB instances as a cost-optimization check
    • Management & Governance Tools
      • Understand AWS CloudWatch for Logs and Metrics.
        • CloudWatch Events provides more real-time alerts as compared to CloudTrail
        • CloudWatch can be used to store RDS logs with a custom retention period, which is indefinite by default.
        • CloudWatch Application Insights support .Net and SQL Server monitoring
      • Know CloudFormation for provisioning, in terms of
        • Stack drifts – to detect differences between the stack's expected configuration and the actual environment after any manual changes
        • Change Set – allows you to verify the changes before being propagated
        • parameters – allows you to configure variables or environment specific values
        • Stack policy defines the update actions that can be performed on designated resources.
        • Deletion policy for RDS allows you to configure whether the resource is retained, snapshotted, or deleted when the stack is deleted
        • Supports Secrets Manager for DB credential generation, storage, and easy rotation
        • Systems Manager Parameter Store for environment-specific parameters
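      A minimal template fragment illustrating the deletion policy and a Secrets Manager dynamic reference (resource and secret names are hypothetical):

      ```yaml
      Resources:
        MyDatabase:
          Type: AWS::RDS::DBInstance
          DeletionPolicy: Snapshot   # alternatives: Retain, Delete
          Properties:
            Engine: mysql
            DBInstanceClass: db.t3.micro
            AllocatedStorage: "20"
            MasterUsername: admin
            # Pull the password from Secrets Manager instead of hard-coding it
            MasterUserPassword: '{{resolve:secretsmanager:MyDBSecret:SecretString:password}}'
      ```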

19 thoughts on “AWS Certified Database – Specialty (DBS-C01) Exam Learning Path”

      1. I passed my AWS Database Specialty exam yesterday. I was skeptical about using BrainCert; however, after taking the exam I found it was very helpful.

        I used both the BrainCert and Whizlabs practice tests. Both were good; I would suggest practicing with both.

        Thank you Jayendra for suggestions.

  1. Great Jayendra.. Congrats and thanks for sharing the info. Just would like to have your thoughts… Actually I am preparing for the SA Pro cert, but am thinking of first going for Database Specialty before Pro… would it be the right move?

    1. Professional and Specialty are at the same level, and you can choose either of them as per your role. It's not mandatory to go for Professional.

  2. Hi Jayendra,

    Thank you for sharing your experience with the database exam. Quick question: does the exam cover syntax for database commands, like the command for using an SSL certificate for different databases (SQL Server, MariaDB, etc.)? Or does it just check the knowledge? I see tricky questions on one of the sites asking what syntax is valid if we have to use SSL for a particular database; do we need to know commands for all the RDS databases?


    1. Hi Kavita, there are some questions on the actual SSL configuration needed to make SSL work for PostgreSQL, as far as I remember. But apart from these, there are none on the actual syntax or commands.

  3. Hi Jayendra.
    I am prepping for AWS Certified Database Speciality.
    I have read in some forums about many incorrect answers in exam preps.
    This can easily misguide when one is still learning and practicing.

    Any recommendations between Braincert vs Whizlabs vs Stephan Maarek?
    I would appreciate your thoughts..

    1. I prefer BrainCert; they have a good question bank and good explanations with references to the documentation.

  4. Hi Jayendra,

    Do we really need to have 2 years of hands on experience on AWS before appearing for this exam? Or having good understanding of DB is more important?

  5. Very useful guide for the ones pursuing this certification. Keep up the good work!

  6. Hi Sir, could you please tell me whether AWS Data Analytics Specialty or Database Specialty is easier? Please explain in detail; I am confused about which one to take.

    1. Both of them are quite tough. But if you prepare for data analytics, you might get a head start for database specialty as it includes some of the data services as well.
