AWS Certified Data Analytics – Specialty (DAS-C01) Exam Learning Path

  • Recertified with the AWS Certified Data Analytics – Specialty (DAS-C01) which tends to cover a lot of big data topics focused on AWS services.
  • Data Analytics – Specialty (DAS-C01) has replaced the previous Big Data – Specialty (BDS-C01).

The AWS Certified Data Analytics – Specialty (DAS-C01) exam basically validates the ability to

  • Define AWS data analytics services and understand how they integrate with each other.
  • Explain how AWS data analytics services fit in the data lifecycle of collection, storage, processing, and visualization.

Refer to the AWS Certified Data Analytics – Specialty Exam Guide for details

AWS Certified Data Analytics - Specialty DAS-C01 Domains

AWS Certified Data Analytics – Specialty (DAS-C01) Exam Resources

AWS Certified Data Analytics – Specialty (DAS-C01) Exam Summary

  • Specialty exams are tough, lengthy, and tiresome. Most of the questions and answer options have a lot of prose and require a lot of reading, so be sure you are prepared and manage your time well.
  • The DAS-C01 exam has 65 questions to be solved in 170 minutes, which gives you roughly 2.5 minutes to attempt each question.
  • The DAS-C01 exam includes two types of questions: multiple-choice and multiple-response.
  • DAS-C01 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 750.
  • Specialty exams currently cost $300 + tax.
  • You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
  • As always, mark the questions for review, move on, and come back to them after you are done with the rest.
  • As always, having a rough architecture or mental picture of the setup helps you focus on the areas that you need to improve. Trust me, you will usually be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the remaining 2 answers to spot what differentiates them; that will help you reach the right answer or at least give you a 50% chance of getting it right.
  • AWS exams can be taken either at a testing center or online (remotely proctored). I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking the AWS online exam for the first time, try to join at least 30 minutes before the actual time as I have had long wait times with both PSI and Pearson.

AWS Certified Data Analytics – Specialty (DAS-C01) Exam Topics

  • AWS Certified Data Analytics – Specialty exam, as its name suggests, covers a lot of big data concepts right from data collection, ingestion, transfer, storage, pre- and post-processing, analytics, and visualization, along with data security at each layer.

Analytics

  • Make sure you know and cover all the services in-depth, as 80% of the exam is focused on topics like Glue, Kinesis, and Redshift.
  • AWS Analytics Services Cheat Sheet
  • Glue
    • DAS-C01 covers Glue in great detail.
    • AWS Glue is a fully managed, ETL service that automates the time-consuming steps of data preparation for analytics.
    • supports server-side encryption for data at rest and SSL for data in motion.
    • The Glue ETL engine extracts, transforms, and loads data and can automatically generate Scala or Python code.
    • Glue Data Catalog is a central repository and persistent metadata store for the structural and operational metadata of all the data assets. It is Apache Hive Metastore compatible and can serve as an external metastore for Hive.
    • Glue Crawlers scan various data stores to automatically infer schemas and partition structures to populate the Data Catalog with corresponding table definitions and statistics.
    • Glue Job Bookmark tracks data that has already been processed during a previous run of an ETL job by persisting state information from the job run.
    • Glue Streaming ETL enables performing ETL operations on streaming data using continuously-running jobs.
    • Glue provides a flexible scheduler that handles dependency resolution, job monitoring, and retries.
    • Glue Studio offers a graphical interface for authoring AWS Glue jobs to process data, allowing you to define the flow of the data sources, transformations, and targets in the visual interface while generating Apache Spark code on your behalf.
    • Glue Data Quality helps reduce manual data quality efforts by automatically measuring and monitoring the quality of data in data lakes and pipelines.
    • Glue DataBrew helps prepare, visualize, clean, and normalize data directly from the data lake, data warehouses, and databases, including S3, Redshift, Aurora, and RDS.
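
A minimal boto3 sketch of how a Glue crawler and a bookmarked ETL job might be kicked off; the crawler name, job name, and region are hypothetical placeholders and the job is assumed to exist already.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Crawl a data store to infer the schema and populate the Glue Data Catalog
# ("sales-crawler" is a hypothetical crawler created beforehand).
glue.start_crawler(Name="sales-crawler")

# Start an ETL job with job bookmarks enabled so that data processed in a
# previous run is not reprocessed.
run = glue.start_job_run(
    JobName="sales-etl-job",  # hypothetical job name
    Arguments={"--job-bookmark-option": "job-bookmark-enable"},
)
print("Started Glue job run:", run["JobRunId"])
```
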
  • Redshift
    • Redshift is also covered in depth.
    • Cover Redshift Advanced topics
      • Redshift Distribution Style determines how data is distributed across compute nodes and helps minimize the impact of the redistribution step by locating the data where it needs to be before the query is executed.
      • Redshift Enhanced VPC routing forces all COPY and UNLOAD traffic between the cluster and the data repositories through the VPC.
      • Workload management (WLM) enables users to flexibly manage priorities within workloads so that short, fast-running queries won’t get stuck in queues behind long-running queries.
      • Redshift Spectrum helps query and retrieve structured and semistructured data from files in S3 without having to load the data into Redshift tables.
      • Federated Query feature allows querying and analyzing data across operational databases, data warehouses, and data lakes.
      • Short query acceleration (SQA) prioritizes selected short-running queries ahead of longer-running queries.
      • Redshift Serverless is a serverless option of Redshift that makes it more efficient to run and scale analytics in seconds without the need to set up and manage data warehouse infrastructure.
    • Redshift Best Practices w.r.t. the selection of distribution style, sort key, and importing/exporting data
      • a single COPY command loads data from multiple files in parallel and performs better than multiple COPY commands
      • COPY command can use manifest files to load data
      • COPY command handles encrypted data
    • Redshift Resizing cluster options (elastic resize did not support node type changes before, but does now)
    • Redshift supports encryption at rest and in transit
    • Redshift supports encrypting an unencrypted cluster using KMS. However, you can’t enable hardware security module (HSM) encryption by modifying the cluster. Instead, create a new, HSM-encrypted cluster and migrate your data to the new cluster.
    • Know Redshift views to control access to data.
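
A sketch of the single-COPY-from-manifest best practice above, issued through the Redshift Data API with boto3; the cluster identifier, database, table, bucket, and IAM role ARN are all hypothetical.

```python
import boto3

# The Redshift Data API lets you run SQL without managing JDBC connections.
rsd = boto3.client("redshift-data", region_name="us-east-1")

# A single COPY from a manifest loads the listed files in parallel, which is
# preferred over issuing multiple COPY commands.
copy_sql = """
    COPY sales
    FROM 's3://my-analytics-bucket/manifests/sales.manifest'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    MANIFEST
    FORMAT AS CSV
    GZIP;
"""

resp = rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    Database="dev",
    DbUser="awsuser",
    Sql=copy_sql,
)
print("Statement id:", resp["Id"])
```
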
  • Elastic Map Reduce
    • Understand EMRFS
      • Use consistent view to make sure S3 objects referred to by different applications are in sync, although it is no longer needed now that S3 provides strong read-after-write consistency.
    • Know EMR Best Practices (hint: start with many small nodes instead of few large nodes)
    • Know EMR Encryption options
      • supports SSE-S3, SSE-KMS, CSE-KMS, and CSE-Custom encryption for EMRFS
      • supports LUKS encryption for local disks
      • supports TLS for data in transit encryption
      • supports EBS encryption
    • Hive metastore can be externally hosted using RDS, Aurora, and AWS Glue Data Catalog
    • Also know the different technologies
      • Presto is a fast SQL query engine designed for interactive analytic queries over large datasets from multiple sources
      • Spark is a distributed processing framework and programming model that helps do machine learning, stream processing, or graph analytics using Amazon EMR clusters
      • Zeppelin/Jupyter provide open-source, web-based notebooks for interactive data exploration that can be used to create and share documents containing live code, equations, visualizations, and narrative text
      • Phoenix is used for OLTP and operational analytics, allowing you to use standard SQL queries and JDBC APIs to work with an Apache HBase backing store
  • Kinesis
    • Understand Kinesis Data Streams and Kinesis Data Firehose in depth
    • Know Kinesis Data Streams vs Kinesis Firehose
      • Know Kinesis Data Streams is open-ended for both producer and consumer. It supports KCL and works with Spark.
      • Know Kinesis Firehose is open-ended for producers only. Data is delivered to destinations such as S3, Redshift, and OpenSearch.
      • Kinesis Firehose buffers data in batches (minimum 60-second interval) and delivers it in near real time.
      • Kinesis Firehose supports out-of-the-box transformations and custom transformations using Lambda
    • Kinesis supports encryption at rest using server-side encryption
    • Kinesis Producer Library supports batching
    • Kinesis Data Analytics
      • helps transform and analyze streaming data in real time using Apache Flink.
      • supports anomaly detection using Random Cut Forest ML
      • supports reference data stored in S3.
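
A small boto3 sketch contrasting a Data Streams producer (partition key, open-ended consumers) with a Firehose producer (delivery-only, buffered to a destination); the stream and delivery stream names are hypothetical.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
firehose = boto3.client("firehose", region_name="us-east-1")

event = {"user_id": "42", "action": "click", "ts": "2023-01-01T00:00:00Z"}

# Kinesis Data Streams: open-ended for producers and consumers; the partition
# key determines which shard receives the record.
kinesis.put_record(
    StreamName="clickstream",                  # hypothetical stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],
)

# Kinesis Data Firehose: producer side only; Firehose buffers the records and
# delivers them to the configured destination (e.g. S3, Redshift, OpenSearch).
firehose.put_record(
    DeliveryStreamName="clickstream-to-s3",    # hypothetical delivery stream
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)
```
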
  • OpenSearch
    • OpenSearch is a search service that supports indexing, full-text search, faceting, etc.
    • OpenSearch can be used for analysis and supports visualization using OpenSearch Dashboards which can be real-time.
    • OpenSearch Service storage tiers support Hot, UltraWarm, and Cold, and the data can be transitioned between them using Index State Management.
  • QuickSight
    • Know Visual Types (hint: esp. word clouds, plotting line, bar, and story based visualizations)
    • Know Supported Data Sources
    • QuickSight provides IP addresses that need to be whitelisted for QuickSight to access the data store.
    • QuickSight provides direct integration with Microsoft AD
    • QuickSight supports Row level security using dataset rules to control access to data at row granularity based on permissions associated with the user interacting with the data.
    • QuickSight supports ML insights as well
    • QuickSight supports users defined via IAM or email signup.
  • Athena
    • is a serverless, interactive analytics service built on open-source frameworks, supporting open-table and file formats.
    • provides a simplified, flexible way to analyze data in an S3 data lake and 30 data sources, including on-premises data sources or other cloud systems using SQL or Python without loading the data.
    • integrates with QuickSight for visualizing the data or creating dashboards.
    • uses a managed Glue Data Catalog to store information and schemas about the databases and tables that you create for the data stored in S3
    • Workgroups can be used to separate users, teams, applications, or workloads, to set limits on the amount of data each query or the entire workgroup can process, and to track costs.
    • Athena best practices recommend partitioning the data, using partition projection, and using a columnar file format like ORC or Parquet as they support compression and are splittable (see the query sketch after this list).
  • Know Data Pipeline for data transfer
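
A hedged boto3 sketch of running an Athena query against a partitioned table in a workgroup; the database, table, workgroup, and results bucket are hypothetical, and the polling loop is simplified (production code should back off and handle failures).

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start a query against a partitioned, Parquet-backed table registered in the
# Glue Data Catalog; the workgroup can enforce per-query data-scanned limits.
start = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) FROM sales WHERE year='2023' GROUP BY region",
    QueryExecutionContext={"Database": "sales_db"},            # hypothetical database
    WorkGroup="analytics-team",                                # hypothetical workgroup
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = start["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

print("Query", query_id, "finished with state", state)
```
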

Security, Identity & Compliance

Management & Governance Tools

  • Understand AWS CloudWatch for Logs and Metrics.
  • CloudWatch Subscription Filters can be used to route data to Kinesis Data Streams, Kinesis Data Firehose, and Lambda.
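
A minimal sketch of creating a subscription filter that routes a log group to a Kinesis Data Stream; the log group name, stream ARN, and IAM role ARN are hypothetical, and the role must allow CloudWatch Logs to put records into the stream.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Stream every event from a log group to a Kinesis Data Stream.
logs.put_subscription_filter(
    logGroupName="/aws/app/payments",          # hypothetical log group
    filterName="to-kinesis",
    filterPattern="",                          # empty pattern forwards all events
    destinationArn="arn:aws:kinesis:us-east-1:123456789012:stream/log-stream",
    roleArn="arn:aws:iam::123456789012:role/CWLtoKinesisRole",
)
```
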

Whitepapers and articles

On the Exam Day

  • Make sure you are relaxed and get a good night’s sleep. The exam is not tough if you are well-prepared.
  • If you are taking the AWS Online exam
    • Try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
    • The online verification process does take some time and there are usually glitches.
    • Remember, you would not be allowed to take the exam if you are late by more than 30 minutes.
    • Make sure your desk is clear, with no watches or external monitors, keep your phone away, and ensure nobody can enter the room.

Finally, All the Best 🙂

AWS Directory Services

  • AWS Directory Services is a managed service offering, providing directories that contain information about the organization, including users, groups, computers, and other resources.
  • AWS Directory Services provides multiple options, including
    • Simple AD – a standalone directory service
    • AD Connector – acts as a proxy to use On-Premise Microsoft Active Directory with other AWS services.
    • AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also referred to as Microsoft AD

Simple AD

  • is a Microsoft Active Directory compatible directory from AWS Directory Service that is powered by Samba 4.
  • is the least expensive option and the best choice if there are 5,000 or fewer users & don’t need the more advanced Microsoft Active Directory features.
  • supports commonly used Active Directory features such as user accounts, group memberships, domain-joining EC2 instances running Linux and Windows, Kerberos-based single sign-on (SSO), and group policies.
  • does not support features like DNS dynamic update, schema extensions, multi-factor authentication, communication over LDAPS, PowerShell AD cmdlets, and the transfer of FSMO roles
  • provides daily automated snapshots to enable point-in-time recovery
  • Trust relationships between Simple AD and other Active Directory domains cannot be set up.
  • does not support MFA, RDS SQL Server, or AWS SSO.

AD Connector

  • helps connect to an existing on-premises Active Directory to AWS
  • is the best choice to leverage an existing on-premises directory with AWS services
  • requires VPN or Direct Connect connection
  • is a proxy service for connecting on-premises Microsoft Active Directory to AWS without requiring complex directory synchronization technologies or the cost and complexity of hosting a federation infrastructure
  • forwards sign-in requests to the Active Directory domain controllers for authentication and provides the ability for applications to query the directory for data
  • enables consistent enforcement of existing security policies, such as password expiration, password history, and account lockouts, whether users are accessing resources on-premises or in the AWS cloud

Microsoft Active Directory (Enterprise Edition)

  • is a feature-rich managed Microsoft Active Directory hosted on AWS
  • is the best choice if there are more than 5,000 users
  • supports trust relationship (forest trust) set up between an AWS-hosted directory and on-premises directories providing users and groups with access to resources in either domain, using single sign-on (SSO) without the need to synchronize or replicate the users, groups, or passwords.
  • requires a VPN or Direct Connect connection.
  • provides much of the functionality offered by Microsoft Active Directory plus integration with AWS applications.
  • provides a highly available pair of domain controllers running in different AZs connected to the VPC in a Region of your choice.
  • supports MFA by integrating with an existing RADIUS-based MFA infrastructure to provide an additional layer of security when users access AWS applications.
  • automatically configures and manages host monitoring and recovery, data replication, snapshots, and software updates.
  • supports RDS for SQL Server, AWS Workspaces, Quicksight, WorkDocs, etc.

AWS Directory Services - Microsoft AD Use Cases

Microsoft AD Connectivity Options

  • If the VGW used to connect to the on-premises AD is not stable or has connectivity issues, the following options can be explored
    • Simple AD
      • lower cost, low scale, basic AD compatible, or LDAP compatibility
      • provides a standalone instance for the Microsoft AD in AWS
      • No single point of Authentication or Authorization, as a separate copy is maintained
      • trust relationships cannot be set up between Simple AD and other Active Directory domains
    • Read-only Domain Controllers (RODCs)
      • works out as a Read-only Active Directory
      • holds a copy of the Active Directory Domain Service (AD DS) database and responds to authentication requests.
      • are typically deployed in locations where physical security cannot be guaranteed.
      • they cannot be written to by applications or other servers.
      • helps maintain a single point of authentication & authorization control; however, it needs to be synced.
    • Writable Domain Controllers
      • are expensive to set up
      • operate in a multi-master model; changes can be made on any writable server in the forest, and those changes are replicated to servers throughout the entire forest

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. The majority of your Infrastructure is on-premises and you have a small footprint on AWS. Your company has decided to roll out a new application that is heavily dependent on low latency connectivity to LDAP for authentication. Your security policy requires minimal changes to the company’s existing application user management processes. What option would you implement to successfully launch this application?
    1. Create a second, independent LDAP server in AWS for your application to use for authentication (independent would not work for authentication as its a separate copy)
    2. Establish a VPN connection so your applications can authenticate against your existing on-premises LDAP servers (not a low latency solution)
    3. Establish a VPN connection between your data center and AWS create an LDAP replica on AWS and configure your application to use the LDAP replica for authentication (RODCs low latency and minimal setup)
    4. Create a second LDAP domain on AWS establish a VPN connection to establish a trust relationship between your new and existing domains and use the new domain for authentication (Not minimal effort)
  2. A company is preparing to give AWS Management Console access to developers. Company policy mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory. What combination of the following will give developers access to the AWS console? Choose 2 answers
    1. AWS Directory Service AD Connector (for Corporate Active directory)
    2. AWS Directory Service Simple AD
    3. AWS Identity and Access Management groups
    4. AWS Identity and Access Management roles
    5. AWS Identity and Access Management users
  3. An Enterprise customer is starting their migration to the cloud, their main reason for migrating is agility, and they want to make their internal Microsoft Active Directory available to any applications running on AWS; this is so internal users only have to remember one set of credentials and as a central point of user control for leavers and joiners. How could they make their Active Directory secure, and highly available, with minimal on-premises infrastructure changes, in the most cost and time-efficient way? Choose the most appropriate
    1. Using Amazon Elastic Compute Cloud (EC2), they would create a DMZ using a security group; within the security group they could provision two smaller Amazon EC2 instances that are running Openswan for resilient IPSEC tunnels, and two larger instances that are domain controllers; they would use multiple Availability Zones (Whats Openswan? Refer Implementation)
    2. Using VPC, they could create an extension to their data center and make use of resilient hardware IPSEC tunnels; they could then have two domain controller instances that are joined to their existing domain and reside within different subnets, in different Availability Zones (highly available with 2 AZ’s, secure with VPN connection and minimal changes)
    3. Within the customer’s existing infrastructure, they could provision new hardware to run Active Directory Federation Services; this would present Active Directory as a SAML2 endpoint on the internet; any new application on AWS could be written to authenticate using SAML2 (not minimal on-premises hardware changes)
    4. The customer could create a stand-alone VPC with its own Active Directory Domain Controllers; two domain controller instances could be configured, one in each Availability Zone; new applications would authenticate with those domain controllers (not a central location, but a copy)
  4. A company needs to deploy virtual desktops to its customers in a virtual private cloud, leveraging existing security controls. Which set of AWS services and features will meet the company’s requirements?
    1. Virtual Private Network connection. AWS Directory Services, and ClassicLink (ClassicLink allows you to link an EC2-Classic instance to a VPC in your account, within the same region)
    2. Virtual Private Network connection. AWS Directory Services, and Amazon Workspaces (WorkSpaces for Virtual desktops, and AWS Directory Services to authenticate to an existing on-premises AD through VPN)
    3. AWS Directory Service, Amazon Workspaces, and AWS Identity and Access Management (AD service needs a VPN connection to interact with an On-premise AD directory)
    4. Amazon Elastic Compute Cloud, and AWS Identity and Access Management (Need WorkSpaces for virtual desktops)
  5. An Enterprise customer is starting their migration to the cloud, their main reason for migrating is agility and they want to make their internal Microsoft active directory available to any applications running on AWS, this is so internal users only have to remember one set of credentials and as a central point of user control for leavers and joiners. How could they make their active directory secure and highly available with minimal on-premises infrastructure changes in the most cost and time-efficient way? Choose the most appropriate:
    1. Using Amazon EC2, they could create a DMZ using a security group, within the security group they could provision two smaller Amazon EC2 instances that are running Openswan for resilient IPSEC tunnels and two larger instances that are domain controllers, they would use multiple availability zones.
    2. Using VPC, they could create an extension to their data center and make use of resilient hardware IPSEC tunnels, they could then have two domain controller instances that are joined to their existing domain and reside within different subnets in different availability zones.
    3. Within the customer’s existing infrastructure, they could provision new hardware to run active directory federation services, this would present active directory as a SAML2 endpoint on the internet and any new application on AWS could be written to authenticate using SAML2 (not a  minimal change to the existing infrastructure)
    4. The customer could create a stand alone VPC with its own active directory domain controllers, two domain controller instances could be configured, one in each availability zone, new applications would authenticate with those domain controllers. (Standalone cannot use the same security)
  6. You run a 2000-engineer organization. You are about to begin using AWS at a large scale for the first time. You want to integrate with your existing identity management system running on Microsoft Active Directory because your organization is a power-user of Active Directory. How should you manage your AWS identities in the simplest manner?
    1. Use a large AWS Directory Service Simple AD.
    2. Use a large AWS Directory Service AD Connector. (AD Connector can be used as power-user of Microsoft Active Directory. Simple AD only works with a subset of AD functionality)
    3. Use a Sync Domain running on AWS Directory Service.
    4. Use an AWS Directory Sync Domain running on AWS Lambda.

References

AWS Security Hub

  • AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
  • collects security data from across AWS accounts, services, and supported third-party partner products and helps analyze the security trends and identify the highest priority security issues.
  • is Regional and only receives and processes findings from the Region where the Security Hub is enabled. However, it supports cross-region aggregation of findings via the designation of an aggregator region.
  • must be enabled in each region to view findings in that region.
  • automatically runs continuous, account-level configuration and security checks based on AWS best practices and industry standards which include
    • CIS AWS Foundations
    • Payment Card Industry Data Security Standard (PCI DSS)
    • AWS Foundational Security Best Practices
  • can consume, aggregate, organize, and prioritize findings from AWS services such as GuardDuty, Inspector, and Macie, as well as from supported partner products.
  • consolidates the security findings across accounts and provider products and displays results on the Security Hub console.
  • supports integration with Amazon EventBridge. Custom actions can be defined when a finding is received.
  • only detects and consolidates findings that are generated after the Security Hub is enabled.
  • has multi-account management through AWS Organizations integration, which allows delegating an administrator account for the organization.
  • uses service-linked AWS Config rules to perform most of its security checks for controls. AWS Config must be enabled on all accounts – both the administrator account and member accounts – in each Region where Security Hub is enabled.
  • works with a service-linked role named AWSServiceRoleForSecurityHub which includes the permissions and trust policy to do the following:
    • Detect and aggregate findings from Amazon GuardDuty, Amazon Inspector, and Amazon Macie
    • Configure the requisite AWS Config infrastructure to run security checks for the supported standards
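
A short boto3 sketch of pulling prioritized findings from Security Hub; the severity and workflow-status filters shown are just one illustrative combination.

```python
import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

# Fetch critical findings that have not been worked on yet.
resp = securityhub.get_findings(
    Filters={
        "SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}],
        "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
    },
    MaxResults=10,
)

for finding in resp["Findings"]:
    print(finding["Title"], "-", finding["Resources"][0]["Id"])
```
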

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A security engineer has been asked to continuously monitor the company’s AWS account using automated compliance checks based on AWS best practices and Center for Internet Security (CIS) AWS Foundations Benchmarks. How can the security engineer accomplish this using AWS services?
    1. AWS Config + AWS Security Hub
    2. Amazon Inspector + AWS GuardDuty
    3. Amazon Inspector + AWS Shield
    4. AWS Config + Amazon Inspector

References

AWS_Security_Hub

Amazon GuardDuty

  • Amazon GuardDuty is a threat detection service that continuously monitors the AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
  • is a continuous security monitoring service that analyzes and processes data sources such as CloudTrail event logs, VPC Flow Logs, and DNS logs.
  • uses threat intelligence feeds, such as lists of malicious IP addresses and domains, and machine learning to identify unexpected and potentially unauthorized and malicious activity within the AWS environment.
  • combines machine learning, anomaly detection, network monitoring, and malicious file discovery, utilizing both AWS-developed and industry-leading third-party sources to help protect workloads and data on AWS
  • is a Regional service and is recommended to be enabled in all supported AWS Regions. This helps generate findings of unauthorized or unusual activity even in Regions not actively used.
  • does not look at historical data, it monitors only the activity that starts after it is enabled.
  • operates completely independent of the AWS resources and therefore has no impact on the performance or availability of the accounts or workloads.
  • GuardDuty supports
    • Suppression rules allow the creation of very specific combinations of attributes to suppress findings.
    • Trusted IP List for highly secure communication with the AWS environment. Findings are not generated based on trusted IP lists.
    • Threat List for known malicious IP addresses. Findings are generated based on threat lists.
  • Security findings are retained and made available through the GuardDuty console and APIs for 90 days, after which they are discarded.
  • Findings are assigned a severity, and actions can be automated by integrating with Security Hub, EventBridge, Lambda, and Step Functions.
  • Amazon Detective is also tightly integrated with GuardDuty which helps perform deeper forensic and root cause investigations.
  • GuardDuty Malware Protection feature helps to detect malicious files on EBS volumes attached to an EC2 instance and container workloads.

GuardDuty with Multiple Accounts

  • GuardDuty has multi-account management through AWS Organizations integration, which allows delegating an administrator account for the organization.
  • The delegated administrator (DA) account is a centralized account that consolidates all findings and can configure all member accounts.
  • The administrator account helps to associate and manage multiple AWS accounts.
  • All security findings are aggregated to the administrator account for review and remediation.
  • CloudWatch Events are also aggregated to the administrator account when using this configuration.

GuardDuty Automated Remediation

  • GuardDuty security findings can be remediated automatically using EventBridge and AWS Lambda.
  • For example, a Lambda function can be created to modify security group or network ACL rules based on security findings. For a GuardDuty finding indicating that one of your EC2 instances is being probed by a known malicious IP, an EventBridge rule can initiate a Lambda function that automatically restricts access from that address, as sketched below.
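
A hedged sketch of such a remediation handler, assumed to be invoked by an EventBridge rule that matches GuardDuty network-connection findings. Security groups cannot express deny rules, so this sketch blocks the reported IP with a network ACL deny entry instead; the NACL ID, rule number, and the finding field path are illustrative assumptions and should be adjusted to the finding types the rule matches.

```python
import os
import boto3

ec2 = boto3.client("ec2")

# Hypothetical configuration supplied via Lambda environment variables.
NACL_ID = os.environ.get("NACL_ID", "acl-0123456789abcdef0")
RULE_NUMBER = int(os.environ.get("RULE_NUMBER", "100"))


def handler(event, context):
    """Block the remote IP reported in a GuardDuty finding with a NACL deny rule."""
    # Field path as typically seen in GuardDuty network-connection findings;
    # verify it against the actual finding JSON your rule forwards.
    remote_ip = (
        event["detail"]["service"]["action"]["networkConnectionAction"]
        ["remoteIpDetails"]["ipAddressV4"]
    )

    ec2.create_network_acl_entry(
        NetworkAclId=NACL_ID,
        RuleNumber=RULE_NUMBER,
        Protocol="-1",          # all protocols
        RuleAction="deny",
        Egress=False,           # inbound rule
        CidrBlock=f"{remote_ip}/32",
    )
    return {"blocked_ip": remote_ip}
```
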

GuardDuty Malware Protection

  • GuardDuty Malware Protection helps scan EBS volume data for possible malware and identifies suspicious behavior indicative of malicious software in EC2 instances or container workloads.
  • is optimized to consume large data volumes for near real-time processing of security detections.
  • scans a replica EBS volume that GuardDuty generates based on the snapshot of the EBS volume for trojans, worms, crypto miners, rootkits, bots, and more.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which AWS service makes detecting and reporting unexpected and potentially malicious activity in your AWS environment easy?
    1. AWS Shield
    2. AWS Inspector
    3. AWS GuardDuty
    4. AWS WAF

References

Amazon_GuardDuty

AWS Data Pipeline

  • AWS Data Pipeline is a web service that makes it easy to automate and schedule regular data movement and data processing activities in AWS
  • helps define data-driven workflows
  • integrates with on-premises and cloud-based storage systems
  • helps quickly define a pipeline, which defines a dependent chain of data sources, destinations, and predefined or custom data processing activities
  • supports scheduling where the pipeline regularly performs processing activities such as distributed data copy, SQL transforms, EMR applications, or custom scripts against destinations such as S3, RDS, or DynamoDB.
  • ensures that the pipelines are robust and highly available by executing the scheduling, retry, and failure logic for the workflows as a highly scalable and fully managed service.

AWS Data Pipeline features

  • Distributed, fault-tolerant, and highly available
  • Managed workflow orchestration service for data-driven workflows
  • Infrastructure management service, as it will provision and terminate resources as required
  • Provides dependency resolution
  • Can be scheduled
  • Supports Preconditions for readiness checks.
  • Grants control over retries, including frequency and number
  • Native integration with S3, DynamoDB, RDS, EMR, EC2 and Redshift
  • Support for both AWS-based and external on-premises resources

AWS Data Pipeline Concepts

Pipeline Definition

  • The pipeline definition is how the business logic is communicated to AWS Data Pipeline
  • The pipeline definition specifies the location of data (data nodes), the activities to be performed, the schedule, the resources to run the activities, preconditions, and the actions to be performed
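
A heavily simplified boto3 sketch of creating, defining, and activating a pipeline. The objects shown (a Default configuration, a daily schedule, a shell-command activity, and an EC2 resource) and all names, roles, and dates are hypothetical; a real definition typically needs more fields to pass validation.

```python
import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")

pipeline = dp.create_pipeline(name="daily-copy", uniqueId="daily-copy-001")
pipeline_id = pipeline["pipelineId"]

# A pipeline definition is a set of objects, each expressed as key/value fields.
objects = [
    {
        "id": "Default",
        "name": "Default",
        "fields": [
            {"key": "scheduleType", "stringValue": "cron"},
            {"key": "schedule", "refValue": "DailySchedule"},
            {"key": "role", "stringValue": "DataPipelineDefaultRole"},
            {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
        ],
    },
    {
        "id": "DailySchedule",
        "name": "DailySchedule",
        "fields": [
            {"key": "type", "stringValue": "Schedule"},
            {"key": "period", "stringValue": "1 day"},
            {"key": "startDateTime", "stringValue": "2024-01-01T00:00:00"},
        ],
    },
    {
        "id": "CopyActivity",
        "name": "CopyActivity",
        "fields": [
            {"key": "type", "stringValue": "ShellCommandActivity"},
            {"key": "command", "stringValue": "echo copying data"},
            {"key": "runsOn", "refValue": "Ec2Resource"},
        ],
    },
    {
        "id": "Ec2Resource",
        "name": "Ec2Resource",
        "fields": [{"key": "type", "stringValue": "Ec2Resource"}],
    },
]

dp.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
dp.activate_pipeline(pipelineId=pipeline_id)
```
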

Pipeline Components, Instances, and Attempts

  • Pipeline components represent the business logic of the pipeline and are represented by the different sections of a pipeline definition.
  • Pipeline components specify the data sources, activities, schedule, and preconditions of the workflow
  • When AWS Data Pipeline runs a pipeline, it compiles the pipeline components to create a set of actionable instances; each instance contains all the information needed to perform a specific task
  • Data Pipeline provides durable and robust data management as it retries a failed operation depending on frequency & defined number of retries

Task Runners

  • A task runner is an application that polls AWS Data Pipeline for tasks and then performs those tasks
  • When Task Runner is installed and configured,
    • it polls AWS Data Pipeline for tasks associated with activated pipelines
    • after a task is assigned to Task Runner, it performs that task and reports its status back to Pipeline.
  • A task is a discrete unit of work that the Pipeline service shares with a task runner; it differs from a pipeline, which defines activities and resources that usually yield several tasks
  • Tasks can be executed either on the AWS Data Pipeline managed or user-managed resources.

Data Nodes

  • Data Node defines the location and type of data that a pipeline activity uses as source (input) or destination (output)
  • supports S3, Redshift, DynamoDB, and SQL data nodes

Databases

  • supports JDBC, RDS, and Redshift databases

Activities

  • An activity is a pipeline component that defines the work to perform
  • Data Pipeline provides pre-defined activities for common scenarios like SQL transformations, data movement, Hive queries, etc.
  • Activities are extensible and can be used to run your own custom scripts to support endless combinations

Preconditions

  • Precondition is a pipeline component containing conditional statements that must be satisfied (evaluated to True) before an activity can run
  • A pipeline supports
    • System-managed preconditions
      • are run by the AWS Data Pipeline web service on your behalf and do not require a computational resource
      • Include checks on source data and keys, e.g. whether DynamoDB data or a table exists, or an S3 key exists or a prefix is not empty
    • User-managed preconditions
      • run on user-defined and managed computational resources
      • Can be defined as Exists check or Shell command

Resources

  • A resource is a computational resource that performs the work that a pipeline activity specifies
  • supports AWS Data Pipeline-managed and self-managed resources
  • AWS Data Pipeline-managed resources include EC2 and EMR, which are launched by the Data Pipeline service only when they’re needed
  • Self-managed on-premises resources can also be used, where a Task Runner package is installed that continuously polls the AWS Data Pipeline service for work to perform
  • Resources can run in the same region as their working data set or even in a region different from the AWS Data Pipeline region
  • Resources launched by AWS Data Pipeline are counted within the resource limits and should be taken into account

Actions

  • Actions are steps that a pipeline takes when a certain event, like success or failure, occurs.
  • Pipeline supports SNS notifications and termination action on resources

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. An International company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data and synchronize only the modified elements. Which design would you choose to meet these requirements?
    1. Use AWS data Pipeline to schedule a DynamoDB cross region copy once a day. Create a ‘Lastupdated’ attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter. (Refer Blog Post)
    2. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region. (No Schedule and throughput control)
    3. Use AWS data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region. (With AWS Data pipeline the data can be copied directly to other DynamoDB table)
    4. Send each item into an SQS queue in the second region; use an auto-scaling group behind the SQS queue to replay the write in the second region. (Not Automated to replay the write)
  2. Your company produces customer-commissioned one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to head-up displays, GPS rear-view cams, and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA across a cluster of servers with low-latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?
    1. Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an auto-scaling group of G2 instances in a placement group. (Involves mixture of human assessments)
    2. Use Amazon Simple Workflow (SWF) to manage assessments, movement of data & meta-data. Use an autoscaling group of G2 instances in a placement group. (Human and automated assessments with GPU and low latency networking)
    3. Use Amazon Simple Workflow (SWF) to manage assessments movement of data & meta-data. Use an autoscaling group of C3 instances with SR-IOV (Single Root I/O Virtualization). (C3 and SR-IOV won’t provide GPU as well as Enhanced networking needs to be enabled)
    4. Use AWS data Pipeline to manage movement of data & meta-data and assessments use auto-scaling group of C3 with SR-IOV (Single Root I/O virtualization). (Involves mixture of human assessments)

References

AWS Lake Formation

  • AWS Lake Formation easily creates secure data lakes, making data available for wide-ranging analytics.
  • is an integrated data lake service that helps to discover, ingest, clean, catalog, transform, and secure data and make it available for analysis and ML.
  • automatically manages access to the registered data in S3 through services including AWS Glue, Athena, Redshift, QuickSight, and EMR using Zeppelin notebooks with Apache Spark to ensure compliance with your defined policies.
  • helps configure and manage your data lake without manually integrating multiple underlying AWS services.
  • can manage data ingestion through AWS Glue. Data is automatically classified, and relevant data definitions, schema, and metadata are stored in the central Glue Data Catalog. Once the data is in the S3 data lake, access policies, including table-and-column-level access controls can be defined, and encryption for data at rest enforced.
  • uses a shared infrastructure with AWS Glue, including console controls, ETL code creation and job monitoring, blueprints to create workflows for data ingest, the same data catalog, and a serverless architecture.
  • integrates with IAM so authenticated users and roles can be automatically mapped to data protection policies that are stored in the data catalog. The IAM integration also supports Microsoft Active Directory or LDAP to federate into IAM using SAML.
  • helps centralize data access policy controls. Users and roles can be defined to control access, down to the table and column level.
  • supports private endpoints in the VPC and records all activity in AWS CloudTrail for network isolation and auditability.
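
A minimal sketch of the table- and column-level access control mentioned above, granting an analyst role SELECT on only two columns of a catalog table via boto3; the role ARN, database, table, and column names are hypothetical.

```python
import boto3

lf = boto3.client("lakeformation", region_name="us-east-1")

# Grant column-level SELECT on a Data Catalog table to a hypothetical role.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/AnalystRole"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales_db",
            "Name": "orders",
            "ColumnNames": ["order_id", "order_total"],
        }
    },
    Permissions=["SELECT"],
)
```
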

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.

References

AWS_Lake_Formation

AWS Resource Access Manager – RAM

  • AWS Resource Access Manager – RAM helps secure sharing of the AWS resources created in one AWS account with other AWS accounts.
  • Using RAM, a resource can be created once in one account and made usable by multiple other AWS accounts.
  • For an account managed by AWS Organizations, resources can be shared with all the other accounts in the organization or only those accounts contained by one or more specified organizational units (OUs).
  • Resources can also be shared with specific AWS accounts by account ID, regardless of whether the account is part of an organization.
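
A short boto3 sketch of creating a resource share; the subnet ARN and account ID are hypothetical, and the principal could equally be an OU or organization ARN.

```python
import boto3

ram = boto3.client("ram", region_name="us-east-1")

# Share a VPC subnet with another account; restrict sharing to the organization.
share = ram.create_resource_share(
    name="shared-vpc-subnets",
    resourceArns=["arn:aws:ec2:us-east-1:123456789012:subnet/subnet-0abc1234"],
    principals=["111122223333"],        # account ID, OU ARN, or organization ARN
    allowExternalPrincipals=False,      # only principals within the organization
)
print(share["resourceShare"]["resourceShareArn"])
```
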

RAM Benefits

  • Reduces operational overhead
    • Create a resource once, and then use AWS RAM to share that resource with other accounts. This eliminates the need to provision duplicate resources in every account, which reduces operational overhead.
  • Provides security and consistency
    • Simplify security management for the shared resources by using a single set of policies and permissions.
  • Provides visibility and auditability
    • AWS RAM provides comprehensive visibility into shared resources and accounts through the integration with CloudWatch and CloudTrail.

RAM vs Resource-based Policies

  • Resources can be shared with an Organization or OU without having to enumerate every one of the AWS account IDs.
  • Users can see the resources shared with them directly in the originating AWS service console and API operations as if those resources were directly in the user’s account.
  • Owners of a resource can see which principals have access to each individual resource that they have shared.
  • RAM initiates an invitation process for resources shared with an account that isn’t part of the organization. Sharing within an organization doesn’t require an invitation and is auto-accepted.

RAM Supported Resources

  • AWS App Mesh
  • Amazon Aurora
  • AWS Certificate Manager Private Certificate Authority
  • AWS CodeBuild
  • Amazon EC2
  • EC2 Image Builder
  • AWS Glue
  • AWS License Manager
  • AWS Migration Hub Refactor Spaces
  • AWS Network Firewall
  • AWS Outposts
  • Amazon S3 on Outposts
  • AWS Resource Groups
  • Amazon Route 53
  • Amazon SageMaker
  • AWS Service Catalog AppRegistry
  • AWS Systems Manager Incident Manager
  • Amazon VPC
  • AWS Cloud WAN

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.

References

AWS_Resource_Access_Manager

AWS Elastic Map Reduce – EMR

AWS EMR – Elastic Map Reduce

  • Amazon EMR – Elastic Map Reduce is a web service that utilizes a hosted Hadoop framework running on the web-scale infrastructure of EC2 and S3.
  • enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data.
  • uses Apache Hadoop as its distributed data processing engine, which is an open-source, Java software that supports data-intensive distributed applications running on large clusters of commodity hardware
  • provides data processing, interactive analysis, and machine learning using open-source frameworks such as Apache Spark, Hive, and Presto.
  • is ideal for problems that necessitate fast and efficient processing of large amounts of data.
  • helps focus on crunching, transforming, and analyzing data without having to worry about time-consuming set-up, the management, or tuning of Hadoop clusters, the compute capacity, or open-source applications.
  • can help perform data-intensive tasks for applications such as web indexing, data mining, log file analysis, machine learning, financial analysis, scientific simulation, bioinformatics research, etc
  • provides web service interface to launch the clusters and monitor processing-intensive computation on clusters.
  • workloads can be deployed using EC2, Elastic Kubernetes Service (EKS), or on-premises AWS Outposts.
  • seamlessly supports On-Demand, Spot, and Reserved Instances.
  • EMR launches all nodes for a given cluster in the same AZ, which improves performance as it provides a higher data access rate.
  • EMR supports different EC2 instance types including Standard, High CPU, High Memory, Cluster Compute, High I/O, and High Storage.
  • EMR now charges per second with a one-minute minimum (it previously charged in hourly increments, i.e. once the cluster was running, charges applied for the entire hour).
  • EMR integrates with CloudTrail to record AWS API calls
  • EMR Serverless helps run big data frameworks such as Apache Spark and Apache Hive without configuring, managing, and scaling clusters.
  • EMR Studio is an IDE that helps data scientists and data engineers to develop, visualize, and debug data engineering and data science applications written in R, Python, Scala, and PySpark.
  • EMR Notebooks provide a managed environment, based on Jupyter Notebook, that helps prepare and visualize data, collaborate with peers, build applications, and perform interactive analysis using EMR clusters.

EMR Architecture

  • EMR uses industry-proven, fault-tolerant Hadoop software as its data processing engine.
  • Hadoop is an open-source, Java software that supports data-intensive distributed applications running on large clusters of commodity hardware
  • Hadoop splits the data into multiple subsets and assigns each subset to more than one EC2 instance. So, if an EC2 instance fails to process one subset of data, the results of another Amazon EC2 instance can be used.
  • An EMR cluster consists of a master (primary) node and one or more slave nodes (core and task nodes)
    • Master Node Or Primary Node
      • Master node or Primary node manages the cluster by running software components to coordinate the distribution of data and tasks among other nodes for processing.
      • Primary node tracks the status of tasks and monitors the health of the cluster.
      • Every cluster has a primary node, and it’s possible to create a single-node cluster with only the primary node.
      • For a single-master cluster, EMR does not support automatic failover of the master node or master node state recovery (newer EMR releases do support clusters with multiple primary nodes).
      • If the master node goes down, the EMR cluster is terminated and the job needs to be re-executed.
    • Slave Nodes – Core nodes and Task nodes
      • Core nodes
        • with software components that host persistent data using Hadoop Distributed File System (HDFS) and run Hadoop tasks
        • Multi-node clusters have at least one core node.
        • can be increased in an existing cluster
      • Task nodes
        • only run Hadoop tasks and do not store data in HDFS.
        • can be increased or decreased in an existing cluster.
      • EMR is fault tolerant for slave failures and continues job execution if a slave node goes down.
      • Currently, EMR does not automatically provision another node to take over failed slaves
  • EMR supports bootstrap actions, which
    • give users a way to run custom set-up prior to the execution of the cluster
    • can be used to install software or configure instances before running the cluster
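
A hedged boto3 sketch of launching a small transient cluster with a bootstrap action that installs custom packages before processing starts; the cluster name, release label, bucket paths, and instance types are hypothetical placeholders.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="transient-spark-cluster",
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    LogUri="s3://my-emr-logs/",
    Instances={
        "InstanceGroups": [
            {"Name": "Primary", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        # Transient cluster: terminate automatically once the steps finish.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    BootstrapActions=[
        {
            "Name": "install-deps",
            "ScriptBootstrapAction": {"Path": "s3://my-emr-bootstrap/install_deps.sh"},
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Cluster id:", response["JobFlowId"])
```
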

EMR Storage

  • Hadoop Distributed File System (HDFS)
    • HDFS is a distributed, scalable file system for Hadoop.
    • HDFS distributes the data it stores across instances in the cluster, storing multiple copies of data on different instances to ensure that no data is lost if an individual instance fails.
    • HDFS is ephemeral storage that is reclaimed when you terminate a cluster.
    • HDFS is useful for caching intermediate results during MapReduce processing or for workloads that have significant random I/O.
  • EMR File System – EMRFS
    • EMR File System (EMRFS) helps extend Hadoop to add the ability to directly access data stored in S3 as if it were a file system like HDFS.
    • You can use either HDFS or S3 as the file system in your cluster. Most often, S3 is used to store input and output data and intermediate results are stored in HDFS.
  • Local file system
    • The local file system refers to the locally attached disks (instance store volumes) that come pre-attached to the EC2 instances.
    • Data on instance store volumes persists only during the lifecycle of its Amazon EC2 instance.
  • Storing data on S3 provides several benefits
    • inherent features such as high availability, durability, lifecycle management, data encryption, and archival of data to Glacier
    • cost-effective, as storing data in S3 is cheaper compared to HDFS with its replication factor
    • ability to use transient EMR clusters and shut down the clusters after the job is completed, with the data being maintained in S3
    • ability to use Spot instances without having to worry about data loss if the Spot instances are reclaimed at any time
    • provides data durability against HDFS node failures, even where node failures exceed the HDFS replication factor
    • data ingestion with high throughput data stream to S3 is much easier than ingesting to HDFS

EMR Security

  • EMR cluster starts with different security groups for Master and Cluster nodes
    • Master security group
      • has a port open for communication with the service.
      • has an SSH port open to allow direct SSH into the instances, using the key specified at the startup.
    • Cluster security group
      • only allows interaction with the master instance
      • SSH to the slave nodes can be done by first SSHing to the master node and then from there to the slave node
    • Security groups can be configured with different access rules

EMR Security Encryption

  • EMR always uses HTTPS to send data between S3 and EC2.
  • EMR enables the use of security configuration
    • which helps to encrypt data at rest, data in transit, or both
    • can be used to specify settings for S3 encryption with EMR file system (EMRFS), local disk encryption, and in-transit encryption
    • is stored in EMR rather than the cluster configuration making it reusable
    • gives the flexibility to choose from several options, including keys managed by AWS KMS, keys managed by S3, and keys and certificates from custom providers that you supply.
  • At-rest Encryption for S3 with EMRFS
    • EMRFS supports Server-side (SSE-S3, SSE-KMS) and Client-side encryption (CSE-KMS or CSE-Custom)
    • S3 SSE and CSE encryption with EMRFS are mutually exclusive; either one can be selected but not both
    • Transport layer security (TLS) encrypts EMRFS objects in-transit between EMR cluster nodes & S3
  • At-rest Encryption for Local Disks
    • Open-source HDFS Encryption
      • HDFS exchanges data between cluster instances during distributed processing, and also reads from and writes data to instance store volumes and the EBS volumes attached to instances
      • Open-source Hadoop encryption options are activated
        • Secure Hadoop RPC is set to “Privacy”, which uses Simple Authentication Security Layer (SASL).
        • Data encryption on HDFS block data transfer is set to true and is configured to use AES 256 encryption.
    • EBS Volume Encryption
      • EBS Encryption
        • EBS encryption option encrypts the EBS root device volume and attached storage volumes.
        • EBS encryption option is available only when you specify AWS KMS as your key provider.
      • LUKS
        • EC2 instance store volumes (except boot/root volumes) and the attached EBS volumes can be encrypted using LUKS.
  • In-Transit Data Encryption
    • Encryption artifacts for in-transit encryption can be provided in one of two ways:
      • either by providing a zipped file of certificates that you upload to S3,
      • or by referencing a custom Java class that provides encryption artifacts
  • EMR block public access prevents a cluster in a public subnet from launching when any security group associated with the cluster has a rule that allows inbound traffic from anywhere (public access) on a port, unless the port has been specified as an exception.
  • EMR Runtime Roles help manage access control for each job or query individually, instead of sharing the EMR instance profile of the cluster.
  • EMR IAM service roles help perform actions on your behalf when provisioning cluster resources, running applications, dynamically scaling resources, and creating and running EMR Notebooks.
  • SSH clients can use an EC2 key pair or Kerberos to authenticate to cluster instances.
  • Lake Formation based access control can be applied to Spark, Hive, and Presto jobs that you submit to the EMR clusters.
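
A sketch of a reusable EMR security configuration covering the options above (SSE-KMS for EMRFS/S3, KMS-backed local disk encryption, and TLS in transit). The KMS key ARN and certificate location are hypothetical, and the exact JSON schema should be checked against the EMR documentation.

```python
import json
import boto3

emr = boto3.client("emr", region_name="us-east-1")

encryption_config = {
    "EncryptionConfiguration": {
        "EnableAtRestEncryption": True,
        "EnableInTransitEncryption": True,
        "AtRestEncryptionConfiguration": {
            "S3EncryptionConfiguration": {
                "EncryptionMode": "SSE-KMS",
                "AwsKmsKey": "arn:aws:kms:us-east-1:123456789012:key/1234abcd",
            },
            "LocalDiskEncryptionConfiguration": {
                "EncryptionKeyProviderType": "AwsKms",
                "AwsKmsKey": "arn:aws:kms:us-east-1:123456789012:key/1234abcd",
            },
        },
        "InTransitEncryptionConfiguration": {
            "TLSCertificateConfiguration": {
                "CertificateProviderType": "PEM",
                "S3Object": "s3://my-emr-config/certs.zip",
            }
        },
    }
}

# The security configuration is stored in EMR (not in the cluster config),
# so it can be reused across cluster launches.
emr.create_security_configuration(
    Name="emr-encryption-config",
    SecurityConfiguration=json.dumps(encryption_config),
)
```
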

EMR Cluster Types

  • EMR has two cluster types, transient and persistent
  • Transient EMR Clusters
    • Transient EMR clusters are clusters that shut down when the job or the steps (series of jobs) are complete
    • Transient EMR clusters can be used in situations
      • where the total number of EMR processing hours per day is less than 24 hours and it’s beneficial to shut down the cluster when it’s not being used.
      • using HDFS as your primary data storage.
      • job processing is intensive, iterative data processing.
  • Persistent EMR Clusters
    • Persistent EMR clusters continue to run after the data processing job is complete
    • Persistent EMR clusters can be used in situations
      • frequently run processing jobs where it’s beneficial to keep the cluster running after the previous job.
      • processing jobs have an input-output dependency on one another.
      • In rare cases when it is more cost effective to store the data on HDFS instead of S3

EMR Serverless

  • EMR Serverless helps run big data frameworks such as Apache Spark and Apache Hive without configuring, managing, and scaling clusters.
  • currently supports Apache Spark and Apache Hive engines.
  • automatically determines the resources that the application needs and gets these resources to process the jobs, and releases the resources when the jobs finish.
  • minimum and maximum number of concurrent workers and the vCPU and memory configuration for workers can be specified.
  • supports multiple AZs and provides resilience to AZ failures.
  • An EMR Serverless application internally uses workers to execute the workloads and offers two options for workers (a job-submission sketch follows this list)
    • On-demand workers
      • are launched only when needed for a job and are released automatically when the job is complete.
      • scales the application up or down based on the workload, so you don’t have to worry about over- or under-provisioning resources.
      • takes up to 120 seconds to determine the required resources and provision them.
      • distributes jobs across multiple AZs by default, but each job runs only in one AZ.
      • automatically runs your job in another healthy AZ, if an AZ fails.
    • Pre-initialized workers
      • are an optional feature where you can keep workers ready to respond in seconds.
      • It effectively creates a warm pool of workers for an application which allows jobs to start instantly, making it ideal for iterative applications and time-sensitive jobs.
      • submit jobs in a healthy AZ from the specified subnets. The application needs to be restarted to switch to another healthy AZ if an AZ becomes impaired.
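
A minimal boto3 sketch of the EMR Serverless flow above – create a Spark application, optionally with pre-initialized (warm) workers, then submit a job. The execution role, script location, and sizing values are hypothetical:

```python
import boto3

serverless = boto3.client("emr-serverless", region_name="us-east-1")  # region is an assumption

# Create a Spark application; EMR Serverless provisions and releases workers automatically
app = serverless.create_application(
    name="spark-serverless-demo",
    releaseLabel="emr-6.10.0",
    type="SPARK",
    # Optional pre-initialized workers so jobs can start in seconds
    initialCapacity={
        "DRIVER": {"workerCount": 1, "workerConfiguration": {"cpu": "2vCPU", "memory": "4GB"}},
        "EXECUTOR": {"workerCount": 2, "workerConfiguration": {"cpu": "4vCPU", "memory": "8GB"}},
    },
    maximumCapacity={"cpu": "32vCPU", "memory": "128GB"},
)

# Submit a Spark job to the application
job = serverless.start_job_run(
    applicationId=app["applicationId"],
    executionRoleArn="arn:aws:iam::111122223333:role/EMRServerlessJobRole",  # hypothetical role
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://my-bucket/jobs/etl.py",                      # hypothetical script
            "sparkSubmitParameters": "--conf spark.executor.cores=2",
        }
    },
)
print(job["jobRunId"])
```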

EMR Studio

  • EMR Studio is an IDE that helps data scientists and data engineers to develop, visualize, and debug data engineering and data science applications written in R, Python, Scala, and PySpark.
  • is a fully managed application with single sign-on, fully managed Jupyter Notebooks, automated infrastructure provisioning, and the ability to debug jobs without logging into the AWS Console or cluster.

EMR Notebooks

  • EMR Notebooks provide a managed environment, based on Jupyter Notebook, that allows data scientists, analysts, and developers to prepare and visualize data, collaborate with peers, build applications, and perform interactive analysis using EMR clusters.
  • AWS recommends using EMR Studio instead of EMR Notebooks now.
  • Users can create serverless notebooks directly from the console, attach them to an existing shared EMR cluster, or provision a cluster directly from the console and build Spark applications and run interactive queries.
  • Notebooks are auto-saved to S3 buckets, and can be retrieved from the console to resume work.
  • Notebooks can be detached and attached to new clusters.
  • Notebooks are prepackaged with the libraries found in the Anaconda repository, allowing you to import and use these libraries in the notebooks code and use them to manipulate data and visualize results.

EMR Best Practices

  • Data Migration
    • Two tools – S3DistCp and DistCp – can be used to move data from on-premises (data center) HDFS storage to S3, from S3 to HDFS, and from local disk (non-HDFS) storage to S3
    • AWS Import/Export and Direct Connect can also be considered for moving data
  • Data Collection
    • Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, & moving large amounts of log data
    • Flume agents can be installed on the data sources (web-servers, app servers etc) and data shipped to the collectors which can then be stored in persistent storage like S3 or HDFS
  • Data Aggregation
    • Data aggregation refers to techniques for gathering individual data records (e.g. log records) and combining them into a large bundle of data files i.e. creating a large file from small files
    • Hadoop, on which EMR runs, generally performs better with fewer large files compared to many small files
    • Hadoop splits the file on HDFS on multiple nodes, while for the data in S3 it uses the HTTP Range header query to split the files which helps improve performance by supporting parallelization
    • Log collectors like Flume and Fluentd can be used to aggregate data before copying it to the final destination (S3 or HDFS); S3DistCp can also aggregate small files during the copy (see the sketch after this list)
    • Data aggregation has following benefits
      • Improves data ingest scalability by reducing the number of times needed to upload data to AWS
      • Reduces the number of files stored on S3 (or HDFS), which inherently helps provide better performance when processing data
      • Provides a better compression ratio as compressing large, highly compressible files is often more effective than compressing a large number of smaller files.
  • Data compression
    • Data compression can be used at the input as well as intermediate outputs from the mappers
    • Data compression helps
      • Lower storage costs
      • Lower bandwidth cost for data transfer
      • Better data processing performance by moving less data between data storage location, mappers, and reducers
      • Better data processing performance by compressing the data that EMR writes to disk, i.e. achieving better performance by writing to disk less frequently
    • Data compression can have an impact on the Hadoop data-splitting logic, as some compression formats like gzip are not splittable
    • Data Compression Techniques – common codecs include gzip (not splittable, high compression ratio), bzip2 (splittable, highest ratio but slow), LZO (splittable when indexed, fast), and Snappy (not splittable on its own, fastest with a lower ratio)
  • Data Partitioning
    • Data partitioning helps in data optimizations and lets you create unique buckets of data and eliminate the need for a data processing job to read the entire data set
    • Data can be partitioned by
      • Data type (time series)
      • Data processing frequency (per hour, per day, etc.)
      • Data access and query pattern (query on time vs. query on geo location)
  • Cost Optimization
    • AWS offers different pricing models for EC2 instances
      • On-Demand instances
        • are a good option if using transient EMR jobs or if the EMR hourly usage is less than 17% of the time
      • Reserved instances
        • are a good option for a persistent EMR cluster or if the EMR hourly usage is more than 17% of the time, as they are more cost-effective
      • Spot instances
        • can be a cost effective mechanism to add compute capacity
        • can be used where the data persists on S3
        • can be used to add extra task capacity with Task nodes, and
        • are not suited for the Master node (if it is lost, the cluster is lost) or for Core nodes (data nodes), as they host HDFS data that would need to be recovered and rebalanced if lost
    • A common architecture pattern is to
      • Run master node on On-Demand or Reserved Instances (if running persistent EMR clusters).
      • Run a portion of the EMR cluster on core nodes using On-Demand or Reserved Instances and
      • the rest of the cluster on task nodes using Spot Instances.
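
As referenced in the data aggregation notes above, S3DistCp can copy data between S3 and HDFS and aggregate many small files into fewer large ones in the same pass. A minimal boto3 sketch that submits an s3-dist-cp step to an existing cluster; the cluster id, bucket names, and grouping pattern are hypothetical:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # region is an assumption

emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLE12345",  # hypothetical cluster id
    Steps=[
        {
            "Name": "aggregate-small-log-files",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "s3-dist-cp",
                    "--src", "s3://my-raw-logs/2023/06/15/",      # many small log files
                    "--dest", "hdfs:///aggregated/2023/06/15/",
                    "--groupBy", r".*(\d{4}-\d{2}-\d{2}).*",      # bundle files sharing the captured date
                    "--targetSize", "128",                         # aim for ~128 MiB output files
                    "--outputCodec", "gz",                         # compress the aggregated output
                ],
            },
        }
    ],
)
```

For the cost-optimization pattern above, extra Spot capacity can likewise be attached to a running cluster as a TASK instance group using emr.add_instance_groups with Market set to SPOT.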

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You require the ability to analyze a large amount of data, which is stored on Amazon S3 using Amazon Elastic Map Reduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing. Which of the below would be the most cost-efficient way to reduce the runtime of the job? [PROFESSIONAL]
    1. Create smaller files on Amazon S3.
    2. Add additional cc2.8xlarge instances by introducing a task group.
    3. Use smaller instances that have higher aggregate I/O performance.
    4. Create fewer, larger files on Amazon S3.
  2. A customer’s nightly EMR job processes a single 2-TB data file stored on Amazon Simple Storage Service (S3). The Amazon Elastic Map Reduce (EMR) job runs on two On-Demand core nodes and three On-Demand task nodes. Which of the following may help reduce the EMR job completion time? Choose 2 answers
    1. Use three Spot Instances rather than three On-Demand instances for the task nodes.
    2. Change the input split size in the MapReduce job configuration.
    3. Use a bootstrap action to present the S3 bucket as a local filesystem.
    4. Launch the core nodes and task nodes within an Amazon Virtual Private Cloud (VPC).
    5. Adjust the number of simultaneous mapper tasks.
    6. Enable termination protection for the job flow.
  3. Your department creates regular analytics reports from your company’s log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic Map Reduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data? [PROFESSIONAL]
    1. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (Only Spot instances impacts performance)
    2. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved instances for Amazon Redshift (Combination of the Spot and reserved with guarantee performance and help reduce cost. Also, RRS would reduce cost and guarantee data integrity, which is different from data durability)
    3. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift (Only Spot instances impacts performance)
    4. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift. (Spot instances impacts performance and Spot instance not available for Redshift)
  4. A research scientist is planning for the one-time launch of an Elastic MapReduce cluster and is encouraged by her manager to minimize the costs. The cluster is designed to ingest 200TB of genomics data with a total of 100 Amazon EC2 instances and is expected to run for around four hours. The resulting data set must be stored temporarily until archived into an Amazon RDS Oracle instance. Which option will help save the most money while meeting requirements? [PROFESSIONAL]
    1. Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.
    2. Optimize by deploying a combination of on-demand, RI and spot-pricing models for the master, core and task nodes. Store ingest and output files in Amazon S3 with a lifecycle policy that archives them to Amazon Glacier. (Master and Core must be RI or On Demand. Cannot be Spot)
    3. Store the ingest files in Amazon S3 RRS and store the output files in S3. Deploy Reserved Instances for the master and core nodes and on-demand for the task nodes. (Need better durability for ingest file. Spot instances can be used for task nodes for cost saving. RI will not provide cost saving in this case)
    4. Deploy on-demand master, core and task nodes and store ingest and output files in Amazon S3 RRS (Input should be in S3 standard, as re-ingesting the input data might end up being more costly than holding the data for a limited time in standard S3)
  5. Your company sells consumer devices and needs to record the first activation of all sold devices. Devices are not activated until the information is written on a persistent database. Activation data is very important for your company and must be analyzed daily with a MapReduce job. The execution time of the data analysis process must be less than three hours per day. Devices are usually sold evenly during the year, but when a new device model is out, there is a predictable peak in activations, that is, for a few days there are 10 times or even 100 times more activations than on an average day. Which of the following databases and analysis frameworks would you implement to better optimize costs and performance for this workload? [PROFESSIONAL]
    1. Amazon RDS and Amazon Elastic MapReduce with Spot instances.
    2. Amazon DynamoDB and Amazon Elastic MapReduce with Spot instances.
    3. Amazon RDS and Amazon Elastic MapReduce with Reserved instances.
    4. Amazon DynamoDB and Amazon Elastic MapReduce with Reserved instances

Amazon S3 Replication

  • S3 Replication enables automatic, asynchronous copying of objects across S3 buckets in the same or different AWS regions.
  • S3 Cross-Region Replication – CRR is used to copy objects across S3 buckets in different AWS Regions.
  • S3 Same-Region Replication – SRR is used to copy objects across S3 buckets in the same AWS Region.
  • S3 Replication helps to
    • Replicate objects while retaining metadata
    • Replicate objects into different storage classes
    • Maintain object copies under different ownership
    • Keep objects stored over multiple AWS Regions
    • Replicate objects within 15 minutes (using S3 Replication Time Control – RTC)
  • S3 can replicate all or a subset of objects with specific key name prefixes
  • S3 encrypts all data in transit across AWS regions using SSL
  • Object replicas in the destination bucket are exact replicas of the objects in the source bucket with the same key names and the same metadata.
  • Objects may be replicated to a single destination bucket or multiple destination buckets.
  • Cross-Region Replication can be useful for the following scenarios:
    • Compliance requirement to have data backed up across regions
    • Minimize latency to allow users across geographies to access objects
    • Operational reasons – compute clusters in two different regions that analyze the same set of objects
  • Same-Region Replication can be useful for the following scenarios:
    • Aggregate logs into a single bucket
    • Configure live replication between production and test accounts
    • Abide by data sovereignty laws to store multiple copies

S3 Replication Requirements

  • source and destination buckets must be versioning-enabled
  • for CRR, the source and destination buckets must be in different AWS regions.
  • S3 must have permission (through an IAM role that it assumes) to replicate objects from the source bucket to the destination bucket on your behalf (a configuration sketch follows this list).
  • If the source bucket owner also owns the object, the bucket owner has full permission to replicate the object. If not, the source bucket owner must have permission for the S3 actions s3:GetObjectVersion and s3:GetObjectVersionACL to read the object and object ACL
  • When setting up replication in a cross-account scenario (where the source and destination buckets are owned by different AWS accounts), the destination bucket owner must grant the source bucket owner permission to replicate objects into the destination bucket, e.g. via a bucket policy.
  • if the source bucket has S3 Object Lock enabled, the destination buckets must also have S3 Object Lock enabled.
  • destination buckets cannot be configured as Requester Pays buckets
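
A minimal boto3 sketch tying the requirements above together – versioning on the source bucket, an IAM role that S3 assumes, and a replication rule. The bucket names and role ARN are hypothetical, and the role is assumed to already carry the required source-read and destination-replicate permissions:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # region is an assumption

SOURCE_BUCKET = "my-source-bucket"
DEST_BUCKET_ARN = "arn:aws:s3:::my-destination-bucket"   # another Region for CRR, same Region for SRR
REPLICATION_ROLE_ARN = "arn:aws:iam::111122223333:role/s3-replication-role"

# Both buckets must be versioning-enabled; the destination bucket is assumed to
# already have versioning enabled (use a client in its Region if different).
s3.put_bucket_versioning(
    Bucket=SOURCE_BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE_ARN,
        "Rules": [
            {
                "ID": "replicate-logs-prefix",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},                      # replicate only a subset of objects
                "Destination": {
                    "Bucket": DEST_BUCKET_ARN,
                    "StorageClass": "STANDARD_IA",                  # replicas can use a different storage class
                },
                "DeleteMarkerReplication": {"Status": "Disabled"},  # delete markers are not replicated by default
            }
        ],
    },
)
```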

S3 Replication – Replicated & Not Replicated

  • Only new objects created after you add a replication configuration are replicated. S3 does NOT retroactively replicate objects that existed before you added replication configuration.
  • Objects encrypted with customer-provided keys (SSE-C) are NOT replicated; objects encrypted at rest with S3-managed keys (SSE-S3) are replicated by default, and objects encrypted with KMS keys stored in AWS Key Management Service (SSE-KMS) are replicated only when SSE-KMS replication is explicitly enabled in the replication configuration.
  • S3 replicates only objects in the source bucket for which the bucket owner has permission to read objects and read ACLs
  • Any object ACL updates are replicated, although there can be some delay before S3 can bring the two in sync. This applies only to objects created after you add a replication configuration to the bucket.
  • S3 does NOT replicate objects in the source bucket for which the bucket owner does not have permission.
  • Updates to bucket-level S3 subresources are NOT replicated, allowing different bucket configurations on the source and destination buckets
  • Only customer actions are replicated & actions performed by lifecycle configuration are NOT replicated
  • Replication chaining is NOT allowed, Objects in the source bucket that are replicas, created by another replication, are NOT replicated.
  • S3 does NOT replicate the delete marker by default. However, you can add delete marker replication to non-tag-based rules to override it.
  • S3 does NOT replicate deletion by object version ID. This protects data from malicious deletions.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

References

S3_Replication