AWS Resource Access Manager – RAM

  • AWS Resource Access Manager – RAM helps securely share AWS resources created in one AWS account with other AWS accounts.
  • Using RAM, a resource can be created once and shared with multiple AWS accounts instead of being duplicated in each account.
  • For an account managed by AWS Organizations, resources can be shared with all the other accounts in the organization or only those accounts contained by one or more specified organizational units (OUs).
  • Resources can also be shared with specific AWS accounts by account ID, regardless of whether the account is part of an organization.
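
A minimal boto3 sketch of the sharing flow above – creating a resource share for a subnet and sharing it with a specific account and an OU. The ARNs, account ID, and OU identifier are placeholders, not real resources.

    import boto3

    ram = boto3.client("ram")

    # Create a resource share for a subnet (placeholder ARN) and share it with
    # a specific account ID and an Organizations OU (placeholder principals).
    response = ram.create_resource_share(
        name="shared-subnets",
        resourceArns=["arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0123456789abcdef0"],
        principals=[
            "222222222222",  # a specific AWS account
            "arn:aws:organizations::111111111111:ou/o-exampleorgid/ou-examplerootid-exampleouid",  # an OU
        ],
        allowExternalPrincipals=False,  # restrict sharing to accounts within the organization
    )
    print(response["resourceShare"]["resourceShareArn"])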

RAM Benefits

  • Reduces operational overhead
    • Create a resource once, and then use AWS RAM to share that resource with other accounts. This eliminates the need to provision duplicate resources in every account, which reduces operational overhead.
  • Provides security and consistency
    • Simplify security management for the shared resources by using a single set of policies and permissions.
  • Provides visibility and auditability
    • AWS RAM provides comprehensive visibility into shared resources and accounts through the integration with CloudWatch and CloudTrail.

RAM vs Resource-based Policies

  • Resources can be shared with an Organization or OU without having to enumerate every one of the AWS account IDs.
  • Users can see the resources shared with them directly in the originating AWS service console and API operations as if those resources were directly in the user’s account.
  • Owners of a resource can see which principals have access to each individual resource that they have shared.
  • RAM initiates an invitation process for resources shared with an account that isn’t part of the organization. Sharing within an organization doesn’t require an invitation and is auto-accepted.
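
For the invitation flow mentioned above (sharing with an account outside the organization), a rough sketch of how the recipient account could list and accept pending invitations with boto3; nothing is assumed beyond what the API returns.

    import boto3

    ram = boto3.client("ram")

    # In the recipient account: list resource share invitations and accept the pending ones.
    invitations = ram.get_resource_share_invitations()["resourceShareInvitations"]
    for invitation in invitations:
        if invitation["status"] == "PENDING":
            ram.accept_resource_share_invitation(
                resourceShareInvitationArn=invitation["resourceShareInvitationArn"]
            )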

RAM Supported Resources

  • AWS App Mesh
  • Amazon Aurora
  • AWS Certificate Manager Private Certificate Authority
  • AWS CodeBuild
  • Amazon EC2
  • EC2 Image Builder
  • AWS Glue
  • AWS License Manager
  • AWS Migration Hub Refactor Spaces
  • AWS Network Firewall
  • AWS Outposts
  • Amazon S3 on Outposts
  • AWS Resource Groups
  • Amazon Route 53
  • Amazon SageMaker
  • AWS Service Catalog AppRegistry
  • AWS Systems Manager Incident Manager
  • Amazon VPC
  • AWS Cloud WAN

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.

References

AWS_Resource_Access_Manager

AWS Elastic Map Reduce – EMR

AWS EMR – Elastic Map Reduce

  • Amazon EMR – Elastic Map Reduce is a web service that utilizes a hosted Hadoop framework running on the web-scale infrastructure of EC2 and S3.
  • enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data.
  • uses Apache Hadoop as its distributed data processing engine, which is an open-source, Java software that supports data-intensive distributed applications running on large clusters of commodity hardware
  • provides data processing, interactive analysis, and machine learning using open-source frameworks such as Apache Spark, Hive, and Presto.
  • is ideal for problems that necessitate fast and efficient processing of large amounts of data.
  • helps focus on crunching, transforming, and analyzing data without having to worry about time-consuming set-up, the management, or tuning of Hadoop clusters, the compute capacity, or open-source applications.
  • can help perform data-intensive tasks for applications such as web indexing, data mining, log file analysis, machine learning, financial analysis, scientific simulation, bioinformatics research, etc
  • provides web service interface to launch the clusters and monitor processing-intensive computation on clusters.
  • workloads can be deployed using EC2, Elastic Kubernetes Service (EKS), or on-premises AWS Outposts.
  • seamlessly supports On-Demand, Spot, and Reserved Instances.
  • EMR launches all nodes for a given cluster in the same AZ, which improves performance as it provides a higher data access rate.
  • EMR supports different EC2 instance types including Standard, High CPU, High Memory, Cluster Compute, High I/O, and High Storage.
  • EMR is billed per second with a one-minute minimum i.e. once the cluster is running, charges accrue for each second used.
  • EMR integrates with CloudTrail to record AWS API calls
  • EMR Serverless helps run big data frameworks such as Apache Spark and Apache Hive without configuring, managing, and scaling clusters.
  • EMR Studio is an IDE that helps data scientists and data engineers to develop, visualize, and debug data engineering and data science applications written in R, Python, Scala, and PySpark.
  • EMR Notebooks provide a managed environment, based on Jupyter Notebook, that helps prepare and visualize data, collaborate with peers, build applications, and perform interactive analysis using EMR clusters.

EMR Architecture

  • EMR uses industry-proven, fault-tolerant Hadoop software as its data processing engine.
  • Hadoop is an open-source, Java software that supports data-intensive distributed applications running on large clusters of commodity hardware
  • Hadoop splits the data into multiple subsets and assigns each subset to more than one EC2 instance. So, if an EC2 instance fails to process one subset of data, the results of another Amazon EC2 instance can be used.
  • An EMR cluster consists of a Master (Primary) node and, optionally, one or more Slave nodes (Core and Task nodes)
    • Master Node or Primary Node
      • Master or Primary node manages the cluster by running software components to coordinate the distribution of data and tasks among the other nodes for processing.
      • Primary node tracks the status of tasks and monitors the health of the cluster.
      • Every cluster has a primary node, and it’s possible to create a single-node cluster with only the primary node.
      • EMR currently does not support automatic failover of the master node or master node state recovery.
      • If the master node goes down, the EMR cluster is terminated and the job needs to be re-executed.
    • Slave Nodes – Core nodes and Task nodes
      • Core nodes
        • run software components that host persistent data using the Hadoop Distributed File System (HDFS) and run Hadoop tasks
        • Multi-node clusters have at least one core node.
        • can be increased in an existing cluster
      • Task nodes
        • only run Hadoop tasks and do not store data in HDFS.
        • can be increased or decreased in an existing cluster.
      • EMR is fault tolerant for slave failures and continues job execution if a slave node goes down.
      • Currently, EMR does not automatically provision another node to take over failed slaves
  • EMR supports Bootstrap actions, which
    • allow users to run custom set-up scripts prior to the start of cluster processing.
    • can be used to install software or configure instances before running the cluster (see the launch sketch below)
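
A minimal boto3 sketch of launching a cluster with a primary node, core nodes, and a bootstrap action. The release label, instance types and counts, key pair, roles, and S3 script path are placeholders chosen for illustration.

    import boto3

    emr = boto3.client("emr")

    response = emr.run_job_flow(
        Name="sample-cluster",
        ReleaseLabel="emr-6.10.0",  # illustrative release label
        Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
        Instances={
            "InstanceGroups": [
                {"Name": "Primary", "InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"Name": "Core", "InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            "Ec2KeyName": "my-key-pair",  # placeholder key pair for SSH access to the primary node
            "KeepJobFlowAliveWhenNoSteps": False,  # transient cluster: terminate once the steps finish
        },
        BootstrapActions=[
            {
                "Name": "install-libs",
                "ScriptBootstrapAction": {"Path": "s3://my-bucket/bootstrap/install-libs.sh"},  # placeholder script
            }
        ],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print(response["JobFlowId"])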

EMR Storage

  • Hadoop Distributed File System (HDFS)
    • HDFS is a distributed, scalable file system for Hadoop.
    • HDFS distributes the data it stores across instances in the cluster, storing multiple copies of data on different instances to ensure that no data is lost if an individual instance fails.
    • HDFS is ephemeral storage that is reclaimed when you terminate a cluster.
    • HDFS is useful for caching intermediate results during MapReduce processing or for workloads that have significant random I/O.
  • EMR File System – EMRFS
    • EMR File System (EMRFS) helps extend Hadoop to add the ability to directly access data stored in S3 as if it were a file system like HDFS.
    • You can use either HDFS or S3 as the file system in your cluster. Most often, S3 is used to store input and output data and intermediate results are stored in HDFS.
  • Local file system
    • Local file system refers to the instance store volumes (locally attached disks) that come pre-attached to the EC2 instances.
    • Data on instance store volumes persists only during the lifecycle of its Amazon EC2 instance.
  • Storing data on S3 provides several benefits
    • inherent features such as high availability, durability, lifecycle management, data encryption, and archival of data to Glacier
    • cost-effective, as storing data in S3 is cheaper compared to HDFS with its replication factor
    • ability to use Transient EMR clusters and shut down the clusters after the job is completed, with the data being maintained in S3
    • ability to use Spot instances without having to worry about losing data if the Spot instances are reclaimed
    • provides data durability against HDFS node failures, even when the number of failed nodes exceeds the HDFS replication factor
    • ingesting high-throughput data streams into S3 is much easier than ingesting into HDFS
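
A sketch of the common pattern above – input and output in S3 via EMRFS, intermediate data on HDFS/local disk – submitting a Spark step to an existing cluster. The cluster ID, script, and s3:// paths are placeholders.

    import boto3

    emr = boto3.client("emr")

    # Submit a Spark step whose input and output live in S3 (EMRFS paths),
    # while shuffle/intermediate data stays on the cluster's HDFS and local disks.
    emr.add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXXX",  # placeholder cluster id
        Steps=[
            {
                "Name": "process-logs",
                "ActionOnFailure": "CONTINUE",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                        "spark-submit",
                        "s3://my-bucket/jobs/process_logs.py",    # placeholder job script
                        "--input", "s3://my-bucket/raw/",         # EMRFS input path
                        "--output", "s3://my-bucket/processed/",  # EMRFS output path
                    ],
                },
            }
        ],
    )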

EMR Security

  • EMR cluster starts with different security groups for Master and Cluster nodes
    • Master security group
      • has a port open for communication with the service.
      • has an SSH port open to allow direct SSH into the instances, using the key specified at the startup.
    • Cluster security group
      • only allows interaction with the master instance
      • SSH to the slave nodes can be done by first SSHing into the master node and then into the slave node
    • Security groups can be configured with different access rules

EMR Security Encryption

  • EMR always uses HTTPS to send data between S3 and EC2.
  • EMR enables the use of security configuration
    • which helps to encrypt data at rest, data in transit, or both
    • can be used to specify settings for S3 encryption with EMR file system (EMRFS), local disk encryption, and in-transit encryption
    • is stored in EMR rather than the cluster configuration making it reusable
    • gives the flexibility to choose from several options, including keys managed by AWS KMS, keys managed by S3, and keys and certificates from custom providers that you supply.
  • At-rest Encryption for S3 with EMRFS
    • EMRFS supports Server-side (SSE-S3, SSE-KMS) and Client-side encryption (CSE-KMS or CSE-Custom)
    • S3 SSE and CSE encryption with EMRFS are mutually exclusive; either one can be selected but not both
    • Transport layer security (TLS) encrypts EMRFS objects in-transit between EMR cluster nodes & S3
  • At-rest Encryption for Local Disks
    • Open-source HDFS Encryption
      • HDFS exchanges data between cluster instances during distributed processing, and also reads from and writes data to instance store volumes and the EBS volumes attached to instances
      • Open-source Hadoop encryption options are activated
        • Secure Hadoop RPC is set to “Privacy”, which uses the Simple Authentication and Security Layer (SASL).
        • Data encryption on HDFS block data transfer is set to true and is configured to use AES 256 encryption.
    • EBS Volume Encryption
      • EBS Encryption
        • EBS encryption option encrypts the EBS root device volume and attached storage volumes.
        • EBS encryption option is available only when you specify AWS KMS as your key provider.
      • LUKS
        • EC2 instance store volumes (except boot/root volumes) and the attached EBS volumes can be encrypted using LUKS.
  • In-Transit Data Encryption
    • Encryption artifacts used for in-transit encryption can be supplied in one of two ways:
      • either by providing a zipped file of certificates that you upload to S3,
      • or by referencing a custom Java class that provides encryption artifacts
  • EMR block public access prevents a cluster in a public subnet from launching when any security group associated with the cluster has a rule that allows inbound traffic from anywhere (public access) on a port, unless the port has been specified as an exception.
  • EMR Runtime Roles help manage access control for each job or query individually, instead of sharing the EMR instance profile of the cluster.
  • EMR IAM service roles help perform actions on your behalf when provisioning cluster resources, running applications, dynamically scaling resources, and creating and running EMR Notebooks.
  • SSH clients can use an EC2 key pair or Kerberos to authenticate to cluster instances.
  • Lake Formation based access control can be applied to Spark, Hive, and Presto jobs that you submit to the EMR clusters.
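
As an illustration of the security configuration bullets above, a boto3 sketch that creates a reusable configuration enabling SSE-S3 for EMRFS, KMS-based local disk encryption, and TLS in transit. The KMS key ARN and certificate zip location are placeholders, and the JSON layout follows the EMR security configuration format as I recall it, so verify the keys against the current docs.

    import json
    import boto3

    emr = boto3.client("emr")

    security_configuration = {
        "EncryptionConfiguration": {
            "EnableAtRestEncryption": True,
            "EnableInTransitEncryption": True,
            "AtRestEncryptionConfiguration": {
                "S3EncryptionConfiguration": {"EncryptionMode": "SSE-S3"},
                "LocalDiskEncryptionConfiguration": {
                    "EncryptionKeyProviderType": "AwsKms",
                    "AwsKmsKey": "arn:aws:kms:us-east-1:111111111111:key/placeholder",  # placeholder key
                },
            },
            "InTransitEncryptionConfiguration": {
                "TLSCertificateConfiguration": {
                    "CertificateProviderType": "PEM",
                    "S3Object": "s3://my-bucket/certs/certs.zip",  # placeholder zipped certificates
                }
            },
        }
    }

    # The security configuration is stored in EMR (not in a cluster) and can be
    # reused by name when launching clusters.
    emr.create_security_configuration(
        Name="emr-encryption-config",
        SecurityConfiguration=json.dumps(security_configuration),
    )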

EMR Cluster Types

  • EMR has two cluster types, transient and persistent
  • Transient EMR Clusters
    • Transient EMR clusters are clusters that shut down when the job or the steps (series of jobs) are complete
    • Transient EMR clusters can be used in situations
      • where the total number of EMR processing hours per day is less than 24 hours and it’s beneficial to shut down the cluster when it’s not being used.
      • you are not using HDFS as your primary data storage (input and output data are kept in S3).
      • job processing involves intensive, iterative data processing.
  • Persistent EMR Clusters
    • Persistent EMR clusters continue to run after the data processing job is complete
    • Persistent EMR clusters can be used in situations
      • frequently run processing jobs where it’s beneficial to keep the cluster running after the previous job.
      • processing jobs have an input-output dependency on one another.
      • In rare cases when it is more cost effective to store the data on HDFS instead of S3

EMR Serverless

  • EMR Serverless helps run big data frameworks such as Apache Spark and Apache Hive without configuring, managing, and scaling clusters.
  • currently supports Apache Spark and Apache Hive engines.
  • automatically determines the resources that the application needs and gets these resources to process the jobs, and releases the resources when the jobs finish.
  • minimum and maximum number of concurrent workers and the vCPU and memory configuration for workers can be specified.
  • supports multiple AZs and provides resilience to AZ failures.
  • An EMR Serverless application internally uses workers to execute your workloads and it offers two options for workers
    • On-demand workers
      • are launched only when needed for a job and are released automatically when the job is complete.
      • scales the application up or down based on the workload, so you don’t have to worry about over- or under-provisioning resources.
      • takes up to 120 seconds to determine the required resources and provision them.
      • distributes jobs across multiple AZs by default, but each job runs only in one AZ.
      • automatically runs your job in another healthy AZ, if an AZ fails.
    • Pre-initialized workers
      • are an optional feature where you can keep workers ready to respond in seconds.
      • It effectively creates a warm pool of workers for an application which allows jobs to start instantly, making it ideal for iterative applications and time-sensitive jobs.
      • submits jobs in a healthy AZ from the specified subnets. The application needs to be restarted to switch to another healthy AZ if an AZ becomes impaired.
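
A rough boto3 sketch of creating an EMR Serverless Spark application and submitting a job run. The release label, role ARN, and script path are placeholders, and the parameter names are written from memory of the emr-serverless API, so treat this as an outline rather than a verified call sequence.

    import uuid
    import boto3

    serverless = boto3.client("emr-serverless")

    # Create a Spark application; capacity is determined automatically unless
    # pre-initialized capacity is configured.
    app = serverless.create_application(
        name="spark-app",
        releaseLabel="emr-6.10.0",  # illustrative release label
        type="SPARK",
        clientToken=str(uuid.uuid4()),
    )

    # Submit a job run; workers are provisioned on demand and released when the job finishes.
    serverless.start_job_run(
        applicationId=app["applicationId"],
        clientToken=str(uuid.uuid4()),
        executionRoleArn="arn:aws:iam::111111111111:role/emr-serverless-job-role",  # placeholder role
        jobDriver={
            "sparkSubmit": {
                "entryPoint": "s3://my-bucket/jobs/process_logs.py",  # placeholder script
            }
        },
    )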

EMR Studio

  • EMR Studio is an IDE that helps data scientists and data engineers to develop, visualize, and debug data engineering and data science applications written in R, Python, Scala, and PySpark.
  • is a fully managed application with single sign-on, fully managed Jupyter Notebooks, automated infrastructure provisioning, and the ability to debug jobs without logging into the AWS Console or cluster.

EMR Notebooks

  • EMR Notebooks provide a managed environment, based on Jupyter Notebook, that allows data scientists, analysts, and developers to prepare and visualize data, collaborate with peers, build applications, and perform interactive analysis using EMR clusters.
  • AWS recommends using EMR Studio instead of EMR Notebooks now.
  • Users can create serverless notebooks directly from the console, attach them to an existing shared EMR cluster, or provision a cluster directly from the console and build Spark applications and run interactive queries.
  • Notebooks are auto-saved to S3 buckets, and can be retrieved from the console to resume work.
  • Notebooks can be detached and attached to new clusters.
  • Notebooks come prepackaged with the libraries found in the Anaconda repository, allowing you to import and use these libraries in the notebook code to manipulate data and visualize results.

EMR Best Practices

  • Data Migration
    • Two tools – S3DistCp and DistCp – can be used to move data from local (data center) HDFS storage to S3, from S3 to HDFS, and from local disk (non-HDFS) to S3
    • AWS Import/Export and Direct Connect can also be considered for moving data
  • Data Collection
    • Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, & moving large amounts of log data
    • Flume agents can be installed on the data sources (web-servers, app servers etc) and data shipped to the collectors which can then be stored in persistent storage like S3 or HDFS
  • Data Aggregation
    • Data aggregation refers to techniques for gathering individual data records (for e.g. log records) and combining them into a large bundle of data files i.e. creating a large file from small files
    • Hadoop, on which EMR runs, generally performs better with fewer large files compared to many small files
    • Hadoop splits the file on HDFS on multiple nodes, while for the data in S3 it uses the HTTP Range header query to split the files which helps improve performance by supporting parallelization
    • Log collectors like Flume and Fluentd can be used to aggregate data before copying it to the final destination (S3 or HDFS)
    • Data aggregation has following benefits
      • Improves data ingest scalability by reducing the number of times needed to upload data to AWS
      • Reduces the number of files stored on S3 (or HDFS), which inherently helps provide better performance when processing data
      • Provides a better compression ratio as compressing large, highly compressible files is often more effective than compressing a large number of smaller files.
  • Data compression
    • Data compression can be used at the input as well as intermediate outputs from the mappers
    • Data compression helps
      • Lower storage costs
      • Lower bandwidth cost for data transfer
      • Better data processing performance by moving less data between data storage locations, mappers, and reducers
      • Better data processing performance by compressing the data that EMR writes to disk, i.e. achieving better performance by writing to disk less frequently
    • Data Compression can have an impact on Hadoop data splitting logic as some of the compression techniques like gzip do not support splitting
    • Data Compression Techniques
  • Data Partitioning
    • Data partitioning helps in data optimizations and lets you create unique buckets of data and eliminate the need for a data processing job to read the entire data set
    • Data can be partitioned by
      • Data type (time series)
      • Data processing frequency (per hour, per day, etc.)
      • Data access and query pattern (query on time vs. query on geo location)
  • Cost Optimization
    • AWS offers different pricing models for EC2 instances
      • On-Demand instances
        • are a good option if using transient EMR jobs or if the EMR hourly usage is less than 17% of the time
      • Reserved instances
        • are a good option for a persistent EMR cluster or if the EMR hourly usage is more than 17% of the time, as they are more cost-effective
      • Spot instances
        • can be a cost effective mechanism to add compute capacity
        • can be used where the data persists on S3
        • can be used to add extra task capacity with Task nodes, and
        • are not suited for the Master node, as losing it terminates the cluster, or for Core nodes (data nodes), as they host HDFS data that would need to be recovered and rebalanced if lost
    • A common architecture pattern can be used:
      • Run the master node on On-Demand or Reserved Instances (if running persistent EMR clusters).
      • Run a portion of the EMR cluster on core nodes using On-Demand or Reserved Instances, and
      • the rest of the cluster on task nodes using Spot Instances (see the sketch below).
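
A sketch of the pattern above – master and core nodes stay On-Demand/Reserved, and extra task capacity is added to an existing cluster on Spot Instances. The cluster ID, instance type, and count are placeholders.

    import boto3

    emr = boto3.client("emr")

    # Add a Spot task instance group for extra, interruptible processing capacity.
    emr.add_instance_groups(
        JobFlowId="j-XXXXXXXXXXXXX",  # placeholder cluster id
        InstanceGroups=[
            {
                "Name": "spot-task-nodes",
                "InstanceRole": "TASK",
                "InstanceType": "m5.xlarge",
                "InstanceCount": 4,
                "Market": "SPOT",
            }
        ],
    )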

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You require the ability to analyze a large amount of data, which is stored on Amazon S3 using Amazon Elastic Map Reduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing. Which of the below would be the most cost efficient way to reduce the runtime of the job? [PROFESSIONAL]
    1. Create smaller files on Amazon S3.
    2. Add additional cc2.8xlarge instances by introducing a task group.
    3. Use smaller instances that have higher aggregate I/O performance.
    4. Create fewer, larger files on Amazon S3.
  2. A customer’s nightly EMR job processes a single 2-TB data file stored on Amazon Simple Storage Service (S3). The Amazon Elastic Map Reduce (EMR) job runs on two On-Demand core nodes and three On-Demand task nodes. Which of the following may help reduce the EMR job completion time? Choose 2 answers
    1. Use three Spot Instances rather than three On-Demand instances for the task nodes.
    2. Change the input split size in the MapReduce job configuration.
    3. Use a bootstrap action to present the S3 bucket as a local filesystem.
    4. Launch the core nodes and task nodes within an Amazon Virtual Private Cloud (VPC).
    5. Adjust the number of simultaneous mapper tasks.
    6. Enable termination protection for the job flow.
  3. Your department creates regular analytics reports from your company’s log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic Map Reduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data? [PROFESSIONAL]
    1. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (Only Spot instances impacts performance)
    2. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved instances for Amazon Redshift (Combination of the Spot and reserved with guarantee performance and help reduce cost. Also, RRS would reduce cost and guarantee data integrity, which is different from data durability)
    3. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift (Only Spot instances impacts performance)
    4. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift. (Spot instances impacts performance and Spot instance not available for Redshift)
  4. A research scientist is planning for the one-time launch of an Elastic MapReduce cluster and is encouraged by her manager to minimize the costs. The cluster is designed to ingest 200TB of genomics data with a total of 100 Amazon EC2 instances and is expected to run for around four hours. The resulting data set must be stored temporarily until archived into an Amazon RDS Oracle instance. Which option will help save the most money while meeting requirements? [PROFESSIONAL]
    1. Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.
    2. Optimize by deploying a combination of on-demand, RI and spot-pricing models for the master, core and task nodes. Store ingest and output files in Amazon S3 with a lifecycle policy that archives them to Amazon Glacier. (Master and Core must be RI or On Demand. Cannot be Spot)
    3. Store the ingest files in Amazon S3 RRS and store the output files in S3. Deploy Reserved Instances for the master and core nodes and on-demand for the task nodes. (Need better durability for ingest file. Spot instances can be used for task nodes for cost saving. RI will not provide cost saving in this case)
    4. Deploy on-demand master, core and task nodes and store ingest and output files in Amazon S3 RRS (Input should be in S3 standard, as re-ingesting the input data might end up being more costly then holding the data for limited time in standard S3)
  5. Your company sells consumer devices and needs to record the first activation of all sold devices. Devices are not activated until the information is written on a persistent database. Activation data is very important for your company and must be analyzed daily with a MapReduce job. The execution time of the data analysis process must be less than three hours per day. Devices are usually sold evenly during the year, but when a new device model is out, there is a predictable peak in activations, that is, for a few days there are 10 times or even 100 times more activations than on an average day. Which of the following databases and analysis framework would you implement to better optimize costs and performance for this workload? [PROFESSIONAL]
    1. Amazon RDS and Amazon Elastic MapReduce with Spot instances.
    2. Amazon DynamoDB and Amazon Elastic MapReduce with Spot instances.
    3. Amazon RDS and Amazon Elastic MapReduce with Reserved instances.
    4. Amazon DynamoDB and Amazon Elastic MapReduce with Reserved instances

References

Amazon S3 Replication

S3 Replication

  • S3 Replication enables automatic, asynchronous copying of objects across S3 buckets in the same or different AWS regions.
  • S3 Cross-Region Replication – CRR is used to copy objects across S3 buckets in different AWS Regions.
  • S3 Same-Region Replication – SRR is used to copy objects across S3 buckets in the same AWS Region.
  • S3 Replication helps to
    • Replicate objects while retaining metadata
    • Replicate objects into different storage classes
    • Maintain object copies under different ownership
    • Keep objects stored over multiple AWS Regions
    • Replicate objects within 15 minutes
  • S3 can replicate all or a subset of objects with specific key name prefixes
  • S3 encrypts all data in transit across AWS regions using SSL
  • Object replicas in the destination bucket are exact replicas of the objects in the source bucket with the same key names and the same metadata.
  • Objects may be replicated to a single destination bucket or multiple destination buckets.
  • Cross-Region Replication can be useful for the following scenarios:
    • Compliance requirement to have data backed up across regions
    • Minimize latency to allow users across geographies to access objects
    • Operational reasons – compute clusters in two different regions that analyze the same set of objects
  • Same-Region Replication can be useful for the following scenarios:
    • Aggregate logs into a single bucket
    • Configure live replication between production and test accounts
    • Abide by data sovereignty laws to store multiple copies

S3 Replication Requirements

  • source and destination buckets must be versioning-enabled
  • for CRR, the source and destination buckets must be in different AWS regions.
  • S3 must have permission to replicate objects from that source bucket to the destination bucket on your behalf.
  • If the source bucket owner also owns the object, the bucket owner has full permission to replicate the object. If not, the source bucket owner must have permission for the S3 actions s3:GetObjectVersion and s3:GetObjectVersionACL to read the object and object ACL
  • When setting up cross-region replication in a cross-account scenario (where the source and destination buckets are owned by different AWS accounts), the destination bucket owner must grant the source bucket owner permission to replicate objects into the destination bucket.
  • if the source bucket has S3 Object Lock enabled, the destination buckets must also have S3 Object Lock enabled.
  • destination buckets cannot be configured as Requester Pays buckets
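
A boto3 sketch of the requirements above – versioning on both buckets and a replication rule on the source bucket that points at an assumed, pre-created IAM replication role. Bucket names, the role ARN, and the prefix filter are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Both source and destination buckets must be versioning-enabled.
    for bucket in ("source-bucket", "destination-bucket"):
        s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={"Status": "Enabled"},
        )

    s3.put_bucket_replication(
        Bucket="source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111111111111:role/s3-replication-role",  # placeholder role
            "Rules": [
                {
                    "ID": "replicate-logs",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {"Prefix": "logs/"},                 # replicate only a key name prefix
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {
                        "Bucket": "arn:aws:s3:::destination-bucket",
                        "StorageClass": "STANDARD_IA",             # replicate into a different storage class
                    },
                }
            ],
        },
    )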

S3 Replication – Replicated & Not Replicated

  • Only new objects created after you add a replication configuration are replicated. S3 does NOT retroactively replicate objects that existed before you added replication configuration.
  • Objects encrypted using customer-provided keys (SSE-C), objects encrypted at rest under an S3-managed key (SSE-S3), or a KMS key stored in AWS Key Management Service (SSE-KMS) can be replicated (replicating SSE-KMS objects must be explicitly enabled in the replication configuration).
  • S3 replicates only objects in the source bucket for which the bucket owner has permission to read objects and read ACLs
  • Any object ACL updates are replicated, although there can be some delay before S3 can bring the two in sync. This applies only to objects created after you add a replication configuration to the bucket.
  • S3 does NOT replicate objects in the source bucket for which the bucket owner does not have permission.
  • Updates to bucket-level S3 subresources are NOT replicated, allowing different bucket configurations on the source and destination buckets
  • Only customer actions are replicated & actions performed by lifecycle configuration are NOT replicated
  • Replication chaining is NOT allowed, Objects in the source bucket that are replicas, created by another replication, are NOT replicated.
  • S3 does NOT replicate the delete marker by default. However, you can add delete marker replication to non-tag-based rules to override it.
  • S3 does NOT replicate deletion by object version ID. This protects data from malicious deletions.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.

References

S3_Replication

Amazon S3 Event Notifications

S3 Event Notifications

  • S3 notification feature enables notifications to be triggered when certain events happen in the bucket.
  • Notifications are enabled at the Bucket level
  • Notifications can be configured to be filtered by the prefix and suffix of the key name of objects. However, filtering rules cannot be defined with overlapping prefixes, overlapping suffixes, or prefix and suffix overlapping
  • S3 can publish the following events
    • New Object created events
      • Can be enabled for PUT, POST, or COPY operations
      • You will not receive event notifications from failed operations
    • Object Removal events
      • Can publish delete events for object deletion, version object deletion, or insertion of a delete marker
      • You will not receive event notifications from automatic deletes from lifecycle policies or from failed operations.
    • Restore object events
      • restoration of objects archived to the S3 Glacier storage classes
    • Reduced Redundancy Storage (RRS) object lost events
      • Can be used to reproduce/recreate the Object
    • Replication events
      • for replication configurations that have S3 replication metrics or S3 Replication Time Control (S3 RTC) enabled
  • S3 can publish events to the following destinations – SNS topics, SQS queues, and Lambda functions
  • For S3 to be able to publish events to the destination, the S3 principal should be granted the necessary permissions
  • S3 event notifications are designed to be delivered at least once. Typically, event notifications are delivered in seconds but can sometimes take a minute or longer.
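
A sketch of wiring a bucket to an SQS queue for new-object events with a prefix/suffix filter. The bucket name and queue ARN are placeholders, and the queue's access policy is assumed to already allow S3 to send messages.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_notification_configuration(
        Bucket="my-bucket",  # placeholder bucket
        NotificationConfiguration={
            "QueueConfigurations": [
                {
                    "QueueArn": "arn:aws:sqs:us-east-1:111111111111:image-upload-events",  # placeholder queue
                    "Events": ["s3:ObjectCreated:*"],  # PUT, POST, COPY and multipart upload completions
                    "Filter": {
                        "Key": {
                            "FilterRules": [
                                {"Name": "prefix", "Value": "images/"},
                                {"Name": "suffix", "Value": ".jpg"},
                            ]
                        }
                    },
                }
            ]
        },
    )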

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.

References

Amazon_S3_Event_Notifications

Amazon S3 Static Website Hosting

S3 Static Website Hosting

  • Amazon S3 can be used for Static Website hosting with Client-side scripts.
  • S3 does not support server-side scripting.
  • S3, in conjunction with Route 53, supports hosting a website at the root domain which can point to the S3 website endpoint
  • S3 website endpoints do not support HTTPS or access points
  • For S3 website hosting the content should be made publicly readable which can be provided using a bucket policy or an ACL on an object.
  • Users can configure the index and error documents, as well as conditional redirection rules based on the object key name
  • Bucket policy applies only to objects owned by the bucket owner. If the bucket contains objects not owned by the bucket owner, then public READ permission on those objects should be granted using the object ACL.
  • Requester Pays buckets or DevPay buckets do not allow access through the website endpoint. Any request to such a bucket will receive a 403 Access Denied response
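
A minimal sketch of enabling website hosting and public read access on a placeholder bucket, assuming S3 Block Public Access has already been relaxed for that bucket.

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "www.example.com"  # placeholder bucket named after the site domain

    # Configure the index and error documents served by the website endpoint.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )

    # Make the content publicly readable via a bucket policy.
    s3.put_bucket_policy(
        Bucket=bucket,
        Policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }],
        }),
    )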

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Company ABCD is currently hosting their corporate site in an Amazon S3 bucket with Static Website Hosting enabled. Currently, when visitors go to http://www.companyabcd.com the index.html page is returned. Company C now would like a new page welcome.html to be returned when a visitor enters http://www.companyabcd.com in the browser. Which of the following steps will allow Company ABCD to meet this requirement? Choose 2 answers.
    1. Upload an html page named welcome.html to their S3 bucket
    2. Create a welcome subfolder in their S3 bucket
    3. Set the Index Document property to welcome.html
    4. Move the index.html page to a welcome subfolder
    5. Set the Error Document property to welcome.html

References

Amazon_S3_Website_Hosting

Amazon Managed Streaming for Apache Kafka – MSK

Managed Streaming for Apache Kafka – MSK

  • Managed Streaming for Apache Kafka – MSK is an AWS streaming data service that manages Apache Kafka infrastructure and operations.
  • Apache Kafka
    • is an open-source, high-performance, fault-tolerant, and scalable streaming data store platform for building real-time streaming data pipelines and applications.
    • stores streaming data in a fault-tolerant way, providing a buffer between producers and consumers.
    • stores events as a continuous series of records and preserves the order in which the records were produced.
    • runs as a cluster and stores data records in topics, which are partitioned and replicated across one or more brokers that can be spread across multiple AZs for high availability.
    • allows many data producers and multiple consumers that can process data from Kafka topics on a first-in-first-out basis, preserving the order data were produced.
  • makes it easy for developers and DevOps managers to run Kafka applications and Kafka Connect connectors on AWS, without the need to become experts in operating Kafka.
  • operates, maintains, and scales Kafka clusters, provides enterprise-grade security features out of the box, and has built-in AWS integrations that accelerate development of streaming data applications.
  • always runs within a VPC managed by MSK and is made available to your own selected VPC, subnets, and security group when the cluster is set up.
  • IP addresses from the VPC are attached to the MSK resources through elastic network interfaces (ENIs), and all network traffic stays within the AWS network and is not accessible to the internet by default.
  • integrates with CloudWatch for monitoring, metrics, and logging.
  • MSK Serverless is a cluster type for MSK that makes it easy for you to run Apache Kafka clusters without having to manage compute and storage capacity. With MSK Serverless, you can run your applications without having to provision, configure, or optimize clusters, and you pay for the data volume you stream and retain.

MSK Serverless

  • MSK Serverless is a cluster type that helps run Kafka clusters without having to manage compute and storage capacity.
  • fully manages partitions, including monitoring and moving them to balance load evenly across a cluster.
  • creates 2 replicas for each partition and places them in different AZs. Additionally, MSK serverless automatically detects and recovers failed backend resources to maintain high availability.
  • encrypts all traffic in transit and all data at rest using Key Management Service (KMS).
  • allows clients to connect over a private connection using AWS PrivateLink without exposing the traffic to the public internet.
  • offers IAM Access Control to manage client authentication and client authorization to Kafka resources such as topics.

MSK Security

  • MSK uses EBS server-side encryption and KMS keys to encrypt storage volumes.
  • Clusters have encryption in transit enabled via TLS for inter-broker communication. For provisioned clusters, you can opt out of using encryption in transit when a cluster is created.
  • MSK clusters running Kafka version 2.5.1 or greater support TLS in-transit encryption between Kafka brokers and ZooKeeper nodes.
  • For provisioned clusters, you have three options:
    • IAM Access Control for both AuthN/Z (recommended),
    • TLS certificate authentication for AuthN and access control lists for AuthZ
    • SASL/SCRAM for AuthN and access control lists for AuthZ.
  • MSK recommends using IAM Access Control as it defaults to least privilege access and is the most secure option.
  • For serverless clusters, IAM Access Control can be used for both authentication and authorization.
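
A boto3 sketch of provisioning a cluster with in-transit TLS, at-rest KMS encryption, and IAM access control, tying together the security options above. The Kafka version, subnets, security group, and KMS key ARN are placeholders, and the parameter shapes are written from memory of the kafka API, so double-check them before use.

    import boto3

    kafka = boto3.client("kafka")

    response = kafka.create_cluster(
        ClusterName="demo-cluster",
        KafkaVersion="3.5.1",            # illustrative version
        NumberOfBrokerNodes=3,           # one broker per subnet/AZ below
        BrokerNodeGroupInfo={
            "InstanceType": "kafka.m5.large",
            "ClientSubnets": ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],  # placeholder subnets
            "SecurityGroups": ["sg-0123456789abcdef0"],                                  # placeholder security group
        },
        EncryptionInfo={
            "EncryptionInTransit": {"ClientBroker": "TLS", "InCluster": True},
            "EncryptionAtRest": {"DataVolumeKMSKeyId": "arn:aws:kms:us-east-1:111111111111:key/placeholder"},
        },
        ClientAuthentication={"Sasl": {"Iam": {"Enabled": True}}},  # IAM access control for AuthN/Z
    )
    print(response["ClusterArn"])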

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.

References

Amazon_Managed_Streaming_for_Apache_Kafka

AWS Organizations

  • AWS Organizations is an account management service that enables consolidating multiple AWS accounts into an organization that can be created and centrally managed.
  • AWS Organizations include consolidated billing and account management capabilities that enable one to better meet the budgetary, security, and compliance needs of the business.
  • As an administrator of an organization, new accounts can be created in an organization, and existing accounts invited to join the organization.
  • AWS Organizations enables you to
    • Automate AWS account creation and management, and provision resources with AWS CloudFormation Stacksets.
    • Maintain a secure environment with policies and management of AWS security services
    • Govern access to AWS services, resources, and regions
    • Centrally manage policies across multiple AWS accounts
    • Audit the environment for compliance 
    • View and manage costs with consolidated billing 
    • Configure AWS services across multiple accounts 
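
A boto3 sketch of the account-management side of the list above – creating a member account, creating an OU, and moving an account into it. The email address, names, and account ID are placeholders.

    import boto3

    org = boto3.client("organizations")

    # Create a new member account in the organization (asynchronous operation).
    created = org.create_account(
        Email="team-dev@example.com",   # placeholder email
        AccountName="dev-account",
    )
    print(created["CreateAccountStatus"]["State"])

    # Group accounts under an organizational unit beneath the root.
    root_id = org.list_roots()["Roots"][0]["Id"]
    ou = org.create_organizational_unit(ParentId=root_id, Name="Workloads")
    org.move_account(
        AccountId="222222222222",       # placeholder existing member account
        SourceParentId=root_id,
        DestinationParentId=ou["OrganizationalUnit"]["Id"],
    )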

AWS Organization Features

  • Centralized management of all of the AWS accounts
    • Combine existing accounts into or create new ones within an organization that enables them to be managed centrally
    • Policies can be attached to accounts that affect some or all of the accounts
  • Consolidated billing for all member accounts
    • Consolidated billing is a feature of AWS Organizations.
    • Master or Management account of the organization can be used to consolidate and pay for all member accounts.
  • Hierarchical grouping of accounts to meet budgetary, security, or compliance needs
    • Accounts can be grouped into organizational units (OUs) and each OU can be attached to different access policies.
    • OUs can also be nested to a depth of five levels, providing flexibility in how you structure your account groups.
  • Control over AWS services and API actions that each account can access
    • As an administrator of the master account of an organization, access to users and roles in each member account can be restricted to which AWS services and individual API actions
  • Organization permissions overrule account permissions.
    • This restriction even overrides the administrators of member accounts in the organization.
    • When AWS Organizations blocks access to a service or API action for a member account, a user or role in that account can’t access any prohibited service or API action, even if an administrator of a member account explicitly grants such permissions in an IAM policy.
  • Integration and support for AWS IAM
    • IAM provides granular control over users and roles in individual accounts.
    • Organizations expand that control to the account level by giving control over what users and roles in an account or a group of accounts can do.
    • Users can access only what is allowed by both the Organization policies and IAM policies.
    • Resulting permissions are the logical intersection of what is allowed by AWS Organizations at the account level, and what permissions are explicitly granted by IAM at the user or role level within that account.
    • If either blocks an operation, the user can’t access that operation.
  • Integration with other AWS services
    • Select AWS services can be enabled to access accounts in the organization and perform actions on the resources in the accounts.
    • When another service is configured and authorized to access the organization, AWS Organizations creates an IAM service-linked role for that service in each member account.
    • Service-linked role has predefined IAM permissions that allow the other AWS service to perform specific tasks in the organization and its accounts.
    • All accounts in an organization automatically have a service-linked role created, which enables the AWS Organizations service to create the service-linked roles required by AWS services for which you enable trusted access
    • These additional service-linked roles come with policies that enable the specified service to perform only those required tasks
  • Data replication that is eventually consistent
    • AWS Organizations is eventually consistent.
    • AWS Organizations achieves high availability by replicating data across multiple servers in AWS data centers within its region.
    • If a request to change some data is successful, the change is committed and safely stored.
    • However, the change must then be replicated across multiple servers.

AWS Organizations Terminology and Concepts

Organization

  • An entity created to consolidate AWS accounts that can be administered as a single unit.
  • An organization has one master/management account along with zero or more member accounts.
  • An organization has the functionality that is determined by the feature set that you enable i.e. All features or Consolidated Billing only

Root

  • Parent container for all the accounts for the organization.
  • Policy applied to the root is applied to all the organizational units (OUs) and accounts in the organization.
  • There can be only one root currently, and AWS Organizations automatically creates it when an organization is created

Organizational Unit (OU)

  • A container for accounts within a root.
  • An OU also can contain other OUs, enabling hierarchy creation that resembles an upside-down tree, with a root at the top and branches of OUs that reach down, ending in accounts that are the leaves of the tree.
  • A policy attached to one of the nodes in the hierarchy flows down and affects all branches (OUs) and leaves (accounts) beneath it.
  • An OU can have exactly one parent, and currently, each account can be a member of exactly one OU.

Account

  • A standard AWS account that contains AWS resources.
  • Each account can be directly in the root or placed in one of the OUs in the hierarchy.
  • Policy can be attached to an account to apply controls to only that one account.
  • Accounts can be organized in a hierarchical, tree-like structure with a root at the top and organizational units nested under the root.
  • Master or Management account
    • Primary account which creates the organization
    • can create new accounts in the organization, invite existing accounts, remove accounts, manage invitations, and apply policies to entities within the organization.
    • has the responsibilities of a payer account and is responsible for paying all charges that are accrued by the member accounts.
  • Member account
    • Rest of the accounts within the organization are member accounts.
    • An account can be a member of only one organization at a time.

Invitation

  • Process of asking another account to join an organization.
  • An invitation can be issued only by the organization’s management account and is extended to either the account ID or the email address that is associated with the invited account.
  • Invited account becomes a member account in the organization after it accepts the invitation.
  • Invitations can be sent to existing member accounts as well, to approve the change from supporting only consolidated billing features to supporting all features.
  • Invitations work by accounts exchanging handshakes.

Handshake

  • A multi-step process of exchanging information between two parties.
  • Primary use in AWS Organizations is to serve as the underlying implementation for invitations.
  • Handshake messages are passed between and responded to by the handshake initiator (management account) and the recipient (member account) in such a way that it ensures that both parties always know what the current status is.

Available Feature Sets

Consolidated billing

  • provides shared or consolidated billing functionality which includes pricing benefits for aggregated usage.

All Features

  • includes all the functionality of consolidated billing and advanced features that give more control over accounts in the organization.
  • allows the management account to have full control over what member accounts can do.
  • invited accounts must approve enabling all features
  • The Management account can apply SCPs to restrict the services and actions that users (including the root user) and roles in an account can access, and it can prevent member accounts from leaving the organization
  • Member accounts can’t switch from All features to Consolidated Billing only mode.

Service Control Policies – SCPs

  • Service Control Policies specify the services and actions that users and roles can use in the accounts that the SCP affects.
  • SCPs are similar to IAM permission policies except that they don’t grant any permissions.
  • SCPs are filters that allow only the specified services and actions to be used in affected accounts.
  • SCPs override the IAM permission policy. So even if a user is granted full administrator permissions with an IAM permission policy, any access that is not explicitly allowed or that is explicitly denied by the SCPs affecting that account is blocked. for e.g., if you assign an SCP that allows only database service access to your “database” account, then any user, group, or role in that account is denied access to any other service’s operations.
  • SCP can be attached to
    • A root, which affects all accounts in the organization
    • An OU, which affects all accounts in that OU and all accounts in any OUs in that OU subtree
    • An individual account
  • Organization’s Management account is not affected by any SCPs that are attached either to it or to any root or OU the management account might be in.
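
A sketch of creating and attaching an SCP that allows only database services (the “database” account example above); the target account ID is a placeholder, and the same policy could be attached to the root or an OU instead.

    import json
    import boto3

    org = boto3.client("organizations")

    # Whitelist-style SCP: only RDS and DynamoDB actions remain usable;
    # everything else is filtered out for the affected account.
    scp_document = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": ["rds:*", "dynamodb:*"], "Resource": "*"}
        ],
    }

    policy = org.create_policy(
        Name="database-only",
        Description="Allow only database services",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp_document),
    )

    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="222222222222",  # placeholder member account; could also be a root or OU ID
    )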

Whitelisting vs. Blacklisting

  • Whitelisting and blacklisting are complementary techniques used to apply SCPs to filter the permissions available to accounts.
  • Whitelisting
    • Explicitly specify the access that is allowed.
    • All other access is implicitly blocked or denied.
    • By default, all permissions are whitelisted.
    • By default, AWS Organizations attaches an AWS-managed policy called FullAWSAccess to all roots, OUs, and accounts, which ensures that all services and actions remain available when the organization is built.
    • For restricting permissions, replace the FullAWSAccess policy with one that allows only the more limited, desired set of permissions.
    • Users and roles in the affected accounts can then exercise only that level of access, even if their IAM policies allow all actions.
    • If you replace the default policy on the root, all accounts in the organization are affected by the restrictions.
    • You can’t add them back at a lower level in the hierarchy because an SCP never grants permissions; it only filters them.
  • Blacklisting
    • The default behavior of AWS Organizations.
    • Explicitly specify the access that is not allowed.
    • Explicit deny of a service action overrides any allow of that action.
    • All other permissions are allowed unless explicitly blocked
    • By default, AWS Organizations attach an AWS-managed policy called FullAWSAccess to all roots, OUs, and accounts. This allows any account to access any service or operation with no AWS Organizations–imposed restrictions.
    • With blacklisting, additional policies are attached that explicitly deny access to the unwanted services and actions

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. An organization that is currently using consolidated billing has recently acquired another company that already has a number of AWS accounts. How could an Administrator ensure that all AWS accounts, from both the existing company and the acquired company, are billed to a single account?
    1. Merge the two companies, AWS accounts by going to the AWS console and selecting the “Merge accounts” option.
    2. Invite the acquired company’s AWS account to join the existing company’s organization using AWS Organizations.
    3. Migrate all AWS resources from the acquired company’s AWS account to the master payer account of the existing company.
    4. Create a new AWS account and set it up as the master payer. Move the AWS resources from both the existing and acquired companies’ AWS accounts to the new account.
  2. Which of the following are the benefits of AWS Organizations? Choose the 2 correct answers:
    1. Centrally manage access policies across multiple AWS accounts.
    2. Automate AWS account creation and management.
    3. Analyze cost across all multiple AWS accounts.
    4. Provide technical help (by AWS) for issues in your AWS account.
  3. A company has several departments with separate AWS accounts. Which feature would allow the company to enable consolidated billing?
    1. AWS Inspector
    2. AWS Shield
    3. AWS Organizations
    4. AWS Lightsail

References

AWS SQS Standard vs FIFO Queue

SQS Standard vs FIFO Queues

SQS offers two types of queues – Standard & FIFO queues

SQS Standard vs FIFO Queue Features

Message Order

  • Standard queues provide best-effort ordering which ensures that messages are generally delivered in the same order as they are sent. Occasionally (because of the highly-distributed architecture that allows high throughput), more than one copy of a message might be delivered out of order
  • FIFO queues offer first-in-first-out delivery and exactly-once processing: the order in which messages are sent and received is strictly preserved

Delivery

  • Standard queues guarantee that a message is delivered at least once and duplicates can be introduced into the queue
  • FIFO queues ensure a message is delivered exactly once and remains available until a consumer processes and deletes it; duplicates are not introduced into the queue
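
A sketch of the FIFO behavior above – creating a .fifo queue and sending messages with a message group ID (the ordering scope) and an explicit deduplication ID. Queue and message identifiers are placeholders.

    import boto3

    sqs = boto3.client("sqs")

    # FIFO queue names must end with ".fifo".
    queue_url = sqs.create_queue(
        QueueName="orders.fifo",
        Attributes={
            "FifoQueue": "true",
            "ContentBasedDeduplication": "false",  # explicit deduplication IDs are supplied below
        },
    )["QueueUrl"]

    # Messages with the same MessageGroupId are delivered strictly in order;
    # a repeated MessageDeduplicationId within the deduplication interval is dropped.
    for i, step in enumerate(["create", "pay", "ship"]):
        sqs.send_message(
            QueueUrl=queue_url,
            MessageBody=f"order-1001:{step}",
            MessageGroupId="order-1001",
            MessageDeduplicationId=f"order-1001-{i}",
        )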

Transactions Per Second (TPS)

  • Standard queues allow nearly-unlimited number of transactions per second
  • FIFO queues are limited to 300 transactions per second per API action, which can be increased to 3,000 using batching.

Regions

  • Standard & FIFO queues are now available in all the regions

SQS Buffered Asynchronous Client

  • FIFO queues aren’t currently compatible with the SQS Buffered Asynchronous Client, where messages are buffered at the client side and sent as a single request to the SQS queue to reduce cost.

AWS Services Supported

  • Standard Queues are supported by all AWS services
  • FIFO Queues are currently not supported by some AWS services such as
    • CloudWatch Events
    • S3 Event Notifications
    • SNS Topic Subscriptions
    • Auto Scaling Lifecycle Hooks
    • AWS IoT Rule Actions
    • AWS Lambda Dead Letter Queues

Use Cases

  • Standard queues can be used in any scenario, as long as the application can process messages that arrive more than once and out of order
    • Decouple live user requests from intensive background work: Let users upload media while resizing or encoding it.
    • Allocate tasks to multiple worker nodes: Process a high number of credit card validation requests.
    • Batch messages for future processing: Schedule multiple entries to be added to a database.
  • FIFO queues are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can’t be tolerated
    • Ensure that user-entered commands are executed in the right order.
    • Display the correct product price by sending price modifications in the right order.
    • Prevent a student from enrolling in a course before registering for an account.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A restaurant reservation application needs the ability to maintain a waiting list. When a customer tries to reserve a table, and none are available, the customer must be put on the waiting list, and the application must notify the customer when a table becomes free. What service should the Solutions Architect recommend ensuring that the system respects the order in which the customer requests are put onto the waiting list?
    1. Amazon SNS
    2. AWS Lambda with sequential dispatch
    3. A FIFO queue in Amazon SQS
    4. A standard queue in Amazon SQS
  2. A solutions architect is designing an application for a two-step order process. The first step is synchronous and must return to the user with little latency. The second step takes longer, so it will be implemented in a separate component. Orders must be processed exactly once and in the order in which they are received. How should the solutions architect integrate these components?
    1. Use Amazon SQS FIFO queues.
    2. Use an AWS Lambda function along with Amazon SQS standard queues.
    3. Create an SNS topic and subscribe an Amazon SQS FIFO queue to that topic.
    4. Create an SNS topic and subscribe an Amazon SQS Standard queue to that topic.

References

AWS_SQS_Queue_Types

AWS Kinesis Data Streams vs SQS

Kinesis Data Streams vs SQS

Purpose

  • Amazon Kinesis Data Streams
    • allows real-time processing of streaming big data and the ability to read and replay records to multiple Amazon Kinesis Applications.
    • Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications that read from the same Amazon Kinesis stream (for example, to perform counting, aggregation, and filtering).
  • Amazon SQS
    • offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices.
    • It moves data between distributed application components and helps decouple these components.
    • provides common middleware constructs such as dead-letter queues and poison-pill management.
    • provides a generic web services API and can be accessed by any programming language that the AWS SDK supports.
    • supports both standard and FIFO queues

Scaling

  • Kinesis Data Streams is not fully managed and requires manual provisioning and scaling by increasing shards
  • SQS is fully managed, highly scalable and requires no administrative overhead and little configuration

Ordering

  • Kinesis provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Kinesis Applications
  • SQS Standard Queue does not guarantee data ordering and provides at least once delivery of messages
  • SQS FIFO Queue guarantees data ordering within the message group
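
A sketch contrasting the two ordering mechanisms above: a Kinesis record keyed by a partition key (records with the same key land on the same shard and keep their order) versus an SQS FIFO message scoped by a message group. The stream name and queue URL are placeholders.

    import boto3

    kinesis = boto3.client("kinesis")
    sqs = boto3.client("sqs")

    # Kinesis: records sharing a partition key go to the same shard, preserving order,
    # and multiple applications can read/replay the same stream.
    kinesis.put_record(
        StreamName="truck-positions",  # placeholder stream
        Data=b'{"truck_id": "truck-42", "lat": 47.6, "lon": -122.3}',
        PartitionKey="truck-42",
    )

    # SQS FIFO: ordering is guaranteed per message group, and each message is
    # consumed by a single consumer and deleted after processing.
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/111111111111/truck-events.fifo",  # placeholder queue
        MessageBody='{"truck_id": "truck-42", "event": "position"}',
        MessageGroupId="truck-42",
        MessageDeduplicationId="truck-42-0001",
    )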

Data Retention Period

  • Kinesis Data Streams stores the data for 24 hours by default, which can be extended up to 365 days
  • SQS stores the message for up to 4 days, by default, and can be configured from 1 minute to 14 days but clears the message once deleted by the consumer

Delivery Semantics

  • Kinesis and SQS Standard Queues both provide at-least-once delivery of messages.
  • SQS FIFO Queues provide exactly-once processing

Parallel Clients

  • Kinesis supports multiple consumers
  • SQS allows the messages to be delivered to only one consumer at a time and requires multiple queues to deliver messages to multiple consumers

Use Cases

  • Kinesis use cases requirements
    • Ordering of records.
    • Ability to consume records in the same order a few hours later
    • Ability for multiple applications to consume the same stream concurrently
    • Routing related records to the same record processor (as in streaming MapReduce)
  • SQS uses cases requirements
    • Messaging semantics like message-level ack/fail and visibility timeout
    • Leveraging SQS’s ability to scale transparently
    • Dynamically increasing concurrency/throughput at read time
    • Individual message delay, where individual messages can be delayed when sent

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
    1. Amazon Kinesis
    2. AWS Data Pipeline
    3. Amazon AppStream
    4. Amazon Simple Queue Service
  2. Your customer is willing to consolidate their log streams (access logs, application logs, security logs etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours? What is the best approach to meet your customer’s requirements?
    1. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
    2. Send all the log events to Amazon Kinesis, and develop a client process to apply heuristics on the logs (Can perform real-time analysis and stores data for 24 hours, which can be extended)
    3. Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics on the logs (CloudTrail is only for auditing)
    4. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3 use EMR to apply heuristics on the logs (EMR is for batch analysis)

References

Kinesis Data Streams – Comparison with other services