AWS Organizations Service Control Policies – SCPs

  • AWS Organizations Service Control Policies – SCPs offer central control over the maximum available permissions for all of the accounts in the organization, ensuring member accounts stay within the organization’s access control guidelines.
  • are one type of policy that help manage the organization.
  • are available only in an organization that has all features enabled, and aren’t available if the organization has enabled only the consolidated billing features.
  • are NOT sufficient for granting access to the accounts in the organization.
  • define a guardrail for what actions accounts within the organization root or OU can do, but IAM policies need to be attached to the users and roles in the organization’s accounts to grant permissions to them.
  • Effective permissions are the logical intersection between what is allowed by the SCP and what is allowed by the IAM and resource-based policies.
  • with an SCP attached to member accounts, identity-based and resource-based policies grant permissions to entities only if those policies and the SCP allow the action.
  • don’t affect users or roles in the management account. They affect only the member accounts in your organization.

SCPs Effects on Permissions

  • never grant permissions but define the maximum permissions for the affected accounts.
  • Users and roles must still be granted permissions with appropriate IAM permission policies. A user without any IAM permission policies has no access at all, even if the applicable SCPs allow all services and all actions.
  • limit permissions for entities in member accounts, including each AWS account root user.
  • do not limit actions performed by the management (formerly master) account.
  • do not affect service-linked roles. Service-linked roles enable other AWS services to integrate with AWS Organizations and can’t be restricted by SCPs.
  • affect only IAM users or roles that are managed by accounts that are part of the organization. They don’t affect users or roles from accounts outside the organization.
  • don’t affect resource-based policies directly.

SCPs Strategies

  • By default, an SCP named FullAWSAccess is attached to every root, OU, and account, which allows all actions and all services.
  • Blacklist or Deny Strategy
    • all actions are allowed by default, and the services and actions to be prohibited must be explicitly specified.
    • deny-list permissions are assigned using deny statements in combination with the default FullAWSAccess SCP, which stays attached (see the sketch after this list).
    • using deny statements in SCPs requires less maintenance because they don’t need to be updated when AWS adds new services.
    • deny statements usually use less space, thus making it easier to stay within SCP size limits.
  • Whitelist or Allow Strategy
    • actions are prohibited by default, and you specify what services and actions are allowed.
    • whitelist permissions are assigned by removing the default FullAWSAccess SCP and replacing it with an SCP that explicitly allows only the permitted services and actions
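
As an illustration of the deny strategy, here is a minimal sketch using Python and boto3 (the policy name, denied services, and OU ID are hypothetical placeholders, not prescribed values); the default FullAWSAccess SCP stays attached alongside it:

    import json
    import boto3

    org = boto3.client("organizations")

    # Deny-list SCP: everything stays allowed via FullAWSAccess,
    # except the explicitly denied services below (illustrative).
    scp = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Deny", "Action": ["redshift:*", "dynamodb:*"], "Resource": "*"}
        ],
    }

    policy = org.create_policy(
        Name="DenyAnalyticsServices",  # hypothetical name
        Description="Prohibit Redshift and DynamoDB in sandbox accounts",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )

    # Attach to an OU (could also be a root or an account); the ID is a placeholder.
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-examplerootid111-exampleouid111",
    )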

SCPs Testing Effects

  • don’t attach SCPs to the root of the organization without thoroughly testing the impact that the policy has on accounts.
  • Create an OU that the accounts can be moved into one at a time, or at least in small numbers, to ensure that users are not inadvertently locked out of key services.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Your company is planning on setting up multiple accounts in AWS. The IT Security department has a requirement to ensure that certain services and actions are not allowed across all accounts. How would the system admin achieve this in the most EFFECTIVE way possible?
    1. Create a common IAM policy that can be applied across all accounts
    2. Create an IAM policy per account and apply them accordingly
    3. Deny the services to be used across accounts by contacting AWS support
    4. Use AWS Organizations and Service Control Policies
  2. You are in the process of implementing AWS Organizations for your company. At your previous company, you saw an Organizations implementation go bad when an SCP (Service Control Policy) was applied at the root of the organization before being thoroughly tested. In what way can an SCP be properly tested and implemented?
    1. Back up your entire Organization to S3, and roll back and restore if something goes wrong
    2. The SCP must be verified with AWS before it is implemented to avoid any problems.
    3. Mirror your Organizational Unit in another region. Apply the SCP and test it. Once testing is complete, attach the SCP to the root of your organization.
    4. Create an Organizational Unit (OU). Attach the SCP to this new OU. Move your accounts in one at a time to ensure that you don’t inadvertently lock users out of key services.

AWS Data Pipeline

  • AWS Data Pipeline is a web service that makes it easy to automate and schedule regular data movement and data processing activities in AWS
  • helps define data-driven workflows
  • integrates with on-premises and cloud-based storage systems
  • helps quickly define a pipeline, which defines a dependent chain of data sources, destinations, and predefined or custom data processing activities
  • supports scheduling where the pipeline regularly performs processing activities such as distributed data copy, SQL transforms, EMR applications, or custom scripts against destinations such as S3, RDS, or DynamoDB.
  • ensures that the pipelines are robust and highly available by executing the scheduling, retry, and failure logic for the workflows as a highly scalable and fully managed service.

AWS Data Pipeline features

  • Distributed, fault-tolerant, and highly available
  • Managed workflow orchestration service for data-driven workflows
  • Infrastructure management service, as it will provision and terminate resources as required
  • Provides dependency resolution
  • Can be scheduled
  • Supports Preconditions for readiness checks.
  • Grants control over retries, including frequency and number
  • Native integration with S3, DynamoDB, RDS, EMR, EC2 and Redshift
  • Support for both AWS-based and external on-premises resources

AWS Data Pipeline Concepts

Pipeline Definition

  • A pipeline definition is how the business logic is communicated to AWS Data Pipeline
  • A pipeline definition specifies the location of data (Data Nodes), activities to be performed, the schedule, resources to run the activities, preconditions, and actions to be performed (see the sketch below)
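
A minimal sketch of registering a pipeline definition programmatically with boto3 (the pipeline name, object IDs, schedule, and shell command are all illustrative, not a prescribed layout):

    import boto3

    dp = boto3.client("datapipeline")

    pipeline_id = dp.create_pipeline(name="daily-copy", uniqueId="daily-copy-001")["pipelineId"]

    # Each pipeline object carries its configuration as key/value "fields";
    # refValue points at another object in the same definition.
    objects = [
        {"id": "Default", "name": "Default", "fields": [
            {"key": "scheduleType", "stringValue": "cron"},
            {"key": "schedule", "refValue": "DailySchedule"},
            {"key": "failureAndRerunMode", "stringValue": "CASCADE"},
            {"key": "role", "stringValue": "DataPipelineDefaultRole"},
            {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
        ]},
        {"id": "DailySchedule", "name": "DailySchedule", "fields": [
            {"key": "type", "stringValue": "Schedule"},
            {"key": "period", "stringValue": "1 day"},
            {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
        ]},
        {"id": "CopyStep", "name": "CopyStep", "fields": [
            {"key": "type", "stringValue": "ShellCommandActivity"},
            {"key": "command", "stringValue": "echo the copy step would run here"},
            {"key": "runsOn", "refValue": "WorkerInstance"},
        ]},
        {"id": "WorkerInstance", "name": "WorkerInstance", "fields": [
            {"key": "type", "stringValue": "Ec2Resource"},
            {"key": "instanceType", "stringValue": "t1.micro"},
            {"key": "terminateAfter", "stringValue": "1 Hour"},
        ]},
    ]

    dp.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
    dp.activate_pipeline(pipelineId=pipeline_id)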

Pipeline Components, Instances, and Attempts

  • Pipeline components represent the business logic of the pipeline and are represented by the different sections of a pipeline definition.
  • Pipeline components specify the data sources, activities, schedule, and preconditions of the workflow
  • When AWS Data Pipeline runs a pipeline, it compiles the pipeline components into a set of actionable instances, each containing all the information needed to perform a specific task
  • Data Pipeline provides durable and robust data management, as it retries a failed operation according to the configured retry frequency and number of retries

Task Runners

  • A task runner is an application that polls AWS Data Pipeline for tasks and then performs those tasks
  • When Task Runner is installed and configured,
    • it polls AWS Data Pipeline for tasks associated with activated pipelines
    • after a task is assigned to Task Runner, it performs that task and reports its status back to Pipeline.
  • A task is a discrete unit of work that the Pipeline service shares with a task runner, and differs from a pipeline, which defines activities and resources that usually yield several tasks
  • Tasks can be executed either on the AWS Data Pipeline managed or user-managed resources.

Data Nodes

  • Data Node defines the location and type of data that a pipeline activity uses as source (input) or destination (output)
  • supports S3, Redshift, DynamoDB, and SQL data nodes

Databases

  • supports JDBC, RDS, and Redshift databases

Activities

  • An activity is a pipeline component that defines the work to perform
  • Data Pipeline provides pre-defined activities for common scenarios like SQL transformations, data movement, Hive queries, etc.
  • Activities are extensible and can run your own custom scripts, supporting endless combinations

Preconditions

  • Precondition is a pipeline component containing conditional statements that must be satisfied (evaluated to True) before an activity can run
  • A pipeline supports
    • System-managed preconditions
      • are run by the AWS Data Pipeline web service on your behalf and do not require a computational resource
      • include checks on source data and keys, e.g. whether DynamoDB data or a table exists, an S3 key exists, or an S3 prefix is not empty
    • User-managed preconditions
      • run on user defined and managed computational resources
      • Can be defined as Exists check or Shell command

Resources

  • A resource is a computational resource that performs the work that a pipeline activity specifies
  • supports AWS Data Pipeline-managed and self-managed resources
  • AWS Data Pipeline-managed resources include EC2 and EMR, which are launched by the Data Pipeline service only when they’re needed
  • Self-managed on-premises resources can also be used, where a Task Runner package is installed that continuously polls the AWS Data Pipeline service for work to perform
  • Resources can run in the same region as their working data set, or even in a region different from that of AWS Data Pipeline
  • Resources launched by AWS Data Pipeline are counted within the resource limits and should be taken into account

Actions

  • Actions are steps that a pipeline takes when a certain event, like success or failure, occurs.
  • Pipeline supports SNS notifications and termination action on resources

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. An International company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data and synchronize only the modified elements. Which design would you choose to meet these requirements?
    1. Use AWS data Pipeline to schedule a DynamoDB cross region copy once a day. Create a ‘Lastupdated’ attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter. (Refer Blog Post)
    2. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region. (No Schedule and throughput control)
    3. Use AWS data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region. (With AWS Data pipeline the data can be copied directly to other DynamoDB table)
    4. Send each item into an SQS queue in the second region; use an auto-scaling group behind the SQS queue to replay the write in the second region. (Not Automated to replay the write)
  2. Your company produces customer-commissioned one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to head-up displays, GPS rear-view cams, and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA across a cluster of servers with low-latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?
    1. Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an auto-scaling group of G2 instances in a placement group. (Involves mixture of human assessments)
    2. Use Amazon Simple Workflow (SWF) to manage assessments, movement of data & meta-data. Use an autoscaling group of G2 instances in a placement group. (Human and automated assessments with GPU and low latency networking)
    3. Use Amazon Simple Workflow (SWF) to manage assessments movement of data & meta-data. Use an autoscaling group of C3 instances with SR-IOV (Single Root I/O Virtualization). (C3 and SR-IOV won’t provide GPU as well as Enhanced networking needs to be enabled)
    4. Use AWS data Pipeline to manage movement of data & meta-data and assessments use auto-scaling group of C3 with SR-IOV (Single Root I/O virtualization). (Involves mixture of human assessments)

AWS Trusted Advisor

  • Trusted Advisor inspects the AWS environment to make recommendations for system performance, saving money, availability, and closing security gaps
  • Trusted Advisor checks the following categories
    • Cost Optimization
      • Recommendations that can potentially save money by highlighting unused resources and opportunities to reduce your bill.
    • Security
      • Identification of security settings and gaps, inline with the best practices, that could make the AWS solution less secure
    • Fault Tolerance
      • Recommendations that help increase the resiliency and availability of the AWS solution by highlighting redundancy shortfalls, current service limits, and over-utilized resources.
    • Performance
      • Recommendations that can help to improve the speed and responsiveness of the applications
    • Service Limits (Circa Nov 2017 – Latest Addition)
      • Checks for service usage that is more than 80% of the service limit.
      • Values are based on a snapshot, so the current usage might differ.
      • Limit and usage data can take up to 24 hours to reflect any changes
  • The AWS Basic and Developer support plans cover a few checks for free.
  • AWS Business and Enterprise support plans include
    • Full set of checks
    • AWS Support API provides programmatic access to AWS Support Center features to create, manage, and close the Support cases, and operationally manage the Trusted Advisor check requests and status.
    • Trusted Advisor Priority helps you focus on the most important recommendations to optimize your cloud deployments, improve resilience, and address security gaps.
    • Trusted Advisor notification feature helps stay up-to-date with the AWS resource deployment by performing an automated weekly refresh.

AWS Support API

  • API provides two different groups of operations:
    • Support case management operations to manage the entire life cycle of your AWS support cases, from creating a case to resolving it, and includes
      • Open a support case
      • Get a list and detailed information about recent support cases
      • Filter your search for support cases by dates and case identifiers, including resolved cases
      • Add communications and file attachments to your cases, and add the email recipients for case correspondence
      • Resolve your cases
    • AWS Trusted Advisor operations to access checks
      • Get the names and identifiers for the checks
      • Request that a check be run against your AWS account and resources
      • Get summaries and detailed information for your check results
      • Refresh the checks
      • Get the status of each check
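
A short sketch of the Trusted Advisor operations with boto3 (the Support API requires a Business or Enterprise support plan and is served from the us-east-1 endpoint):

    import boto3

    # The Support API is global and served from us-east-1.
    support = boto3.client("support", region_name="us-east-1")

    # Get the names and identifiers for the checks
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    check_id = checks[0]["id"]

    # Request that the check be run again (not every check is manually refreshable)
    support.refresh_trusted_advisor_check(checkId=check_id)

    # Get the status and detailed result for the check
    result = support.describe_trusted_advisor_check_result(checkId=check_id, language="en")
    print(checks[0]["name"], result["result"]["status"])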

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. The Trusted Advisor service provides insight regarding which categories of an AWS account?
    1. Security, fault tolerance, high availability, and connectivity
    2. Security, access control, high availability, and performance
    3. Performance, cost optimization, security, and fault tolerance (Note – Service limits is the latest addition)
    4. Performance, cost optimization, access control, and connectivity
  2. Which of the following are categories of AWS Trusted Advisor? (Select TWO.)
    1. Loose Coupling
    2. Disaster recovery
    3. Infrastructure as a Code
    4. Security
    5. Service limits
  3. Which AWS tool will identify security groups that grant unrestricted Internet access to a limited list of ports?
    1. AWS Organizations
    2. AWS Trusted Advisor
    3. AWS Usage Report
    4. Amazon EC2 dashboard

AWS Database Services Cheat Sheet

AWS Database Services

Relational Database Service – RDS

  • provides Relational Database service
  • supports MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server, and the new, MySQL-compatible Amazon Aurora DB engine
  • as it is a managed service, shell (root ssh) access is not provided
  • manages backups, software patching, automatic failure detection, and recovery
  • supports user-initiated manual backups and snapshots
  • daily automated backups with database transaction logs enable Point-in-Time recovery up to the last five minutes of database usage
  • snapshots are user-initiated storage volume snapshots of the DB instance, backing up the entire DB instance and not just individual databases, and can be restored as an independent RDS instance
  • RDS Security
    • support encryption at rest using KMS as well as encryption in transit using SSL endpoints
    • supports IAM database authentication, which prevents the need to store static user credentials in the database, because authentication is managed externally using IAM.
    • supports Encryption only during creation of an RDS DB instance
    • an existing unencrypted DB cannot be encrypted in place; you need to create a snapshot, create an encrypted copy of the snapshot, and restore it as an encrypted DB (see the sketch after this list)
    • supports Secret Manager for storing and rotating secrets
    • for encrypted database
      • logs, snapshots, backups, read replicas are all encrypted as well
      • cross-region replicas and snapshots of encrypted databases did not work across regions (Note – this is possible now with the latest AWS enhancements)
  • Multi-AZ deployment
    • provides high availability and automatic failover support and is NOT a scaling solution
    • maintains a synchronous standby replica in a different AZ
    • transaction success is returned only if the commit is successful both on the primary and the standby DB
    • Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring
    • snapshots and backups are taken from the standby, eliminating I/O freezes on the primary
    • automatic failover is seamless: RDS switches to the standby instance and updates the DNS record to point to the standby
    • failover can be forced with the Reboot with failover option
  • Read Replicas
    • uses the PostgreSQL, MySQL, and MariaDB DB engines’ built-in replication functionality to create a separate Read Only instance
    • updates are asynchronously copied to the Read Replica, and data might be stale
    • can help scale applications and reduce read only load
    • requires automatic backups enabled
    • replicates all databases in the source DB instance
    • for disaster recovery, can be promoted to a full fledged database
    • can be created in a different region for disaster recovery, migration and low latency across regions
    • can’t create encrypted read replicas from unencrypted DB or read replica
  • RDS does not support all the features of underlying databases, and if required the database instance can be launched on an EC2 instance
  • RDS Components
    • DB parameter groups contain engine configuration values that can be applied to one or more DB instances of the same instance type, e.g. SSL, max connections, etc.
    • the default DB parameter group cannot be modified; create a custom one and attach it to the DB
    • Supports static and dynamic parameters
      • changes to dynamic parameters are applied immediately (irrespective of apply immediately setting)
      • changes to static parameters are NOT applied immediately and require a manual reboot.
  • RDS Monitoring & Notification
    • integrates with CloudWatch and CloudTrail
    • CloudWatch provides metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance
    • Performance Insights is a database performance tuning and monitoring feature that helps illustrate the database’s performance and help analyze any issues that affect it
    • supports RDS Event Notifications, which use SNS to provide notification when an RDS event like creation, deletion, or snapshot creation occurs
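
A sketch of the snapshot-copy encryption workflow referenced in the security notes above, using boto3 (the instance and snapshot identifiers and the KMS key are placeholders):

    import boto3

    rds = boto3.client("rds")

    # 1. Snapshot the existing unencrypted instance
    rds.create_db_snapshot(
        DBInstanceIdentifier="mydb",
        DBSnapshotIdentifier="mydb-unencrypted-snap",
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-unencrypted-snap")

    # 2. Copy the snapshot, specifying a KMS key to produce an encrypted copy
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="mydb-unencrypted-snap",
        TargetDBSnapshotIdentifier="mydb-encrypted-snap",
        KmsKeyId="alias/aws/rds",
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-encrypted-snap")

    # 3. Restore the encrypted copy as a new, encrypted DB instance
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="mydb-encrypted",
        DBSnapshotIdentifier="mydb-encrypted-snap",
    )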

Aurora

  • is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases
  • is a managed services and handles time-consuming tasks such as provisioning, patching, backup, recovery, failure detection and repair
  • is a proprietary technology from AWS (not open sourced)
  • provides PostgreSQL and MySQL compatibility
  • is “AWS cloud optimized” and claims 5x performance improvement over MySQL on RDS, and over 3x the performance of PostgreSQL on RDS
  • scales storage automatically in increments of 10GB, up to 64 TB with no impact to database performance. Storage is striped across 100s of volumes.
  • no need to provision storage in advance.
  • provides self-healing storage. Data blocks and disks are continuously scanned for errors and repaired automatically.
  • provides instantaneous failover
  • replicates each chunk of the database volume six ways across three Availability Zones, i.e. 6 copies of the data across 3 AZs
    • 4 of the 6 copies are needed for writes
    • 3 of the 6 copies are needed for reads
  • costs more than RDS (20% more) – but is more efficient
  • Read Replicas
    • can have 15 replicas while MySQL has 5, and the replication process is faster (sub 10 ms replica lag)
    • share the same data volume as the primary instance in the same AWS Region, there is virtually no replication lag
    • supports Automated failover for master in less than 30 seconds
    • supports Cross Region Replication using either physical or logical replication.
  • Security
    • supports Encryption at rest using KMS
    • supports Encryption in flight using SSL (same process as MySQL or Postgres)
    • Automated backups, snapshots and replicas are also encrypted
    • Possibility to authenticate using IAM token (same method as RDS)
    • supports protecting the instance with security groups
    • does not support SSH access to the underlying servers
  • Aurora Serverless
    • provides automated database instantiation and on-demand autoscaling based on actual usage
    • provides a relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads
    • automatically starts up, shuts down, and scales capacity up or down based on the application’s needs. No capacity planning needed
    • Pay per second, can be more cost-effective
  • Aurora Global Database
    • allows a single Aurora database to span multiple AWS regions.
    • provides Physical replication, which uses dedicated infrastructure that leaves the databases entirely available to serve the application
    • supports 1 Primary Region (read / write)
    • replicates across up to 5 secondary (read-only) regions, replication lag is less than 1 second
    • supports up to 16 Read Replicas per secondary region
    • recommended for low-latency global reads and disaster recovery with an RTO of < 1 minute
    • failover is not automated: if the primary region becomes unavailable, a secondary region can be manually removed from the Aurora Global Database and promoted to take full reads and writes. The application needs to be updated to point to the newly promoted region.
  • Aurora Backtrack
    • Backtracking “rewinds” the DB cluster to the specified time (see the sketch after this list)
    • Backtracking performs an in-place restore and does not create a new instance. There is minimal downtime associated with it.
  • Aurora Clone feature allows quick and cost-effective creation of Aurora Cluster duplicates
  • supports parallel or distributed query using Aurora Parallel Query, which refers to the ability to push down and distribute the computational load of a single query across thousands of CPUs in Aurora’s storage layer.
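
The backtrack operation above boils down to a single API call, sketched here with boto3 (the cluster identifier is a placeholder; backtracking applies to Aurora MySQL clusters created with a backtrack window enabled):

    import boto3
    from datetime import datetime, timedelta, timezone

    rds = boto3.client("rds")

    # "Rewind" the cluster in place to 15 minutes ago; no new instance is created.
    rds.backtrack_db_cluster(
        DBClusterIdentifier="my-aurora-cluster",  # placeholder
        BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=15),
    )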

DynamoDB

  • fully managed NoSQL database service
  • synchronously replicates data across three facilities in an AWS Region, giving high availability and data durability
  • runs exclusively on SSDs to provide high I/O performance
  • provides provisioned table reads and writes
  • automatically partitions, reallocates, and re-partitions the data and provisions additional server capacity as data or throughput changes
  • creates and maintains indexes for the primary key attributes for efficient access to data in the table
  • DynamoDB Table classes currently support
    • DynamoDB Standard table class is the default and is recommended for the vast majority of workloads.
    • DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class which is optimized for tables where storage is the dominant cost.
  • supports Secondary Indexes
    • allows querying attributes other than the primary key attributes without impacting performance.
    • are automatically maintained as sparse objects
  • Local secondary index vs Global secondary index
    • shares partition key + different sort key vs different partition + sort key
    • search limited to partition vs across all partition
    • unique attributes vs non-unique attributes
    • linked to the base table vs independent separate index
    • only created during the base table creation vs can be created later
    • cannot be deleted after creation vs can be deleted
    • consumes provisioned throughput capacity of the base table vs independent throughput
    • returns all attributes for item vs only projected attributes
    • Eventually or Strongly vs Only Eventually consistent reads
    • size limited to 10 GB per partition vs unlimited
  • DynamoDB Consistency
    • provides Eventually consistent (by default) or Strongly consistent reads, an option specified during a read operation
    • supports Strongly consistent reads for a few operations like Query, GetItem, and BatchGetItem using the ConsistentRead parameter (see the sketch after this list)
  • DynamoDB Throughput Capacity
    • supports On-demand and Provisioned read/write capacity modes
    • Provisioned mode requires the number of reads and writes per second as required by the application to be specified
    • On-demand mode provides flexible billing option capable of serving thousands of requests per second without capacity planning
  • DynamoDB Auto Scaling helps dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.
  • DynamoDB Global Tables provide multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table
  • DynamoDB Time to Live (TTL)
    • enables a per-item timestamp to determine when an item expires
    • expired items are deleted from the table without consuming any write throughput.
  • DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second.
  • DynamoDB cross-region replication
    • allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
    • is implemented using DynamoDB Streams (consumable via the Kinesis adapter), which provides a time-ordered sequence of item-level changes and can help achieve a lower RPO and lower RTO for disaster recovery
  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • VPC Gateway Endpoints provide private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.
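
A short sketch of the consistency and TTL options above with boto3 (the table, key, and attribute names are illustrative):

    import boto3

    ddb = boto3.client("dynamodb")

    # Strongly consistent read; omit ConsistentRead (defaults to False)
    # for an eventually consistent read instead.
    item = ddb.get_item(
        TableName="Users",                   # placeholder table
        Key={"userId": {"S": "user-123"}},
        ConsistentRead=True,
    )

    # Enable TTL: items whose "expiresAt" epoch timestamp has passed are
    # deleted without consuming any write throughput.
    ddb.update_time_to_live(
        TableName="Users",
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expiresAt"},
    )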

ElastiCache

  • managed web service that provides in-memory caching to deploy and run Memcached or Redis protocol-compliant cache clusters
  • ElastiCache with Redis,
    • like RDS, supports Multi-AZ, Read Replicas and Snapshots
    • Read Replicas are created across AZ within same region using Redis’s asynchronous replication technology
    • Multi-AZ differs from RDS as there is no standby, but if the primary goes down a Read Replica is promoted as primary
    • Read Replicas cannot span across regions, unlike RDS
    • cannot be scaled out and if scaled up cannot be scaled down
    • allows snapshots for backup and restore
    • AOF can be enabled for recovery scenarios, to recover the data in case the node fails or service crashes. But it does not help in case the underlying hardware fails
    • enabling Redis Multi-AZ is a better approach to fault tolerance than AOF (a provisioning sketch follows this list)
  • ElastiCache with Memcached
    • can be scaled up by increasing size and scaled out by adding nodes
    • nodes can span across multiple AZs within the same region
    • cached data is spread across the nodes, and a node failure will always result in some data loss from the cluster
    • supports auto discovery
    • every node should be homogeneous and of the same instance type
  • ElastiCache Redis vs Memcached
    • complex data objects vs simple key value storage
    • persistent vs non persistent, pure caching
    • automatic failover with Multi-AZ vs Multi-AZ not supported
    • scaling using Read Replicas vs using multiple nodes
    • backup & restore supported vs not supported
  • can be used for session state management to keep the web application stateless
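
A provisioning sketch for the Redis Multi-AZ setup with automatic failover described above, using boto3 (the replication group ID and node type are placeholders):

    import boto3

    ec = boto3.client("elasticache")

    # One primary plus one replica spread across AZs; if the primary fails,
    # the replica is promoted automatically (no standby, unlike RDS Multi-AZ).
    ec.create_replication_group(
        ReplicationGroupId="my-redis",  # placeholder
        ReplicationGroupDescription="Redis with Multi-AZ automatic failover",
        Engine="redis",
        CacheNodeType="cache.t3.micro",
        NumCacheClusters=2,
        AutomaticFailoverEnabled=True,
        MultiAZEnabled=True,
    )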

Redshift

  • fully managed, fast and powerful, petabyte scale data warehouse service
  • uses replication and continuous backups to enhance availability and improve data durability and can automatically recover from node and component failures
  • provides Massive Parallel Processing (MPP) by distributing & parallelizing queries across multiple physical resources
  • columnar data storage improves query performance and allows advanced compression techniques
  • only supports Single-AZ deployments and the nodes are available within the same AZ, if the AZ supports Redshift clusters
  • spot instances are NOT an option

AWS IAM Roles vs Resource-Based Policies

AWS allows granting cross-account access to AWS resources, which can be done using IAM Roles or Resource-Based Policies.

IAM Roles

  • Roles can be created to act as a proxy to allow users or services to access resources.
  • Roles support
    • trust policy which helps determine who can access the resources and
    • permission policy which helps to determine what they can access.
  • Users who assume a role temporarily give up their own permissions and instead take on the permissions of the role. The original user permissions are restored when the user exits or stops using the role.
  • Roles can be used to provide access to almost all the AWS resources.
  • Permissions provided to the user through the role can be further restricted per user by passing an optional session policy to the STS request; this policy cannot be used to elevate privileges beyond what the assumed role is allowed to access (see the sketch after this list)
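
A sketch of assuming a role while passing an optional session policy that scopes the session down further (the role ARN and policy are illustrative); the effective permissions become the intersection of the role’s permission policy and this document:

    import json
    import boto3

    sts = boto3.client("sts")

    # Optional session policy: it can only restrict, never elevate, what the
    # assumed role's permission policy already allows.
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::example-bucket/*",
            }
        ],
    }

    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/CrossAccountReadOnly",  # placeholder
        RoleSessionName="audit-session",
        Policy=json.dumps(session_policy),
    )["Credentials"]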

Resource-based Policies

  • Resource-based policy allows you to attach a policy directly to the resource you want to share, instead of using a role as a proxy.
  • A resource-based policy specifies who, as a Principal in the form of a list of AWS account ID numbers, can access that resource and what they can access.
  • Using cross-account access with a resource-based policy, the User still works in the trusted account and does not have to give up their permissions in place of the role permissions.
  • Users can work on the resources from both accounts at the same time, which is useful for scenarios like copying objects from one bucket to a bucket in a different AWS account (see the bucket policy sketch after this list).
  • Resources that you want to share are limited to resources that support resource-based policies
  • Resource-based policies need the trusted account to grant its users IAM permissions so that they are able to access the resources from the trusted account.
  • Only permissions equivalent to, or less than, the permissions granted to your account by the resource owning account can be delegated.
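
A sketch of sharing a bucket cross-account with a resource-based policy (the account ID and bucket name are placeholders); note that the trusted account must separately grant its users IAM permissions, as listed above:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Bucket policy naming the trusted account (111122223333) as Principal.
    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::shared-bucket/*",
            }
        ],
    }

    s3.put_bucket_policy(Bucket="shared-bucket", Policy=json.dumps(bucket_policy))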

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. What are the two permission types used by AWS?
    1. Resource-based and Product-based
    2. Product-based and Service-based
    3. Service-based
    4. User-based and Resource-based
  2. What’s the policy used for cross-account access? (Choose 2)
    1. Trust policy
    2. Permissions Policy
    3. Key policy

AWS Simple Notification Service – SNS

Simple Notification Service – SNS

  • Simple Notification Service – SNS is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients.
  • SNS provides the ability to create a Topic which is a logical access point and communication channel.
  • Each topic has a unique name that identifies the SNS endpoint for publishers to post messages and subscribers to register for notifications.
  • Publishers communicate asynchronously with subscribers by producing and sending a message to a topic.
  • Publishers push messages to topics they have created or have access to, and SNS matches the topic to a list of subscribers who have subscribed to that topic and delivers the message to each of those subscribers.
  • Subscribers receive all messages published to the topics to which they subscribe, and all subscribers to a topic receive the same messages.
  • Subscribers (i.e., web servers, email addresses, SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (i.e., SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic.
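
A minimal publish/subscribe sketch with boto3 (the topic name and email address are placeholders); note that creating a topic returns the topic’s ARN:

    import boto3

    sns = boto3.client("sns")

    # Creating a topic returns the ARN that publishers and subscribers use
    topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

    # Register a subscriber over one of the supported protocols
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

    # Every subscriber to the topic receives the same message
    sns.publish(TopicArn=topic_arn, Subject="Order placed", Message="Order #42 received")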

Accessing SNS

  • AWS Management Console
    • The AWS Management Console is the web-based user interface that can be used to manage SNS
  • AWS Command-line Interface (CLI)
    • Provides commands for a broad set of AWS products, and is supported on Windows, Mac, and Linux.
  • AWS Tools for Windows PowerShell
    • Provides commands for a broad set of AWS products for those who script in the PowerShell environment
  • AWS SNS Query API
    • The Query API accepts HTTP or HTTPS requests that use the GET or POST verb and a Query parameter named Action
  • AWS SDK libraries
    • AWS provides libraries in various languages which provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses

SNS Supported Transport Protocols

  • HTTP, HTTPS – Subscribers specify a URL as part of the subscription registration; notifications will be delivered through an HTTP POST to the specified URL.
  • Email, Email-JSON – Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
  • SQS – Users can specify an SQS queue as the endpoint; SNS will enqueue a notification message to the specified queue (which subscribers can then process using SQS APIs such as ReceiveMessage, DeleteMessage, etc.)
  • SMS – Messages are sent to registered phone numbers as SMS text messages

SNS Supported Endpoints

  • Email Notifications
    • SNS provides the ability to send Email notifications
  • Mobile Push Notifications
    • SNS provides the ability to send push notification messages directly to apps on mobile devices. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts (see the registration sketch after this list)
    • Supported push notification services
      • Amazon Device Messaging (ADM)
      • Apple Push Notification Service (APNS)
      • Google Cloud Messaging (GCM)
      • Windows Push Notification Service (WNS) for Windows 8+ and Windows Phone 8.1+
      • Microsoft Push Notification Service (MPNS) for Windows Phone 7+
      • Baidu Cloud Push for Android devices in China
  • SQS Queues
    • SNS with SQS provides the ability for messages to be delivered to applications that require immediate notification of an event, and also persist in an SQS queue for other applications to process at a later time
    • SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.
    • SQS can be used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components, without requiring each component to be concurrently available.
  • SMS Notifications
    • SNS provides the ability to send and receive Short Message Service (SMS) notifications to SMS-enabled mobile phones and smart phones
  • HTTP/HTTPS Endpoints
    • SNS provides the ability to send notification messages to one or more HTTP or HTTPS endpoints. When you subscribe an endpoint to a topic, you can publish a notification to the topic and Amazon SNS sends an HTTP POST request delivering the contents of the notification to the subscribed endpoint
  • Lambda
    • SNS and Lambda are integrated so Lambda functions can be invoked with SNS notifications.
    • When a message is published to an SNS topic that has a Lambda function subscribed to it, the Lambda function is invoked with the payload of the published message
  • Kinesis Data Firehose
    • Deliver events to delivery streams for archiving and analysis purposes.
    • Through delivery streams, events can be delivered to AWS destinations like S3, Redshift, and OpenSearch Service, or to third-party destinations such as Datadog, New Relic, MongoDB, and Splunk.
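
For the mobile push flow above, each device registration token is registered individually via CreatePlatformEndpoint; a sketch with boto3 (the platform application ARN and token are placeholders):

    import boto3

    sns = boto3.client("sns")

    # Register a single device's token with SNS; called once per device token
    endpoint = sns.create_platform_endpoint(
        PlatformApplicationArn="arn:aws:sns:us-east-1:111122223333:app/GCM/my-app",
        Token="device-registration-token-from-the-push-service",
    )

    # Send a direct message to just that device
    sns.publish(TargetArn=endpoint["EndpointArn"], Message="You have a new high score!")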

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which of the following notification endpoints or clients does Amazon Simple Notification Service support? Choose 2 answers
    1. Email
    2. CloudFront distribution
    3. File Transfer Protocol
    4. Short Message Service
    5. Simple Network Management Protocol
  2. What happens when you create a topic on Amazon SNS?
    1. The topic is created, and it has the name you specified for it.
    2. An ARN (Amazon Resource Name) is created
    3. You can create a topic on Amazon SQS, not on Amazon SNS.
    4. This question doesn’t make sense.
  3. A user has deployed an application on his private cloud. The user is using his own monitoring tool. He wants to configure that whenever there is an error, the monitoring tool should notify him via SMS. Which of the below mentioned AWS services will help in this scenario?
    1. None, because the user infrastructure is in the private cloud
    2. AWS SNS
    3. AWS SES
    4. AWS SMS
  4. A user wants to configure it so that whenever the CPU utilization of the AWS EC2 instance is above 90%, the red light of his bedroom turns on. Which of the below mentioned AWS services is helpful for this purpose?
    1. AWS CloudWatch + AWS SES
    2. AWS CloudWatch + AWS SNS
    3. It is not possible to configure the light with the AWS infrastructure services
    4. AWS CloudWatch and a dedicated software turning on the light
  5. A user is trying to understand AWS SNS. To which of the below mentioned endpoints is SNS unable to send a notification?
    1. Email JSON
    2. HTTP
    3. AWS SQS
    4. AWS SES
  6. A user is running a webserver on EC2. The user wants to receive the SMS when the EC2 instance utilization is above the threshold limit. Which AWS services should the user configure in this case?
    1. AWS CloudWatch + AWS SES
    2. AWS CloudWatch + AWS SNS
    3. AWS CloudWatch + AWS SQS
    4. AWS EC2 + AWS CloudWatch
  7. A user is planning to host a mobile game on EC2 which sends notifications to active users on either high score or the addition of new features. The user should get this notification when he is online on his mobile device. Which of the below mentioned AWS services can help achieve this functionality?
    1. AWS Simple Notification Service
    2. AWS Simple Queue Service
    3. AWS Mobile Communication Service
    4. AWS Simple Email Service
  8. You are providing AWS consulting services for a company developing a new mobile application that will be leveraging Amazon SNS push for push notifications. In order to send direct notification messages to individual devices, each device registration identifier or token needs to be registered with SNS; however, the developers are not sure of the best way to do this. You advise them to:
    1. Bulk upload the device tokens contained in a CSV file via the AWS Management Console
    2. Let the push notification service (e.g. Amazon Device messaging) handle the registration
    3. Implement a token vending service to handle the registration
    4. Call the CreatePlatformEndpoint API function to register multiple device tokens. (Refer documentation)
  9. A company is running a batch analysis every hour on their main transactional DB running on an RDS MySQL instance to populate their central Data Warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible?
    1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
    2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
    3. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard
    4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
  10. Which of the following are valid SNS delivery transports? Choose 2 answers.
    1. HTTP
    2. UDP
    3. SMS
    4. DynamoDB
    5. Named Pipes
  11. What is the format of structured notification messages sent by Amazon SNS?
    1. An XML object containing MessageId, UnsubscribeURL, Subject, Message and other values
    2. A JSON object containing MessageId, DuplicateFlag, Message and other values
    3. An XML object containing MessageId, DuplicateFlag, Message and other values
    4. A JSON object containing MessageId, unsubscribeURL, Subject, Message and other values
  12. Which of the following are valid arguments for an SNS Publish request? Choose 3 answers.
    1. TopicArn
    2. Subject
    3. Destination
    4. Format
    5. Message
    6. Language

AWS EBS Performance

AWS EBS Performance Tips

  • EBS performance depends on several factors, including I/O characteristics and instance and volume configuration, and can be improved using PIOPS, EBS-Optimized instances, pre-warming (initialization), and RAID configurations.

EBS-Optimized or 10 Gigabit Network Instances

  • An EBS-Optimized instance uses an optimized configuration stack and provides additional, dedicated capacity for EBS I/O.
  • Optimization provides the best performance for the EBS volumes by minimizing contention between EBS I/O and other traffic from an instance
  • EBS-Optimized instances deliver dedicated throughput to EBS depending on the instance type used.
  • Not all instance types support EBS-Optimization
  • Some instance types have EBS-Optimization enabled by default, while for others it can be enabled explicitly.
  • When EBS-Optimization is enabled for an instance that is not EBS-Optimized by default, an additional low hourly fee for the dedicated capacity is charged
  • When attached to an EBS–optimized instance,
    • General Purpose (SSD) volumes are designed to deliver within 10% of their baseline and burst performance 99.9% of the time in a given year
    • Provisioned IOPS (SSD) volumes are designed to deliver within 10% of their provisioned performance 99.9% of the time in a given year.

EBS Volume Initialization – Pre-warming

  • Empty EBS volumes receive their maximum performance the moment that they are available and DO NOT require initialization (pre-warming).
  • Previously, EBS volumes needed pre-warming before use to deliver maximum performance from the start. Pre-warming was done by writing 0s to the entire volume for new volumes, or by reading the entire volume for volumes restored from snapshots.
  • Storage blocks on volumes that were restored from snapshots must be initialized (pulled down from S3 and written to the volume) before the block can be accessed.
  • This preliminary action takes time and can cause a significant increase in the latency of an I/O operation the first time each block is accessed.
  • To avoid this initial performance hit in a production environment, the following options can be used
    • Force the immediate initialization of the entire volume by using the dd or fio utilities to read from all of the blocks on a volume.
    • Enable fast snapshot restore – FSR on a snapshot to ensure that the EBS volumes created from it are fully initialized at creation and instantly deliver all of their provisioned performance (see the sketch after this list).
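
A sketch of enabling fast snapshot restore with boto3 (the snapshot ID and Availability Zone are placeholders); the dd/fio alternative is run on the instance itself against the attached volume:

    import boto3

    ec2 = boto3.client("ec2")

    # Volumes created from this snapshot in the listed AZs are fully initialized
    # at creation and deliver their provisioned performance immediately.
    ec2.enable_fast_snapshot_restores(
        AvailabilityZones=["us-east-1a"],              # placeholder AZ
        SourceSnapshotIds=["snap-0123456789abcdef0"],  # placeholder snapshot
    )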

RAID Configuration

  • EBS volumes can be striped if a single EBS volume does not meet the performance requirement and more is needed.
  • Striping volumes allows pushing tens of thousands of IOPS.
  • EBS volumes are already replicated across multiple servers in an AZ for availability and durability, so AWS generally recommends striping for performance rather than durability.
  • For greater I/O performance than can be achieved with a single volume, RAID 0 can stripe multiple volumes together; for on-instance redundancy, RAID 1 can mirror two volumes together.
  • RAID 0 allows I/O distribution across all volumes in a stripe, allowing straight gains with each addition.
  • RAID 1 can be used for durability to mirror volumes, but in this case, it requires more EC2 to EBS bandwidth as the data is written to multiple volumes simultaneously and should be used with EBS–optimization.
  • EBS volume data is replicated across multiple servers in an AZ to prevent the loss of data from the failure of any single component
  • AWS doesn’t recommend RAID 5 and 6 because the parity write operations of these modes consume the IOPS available for the volumes and can result in 20-30% fewer usable IOPS than RAID 0.
  • A 2-volume RAID 0 config can outperform a 4-volume RAID 6 that costs twice as much.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A user is trying to pre-warm a blank EBS volume attached to a Linux instance. Which of the below mentioned steps should be performed by the user?
    1. There is no need to pre-warm an EBS volume (with latest update no pre-warming is needed)
    2. Contact AWS support to pre-warm (This used to be the case before, but pre warming is not necessary now)
    3. Unmount the volume before pre-warming
    4. Format the device
  2. A user has created an EBS volume of 10 GB and attached it to a running instance. The user is trying to access EBS for the first time. Which of the below mentioned options is the correct statement with respect to a first-time EBS access?
    1. The volume will show a size of 8 GB
    2. The volume will show a loss of the IOPS performance the first time (previously the volume needed to be wiped clean for new volumes; however, pre-warming is not needed any more)
    3. The volume will be blank
    4. If the EBS is mounted it will ask the user to create a file system
  3. You are running a database on an EC2 instance, with the data stored on Elastic Block Store (EBS) for persistence. At times throughout the day, you are seeing large variance in the response times of the database queries. Looking into the instance with the iostat command, you see a lot of wait time on the disk volume that the database’s data is stored on. What two ways can you improve the performance of the database’s storage while maintaining the current persistence of the data? Choose 2 answers
    1. Move to an SSD backed instance
    2. Move the database to an EBS-Optimized Instance
    3. Use Provisioned IOPs EBS
    4. Use the ephemeral storage on an m2.4xLarge Instance Instead
  4. You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 Instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes) for a total of 16,000 random IOPS on the instance. The EC2 Instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all. What is the problem and a valid solution?
    1. Larger storage volumes support higher Provisioned IOPS rates: increase the provisioned volume storage of each of the 6 EBS volumes to 1TB.
    2. EBS-Optimized throughput limits the total IOPS that can be utilized use an EBS-Optimized instance that provides larger throughput. (EC2 Instance types have limit on max throughput and would require larger instance types to provide 24000 IOPS)
    3. Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and file system to use 64KB blocks to increase throughput.
    4. RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS.
    5. The standard EBS instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume
  5. A user has deployed an application on an EBS backed EC2 instance. For a better performance of application, it requires dedicated EC2 to EBS traffic. How can the user achieve this?
    1. Launch the EC2 instance as EBS provisioned with PIOPS EBS
    2. Launch the EC2 instance as EBS enhanced with PIOPS EBS
    3. Launch the EC2 instance as EBS dedicated with PIOPS EBS
    4. Launch the EC2 instance as EBS optimized with PIOPS EBS

AWS Identity Services Cheat Sheet

AWS Identity and Security Services

IAM – Identity & Access Management

  • securely control access to AWS services and resources
  • helps create and manage user identities and grant permissions for those users to access AWS resources
  • helps create groups for multiple users with similar permissions
  • not appropriate for application authentication
  • is Global and does not need to be migrated to a different region
  • helps define Policies (see the sketch after this list),
    • in JSON format
    • all permissions are implicitly denied by default
    • most restrictive policy wins
  • IAM Role
    • helps grant and delegate access to users and services without the need to create permanent credentials
    • IAM users or AWS services can assume a role to obtain temporary security credentials that can be used to make AWS API calls
    • needs Trust policy to define who and Permission policy to define what the user or service can access
    • used with Security Token Service (STS), a lightweight web service that provides temporary, limited privilege credentials for IAM users or for authenticated federated users
    • IAM role scenarios
      • Service access for e.g. EC2 to access S3 or DynamoDB
      • Cross Account access for users
        • with user within the same account
        • with a user within an AWS account owned by the same owner
        • with user from a Third Party AWS account with External ID for enhanced security
      • Identity Providers & Federation
        • AssumeRoleWithWebIdentity – Web Identity Federation, where the user can be authenticated using external authentication Identity providers like Amazon, Google or any OpenId IdP
        • AssumeRoleWithSAML – Identity Provider using SAML 2.0, where the user can be authenticated using an on-premises Active Directory, OpenLDAP, or any SAML 2.0 compliant IdP
        • AssumeRole (recommended) or GetFederationToken – For other Identity Providers, use Identity Broker to authenticate and provide temporary Credentials
  • IAM Best Practices
    • Do not use Root account for anything other than billing
    • Create Individual IAM users
    • Use groups to assign permissions to IAM users
    • Grant least privilege
    • Use IAM roles for applications on EC2
    • Delegate using roles instead of sharing credentials
    • Rotate credentials regularly
    • Use Policy conditions for increased granularity
    • Use CloudTrail to keep a history of activity
    • Enforce a strong IAM password policy for IAM users
    • Remove all unused users and credentials
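
To make the cross-account role scenario above concrete, here is a minimal boto3 sketch of assuming a third-party role with an External ID. The role ARN, external ID, and session name are hypothetical placeholders; this shows one common pattern, not the only one.

```python
# Minimal sketch: cross-account access via STS AssumeRole (boto3).
import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/third-party-access",  # hypothetical
    RoleSessionName="audit-session",    # hypothetical
    ExternalId="example-external-id",   # guards against the confused-deputy problem
    DurationSeconds=3600,               # temporary credentials, 1 hour here
)

creds = resp["Credentials"]
# Use the temporary credentials instead of long-lived access keys.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])
```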

AWS Organizations

  • is an account management service that enables consolidating multiple AWS accounts into an organization that can be centrally managed.
  • includes consolidated billing and account management capabilities that help better meet the budgetary, security, and compliance needs of a business.
  • As an administrator of an organization, you can create new accounts in the organization and invite existing accounts to join it.
  • enables you to
    • Automate AWS account creation and management, and provision resources with AWS CloudFormation StackSets.
    • Maintain a secure environment with policies and management of AWS security services
    • Govern access to AWS services, resources, and regions
    • Centrally manage policies across multiple AWS accounts
    • Audit your environment for compliance 
    • View and manage costs with consolidated billing 
    • Configure AWS services across multiple accounts 
  • supports Service Control Policies – SCPs
    • offer central control over the maximum available permissions for all of the accounts in your organization, ensuring member accounts stay within the organization’s access control guidelines.
    • are one type of policy that help manage the organization.
    • are available only in an organization that has all features enabled, and aren’t available if the organization has enabled only the consolidated billing features.
    • are NOT sufficient for granting access to the accounts in the organization.
    • define a guardrail for what actions accounts within the organization root or OU can do, but IAM policies need to be attached to the users and roles in the organization’s accounts to grant permissions to them.
    • Effective permissions are the logical intersection between what is allowed by the SCP and what is allowed by the IAM and resource-based policies.
    • with an SCP attached to member accounts, identity-based and resource-based policies grant permissions to entities only if those policies and the SCP allow the action.
    • don’t affect users or roles in the management account. They affect only the member accounts in your organization.
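
As a concrete illustration of the deny-list strategy, the sketch below creates and attaches a small SCP with boto3, to be used alongside the default FullAWSAccess SCP. The policy name, statements, and target OU id are illustrative assumptions, not prescribed values.

```python
# Hedged sketch: a deny-list SCP blocking two sensitive actions.
import json
import boto3

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLeavingOrg",
            "Effect": "Deny",
            "Action": ["organizations:LeaveOrganization"],
            "Resource": "*",
        },
        {
            "Sid": "DenyDisablingCloudTrail",
            "Effect": "Deny",
            "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
            "Resource": "*",
        },
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="deny-guardrails",                     # hypothetical name
    Description="Baseline deny-list guardrails",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exampleouid",                  # hypothetical OU id; test before rollout
)
```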

AWS Directory Services

  • gives applications in AWS access to Active Directory services
  • different from SAML + AD, where the access is granted to AWS services through Temporary Credentials
  • Simple AD
    • least expensive but does not support Microsoft AD advanced features
    • provides a Samba 4 Microsoft Active Directory compatible standalone directory service on AWS
    • No single point of Authentication or Authorization, as a separate copy is maintained
    • trust relationships cannot be setup between Simple AD and other Active Directory domains
    • Don’t use it, if the requirement is to leverage access and control through centralized authentication service
  • AD Connector
    • acts just as a hosted proxy service for instances in AWS to connect to on-premises Active Directory
    • enables consistent enforcement of existing security policies, such as password expiration, password history, and account lockouts, whether users are accessing resources on-premises or in the AWS cloud
    • needs VPN connectivity (or Direct Connect)
    • integrates with existing RADIUS-based MFA solutions to enable multi-factor authentication
    • does not cache data, which might lead to latency
  • Read-only Domain Controllers (RODCs)
    • works out as a Read-only Active Directory
    • holds a copy of the Active Directory Domain Service (AD DS) database and respond to authentication requests
    • they cannot be written to and are typically deployed in locations where physical security cannot be guaranteed
    • helps maintain a single point of authentication & authorization control; however, it needs to be synced
  • Writable Domain Controllers
    • are expensive to set up
    • operate in a multi-master model; changes can be made on any writable server in the forest, and those changes are replicated to servers throughout the entire forest

AWS Single Sign-On SSO

  • is a cloud-based single sign-on (SSO) service that makes it easy to centrally manage SSO access to all of the AWS accounts and cloud applications.
  • helps manage access and permissions to commonly used third-party software as a service (SaaS) applications, AWS SSO-integrated applications as well as custom applications that support SAML 2.0.
  • includes a user portal where the end-users can find and access all their assigned AWS accounts, cloud applications, and custom applications in one place.

Amazon Cognito

  • Amazon Cognito provides authentication, authorization, and user management for web and mobile apps.
  • Users can sign in directly with a username and password, or through a third party such as Facebook, Amazon, Google, or Apple.
  • Cognito has two main components.
    • User pools are user directories that provide sign-up and sign-in options for the app users.
    • Identity pools enable you to grant the users access to other AWS services.
  • Cognito Sync helps synchronize data across a user’s devices so that their app experience remains consistent when they switch between devices or upgrade to a new device.
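
A minimal sketch of a user-pool sign-in with boto3, assuming an app client without a client secret and with the USER_PASSWORD_AUTH flow enabled; the client id and credentials are placeholders.

```python
# Hedged sketch: Cognito user-pool sign-in returning JWT tokens.
import boto3

idp = boto3.client("cognito-idp")

resp = idp.initiate_auth(
    ClientId="example-app-client-id",       # hypothetical app client id
    AuthFlow="USER_PASSWORD_AUTH",          # must be enabled on the app client
    AuthParameters={"USERNAME": "alice", "PASSWORD": "correct-horse"},
)

# On success (no MFA/other challenge), tokens are returned directly.
tokens = resp["AuthenticationResult"]       # IdToken, AccessToken, RefreshToken
print(tokens["IdToken"][:20], "...")
```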

AWS Security Services Cheat Sheet

AWS Identity and Security Services


Key Management Service – KMS

  • is a managed encryption service that allows the creation and control of encryption keys to enable data encryption.
  • provides a highly available key storage, management, and auditing solution to encrypt the data across AWS services & within applications.
  • uses hardware security modules (HSMs) validated under the FIPS 140-2 Cryptographic Module Validation Program to protect the KMS keys.
  • seamlessly integrates with several AWS services to make encrypting data in those services easy.
  • supports multi-region keys, which are AWS KMS keys in different AWS Regions. Multi-Region keys are not global and each multi-region key needs to be replicated and managed independently.
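
As a small illustration, the sketch below encrypts and decrypts a short payload directly with a KMS key via boto3. The key alias is a placeholder, and direct Encrypt is limited to payloads up to 4 KB; larger data would typically use envelope encryption via generate_data_key.

```python
# Hedged sketch: direct KMS encrypt/decrypt for a small payload.
import boto3

kms = boto3.client("kms")

ciphertext = kms.encrypt(
    KeyId="alias/example-key",          # hypothetical key alias
    Plaintext=b"s3cr3t config value",   # must be <= 4096 bytes for Encrypt
)["CiphertextBlob"]

# Decrypt does not need the KeyId for symmetric keys; it is
# encoded in the ciphertext blob.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"s3cr3t config value"
```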

CloudHSM

  • provides secure cryptographic key storage to customers by making hardware security modules (HSMs) available in the AWS cloud
  • helps manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs.
  • single tenant, dedicated physical device to securely generate, store, and manage cryptographic keys used for data encryption
  • are inside the VPC (not EC2-classic) & isolated from the rest of the network
  • can use VPC peering to connect to CloudHSM from multiple VPCs
  • integrated with Amazon Redshift and Amazon RDS for Oracle
  • EBS volume encryption, S3 object encryption and key management can be done with CloudHSM but requires custom application scripting
  • is NOT fault-tolerant on its own; a cluster needs to be built, because if a single HSM fails, all of its keys are lost
  • enables quick scaling by adding and removing HSM capacity on-demand, with no up-front costs.
  • automatically load balances requests and securely duplicates keys stored in any HSM to all of the other HSMs in the cluster.
  • expensive; prefer AWS Key Management Service (KMS) if cost is a criterion.

AWS WAF

  • is a web application firewall that helps monitor the HTTP/HTTPS traffic and allows controlling access to the content.
  • helps protect web applications from attacks by allowing rules configuration that allow, block, or monitor (count) web requests based on defined conditions. These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection and cross-site scripting.
  • helps define Web ACLs, which are combinations of Rules, where each Rule is a combination of Conditions and an Action to block or allow
  • integrates with CloudFront, Application Load Balancer (ALB), and API Gateway, services commonly used to deliver content and applications
  • supports custom origins outside of AWS, when integrated with CloudFront
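
To illustrate the Web ACL = Rules (Conditions + Action) model, here is a hedged boto3 (WAFv2) sketch that blocks requests matching an IP set and allows everything else; the ACL name, IP set ARN, and scope are illustrative assumptions.

```python
# Hedged sketch: a WAFv2 Web ACL with one blocking rule.
import boto3

wafv2 = boto3.client("wafv2")

wafv2.create_web_acl(
    Name="example-web-acl",                       # hypothetical name
    Scope="REGIONAL",                             # "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},                  # allow unless a rule blocks
    Rules=[
        {
            "Name": "block-bad-ips",
            "Priority": 0,
            "Statement": {
                "IPSetReferenceStatement": {
                    # hypothetical ARN of a pre-created IP set
                    "ARN": "arn:aws:wafv2:us-east-1:111122223333:regional/ipset/bad-ips/example"
                }
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "blockBadIps",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "exampleWebAcl",
    },
)
```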

AWS Secrets Manager

  • helps protect secrets needed to access applications, services, and IT resources.
  • enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
  • secures secrets by encrypting them with encryption keys managed using AWS KMS.
  • offers native secret rotation with built-in integration for RDS, Redshift, and DocumentDB.
  • supports Lambda functions to extend secret rotation to other types of secrets, including API keys and OAuth tokens.
  • supports IAM and resource-based policies for fine-grained access control to secrets and centralized secret rotation audit for resources in the AWS Cloud, third-party services, and on-premises.
  • enables secret replication in multiple AWS regions to support multi-region applications and disaster recovery scenarios.
  • supports private access using VPC Interface endpoints
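
A minimal sketch of fetching a secret at runtime with boto3 instead of hard-coding credentials; the secret name and its JSON shape are assumptions.

```python
# Hedged sketch: retrieve and parse a JSON secret at runtime.
import json
import boto3

sm = boto3.client("secretsmanager")

resp = sm.get_secret_value(SecretId="prod/example/db")  # hypothetical secret name
# Assumes a string secret holding JSON such as
# {"username": "...", "password": "..."}; binary secrets
# would use resp["SecretBinary"] instead.
creds = json.loads(resp["SecretString"])
print(creds["username"])
```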

AWS Shield

  • is a managed service that provides protection against Distributed Denial of Service (DDoS) attacks for applications running on AWS
  • provides protection for all AWS customers against common and most frequently occurring infrastructure (layer 3 and 4) attacks like SYN/UDP floods, reflection attacks, and others to support high availability of applications on AWS.
  • offers AWS Shield Advanced, which provides additional protections against more sophisticated and larger attacks for applications running on EC2, ELB, CloudFront, AWS Global Accelerator, and Route 53.

AWS GuardDuty

  • offers threat detection that enables continuous monitoring and protects the AWS accounts and workloads.
  • is a Regional service
  • analyzes continuous streams of metadata generated from AWS accounts and network activity found in AWS CloudTrail Events, EKS audit logs, VPC Flow Logs, and DNS Logs.
  • provides integrated threat intelligence
  • combines machine learning, anomaly detection, network monitoring, and malicious file discovery, utilizing both AWS-developed and industry-leading third-party sources to help protect workloads and data on AWS
  • supports suppression rules, trusted IP lists, and threat lists.
  • provides Malware Protection to detect malicious files on EBS volumes
  • operates completely independently from the resources so there is no risk of performance or availability impacts on the workloads.
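
As an illustration, the sketch below lists high-severity GuardDuty findings in the current Region with boto3. It assumes GuardDuty is already enabled, and the severity threshold of 7 is an illustrative choice.

```python
# Hedged sketch: fetch high-severity GuardDuty findings.
import boto3

gd = boto3.client("guardduty")

detector_id = gd.list_detectors()["DetectorIds"][0]  # assumes one enabled detector

finding_ids = gd.list_findings(
    DetectorId=detector_id,
    FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 7}}},
)["FindingIds"]

if finding_ids:
    findings = gd.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
    for finding in findings["Findings"]:
        print(finding["Severity"], finding["Type"], finding["Title"])
```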

Amazon Inspector

  • is a vulnerability management service that continuously scans the AWS workloads for vulnerabilities
  • automatically discovers and scans EC2 instances and container images residing in Elastic Container Registry (ECR) for software vulnerabilities and unintended network exposure.
  • creates a finding when a software vulnerability or network issue is discovered; the finding describes the vulnerability, rates its severity, identifies the affected resource, and provides remediation guidance.
  • is a Regional service.
  • requires Systems Manager (SSM) agent to be installed and enabled.

Amazon Detective

  • helps analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities.
  • automatically collects log data from the AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data to easily conduct faster and more efficient security investigations.
  • enables customers to view summaries and analytical data associated with CloudTrail logs, EKS audit logs, and VPC Flow Logs.
  • provides detailed summaries, analysis, and visualizations of the behaviors and interactions amongst your AWS accounts, EC2 instances, AWS users, roles, and IP addresses.
  • maintains up to a year of aggregated data
  • is a Regional service and needs to be enabled on a region-by-region basis.
  • is a multi-account service that aggregates data from monitored member accounts under a single administrative account within the same region.
  • has no impact on the performance or availability of the AWS infrastructure since it retrieves the log data and findings directly from the AWS services.

AWS Security Hub

  • a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
  • collects security data from across AWS accounts, services, and supported third-party partner products and helps you analyze your security trends and identify the highest priority security issues.
  • is Regional but supports cross-Region aggregation of findings.
  • automatically runs continuous, account-level configuration and security checks based on AWS best practices and industry standards which include CIS Foundations, PCI DSS.
  • consolidates the security findings across accounts and provider products and displays results on the Security Hub console.
  • supports integration with Amazon EventBridge. Custom actions can be defined when a finding is received.
  • has multi-account management through AWS Organizations integration, which allows delegating an administrator account for the organization.
  • works with AWS Config to perform most of its security checks for controls
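
A small boto3 sketch of pulling active, high-severity findings via the Security Hub API; the filter values are illustrative choices.

```python
# Hedged sketch: query consolidated Security Hub findings.
import boto3

sh = boto3.client("securityhub")

resp = sh.get_findings(
    Filters={
        "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=10,  # paginate with NextToken for more
)

for finding in resp["Findings"]:
    print(finding["Severity"]["Label"], finding["Title"])
```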

AWS Macie

  • Macie is a data security service that discovers sensitive data by using machine learning and pattern matching, provides visibility into data security risks, and enables automated protection against those risks.
  • provides an inventory of the S3 buckets and automatically evaluates and monitors the buckets for security and access control.
  • automates the discovery, classification, and reporting of sensitive data.
  • generates a finding for you to review and remediate as necessary if it detects a potential issue with the security or privacy of the data, such as a bucket that becomes publicly accessible.
  • provides multi-account support using AWS Organizations to enable Macie across all of the accounts.
  • is a Regional service, must be enabled on a region-by-region basis, and helps view findings across all the accounts within each Region.
  • supports VPC Interface Endpoints to access Macie privately from a VPC without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

AWS Artifact

  • is a self-service audit artifact retrieval portal that provides customers with on-demand access to AWS’ compliance documentation and agreements
  • can use AWS Artifact Reports to download AWS security and compliance documents, such as AWS ISO certifications, Payment Card Industry (PCI), and System and Organization Control (SOC) reports.
