AWS S3 vs EBS vs EFS

S3 vs EBS vs EFS

Amazon EFS, Amazon EBS, and Amazon S3 are AWS’ three storage types, each applicable to different types of workload needs

Amazon S3

    • is an object store with a simple key-value design, well suited to storing vast numbers of backups or user files.
    • offers pay-as-you-go pricing – you pay only for the storage you actually use. Offers cost-saving storage classes ideal for infrequently accessed data or for data archival.
    • provides unlimited storage
    • provides durability, as data is replicated and stored across at least three geographically-dispersed AZs, and is designed for 99.999999999% (11 9’s) durability
    • provides high availability, designed for up to 99.99%
    • provides security with a range of access control mechanisms and the ability to encrypt data at rest and in transit
    • data can be accessed programmatically (see the sketch below) or directly from services such as Amazon CloudFront.
    • provides backup capability using versioning and cross-region replication
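
As a hedged illustration of the key-value object model described above, here is a minimal boto3 sketch (the bucket name and key are placeholders) that stores an object and reads it back:

    import boto3

    s3 = boto3.client("s3")

    # Store an object under a key (bucket and key are placeholder names)
    s3.put_object(Bucket="my-example-bucket", Key="backups/app.log", Body=b"log data")

    # Retrieve the same object back by its key
    obj = s3.get_object(Bucket="my-example-bucket", Key="backups/app.log")
    print(obj["Body"].read())

The same key-based calls work across storage classes; lifecycle rules or the StorageClass parameter on put_object can move infrequently accessed data to cheaper classes.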

Amazon EBS

    • delivers highly available block-level storage volumes for Amazon Elastic Compute Cloud (EC2) instances.
    • you pay for the provisioned storage, even if you do not use it
    • provides limited storage capacity and cannot scale infinitely
    • provides persistent block storage, so data can be retained after the EC2 instance is shut down.
    • provides durability by replicating data across multiple servers in an AZ to prevent the loss of data from the failure of any single component
    • designed for 99.999% availability
    • provides low-latency performance – using SSD EBS volumes, it offers reliable I/O performance scaled to meet your workload needs.
    • provides secure storage with access control and encryption of data at rest and in transit
    • is only accessible from a single EC2 instance at a time, within the same AWS region and AZ
    • provides backup capability using snapshots (see the sketch below)
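
Since EBS backups are snapshot-based, a minimal boto3 sketch of the backup capability mentioned above might look like this (the volume ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Create a point-in-time snapshot of a volume (placeholder volume ID)
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="nightly backup",
    )
    print(snapshot["SnapshotId"])

Snapshots are stored in S3 and are incremental: only the blocks changed since the last snapshot are saved.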

Amazon EFS

    • provides simple, scalable file storage for use with EC2 instances.
    • offers pay-as-you-go pricing – you pay only for the storage you actually use, with no advance provisioning, up-front fees, or commitments.
    • multiple instances can be configured to mount the file system concurrently.
    • allows mounting the file system across multiple AZs and instances (and, via VPC peering, from other regions).
    • is designed to be highly durable and highly available. Data is redundantly stored across multiple AZs.
    • provides elasticity – scales up and down automatically, even to meet the most abrupt workload spikes.
    • provides performance that scales to support any workload: EFS offers the throughput changing workloads need, and can provide higher throughput in spurts that match sudden file system growth, even for workloads up to 500,000 IOPS or 10 GB/s.
    • provides accessible file storage, which can be accessed by on-premises servers and EC2 instances concurrently.
    • provides security and compliance – access to the file system can be secured with the existing security solution, or access can be controlled using IAM, VPC, or POSIX permissions.
    • provides data encryption in transit and at rest.
    • allows EC2 instances to access EFS file systems located in other AWS regions through VPC peering.
    • a file system can be accessed concurrently from all AZs in the region where it is located, so the application can be architected to fail over from one AZ to other AZs in the region to ensure the highest level of application availability. Mount targets themselves are designed to be highly available.
    • can be used as a common data source for any application or workload that runs on numerous instances (a provisioning sketch follows this list).
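
A minimal boto3 provisioning sketch (the subnet ID is a placeholder): create a file system, then add a mount target that instances in that subnet can mount over NFS:

    import boto3

    efs = boto3.client("efs")

    # CreationToken makes the call idempotent; Encrypted enables encryption at rest
    fs = efs.create_file_system(CreationToken="example-token", Encrypted=True)

    # In practice, wait until the file system state is "available" before this call.
    # The mount target exposes the file system inside a subnet (placeholder ID).
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId="subnet-0123456789abcdef0",
    )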

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company runs an application on a group of Amazon Linux EC2 instances. The application writes log files using standard API calls. For compliance reasons, all log files must be retained indefinitely and will be analyzed by a reporting tool that must access all files concurrently. Which storage service should a solutions architect use to provide the MOST cost-effective solution?
    1. Amazon EBS
    2. Amazon EFS
    3. Amazon EC2 instance store
    4. Amazon S3
  2. A new application is being deployed on Amazon EC2. The application needs to read and write up to 3 TB of data to an external data store and requires read-after-write consistency across all AWS regions for writing new objects into this data store. Which data store should be used?
    1. Amazon EBS
    2. Amazon Glacier
    3. Amazon EFS
    4. Amazon S3
  3. To meet the requirements of an application, an organization needs to save a constantly increasing volume of files on a cloud storage system with the following features and abilities. Which AWS service below will meet these requirements?
      1. Pay only for the storage used
      2. Create different security policies for different groups of files
      3. Allow access to the public
      4. Retrieve the files at any time
      5. Store an unlimited number of files
    1. Amazon EBS
    2. Amazon S3
    3. Amazon Glacier
    4. Amazon EFS

AWS SageMaker Built-in Algorithms Summary

SageMaker Built-in Algorithms

BlazingText algorithm

    • provides highly optimized implementations of the Word2vec and text classification algorithms.
    • Word2vec algorithm
      • useful for many downstream natural language processing (NLP) tasks, such as sentiment analysis, named entity recognition, machine translation, etc.
      • maps words to high-quality distributed vectors, whose representation is called word embeddings
      • word embeddings capture the semantic relationships between words.
    • Text classification
      • is an important task for applications performing web searches, information retrieval, ranking, and document classification
    • provides the Skip-gram and continuous bag-of-words (CBOW) training architectures

DeepAR forecasting algorithm

    • is a supervised learning algorithm for forecasting scalar (one-dimensional) time series using recurrent neural networks (RNN).
    • the trained model can then be used to generate forecasts for new time series that are similar to the ones it was trained on.

Factorization machine

    • is a general-purpose supervised learning algorithm used for both classification and regression tasks.
    • is an extension of a linear model designed to economically capture interactions between features within high-dimensional sparse datasets

Image classification algorithm

    • a supervised learning algorithm that supports multi-label classification
    • takes an image as input and outputs one or more labels
    • uses a convolutional neural network (ResNet) that can be trained from scratch or trained using transfer learning when a large number of training images are not available.
    • the recommended input format is Apache MXNet RecordIO; raw images in .jpg or .png format are also supported.

IP Insights

    • is an unsupervised learning algorithm that learns the usage patterns for IPv4 addresses.
    • designed to capture associations between IPv4 addresses and various entities, such as user IDs or account numbers

K-means algorithm

    • is an unsupervised learning algorithm for clustering
    • attempts to find discrete groupings within data, where members of a group are as similar as possible to one another and as different as possible from members of other groups

K-nearest neighbors (k-NN) algorithm

    • is an index-based algorithm.
    • uses a non-parametric method for classification or regression.
    • For classification problems, the algorithm queries the k points that are closest to the sample point and returns the most frequently used label of their class as the predicted label.
    • For regression problems, the algorithm queries the k closest points to the sample point and returns the average of their feature values as the predicted value.

Latent Dirichlet Allocation (LDA) algorithm

    • is an unsupervised learning algorithm that attempts to describe a set of observations as a mixture of distinct categories.
    • used to discover a user-specified number of topics shared by documents within a text corpus.

Linear Learner

    • is a supervised learning algorithm used for solving either classification or regression problems

Neural Topic Model (NTM) Algorithm

    • is an unsupervised learning algorithm that is used to organize a corpus of documents into topics that contain word groupings based on their statistical distribution
    • Topic modeling can be used to classify or summarize documents based on the topics detected or to retrieve information or recommend content based on topic similarities.

Object2Vec algorithm

    • is a general-purpose neural embedding algorithm that is highly customizable
    • can learn low-dimensional dense embeddings of high-dimensional objects.

Object Detection algorithm

    • detects and classifies objects in images using a single deep neural network.
    • is a supervised learning algorithm that takes images as input and identifies all instances of objects within the image scene.

Principal Component Analysis

    • is an unsupervised machine learning algorithm that attempts to reduce the dimensionality (number of features) within a dataset while still retaining as much information as possible

Random Cut Forest (RCF)

    • is an unsupervised algorithm for detecting anomalous data points within a data set.

Semantic segmentation algorithm

    • provides a fine-grained, pixel-level approach to developing computer vision applications

SageMaker Sequence to Sequence (seq2seq)

    • is a supervised learning algorithm where the input is a sequence of tokens (for example, text, audio) and the output generated is another sequence of tokens.
    • key use cases are machine translation (input a sentence in one language and predict that sentence in another language), text summarization (input a longer string of words and predict a shorter string that is a summary), and speech-to-text (audio clips converted into output sentences in tokens)

XGBoost (eXtreme Gradient Boosting)

    • is a popular and efficient open-source implementation of the gradient boosted trees algorithm.
    • Gradient boosting is a supervised learning algorithm that attempts to accurately predict a target variable by combining an ensemble of estimates from a set of simpler, weaker models
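
All of these built-in algorithms are invoked the same way: point an estimator at the algorithm’s container image and your S3 training data. A hedged sketch using the SageMaker Python SDK to train the built-in XGBoost algorithm (the role ARN and bucket are placeholders):

    import sagemaker
    from sagemaker import image_uris
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()

    # Resolve the regional container image for the built-in XGBoost algorithm
    container = image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

    estimator = Estimator(
        image_uri=container,
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://my-example-bucket/xgboost/output",  # placeholder bucket
        sagemaker_session=session,
    )
    estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

    # The "train" channel points at CSV/libsvm data already staged in S3
    estimator.fit({"train": "s3://my-example-bucket/xgboost/train/"})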

AWS Certified Big Data -Speciality (BDS-C00) Exam Learning Path

Clearing the AWS Certified Big Data – Speciality (BDS-C00) was a great feeling. This was my third Speciality certification, and in terms of difficulty I would rate it between Network (the toughest) and Security (the simpler one).

Big Data in itself is a very vast topic, and with AWS services there is a lot to cover and know for the exam. If you have worked on Big Data technologies, including a bit of visualization and machine learning, it will be a great asset for passing this exam.

AWS Certified Big Data – Speciality (BDS-C00) exam basically validates

  • Implement core AWS Big Data services according to basic architectural best practices
  • Design and maintain Big Data
  • Leverage tools to automate Data Analysis

Refer to the AWS Certified Big Data – Speciality Exam Guide for details

                              AWS Certified Big Data – Speciality Domains

AWS Certified Big Data – Speciality (BDS-C00) Exam Summary

  • AWS Certified Big Data – Speciality exam, as its name suggests, covers a lot of Big Data concepts right from data transfer and collection techniques, storage, pre and post processing, analytics, visualization with the added concepts for data security at each layer.
  • One of the key tactics I follow when solving any AWS certification exam is to read the question and use paper and pencil to draw a rough architecture, focusing on the areas that need attention. Trust me, you will be able to eliminate two answers for sure, and then need to focus on only the other two. Read those two answers to check where they differ; that will help you reach the right answer, or at least have a 50% chance of getting it right.
  • Be sure to cover the following topics
    • Whitepapers and articles
    • Analytics
      • Make sure you know and cover all the services in depth, as 80% of the exam is focused on these topics
      • Elastic Map Reduce
        • Understand EMR in depth
        • Understand EMRFS (hint: Use Consistent view to make sure S3 objects referred by different applications are in sync)
        • Know EMR Best Practices (hint: start with many small nodes instead of a few large nodes)
        • Know the Hive metastore can be externally hosted using RDS, Aurora, or the AWS Glue Data Catalog
        • Also know the different technologies
          • Presto is a fast SQL query engine designed for interactive analytic queries over large datasets from multiple sources
          • D3.js is a JavaScript library for manipulating documents based on data. D3 helps you bring data to life using HTML, SVG, and CSS
          • Spark is a distributed processing framework and programming model that helps do machine learning, stream processing, or graph analytics using Amazon EMR clusters
          • Zeppelin/Jupyter as a notebook for interactive data exploration and provides open-source web application that can be used to create and share documents that contain live code, equations, visualizations, and narrative text
          • Phoenix is used for OLTP and operational analytics, allowing you to use standard SQL queries and JDBC APIs to work with an Apache HBase backing store
      • Kinesis
        • Understand Kinesis Data Streams and Kinesis Data Firehose in depth
        • Know Kinesis Data Streams vs Kinesis Firehose
          • Know Kinesis Data Streams is open ended on both producer and consumer. It supports KCL and works with Spark.
          • Know Kinesis Data Firehose is open ended for the producer only. Data is delivered to S3, Redshift, or Elasticsearch.
          • Kinesis Data Firehose works in batches with a minimum 60-second interval.
        • Understand Kinesis Encryption (hint: use server-side encryption or encrypt in the producer for data streams)
        • Know the difference between the KPL and the SDK (hint: PutRecords is synchronous, while the KPL supports batching) – a producer sketch follows this list
        • Kinesis Best Practices (hint: increase performance by increasing the shards)
      • Know ElasticSearch is a search service which supports indexing, full text search, faceting etc.
      • Redshift
        • Understand Redshift in depth
        • Understand Redshift Advance topics like Workload Management, Distribution Style, Sort key
        • Know Redshift Best Practices w.r.t selection of Distribution style, Sort key, COPY command which allows parallelism
        • Know Redshift views to control access to data.
      • Amazon Machine Learning
      • Know Data Pipeline for data transfer
      • QuickSight
      • Know Glue as the ETL tool
    • Security, Identity & Compliance
    • Management & Governance Tools
      • Understand AWS CloudWatch for Logs and Metrics. Also, CloudWatch Events provides more real-time alerts as compared to CloudTrail
    • Storage
    • Compute
      • Know EC2 access to services using an IAM Role, and Lambda using an Execution Role.
      • Lambda, esp. how to improve performance by batching, breaking functions into smaller ones, etc.
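
To make the KPL vs SDK hint above concrete, here is a minimal, hedged boto3 producer sketch (the stream name is a placeholder). PutRecords is a synchronous, batched call; the KPL adds asynchronous batching, aggregation, and retries on top:

    import boto3

    kinesis = boto3.client("kinesis")

    resp = kinesis.put_records(
        StreamName="example-stream",  # placeholder stream
        Records=[
            {"Data": b'{"event": 1}', "PartitionKey": "user-1"},
            {"Data": b'{"event": 2}', "PartitionKey": "user-2"},
        ],
    )

    # Failed records are reported in the response, not raised; check and retry them
    print("Failed records:", resp["FailedRecordCount"])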

AWS Certified Big Data – Speciality (BDS-C00) Exam Resources

AWS Systems Manager Overview

AWS Systems Manager

  • provides visibility and control of the infrastructure on AWS
  • helps to view operational data from multiple AWS services and automate operational tasks across AWS resources.
  • works with managed instances, which are configured for use with Systems Manager
  • helps configure and maintain managed instances.
  • helps maintain security and compliance by scanning the managed instances and reporting on (or taking corrective action on) any policy violations it detects.
  • supported machine types include EC2 instances, on-premises servers, and virtual machines (VMs), including VMs in other cloud environments. Supported operating systems include Windows Server, multiple distributions of Linux, and Raspbian.

Systems Manager Capabilities

Operations Management

Capabilities that help manage the AWS resources

  • Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices
  • AWS Personal Health Dashboard provides information about AWS Health events that can affect your account
  • OpsCenter provides a central location where operations engineers and IT professionals can view, investigate, and resolve operational work items (OpsItems) related to AWS resources

Actions & Change

Capabilities for taking action against or changing the AWS resources

Systems Manager Automation

  • helps automate common maintenance and deployment tasks, e.g. create and update AMIs, apply driver and agent updates, reset passwords on Windows instances, reset SSH keys on Linux instances, and apply OS patches or application updates.

Maintenance Windows

  • helps set up recurring schedules for managed instances to run administrative tasks like installing patches and updates without interrupting business-critical operations.

Instances & Nodes

Capabilities for managing the EC2 instances, on-premises servers and virtual machines (VMs) in the hybrid environment, and other types of AWS resources (nodes)

Systems Manager Configuration Compliance

  • helps scan fleet of managed instances for patch compliance and configuration inconsistencies.
  • helps collect and aggregate data from multiple AWS accounts and Regions, and then drill down into specific resources that aren’t compliant.
  • displays, by default, compliance data about Patch Manager patching and State Manager associations, but can be customized

Session Manager

  • helps manage EC2 instances through an interactive one-click browser-based shell or through the AWS CLI.
  • provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
  • helps comply with corporate policies that require controlled access to instances, strict security practices, and fully auditable logs with instance access details, while still providing end users with simple one-click cross-platform access to the EC2 instances.

Systems Manager Run Command

  • helps to remotely and securely manage the configuration of the managed instances at scale.
  • helps perform on-demand changes like updating applications or running Linux shell scripts and Windows PowerShell commands on a target set of dozens or hundreds of instances.
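
A hedged boto3 sketch of Run Command using the stock AWS-RunShellScript document (the instance ID and commands are placeholders):

    import boto3

    ssm = boto3.client("ssm")

    # Run shell commands on a managed instance (placeholder instance ID)
    resp = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["yum -y update"]},
    )
    command_id = resp["Command"]["CommandId"]

    # Fetch the per-instance result once the command has had time to run
    result = ssm.get_command_invocation(
        CommandId=command_id,
        InstanceId="i-0123456789abcdef0",
    )
    print(result["Status"])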

Patch Manager

  • helps automate the process of patching managed instances with both security-related and other types of updates.
  • helps apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for Microsoft applications.)
  • enables scanning of instances for missing patches and applies them individually or to large groups of instances by using EC2 instance tags.
  • uses patch baselines, which can include rules for auto-approving patches within days of their release, as well as a list of approved and rejected patches.
  • helps install security patches on a regular basis by scheduling patching to run as a Systems Manager maintenance window task.

Systems Manager Inventory

  • provides visibility into your Amazon EC2 and on-premises computing environment
  • collects metadata from the managed instances about applications, files, components, patches, and more

Systems Manager State Manager

  • helps automate the process of keeping the managed instances in a defined state.
  • helps ensure that the instances are bootstrapped with specific software at startup, joined to a Windows domain (Windows instances only), or patched with specific software updates.

Shared Resources

Capabilities for managing and configuring the AWS resources

Systems Manager document (SSM document)

  • defines the actions that Systems Manager performs.
  • SSM document types include 
    • Command documents, which are used by State Manager and Run Command, and 
    • Automation documents, which are used by Systems Manager Automation.

Parameter Store

  • provides secure, hierarchical storage for configuration data and secrets management.
  • can store data such as passwords, database strings, and license codes as parameter values.
  • supports values as plain text or encrypted data, referenced by using the specified unique name
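
A minimal boto3 sketch (the parameter name and value are placeholders) that stores a secret as an encrypted SecureString and reads it back decrypted:

    import boto3

    ssm = boto3.client("ssm")

    # Store a secret encrypted with the account's default KMS key
    ssm.put_parameter(
        Name="/myapp/db/password",   # placeholder hierarchical name
        Value="s3cr3t",
        Type="SecureString",
        Overwrite=True,
    )

    # Read it back, asking Parameter Store to decrypt the value
    param = ssm.get_parameter(Name="/myapp/db/password", WithDecryption=True)
    print(param["Parameter"]["Value"])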

Systems Manager Agent

  • is software that can be installed and configured on an EC2 instance, an on-premises server, or a virtual machine (VM)
  • makes it possible for Systems Manager to update, manage, and configure these resources
  • must be installed on each instance to use with Systems Manager
  • comes preinstalled on many Amazon Machine Images (AMIs), while it must be installed manually on other AMIs, and on on-premises servers and virtual machines for a hybrid environment

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which of the following tools from AWS allows the automatic collection of software inventory from EC2 instances and helps apply OS patches?
    1. AWS CodeDeploy
    2. Systems Manager
    3. EC2 AMIs
    4. AWS CodePipeline
  2. A Developer is writing several Lambda functions that each access data in a common RDS DB instance. They must share a connection string that contains the database credentials, which are a secret. A company policy requires that all secrets be stored encrypted. Which solution will minimize the amount of code the Developer must write?
    1. Use common DynamoDB table to store settings
    2. Use AWS Lambda environment variables
    3. Use Systems Manager Parameter Store secure strings
    4. Use a table in a separate RDS database
  3. A company has a fleet of EC2 instances and needs to remotely execute scripts for all of the instances. Which Amazon EC2 systems Manager feature allows this?
    1. Systems Manager Automation
    2. Systems Manager Run Command
    3. Systems Manager Parameter Store
    4. Systems Manager Inventory
  4. As part of a compliance check, it was found that EC2 instances launched by the deployment team were not in compliance with the latest security patches. The team had tagged all the resources. Which AWS service can help make the instances compliant?
    1. AWS Inspector
    2. AWS GuardDuty
    3. AWS Systems Manager
    4. AWS Shield

AWS Organizations Service Control Policies – Certification

AWS Organizations Service Control Policies

  • are one type of policy that help manage the organization.
  • offer central control over the maximum available permissions for all accounts in the organization, ensuring member accounts stay within the organization’s access control guidelines
  • are available only in an organization that has all features enabled
  • are NOT sufficient for granting access to the accounts in the organization.
  • define a guardrail for what actions accounts within the organization root or OU can perform, but IAM policies still need to be attached to the users and roles in the organization’s accounts to grant them permissions
  • with an SCP attached to member accounts, identity-based and resource-based policies grant permissions to entities only if those policies and the SCP allow the action

Effects on Permissions

  • SCP never grants permissions
  • limits permissions for entities in member accounts, including each AWS account root user
  • does not limit actions performed by the master account.
  • does not affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can’t be restricted by SCPs.
  • affect only principals that are managed by accounts that are part of the organization. They don’t affect users or roles from accounts outside the organization
  • Users and roles must still be granted permissions with appropriate IAM permission policies. A user without any IAM permission policies has no access at all, even if the applicable SCPs allow all services and all actions.

Strategies for Using SCPs

  • By default, an SCP named FullAWSAccess is attached to every root, OU, and account, which allows all actions and all services.
  • Blacklist Strategy
    • actions are allowed by default, and specify what services and actions are prohibited
    • blacklist permissions using deny statements can be assigned in combination with the default FullAWSAccess SCP
    • using deny statements in SCPs requires less maintenance, because they don’t need to be updated when AWS adds new services.
    • deny statements usually use less space, thus making it easier to stay within SCP size limits.
  • Whitelist Strategy
    • actions are prohibited by default, and you specify what services and actions are allowed
    • whitelist permissions can be assigned by removing the default FullAWSAccess SCP
    • and attaching an SCP that explicitly permits only the allowed services and actions (see the sketch below)
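
As a hedged sketch of the blacklist strategy, a deny-list SCP can be created and attached with boto3 (the denied service and the target OU ID below are illustrative placeholders):

    import json
    import boto3

    orgs = boto3.client("organizations")

    # Everything stays allowed by the default FullAWSAccess SCP except the
    # explicitly denied actions (Redshift here is an illustrative choice)
    deny_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Deny", "Action": "redshift:*", "Resource": "*"}
        ],
    }

    policy = orgs.create_policy(
        Name="DenyRedshift",
        Description="Block all Redshift actions",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(deny_policy),
    )

    # Attach it to an OU (placeholder ID) so it applies to the member accounts within
    orgs.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-exam-ple12345",
    )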

Testing Effects of SCPs

  • don’t attach SCPs to the root of the organization without thoroughly testing the impact that the policy has on accounts.
  • Create an OU that the accounts can be moved into one at a time, or at least in small numbers, to ensure that users are not inadvertently locked out of key services.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Your company is planning on setting up multiple accounts in AWS. The IT Security department has a requirement to ensure that certain services and actions are not allowed across all accounts. How would the system admin achieve this in the most EFFECTIVE way possible?
    1. Create a common IAM policy that can be applied across all accounts
    2. Create an IAM policy per account and apply them accordingly
    3. Deny the services to be used across accounts by contacting AWS support
    4. Use AWS Organizations and Service Control Policies
  2. You are in the process of implementing AWS Organizations for your company. At your previous company, you saw an Organizations implementation go bad when an SCP (Service Control Policy) was applied at the root of the organization before being thoroughly tested. In what way can an SCP be properly tested and implemented?
    1. Back up your entire Organization to S3, and roll back and restore if something goes wrong
    2. The SCP must be verified with AWS before it is implemented to avoid any problems.
    3. Mirror your Organizational Unit in another region. Apply the SCP and test it. Once testing is complete, attach the SCP to the root of your organization.
    4. Create an Organizational Unit (OU). Attach the SCP to this new OU. Move your accounts in one at a time to ensure that you don’t inadvertently lock users out of key services.

AWS Cloud Migration – Certification

AWS Cloud Migration

Some of the key drivers for moving to the cloud are

  • Operational Costs – Key components of operational costs are unit price of infrastructure, the ability to match supply and demand, finding a pathway to optionality, employing an elastic cost base, and transparency
  • Workforce Productivity – getting up and running in seconds, with a wide variety of services readily available.
  • Cost Avoidance – eliminating the need for hardware refresh programs and constant maintenance programs
  • Operational Resilience – increases resilience and thereby reducing organization’s risk profile
  • Business Agility – react to market conditions more quickly 

Cloud Stages of Adoption

PROJECT

  • In the project phase, execute projects to get familiar with the cloud and experience its benefits.

FOUNDATION

  • After experiencing the benefits of cloud, build the foundation to scale the cloud adoption.
  • This includes creating a landing zone (a pre-configured, secure, multi-account AWS environment), Cloud Center of Excellence (CCoE), operations model, as well as assuring security and compliance readiness.

MIGRATION

  • Migrate existing applications including mission-critical applications or entire data centers to the cloud as you scale your adoption across a growing portion of the IT portfolio. 

REINVENTION

  • Now that the operations are in the cloud, focus on reinvention by taking advantage of the flexibility and capabilities of AWS to transform business by speeding time to market and increasing the attention on innovation.

Migration Process

Phase 1: Migration Preparation and Business Planning

  • Determine the right objectives and begin to get an idea of the types of benefits you will see.
  • Starts with some foundational experience and developing a preliminary business case for a migration, which requires taking objectives into account, along with the age and architecture of the existing applications, and their constraints.

Phase 2: Portfolio Discovery and Planning

  • Understand the IT portfolio, the dependencies between applications, and begin to consider what types of migration strategies needed to meet the business case objectives.
  • With the portfolio discovery and migration approach, you are in a good position to build a full business case.

Phase 3 & Phase 4: Designing, Migrating, and Validating Application

  • Move focus from the portfolio level to the individual application level and design, migrate, and validate each application.
  • Each application is designed, migrated, and validated according to one of the six common application strategies (“The 6 R’s”).
  • Once you have some foundational experience from migrating a few apps and a plan in place that the organization can get behind – it’s time to accelerate the migration and achieve scale.
  • AWS provides migration services that help move applications and data from on-premises to AWS – AWS Server Migration Service (SMS) and AWS Database Migration Service (DMS)

Phase 5: Operate

  • Once applications are migrated, iterate on the new foundation, turn off old systems, and constantly iterate toward a modern operating model.
  • Operating model becomes an evergreen set of people, process, and technology that constantly improves as you migrate more applications.

Application Migration Strategies

Migration strategies depend upon what is in your environment and what is suitable for the portfolio, taking into account the business and technical requirements.

Below are the six common migration strategies employed, which build upon “The 5 R’s” that Gartner outlined in 2011.

1. Rehost (“lift and shift”)

  • Moving your application as is to the Cloud.
  • helps to quickly implement the migration and scale to meet a business case
  • provides a better opportunity to re-architect applications later, once they are already running in the cloud, with the organization having already developed cloud skills and the application, along with its data, already migrated and handling traffic.
  • Rehosting can be automated with tools such as AWS Server Migration Service, or can be done manually

2. Replatform (“lift, tinker and shift”)

  • Moving your application to the Cloud with optimizations, without any major changes.
  • Replatform helps achieve some tangible benefits without changing the core architecture of the application, e.g. using RDS for the database or Elastic Beanstalk for the application.

3. Repurchase (“drop and shop”)

  • Dropping the application and moving to a completely new solution.
  • More of a Buy in a Build vs Buy model; might be expensive in the short term but offers a faster time to market.
  • Move to a different product, which likely means the organization is willing to change the existing licensing model

4. Refactor / Re-architect

  • Moving the application to Cloud, with major changes.
  • More of a Build in a Build vs Buy model, and would take time.
  • driven by a strong business need to add features, scale, or performance with agility and improvement in business continuity that would otherwise be difficult to achieve in the application’s existing environment.

5. Retire

  • Decommission applications that are no longer needed.
  • Identifying IT assets that are no longer useful and can be turned off will help boost your business case and direct your attention towards maintaining the resources that are widely used.

6. Retain

  • Keep the applications as is in the current environment
  • Retain portions of the IT portfolio that have tight dependencies, or are difficult to migrate, not a priority, or not yet ready for migration

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company is planning the migration of several lab environments used for software testing. An assortment of custom tooling is used to manage the test runs for each lab. The labs use immutable infrastructure for the software test runs, and the results are stored in a highly available SQL database cluster. Although completely rewriting the custom tooling is out of scope for the migration project, the company would like to optimize workloads during the migration. Which application migration strategy meets this requirement?
    1. Re-host
    2. Re-platform
    3. Re-factor/re-architect
    4. Retire

AWS CloudFormation Best Practices – Certification

AWS CloudFormation Best Practices

  • AWS CloudFormation Best Practices are based on real-world experience from current AWS CloudFormation customers
  • AWS CloudFormation Best Practices help provide guidelines on
    • how to plan and organize stacks,
    • create templates that describe resources and the software applications that run on them,
    • and manage stacks and their resources

Required Mainly for Developer, SysOps Associate & DevOps Professional Exam

Planning and Organizing

Organize Your Stacks By Lifecycle and Ownership

  • Use the lifecycle and ownership of the AWS resources to help you decide what resources should go in each stack.
  • By grouping resources with common lifecycles and ownership, owners can make changes to their set of resources by using their own process and schedule without affecting other resources.
  • For example, consider an application using Web and Database instances. The Web and Database tiers have different lifecycles, and ownership usually lies with different teams. Maintaining both in a single stack would need communication and coordination between the teams, introducing complexity. It would be best to have separate stacks owned by the respective teams, so that they can update their resources without impacting each other’s stack.

Use Cross-Stack References to Export Shared Resources

  • With multiple stacks, there is usually a need to refer to values and resources across stacks.
  • Use cross-stack references to export resources from a stack so that other stacks can use them.
  • Stacks can use the exported resources by calling them using the Fn::ImportValue function.
  • For example, a Web stack would always need resources from the Network stack, like the VPC and subnets (see the sketch below).
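
A hedged sketch of the export/import mechanics (the template bodies and stack names are illustrative): the Network stack exports its VPC ID, and the Web stack imports it with Fn::ImportValue instead of hard-coding it:

    import boto3

    cfn = boto3.client("cloudformation")

    # Network stack exports its VPC ID under a well-known name
    network_template = """
    Resources:
      VPC:
        Type: AWS::EC2::VPC
        Properties:
          CidrBlock: 10.0.0.0/16
    Outputs:
      VpcId:
        Value: !Ref VPC
        Export:
          Name: Network-VpcId
    """

    # Web stack imports the exported value instead of hard-coding it
    web_template = """
    Resources:
      WebSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: web tier
          VpcId: !ImportValue Network-VpcId
    """

    cfn.create_stack(StackName="network", TemplateBody=network_template)
    # ...wait for the network stack to complete before creating the web stack
    cfn.create_stack(StackName="web", TemplateBody=web_template)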

Use IAM to Control Access

  • Use IAM to control access to
    • what AWS CloudFormation actions users can perform, such as viewing stack templates, creating stacks, or deleting stacks
    • what actions CloudFormation can perform on resources on their behalf
  • Remember, having access to CloudFormation does not provide a user with access to the AWS resources; that needs to be provided separately.
  • To separate permissions between a user and the AWS CloudFormation service, use a service role. AWS CloudFormation uses the service role’s policy to make calls instead of the user’s policy.

Verify Quotas for All Resource Types

  • Ensure that stack can create all the required resources without hitting the AWS account limits.

Reuse Templates to Replicate Stacks in Multiple Environments

  • Reuse templates to replicate infrastructure in multiple environments
  • Use parameters, mappings, and conditions sections to customize and make templates reusable
  • e.g. creating the same stack in development, staging, and production environments with different instance types, instance counts, etc.

Use Nested Stacks to Reuse Common Template Patterns

  • Nested stacks are stacks that create other stacks.
  • Nested stacks separate out common patterns and components into dedicated templates, preventing copy-pasting across stacks.
  • e.g. a standard load balancer configuration can be created as a nested stack and simply used by other stacks

Creating templates

Do Not Embed Credentials in Your Templates

  • Use input parameters to pass in sensitive information, such as a DB password, whenever you create or update a stack.
  • Use the NoEcho property to obfuscate the parameter value.

Use AWS-Specific Parameter Types

  • For existing AWS-specific values, such as existing Virtual Private Cloud IDs or an EC2 key pair name, use AWS-specific parameter types
  • AWS CloudFormation can quickly validate values for AWS-specific parameter types before creating your stack.

Use Parameter Constraints

  • Use Parameter constraints to describe allowed input values so that CloudFormation catches any invalid values before creating a stack.
  • For example, constraints on a database user name with minimum and maximum lengths

Use AWS::CloudFormation::Init to Deploy Software Applications on Amazon EC2 Instances

  • Use AWS::CloudFormation::Init resource and the cfn-init helper script to install and configure software applications on EC2 instances

Validate Templates Before Using Them

  • Validate templates before creating or updating a stack
  • Validating a template helps catch syntax and some semantic errors, such as circular dependencies, before AWS CloudFormation creates any resources.
  • During validation, AWS CloudFormation first checks if the template is valid JSON or a valid YAML. If both checks fail, AWS CloudFormation returns a template validation error.
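
A hedged sketch tying the template-authoring practices above together – a NoEcho parameter, an AWS-specific parameter type, and length constraints – validated with boto3 before any stack is created (the template body is illustrative):

    import boto3

    cfn = boto3.client("cloudformation")

    template = """
    Parameters:
      DBPassword:
        Type: String
        NoEcho: true        # obfuscate the value in console and API output
        MinLength: 8        # parameter constraints, checked before creation
        MaxLength: 41
      KeyName:
        Type: AWS::EC2::KeyPair::KeyName   # AWS-specific type, validated early
    Resources:
      Instance:
        Type: AWS::EC2::Instance
        Properties:
          ImageId: ami-12345678            # placeholder AMI ID
          KeyName: !Ref KeyName
    """

    # Catches syntax and some semantic errors before any resources are created
    cfn.validate_template(TemplateBody=template)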

Managing stacks

Manage All Stack Resources Through AWS CloudFormation

  • After launching the stack, any further updates should be done through CloudFormation only.
  • Doing changes outside the stack can create a mismatch between the stack’s template and the current state of the stack resources, which can cause errors if you update or delete the stack.

Create Change Sets Before Updating Your Stacks

  • Change sets provide a preview of how the proposed changes to a stack might impact the running resources before you implement them (see the sketch below)
  • CloudFormation doesn’t make any changes to the stack until you execute the change set, allowing you to decide whether to proceed with the proposed changes or create another change set.
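
A hedged boto3 sketch of the change-set workflow (the stack name, change set name, and template file are placeholders):

    import boto3

    cfn = boto3.client("cloudformation")

    # Propose an update without applying it
    cfn.create_change_set(
        StackName="web",
        ChangeSetName="add-autoscaling",
        TemplateBody=open("web-updated.yaml").read(),  # placeholder template
        ChangeSetType="UPDATE",
    )

    # Preview exactly which resources would be added, modified, or replaced
    preview = cfn.describe_change_set(ChangeSetName="add-autoscaling", StackName="web")
    for change in preview["Changes"]:
        rc = change["ResourceChange"]
        print(rc["Action"], rc["LogicalResourceId"])

    # Nothing changes until the change set is explicitly executed
    cfn.execute_change_set(ChangeSetName="add-autoscaling", StackName="web")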

Use Stack Policies

  • Stack policies help protect critical stack resources from unintentional updates that could cause resources to be interrupted or even replaced
  • During a stack update, you must explicitly specify the protected resources that you want to update; otherwise, no changes are made to protected resources

Use AWS CloudTrail to Log AWS CloudFormation Calls

  • AWS CloudTrail tracks anyone making AWS CloudFormation API calls in the AWS account.
  • API calls are logged whenever anyone uses the AWS CloudFormation API, the AWS CloudFormation console, a back-end console, or AWS CloudFormation AWS CLI commands.
  • Enable logging and specify an Amazon S3 bucket to store the logs.

Use Code Reviews and Revision Controls to Manage Your Templates

  • Using code reviews and revision controls help track changes between different versions of your templates and changes to stack resources
  • Maintaining history can help revert the stack to a certain version of the template.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company has deployed their application using CloudFormation. They want to update their stack. However, they want to understand how the changes will affect running resources before implementing the update. How can the company achieve this?
    1. Use CloudFormation Validate Stack feature
    2. Use CloudFormation Dry Run feature
    3. Use CloudFormation Stage feature
    4. Use CloudFormation Change Sets feature
  2. You have multiple similar three-tier applications and have decided to use CloudFormation to maintain version control and achieve automation. How can you best use CloudFormation to keep everything agile and maintain multiple environments while keeping cost down?
    1. Create multiple templates in one CloudFormation stack.
    2. Combine all resources into one template for version control and automation.
    3. Use CloudFormation custom resources to handle dependencies between stacks
    4. Create separate templates based on functionality, create nested stacks with CloudFormation.
  3. You are working as an AWS DevOps admin for your company. You are in charge of building the infrastructure for the company’s development teams using CloudFormation. The template will include building the VPC and networking components, installing a LAMP stack, and securing the created resources. As per AWS best practices, what is the best way to design this template?
    1. Create a single CloudFormation template to create all the resources since it would be easier from the maintenance perspective.
    2. Create multiple CloudFormation templates based on the number of VPC’s in the environment.
    3. Create multiple CloudFormation templates based on the number of development groups in the environment.
    4. Create multiple CloudFormation templates for each set of logical resources, one for networking, and the other for LAMP stack creation.

AWS X-Ray – Certification

AWS X-Ray

  • AWS X-Ray helps developers analyze and debug production, distributed applications, e.g. those built using a microservices or serverless (Lambda) architecture
  • X-Ray provides an end-to-end view of requests as they travel through the application, and shows a map of the application’s underlying components.
  • X-Ray helps to understand how the application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors.
  • X-Ray can be used to analyze applications both in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
  • X-Ray can be used with distributed applications of any size to trace and debug both synchronous requests and asynchronous events.
  • X-Ray can be used to track requests flowing through applications or services across multiple regions. X-Ray data is stored locally in the region where it is processed, and customers can build a solution over it to combine the data
  • Trace data sent to X-Ray is generally available for retrieval and filtering within 30 seconds of it being received by the service.
  • X-Ray stores trace data for the last 30 days. This enables you to query trace data going back 30 days.
  • Integration
    • X-Ray integrates with applications running on EC2, ECS, Lambda, and Elastic Beanstalk.
    • X-Ray SDK automatically captures metadata for API calls made to AWS services using the AWS SDK
    • X-Ray SDK provides add-ons for MySQL and PostgreSQL drivers.
    • For Elastic Beanstalk, include the language-specific X-Ray libraries in your application code.
    • For applications running on other AWS services, such as EC2 or ECS, install the X-Ray agent and instrument the application code

X-Ray Core Concepts

  • Trace
    • An X-Ray trace is a set of data points that share the same trace ID.
    • Trace helps track the request, which is assigned a unique trace id, while it navigates through services
    • The piece of information relayed by each service in the application to X-Ray is a segment, and a trace is a collection of segments.
  • Segment
    • An X-Ray segment encapsulates all the data points for a single component of the distributed application, e.g. an authorization component
    • Segments include system-defined and user-defined data in the form of annotations, and are composed of one or more sub-segments that represent remote calls made from the service, e.g. a database call and its result within the overall request/response
  • Annotation
    • An X-Ray annotation is system-defined or user-defined data
    • Annotation is associated with a segment and a segment can contain multiple annotations.
    • System-defined annotations include data added to the segment by AWS services
    • User-defined annotations are metadata added to a segment by a developer
  • Errors
    • X-Ray errors are system annotations associated with a segment for a call that results in an error response.
    • An error includes the error message, stack trace, and any additional information, e.g. a version, to associate the error with a source file.
  • Sampling
    • X-Ray collects data for a significant number of requests, instead of every request sent to an application, to stay performant and cost-effective
    • X-Ray should not be used as an audit or compliance tool because it does not guarantee data completeness.
  • X-Ray agent
    • X-Ray agent helps collect data from log files and sends them to the X-Ray service for aggregation, analysis, and storage.
    • Agent makes it easier to send data to the X-Ray service, instead of using the APIs directly.
    • Agent is available for Amazon Linux AMI, Red Hat Enterprise Linux (RHEL), and Windows Server 2012 R2 or later operating systems.
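
A hedged sketch using the X-Ray Python SDK (aws-xray-sdk) to record a segment, a sub-segment for a downstream call, and a user-defined annotation; it assumes the X-Ray daemon/agent is running locally to relay the data:

    from aws_xray_sdk.core import xray_recorder

    # Open a segment for one component of the request (name is illustrative)
    xray_recorder.begin_segment("checkout-service")

    # A sub-segment represents a remote call, e.g. a database query
    xray_recorder.begin_subsegment("query-orders")
    xray_recorder.put_annotation("customer_tier", "premium")  # user-defined annotation
    xray_recorder.end_subsegment()

    xray_recorder.end_segment()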

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company is facing performance issues with their microservices architecture deployed on AWS. Which service can help them debug and analyze the issue? [CCP]
    1. AWS Inspector
    2. CodeDeploy
    3. X-Ray
    4. AWS Config

AWS Network Load Balancer – NLB

AWS Network Load Balancer – NLB

  • Network Load Balancer operates at the connection level (Layer 4), routing connections to targets – EC2 instances, containers and IP addresses based on IP protocol data.
  • Network Load Balancer is suited for load balancing of TCP traffic
  • Network Load Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies.
  • Network Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone.
  • Network Load Balancer also supports TLS termination, preserves the source IP of the clients, and provides stable IP support and Zonal isolation.
  • NLB supports long-running connections that are very useful for WebSocket type applications.
  • NLB is integrated with other AWS services such as Auto Scaling, EC2 Container Service (ECS), and CloudFormation.
  • NLB supports connections from clients over VPC peering, AWS managed VPN, and third-party VPN solutions.
  • For TCP traffic,
    • the load balancer selects a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP address, destination port, and TCP sequence number.
    • TCP connections from a client have different source ports and sequence numbers, and can be routed to different targets.
    • Each individual TCP connection is routed to a single target for the life of the connection.
  • For UDP traffic,
    • the load balancer selects a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP address, and destination port.
    • A UDP flow has the same source and destination, so it is consistently routed to a single target throughout its lifetime.
    • Different UDP flows have different source IP addresses and ports, so they can be routed to different targets.
  • back-end server authentication is not supported
  • session stickiness is not supported

Refer Blog Post @ Classic Load Balancer vs Application Load Balancer vs Network Load Balancer
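
A hedged boto3 sketch creating an NLB, a TCP target group, and a listener (the subnet, VPC, and names are placeholders):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Internet-facing NLB with one subnet per AZ (placeholder subnet IDs)
    lb = elbv2.create_load_balancer(
        Name="example-nlb",
        Type="network",
        Scheme="internet-facing",
        Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    )
    lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

    # Layer 4 target group; TargetType may be "instance" or "ip" (placeholder VPC)
    tg = elbv2.create_target_group(
        Name="example-tcp-targets",
        Protocol="TCP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",
        TargetType="instance",
    )
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

    # TCP listener forwarding connections to the target group
    elbv2.create_listener(
        LoadBalancerArn=lb_arn,
        Protocol="TCP",
        Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )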

Network Load Balancer Features

Connection-based Load Balancing

  • Allows load balancing of TCP traffic, routing connections to targets – EC2 instances, microservices, containers, and IP addresses.

High Availability

  • is highly available.
  • accepts incoming traffic from clients and distributes this traffic across the targets within the same Availability Zone.
  • monitors the health of its registered targets and routes the traffic only to healthy targets
  • if a health check fails and an unhealthy target is detected, it stops routing traffic to that target and reroutes traffic to remaining healthy targets.
  • if configured with multiple AZs and if all the targets in a single AZ fail, it routes traffic to healthy targets in the other AZs

High Throughput

  • is designed to handle traffic as it grows and can load balance millions of requests/sec.
  • can also handle sudden volatile traffic patterns.

Low Latency

  • offers extremely low latencies for latency-sensitive applications.

Cross Zone Load Balancing

  • cross-zone load balancing can be enabled only after creating the NLB
  • is disabled by default, and charges apply for inter-AZ traffic.

Load Balancing using IP addresses as Targets

  • allows load balancing of any application hosted in AWS or on-premises using IP addresses of the application backends as targets.
  • allows load balancing to an application backend hosted on any IP address and any interface on an instance.
  • the ability to load balance across AWS and on-premises resources helps with migrate-to-cloud, burst-to-cloud, or failover-to-cloud scenarios
  • applications hosted in on-premises locations can be used as targets over a Direct Connect connection, as can EC2-Classic instances (using ClassicLink).

Preserve source IP address

  • preserves client side source IP allowing the back-end to see client IP address
  • Target groups can be created with target type as instance ID or IP address.
    • If targets are registered by instance ID, the source IP addresses of the clients are preserved and provided to the applications.
    • If targets are registered by IP address, the source IP addresses are the private IP addresses of the load balancer nodes.

Static IP support

  • automatically provides a static IP per Availability Zone (subnet) that can be used by applications as the front-end IP of the load balancer.
  • Elastic Load Balancing creates a network interface for each enabled Availability Zone. Each load balancer node in the AZ uses this network interface to get a static IP address.
  • Internet-facing load balancer can optionally associate one Elastic IP address per subnet.

Elastic IP support

  • an Elastic IP per Availability Zone (subnet) can also be assigned, optionally, thereby providing a fixed IP.

Health Checks

  • supports both network and application target health checks.
  • Network-level health check
    • is based on the overall response of the underlying target (instance or a container) to normal traffic.
    • target is marked unavailable if it is slow or unable to respond to new connection requests
  • Application-level health check
    • is based on a specific URL on a given target to test the application health deeper

DNS Fail-over

  • integrates with Route 53
  • Route 53 will direct traffic to load balancer nodes in other AZs if there are no healthy targets behind the NLB or if the NLB itself is unhealthy
  • if the NLB is unresponsive, Route 53 will remove the unavailable load balancer IP address from service and direct traffic to an alternate Network Load Balancer in another region.

Integration with AWS Services

  • is integrated with other AWS services such as Auto Scaling, EC2 Container Service (ECS), CloudFormation, CodeDeploy, and AWS Config.

Long-lived TCP Connections

  • supports long-lived TCP connections ideal for WebSocket type of applications

Central API Support

  • uses the same API as Application Load Balancer.
  • enables you to work with target groups, health checks, and load balance across multiple ports on the same EC2 instance to support containerized applications.

Robust Monitoring and Auditing

  • integrated with CloudWatch to report Network Load Balancer metrics.
  • CloudWatch provides metrics such as Active Flow count, Healthy Host Count, New Flow Count, Processed bytes, and more.
  • integrated with CloudTrail to track API calls to the NLB

Enhanced Logging

  • use the Flow Logs feature to record all requests sent to the load balancer.
  • Flow Logs capture information about the IP traffic going to and from network interfaces in the VPC
  • Flow log data is stored using CloudWatch Logs

Zonal Isolation

  • is designed for application architectures in a single zone.
  • can be enabled in a single AZ to support architectures that require zonal isolation
  • automatically fails over to other healthy AZs if something fails in an AZ
  • it is recommended to configure the load balancer and targets in multiple AZs to achieve high availability

Advantages over Classic Load Balancer

  • Ability to handle volatile workloads and scale to millions of requests per second, without the need for pre-warming
  • Support for static IP/Elastic IP addresses for the load balancer
  • Support for registering targets by IP address, including targets outside the VPC (on-premises) for the load balancer.
  • Support for routing requests to multiple applications on a single EC2 instance. Single instance or IP address can be registered with the same target group using multiple ports.
  • Support for containerized applications. Using Dynamic port mapping, ECS can select an unused port when scheduling a task and register the task with a target group using this port.
  • Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables scaling each service dynamically based on demand

Network Load Balancer Pricing

  • charged for each hour or partial hour that an NLB is running, plus the number of Load Balancer Capacity Units (LCUs) used per hour.
  • An LCU is the metric for determining NLB pricing
  • An LCU measures the dimensions on which the Network Load Balancer processes your traffic (new connections/flows, active connections/flows, and processed bytes); you are billed on the dimension with the highest usage in the hour, as the worked example below shows.
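
A worked example of the highest-dimension billing. The traffic numbers are hypothetical; the per-dimension LCU sizes are the published TCP figures at the time of writing (800 new flows/second, 100,000 active flows, 1 GB/hour of processed bytes) and may change, so treat them as illustrative:

    # Hypothetical hour of traffic through an NLB with TCP listeners.
    new_flows_lcus = 1600 / 800            # 2.0 LCUs on the new-flows dimension
    active_flows_lcus = 250_000 / 100_000  # 2.5 LCUs on the active-flows dimension
    bandwidth_lcus = 3 / 1                 # 3.0 LCUs on the processed-bytes dimension

    # Only the dimension with the highest usage is billed for the hour.
    billed_lcus = max(new_flows_lcus, active_flows_lcus, bandwidth_lcus)
    print(billed_lcus)  # 3.0 -> multiplied by the regional per-LCU-hour rate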

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company wants to use a load balancer for their application. However, the company wants to forward the requests without any header modification. Which service should the company use?
    1. Classic Load Balancer
    2. Network Load Balancer
    3. Application Load Balancer
    4. Use Route 53
  2. A company is hosting an application in AWS for third party access. The third party needs to whitelist the application based on the IP address. Which AWS service can the company use to whitelist the IP address?
    1. AWS Application Load Balancer
    2. AWS Classic Load balancer
    3. AWS Network Load Balancer
    4. AWS Route 53

References

AWS Documentation – ELB_Network_Load_Balancer

AWS Organizations – Certification

AWS Organizations

  • AWS Organizations is an account management service that enables consolidating multiple AWS accounts into an organization that can be created and centrally managed.
  • AWS Organizations includes consolidated billing and account management capabilities that help better meet the budgetary, security, and compliance needs of a business.
  • An administrator of an organization can create new accounts in the organization and invite existing accounts to join it.
  • AWS Organizations enables you to
    • Centrally manage policies across multiple AWS accounts
    • Control access to AWS services
    • Automate AWS account creation and management (see the sketch below)
    • Consolidate billing across multiple AWS accounts
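
A minimal boto3 sketch of the automation piece (boto3 and credentials assumed; the account name and email are hypothetical; run from the account that will become the master account):

    import boto3

    org = boto3.client("organizations")

    # Create the organization with all features, not just consolidated billing.
    org.create_organization(FeatureSet="ALL")

    # Account creation is asynchronous; poll the returned request ID.
    status = org.create_account(
        Email="dev-team@example.com",  # hypothetical
        AccountName="dev",
    )["CreateAccountStatus"]
    state = org.describe_create_account_status(
        CreateAccountRequestId=status["Id"],
    )["CreateAccountStatus"]["State"]
    print(state)  # IN_PROGRESS -> SUCCEEDED or FAILED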


AWS Organizations Features

Centralized management of all of your AWS accounts

  • Existing accounts can be combined into, or new accounts created within, an organization, enabling them to be managed centrally
  • Policies can be attached that affect some or all of the accounts

Consolidated billing for all member accounts

  • Consolidated billing is a feature of AWS Organizations.
  • Master account of the organization can be used to consolidate and pay for all member accounts.

Hierarchical grouping of accounts to meet budgetary, security, or compliance needs

  • Accounts can be grouped into organizational units (OUs), and each OU can have different access policies attached.
  • OUs can also be nested to a depth of five levels, providing flexibility in how you structure your account groups (see the sketch below).
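
A minimal boto3 sketch of nesting OUs and moving an account into one (the OU names and account ID are hypothetical):

    import boto3

    org = boto3.client("organizations")

    # Find the single root, then build a two-level OU hierarchy under it.
    root_id = org.list_roots()["Roots"][0]["Id"]
    prod = org.create_organizational_unit(ParentId=root_id, Name="prod")
    team = org.create_organizational_unit(
        ParentId=prod["OrganizationalUnit"]["Id"], Name="team-a")

    # Move a member account from the root into the nested OU.
    org.move_account(
        AccountId="111122223333",  # hypothetical member account
        SourceParentId=root_id,
        DestinationParentId=team["OrganizationalUnit"]["Id"],
    )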

Control over AWS services and API actions that each account can access

  • As an administrator of the master account of an organization, you can restrict which AWS services and individual API actions the users and roles in each member account can access
  • Organization permissions overrule account permissions.
  • This restriction even overrides the administrators of member accounts in the organization.
  • When AWS Organizations blocks access to a service or API action for a member account, a user or role in that account can’t access any prohibited service or API action, even if an administrator of a member account explicitly grants such permissions in an IAM policy.

Integration and support for AWS IAM

  • IAM provides granular control over users and roles in individual accounts.
  • AWS Organizations expands that control to the account level by giving control over what users and roles in an account or a group of accounts can do
  • User can access only what is allowed by both the AWS Organizations policies and IAM policies.
  • Resulting permissions are the logical intersection of what is allowed by AWS Organizations at the account level, and what permissions are explicitly granted by IAM at the user or role level within that account.
  • If either blocks an operation, the user can’t access that operation (the toy model below illustrates the intersection).
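
A toy model of the evaluation logic (the action names are illustrative, and real IAM evaluation involves more rules, such as explicit denies):

    # Effective permissions are the intersection of what the organization's
    # SCPs allow for the account and what IAM grants within the account.
    scp_allowed = {"s3:GetObject", "s3:PutObject", "dynamodb:GetItem"}
    iam_granted = {"s3:GetObject", "ec2:StartInstances"}

    effective = scp_allowed & iam_granted
    print(effective)  # {'s3:GetObject'} - ec2:StartInstances is filtered out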

Integration with other AWS services

  • Select AWS services can be enabled to access accounts in the organization and perform actions on the resources in the accounts.
  • When another service is configured and authorized to access the organization, AWS Organizations creates an IAM service-linked role for that service in each member account.
  • The service-linked role has predefined IAM permissions that allow the other AWS service to perform specific tasks in the organization and its accounts
  • All accounts in an organization automatically have a service-linked role created, which enables the AWS Organizations service to create the service-linked roles required by AWS services for which you enable trusted access
  • These additional service-linked roles come with policies that enable the specified service to perform only those required tasks

Data replication that is eventually consistent

  • AWS Organizations is eventually consistent.
  • AWS Organizations achieves high availability by replicating data across multiple servers in AWS data centers within its region.
  • If a request to change some data is successful, the change is committed and safely stored.
  • However, the change must then be replicated across the multiple servers.

AWS Organizations Terminology and Concepts

AWS Organizations Concepts

Organization

  • An entity created to consolidate AWS accounts.
  • An organization has one master account along with zero or more member accounts.
  • An organization has functionality that is determined by the feature set you enable, i.e. All features or Consolidated billing only

Root

  • Parent container for all the accounts for the organization.
  • Policy applied to the root is applied to all the organizational units (OUs) and accounts in the organization.
  • There can currently be only one root, and AWS Organizations automatically creates it when an organization is created

Organization unit (OU)

  • A container for accounts within a root.
  • An OU also can contain other OUs, enabling hierarchy creation that resembles an upside-down tree, with a root at the top and branches of OUs that reach down, ending in accounts that are the leaves of the tree.
  • A policy attached to one of the nodes in the hierarchy, flows down and affects all branches (OUs) and leaves (accounts) beneath it
  • An OU can have exactly one parent, and currently each account can be a member of exactly one OU.

Account

  • A standard AWS account that contains AWS resources.
  • Each account can be directly in the root, or placed in one of the OUs in the hierarchy.
  • Policy can be attached to an account to apply controls to only that one account.
  • Accounts can be organized in a hierarchical, tree-like structure with a root at the top and organizational units nested under the root.
  • Master account
    • Primary account which creates the organization
    • can create new accounts in the organization, invite existing accounts, remove accounts, manage invitations, and apply policies to entities within the organization.
    • has the responsibilities of a payer account and is responsible for paying all charges that are accrued by the member accounts.
  • Member account
    • Rest of the accounts within the organization are member accounts.
    • An account can be a member of only one organization at a time.

Invitation

  • Process of asking another account to join an organization.
  • An invitation can be issued only by the organization’s master account and is extended to either the account ID or the email address that is associated with the invited account.
  • Invited account becomes a member account in the organization, after it accepts the invitation.
  • Invitations can be sent to existing member accounts as well, to approve the change from supporting only consolidated billing feature to supporting all features
  • Invitations work by accounts exchanging handshakes.

Handshake

  • A multi-step process of exchanging information between two parties
  • Primary use in AWS Organizations is to serve as the underlying implementation for invitations.
  • Handshake messages are passed between and responded to by the handshake initiator (master account) and the recipient (member account) in such a way that it ensures that both parties always know what the current status is.

Available feature sets

Consolidated billing

  • provides shared billing functionality

All features

  • includes all the functionality of consolidated billing,
  • includes advanced features that give more control over accounts in the organization.
  • allows master account to have full control over what member accounts can do
  • master account can apply SCPs to restrict the services and actions that users (including the root user) and roles in an account can access, and it can prevent member accounts from leaving the organization.

Service control policy (SCP)

  • Service control policy specifies the services and actions that users and roles can use in the accounts that the SCP affects.
  • SCPs are similar to IAM permission policies except that they don’t grant any permissions.
  • SCPs are filters that allow only the specified services and actions to be used in affected accounts.
  • SCPs override IAM permission policy. So even if a user is granted full administrator permissions with an IAM permission policy, any access that is not explicitly allowed or that is explicitly denied by the SCPs affecting that account is blocked.
  • For example, if you assign an SCP that allows only database service access to your “database” account, then any user, group, or role in that account is denied access to any other service’s operations (the sketch after this list builds exactly such a policy).
  • SCP can be attached to
    • A root, which affects all accounts in the organization
    • An OU, which affects all accounts in that OU and all accounts in any OUs in that OU subtree
    • An individual account
  • Master account of the organization is not affected by any SCPs that are attached either to it or to any root or OU the master account might be in.
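
A minimal boto3 sketch creating and attaching the “database account” SCP described above (the policy name and account ID are hypothetical):

    import boto3, json

    org = boto3.client("organizations")

    # An SCP that allows only DynamoDB actions; every other service's
    # operations are implicitly filtered out for the affected accounts.
    db_only = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "dynamodb:*", "Resource": "*"}
        ],
    }
    policy = org.create_policy(
        Content=json.dumps(db_only),
        Description="Allow only DynamoDB access",
        Name="database-only",
        Type="SERVICE_CONTROL_POLICY",
    )

    # Attaching to a root or OU ID instead would affect every account beneath it.
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="111122223333",  # hypothetical member account
    )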

Whitelisting vs. blacklisting

Whitelisting and blacklisting are complementary techniques used to apply SCPs to filter the permissions available to accounts.

Whitelisting

  • Explicitly specify the access that is allowed.
  • All other access is implicitly blocked.
  • By default, all permissions are whitelisted.
  • AWS Organizations attaches an AWS managed policy called FullAWSAccess to all roots, OUs, and accounts, which ensures nothing is blocked while the organization is being built.
  • To restrict permissions, replace the FullAWSAccess policy with one that allows only the more limited, desired set of permissions (see the sketch below).
  • Users and roles in the affected accounts can then exercise only that level of access, even if their IAM policies allow all actions.
  • If you replace the default policy on the root, all accounts in the organization are affected by the restrictions.
  • Permissions removed at the root can’t be added back at a lower level in the hierarchy, because an SCP never grants permissions; it only filters them.
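
A minimal boto3 sketch of the replacement step (the OU ID and allowed services are hypothetical; p-FullAWSAccess is the well-known ID of the default policy):

    import boto3, json

    org = boto3.client("organizations")

    # Whitelisting: a narrow allow list to take the place of FullAWSAccess.
    allow_list = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": ["s3:*", "cloudwatch:*"], "Resource": "*"}
        ],
    }
    policy = org.create_policy(
        Content=json.dumps(allow_list),
        Description="Allow only S3 and CloudWatch",
        Name="s3-cloudwatch-only",
        Type="SERVICE_CONTROL_POLICY",
    )

    ou_id = "ou-root1-exampleou1"  # hypothetical OU ID
    org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=ou_id)
    # Detach FullAWSAccess last, so the OU is never left without an attached SCP.
    org.detach_policy(PolicyId="p-FullAWSAccess", TargetId=ou_id)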

Blacklisting

  • Default behavior of AWS Organizations.
  • Explicitly specify the access that is not allowed.
  • Explicit deny of a service action overrides any allow of that action.
  • All other permissions are allowed unless explicitly blocked
  • By default, AWS Organizations attaches an AWS managed policy called FullAWSAccess to all roots, OUs, and accounts. This allows any account to access any service or operation with no AWS Organizations–imposed restrictions.
  • With blacklisting, additional policies are attached that explicitly deny access to the unwanted services and actions (see the sketch below).
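
A minimal boto3 sketch of the deny-list approach (the denied service and OU ID are hypothetical; FullAWSAccess stays attached):

    import boto3, json

    org = boto3.client("organizations")

    # An explicit Deny wins over the FullAWSAccess allow, blocking just this service.
    deny_dax = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Deny", "Action": "dax:*", "Resource": "*"}
        ],
    }
    policy = org.create_policy(
        Content=json.dumps(deny_dax),
        Description="Deny all DynamoDB Accelerator actions",
        Name="deny-dax",
        Type="SERVICE_CONTROL_POLICY",
    )
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-root1-exampleou1",  # hypothetical OU ID
    )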

Refer to the blog post AWS Organizations – Service Control Policies

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. An organization that is currently using consolidated billing has recently acquired another company that already has a number of AWS accounts. How could an Administrator ensure that all AWS accounts, from both the existing company and the acquired company, are billed to a single account?
    1. Merge the two companies’ AWS accounts by going to the AWS console and selecting the “Merge accounts” option.
    2. Invite the acquired company’s AWS account to join the existing company’s organization using AWS Organizations.
    3. Migrate all AWS resources from the acquired company’s AWS account to the master payer account of the existing company.
    4. Create a new AWS account and set it up as the master payer. Move the AWS resources from both the existing and acquired companies’ AWS accounts to the new account.
  2. Which of the following are the benefits of AWS Organizations? Choose the 2 correct answers:
    1. Centrally manage access policies across multiple AWS accounts.
    2. Automate AWS account creation and management.
    3. Analyze cost across all multiple AWS accounts.
    4. Provide technical help (by AWS) for issues in your AWS account.
  3. A company has several departments with separate AWS accounts. Which feature would allow the company to enable consolidated billing?
    1. AWS Inspector
    2. AWS Shield
    3. AWS Organizations
    4. AWS Lightsail

References