AWS Client VPN is a managed client-based VPN service that enables secure access to AWS resources and resources in the on-premises network.
Client VPN allows accessing the resources from any location over a secure TLS connection using an OpenVPN-based VPN client.
Client VPN automatically scales to the number of users connecting to the AWS resources and on-premises resources.
Client VPN supports client authentication using Active Directory, federated authentication, and certificate-based authentication.
Client VPN provides manageability: active client connections can be viewed and terminated, and connection logs provide details on client connection attempts.
Client VPN Components
Client VPN endpoint
is the resource that is created and configured to enable and manage client VPN sessions.
is the resource where all client VPN sessions are terminated.
Target network
is the network associated with a Client VPN endpoint.
is a subnet from a VPC that enables establishing VPN sessions.
Multiple subnets can be associated with the Client VPN endpoint, however, each subnet must belong to a different Availability Zone.
Route
describes the available destination network routes.
Each route in the route table specifies the path for traffic to specific resources or networks.
Authorization rules
restrict the users who can access a network.
helps configure the AD or IdP group that is allowed access. Only users belonging to this group can access the specified network.
Client
end-user connecting to the Client VPN endpoint to establish a VPN session.
need to download an OpenVPN client and use the Client VPN configuration file to establish a VPN session.
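The components above map onto a handful of EC2 API calls. Below is a minimal boto3 sketch, assuming certificate-based (mutual) authentication; every ARN, ID, and CIDR is a placeholder, not a real resource.

```python
# Hedged sketch: create a Client VPN endpoint, associate a target network,
# add an authorization rule, and add a route. All identifiers are placeholders.
import boto3

ec2 = boto3.client("ec2")

# 1. Client VPN endpoint (certificate-based / mutual authentication here).
endpoint = ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/22",  # must not overlap the VPC CIDR
    ServerCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/server-cert",
    AuthenticationOptions=[{
        "Type": "certificate-authentication",
        "MutualAuthentication": {
            "ClientRootCertificateChainArn":
                "arn:aws:acm:us-east-1:123456789012:certificate/client-root-cert",
        },
    }],
    ConnectionLogOptions={"Enabled": False},
)
vpn_id = endpoint["ClientVpnEndpointId"]

# 2. Target network: a subnet in the VPC where VPN sessions land.
ec2.associate_client_vpn_target_network(
    ClientVpnEndpointId=vpn_id, SubnetId="subnet-0123456789abcdef0")

# 3. Authorization rule: let all authenticated clients reach the VPC CIDR.
ec2.authorize_client_vpn_ingress(
    ClientVpnEndpointId=vpn_id,
    TargetNetworkCidr="10.0.0.0/16",
    AuthorizeAllGroups=True)

# 4. Optional route, e.g. towards an on-premises network.
ec2.create_client_vpn_route(
    ClientVpnEndpointId=vpn_id,
    DestinationCidrBlock="192.168.0.0/16",
    TargetVpcSubnetId="subnet-0123456789abcdef0")
```

For Active Directory or SAML federation, the AuthenticationOptions entry would change to directory-service-authentication or federated-authentication respectively.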
Client VPN Authentication & Authorization
Client VPN provides authentication and authorization capabilities.
Authentication determines whether clients are allowed to connect to the Client VPN endpoint
Client VPN offers the following types of client authentication:
Active Directory authentication (user-based)
Mutual authentication (certificate-based)
Single sign-on (SAML-based federated authentication) (user-based)
Authorization allows mapping the Active Directory group or the SAML-based IdP group to the networks it can access.
Client VPN Split Tunnel
Client VPN endpoint, by default, routes all traffic over the VPN tunnel.
Split-tunnel Client VPN endpoint helps when you do not want all user traffic to route through the Client VPN endpoint.
Split tunnel ensures only traffic with a destination to the network matching a route from the Client VPN endpoint route table is routed over the Client VPN tunnel.
Split-tunnel offers the following benefits:
Optimized routing of traffic from clients by having only the AWS destined traffic traverse the VPN tunnel.
Reduced volume of outgoing traffic from AWS, therefore reducing the data transfer cost.
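A hedged boto3 sketch of turning on split tunnel for an existing endpoint (the endpoint ID is a placeholder); SplitTunnel can also be set when the endpoint is created.

```python
# Hedged sketch: enable split-tunnel mode on an existing Client VPN endpoint
# so that only routes in the endpoint route table traverse the VPN tunnel.
import boto3

ec2 = boto3.client("ec2")
ec2.modify_client_vpn_endpoint(
    ClientVpnEndpointId="cvpn-endpoint-0123456789abcdef0",  # placeholder
    SplitTunnel=True,
)
```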
Client VPN Limitations
Client CIDR ranges cannot overlap with the local CIDR of the VPC in which the associated subnet is located, or any routes manually added to the Client VPN endpoint’s route table.
Client CIDR ranges must have a block size between /22 and /12.
Client CIDR range cannot be changed after Client VPN endpoint creation.
Subnets associated with a Client VPN endpoint must be in the same VPC.
Multiple subnets from the same AZ cannot be associated with a Client VPN endpoint.
A Client VPN endpoint does not support subnet associations in a dedicated tenancy VPC.
Client VPN supports IPv4 traffic only.
Client VPN is not Federal Information Processing Standards (FIPS) compliant.
As Client VPN is a managed service, the IP address that the DNS name resolves to might change. Hence, it is not recommended to connect to the Client VPN endpoint using IP addresses; use the DNS name instead.
IP forwarding is currently disabled when using the AWS Client VPN Desktop Application.
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
Open to further feedback, discussion and correction.
A company is developing an application on AWS. For analysis, the application transmits log files to an Amazon Elasticsearch Service (Amazon ES) cluster. Each piece of data must be contained inside a VPC. A number of the company’s developers work remotely. Other developers are based at three distinct business locations. The developers must connect to Amazon ES directly from their local development computers in order to study and display logs. Which solution will satisfy these criteria?
Configure and set up an AWS Client VPN endpoint. Associate the Client VPN endpoint with a subnet in the VPC. Configure a Client VPN self-service portal. Instruct the developers to connect by using the client for Client VPN.
Create a transit gateway, and connect it to the VPC. Create an AWS Site-to-Site VPN. Create an attachment to the transit gateway. Instruct the developers to connect by using an OpenVPN client.
Create a transit gateway, and connect it to the VPC. Order an AWS Direct Connect connection. Set up a public VIF on the Direct Connect connection. Associate the public VIF with the transit gateway. Instruct the developers to connect to the Direct Connect connection.
Create and configure a bastion host in a public subnet of the VPC. Configure the bastion host security group to allow SSH access from the company CIDR ranges. Instruct the developers to connect by using SSH.
Transit Gateway can be used instead of Transit VPC. AWS Transit Gateway offers the same advantages as a transit VPC, but it is a highly available managed service that scales elastically.
Transit VPC helps connect multiple, geographically dispersed VPCs and remote networks in order to create a global network transit center.
Transit VPC can solve some of the shortcomings of VPC peering by introducing a hub and spoke design for inter-VPC connectivity.
A transit VPC simplifies network management and minimizes the number of connections required to connect multiple VPCs and remote networks.
Transit VPC allows an easy way to implement shared services or packet inspection/replication in a VPC.
Transit VPC can be used to support important use cases:
Private Networking – build a private network that spans two or more AWS Regions.
Shared Connectivity – Multiple VPCs can share connections to data centers, partner networks, and other clouds.
Cross-Account AWS Usage – The VPCs and the AWS resources within them can reside in multiple AWS accounts.
Transit VPC design helps implement more complex routing rules, such as network address translation between overlapping network ranges, or to add additional network-level packet filtering or inspection.
Transit VPC Configuration
Transit VPC network consists of a central VPC (the hub VPC) connecting with every other VPC (spoke VPC) through a VPN connection typically leveraging BGP over IPsec.
Central VPC contains EC2 instances running software appliances that route incoming traffic to their destinations using the VPN overlay.
Transit VPC Advantages & Disadvantages
supports transitive routing using the overlay VPN network, allowing for a simpler hub-and-spoke design. Can be used to provide shared services for VPC endpoints, Direct Connect connections, etc.
supports network address translation between overlapping network ranges.
supports vendor functionality around advanced security (layer 7 firewall, Intrusion Prevention System (IPS), Intrusion Detection System (IDS)) using third-party software on EC2
leverages instance-based routing that increases costs while lowering availability and limiting the bandwidth.
Customers are responsible for managing the HA and redundancy of EC2 instances running the third-party vendor virtual appliances
Transit VPC High Availability
Transit VPC vs VPC Peering vs Transit Gateway
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
Open to further feedback, discussion and correction.
Under increased cyber security concerns, a company is deploying a near real-time intrusion detection system (IDS) solution. A system must be put in place as soon as possible. The architecture consists of many AWS accounts, and all results must be delivered to a central location. Which solution will meet this requirement, while minimizing downtime and costs?
Deploy a third-party vendor solution to perform deep packet inspection in a transit VPC.
Enable VPC Flow Logs on each VPC. Set up a stream of the flow logs to a central Amazon Elasticsearch cluster.
Enable Amazon Macie on each AWS account and configure central reporting.
Enable Amazon GuardDuty on each account as members of a central account.
Your company has set up a VPN connection between their on-premises infrastructure and AWS. They have multiple VPCs defined. They also need to ensure that all traffic flows through a security VPC from their on-premise infrastructure. How would you architect the solution? (Select TWO)
Create a VPN connection between the On-premise environment and the Security VPC (Transit VPC pattern)
Create a VPN connection between the On-premise environment to all other VPC’s
Create a VPN connection between the Security VPC to all other VPC’s (Transit VPC pattern)
Create a VPC peering connection between the Security VPC and all other VPC’s
Guest post by Dustin Albertson – Manager of Cloud & Applications, Product Management, Veeam.
I want to discuss something that's important to me: security. Far too often I have discussions with customers and other engineers where they're discussing an architecture or a problem they are running into, and I spot issues with the design or holes in the thought process. One of the best things about the cloud model is also one of its worst traits: it's "easy." What I mean by this is that it's easy to log into AWS, set up an EC2 instance, connect it to the internet, and configure basic settings. This usually leads to issues down the road because basic security or architectural best practices were not followed. Therefore, I want to talk about a few things that everyone should be aware of.
The Well-Architected Framework
AWS has done a great job at creating a framework for its customers to adhere to when planning and deploying workloads in AWS. This framework is called the AWS Well-Architected Framework. The framework has 6 pillars that help you learn architectural best practices for designing and operating secure, reliable, efficient, cost-effective, and sustainable workloads in the AWS Cloud. The pillars are:
Operational Excellence: The ability to support the development and run workloads effectively, gain insight into their operations, and continuously improve supporting processes and procedures to deliver business value.
Security: The security pillar describes how to take advantage of cloud technologies to protect data, systems, and assets in a way that can improve your security posture.
Reliability: The reliability pillar encompasses the ability of a workload to perform its intended function correctly and consistently when it’s expected to. This includes the ability to operate and test the workload through its total lifecycle. This paper provides in-depth, best practice guidance for implementing reliable workloads on AWS.
Performance Efficiency: The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
Cost Optimization: The ability to run systems to deliver business value at the lowest price point.
Sustainability: The ability to continually improve sustainability impacts by reducing energy consumption and increasing efficiency across all components of a workload by maximizing the benefits from the provisioned resources and minimizing the total resources required.
This framework is important to read and understand for not only a customer but a software vendor or a services provider as well. As a company that provides software in the AWS marketplace, Veeam must go through a few processes prior to listing in the marketplace. Those processes are what’s called a W.A.R (Well-Architected Review) and a T.F.R (Technical Foundation Review). A W.A.R. is a deep dive into the product and APIs to make sure that the best practices are being used in the way the products not only interact with the APIs in AWS but also how the software is deployed and the architecture it uses. The T.F.R. is a review to validate that all the appropriate documentation and help guides are in place so that a customer can easily find out how to deploy, protect, secure, and obtain support when using a product deployed via the AWS Marketplace. This can give customers peace of mind when deploying software from the marketplace because they’ll know that it has been rigorously tested and validated.
I have mostly been talking at a high level here and want to break this down into a real-world example. Veeam has a product in the AWS Marketplace called Veeam Backup for AWS. One of the best practices for this product is to deploy it into a separate AWS account than your production account.
The reason for this is that the software will reach into the production account and back up the instances you wish to protect into an isolated protection account where you can limit the number of people who have access. It's also a best practice to have your backup data stored away from production data. Now here is where the story gets interesting: a lot of people like to use encryption on their EBS volumes. But since it's so easy to enable encryption, most people just turn it on and move on. The root of the issue is that AWS has made it easy to encrypt a volume, since they have a default key that you choose when creating an instance.
They have also made it easy to set a policy that every new volume is encrypted and the default choice is the default key.
This is where the problem begins. Now, this may be fine for now or for a lot of users, but what this does is create issues later down the road. Default encryption keys cannot be shared outside of the account that the key resides in. This means that you would not be able to back that instance up to another account; you can't rotate the keys, you can't delete the keys, you can't audit the keys, and more. Customer managed keys (CMKs) give you the ability to create, rotate, disable, enable, and audit the encryption key used to protect the data. I don't want to go too deep here, but this is an example that I run into a lot, and people don't realize the impact of this setting until it's too late. Changing from a default key to a CMK requires downtime of the instance and is a very manual process; although it can be scripted out, it can still be a very cumbersome task if we are talking about hundreds to thousands of instances.
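To make this concrete, here is a minimal boto3 sketch of the recommended posture: turn on EBS encryption by default for the Region and point it at a customer managed key rather than the AWS managed default key. The key ARN is a placeholder.

```python
# Hedged sketch: default all new EBS volumes in this Region to encryption
# with a customer managed KMS key (shareable, rotatable, auditable),
# instead of the non-shareable AWS managed default key.
import boto3

ec2 = boto3.client("ec2")

ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-CMK")  # placeholder

# Verify the account-level settings.
print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])
print(ec2.get_ebs_default_kms_key_id()["KmsKeyId"])
```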
I am not trying to scare people or shame people for not knowing this information. A lot of the time in the field, we are so busy that we just get things working and move on. My goal here is to try to get you to stop for a second and think about whether the choices you are making are the best ones for your security. Take advantage of the resources and help that companies like AWS and Veeam are offering and learn about data protection and security best practices. Take a step back from time to time and evaluate the architecture or design that you are implementing. Get a second set of eyes on the project. It may sound complicated or confusing, but I promise it's not that hard and the best bet is to just ask others. Also, don't forget to check the "Choose Your Cloud Adventure" interactive e-book to learn how to manage your AWS data like a hero.
AWS Key Management Service – KMS is a managed encryption service that allows the creation and control of encryption keys to enable data encryption.
provides a highly available key storage, management, and auditing solution to encrypt the data across AWS services & within applications.
uses hardware security modules (HSMs) validated under the FIPS 140-2 Cryptographic Module Validation Program to protect the keys.
seamlessly integrates with several AWS services to make encrypting data in those services easy.
is integrated with AWS CloudTrail to provide encryption key usage logs to help meet auditing, regulatory, and compliance needs.
is regional and keys are only stored and used in the region in which they are created. They cannot be transferred to another region.
enforces usage and management policies, to control which IAM user, role from the account, or other accounts can manage and use keys.
can create and manage keys, including the ability to:
Create, edit, and view symmetric and asymmetric keys, including HMAC keys.
Control access to the keys by using key policies, IAM policies, and grants. Policies can be further refined using condition keys.
Supports attribute-based access control (ABAC).
Create, delete, list, and update aliases for the keys.
Tag the keys for identification, automation, and cost tracking.
Enable and disable keys.
Enable and disable automatic rotation of the cryptographic material in keys.
Delete keys to complete the key lifecycle.
supports the following cryptographic operations:
Encrypt, decrypt, and re-encrypt data with symmetric or asymmetric keys.
Sign and verify messages with asymmetric keys.
Generate exportable symmetric data keys and asymmetric data key pairs.
Generate and verify HMAC codes.
Generate random numbers suitable for cryptographic applications
supports multi-region keys, which act like copies of the same KMS key in different AWS Regions that can be used interchangeably – as though you had the same key in multiple Regions.
AWS cloud services integrated with AWS KMS use a method called envelope encryption to protect the data.
Envelope encryption is an optimized method for encrypting data that uses two different keys (Master key and Data key)
With Envelope encryption:
A data key is generated and used by the AWS service to encrypt each piece of data or resource.
Data key is encrypted under a defined master key.
Encrypted data key is then stored by the AWS service.
For data decryption by the AWS service, the encrypted data key is passed to KMS, which decrypts it under the master key that originally encrypted it; the service can then use the plaintext data key to decrypt the data.
When data is encrypted directly with KMS, it must be transferred over the network.
Envelope encryption can offer significant performance benefits, as KMS only supports directly encrypting data up to 4 KB.
Envelope encryption reduces the network load for the application or AWS cloud service, as only the request and fulfillment of the data key must go over the network.
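A minimal sketch of the direct (non-envelope) path and its size limit; the key alias is a placeholder.

```python
# Hedged sketch: direct KMS encryption, limited to payloads of up to 4 KB.
# Larger data should use envelope encryption with a data key (see the
# Encryption & Decryption Process section below).
import boto3

kms = boto3.client("kms")

resp = kms.encrypt(KeyId="alias/my-app-key",  # placeholder alias
                   Plaintext=b"small secret, at most 4 KB")
ciphertext = resp["CiphertextBlob"]

plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
```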
KMS Service Concepts
KMS Keys OR Customer Master Keys (CMKs)
AWS KMS key is a logical representation of a cryptographic key.
KMS Keys can be used to create symmetric or asymmetric keys for encryption or signing OR HMAC keys to generate and verify HMAC tags.
Symmetric keys and the private keys of asymmetric keys never leave AWS KMS unencrypted.
A KMS key contains metadata, such as the key ID, key spec, key usage, creation date, description, key state, and a reference to the key material that is used to run cryptographic operations with the KMS key.
Symmetric keys are 256-bit AES keys that are not exportable.
KMS keys can be used to generate, encrypt, and decrypt data keys, which are used outside of AWS KMS to encrypt the data [Envelope Encryption]
Customer Keys and AWS Keys
AWS Managed Keys
AWS Managed keys are created, managed, and used on your behalf by AWS services in your AWS account.
keys are automatically rotated every year (~365 days) and the rotation schedule cannot be changed.
have permission to view the AWS managed keys in your account, view their key policies, and audit their use in CloudTrail logs.
cannot manage or rotate these keys, change their key policies, or use them in cryptographic operations directly; the service that creates them uses them on your behalf.
Customer managed keys
Customer managed keys are created by you to encrypt your service resources in your account.
Automatic rotation is Optional and if enabled, keys are automatically rotated every year.
provides full control over these keys, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases referring to the KMS keys, and scheduling the KMS keys for deletion.
AWS Owned Keys
AWS owned keys are a collection of KMS keys that an AWS service owns and manages for use in multiple AWS accounts.
AWS owned keys are not in your AWS account, however, an AWS service can use the associated AWS owned keys to protect the resources in your account.
cannot view, use, track, or audit them
Key Material
KMS keys contain a reference to the key material used to encrypt and decrypt data.
By default, AWS KMS generates the key material for a newly created key.
KMS key can be created without key material and then your own key material can be imported or created in the AWS CloudHSM cluster associated with an AWS KMS custom key store.
Key material cannot be extracted, exported, viewed, or managed.
Key material cannot be deleted; you must delete the KMS key.
Key Material Origin
Key material origin is a KMS key property that identifies the source of the key material in the KMS key.
Symmetric encryption KMS keys can have one of the following key material origin values.
AWS_KMS
AWS KMS creates and manages the key material for the KMS key in AWS KMS.
EXTERNAL
Key has imported key material.
Management and security of the key are the customer’s responsibility.
Only symmetric keys are supported.
Automatic rotation is not supported and needs to be manually rotated.
AWS_CLOUDHSM
AWS KMS created the key material for the KMS key in the AWS CloudHSM cluster associated with the custom key store.
EXTERNAL_KEY_STORE
Key material is a cryptographic key in an external key manager outside of AWS.
This origin is supported only for KMS keys in an external key store.
Data Keys
Data keys are encryption keys that you can use to encrypt data, including large amounts of data and other data encryption keys.
KMS does not store, manage, or track your data keys.
Data keys must be used by services outside of KMS.
Encryption Context
Encryption context provides an optional set of key–value pairs that can contain additional contextual information about the data.
AWS KMS uses the encryption context as additional authenticated data (AAD) to support authenticated encryption.
Encryption context is not secret and is not encrypted; it appears in plaintext in CloudTrail logs, so it can be used to identify and categorize cryptographic operations.
Encryption context should not include sensitive information.
Encryption context usage
When an encryption context is included in an encryption request, it is cryptographically bound to the ciphertext such that the same encryption context is required to decrypt the data.
If the encryption context provided in the decryption request is not an exact, case-sensitive match, the decrypt request fails.
Only the order of the key-value pairs in the encryption context can vary.
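A short boto3 sketch of that behavior; the key alias and context values are placeholders.

```python
# Hedged sketch: the encryption context bound at encrypt time must be
# supplied, exactly and case-sensitively, at decrypt time.
import boto3

kms = boto3.client("kms")
context = {"department": "finance", "purpose": "payroll"}

ct = kms.encrypt(
    KeyId="alias/my-app-key",           # placeholder
    Plaintext=b"sensitive payload",
    EncryptionContext=context,
)["CiphertextBlob"]

# Succeeds: pair order may vary, but keys and values must match exactly.
kms.decrypt(CiphertextBlob=ct,
            EncryptionContext={"purpose": "payroll", "department": "finance"})

# Would fail with InvalidCiphertextException: value case differs.
# kms.decrypt(CiphertextBlob=ct,
#             EncryptionContext={"department": "Finance", "purpose": "payroll"})
```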
Key Policies
help determine who can use and manage those keys.
can add, remove, or change permissions at any time for a customer-managed key.
cannot edit the key policy for AWS owned or managed keys.
Grants
provide permissions, as an alternative to the key policy and IAM policies, that allow AWS principals to use the KMS keys.
are often used for temporary permissions because you can create one, use its permissions, and delete it without changing the key policies or IAM policies.
permissions specified in the grant might not take effect immediately due to eventual consistency.
Grant Tokens
help mitigate the potential delay with grants.
use the grant token received in the response to a CreateGrant API request to make the permissions in the grant take effect immediately.
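A hedged sketch of a grant and its grant token; the ARNs are placeholders, and in practice it is the grantee principal that passes the token on its first calls.

```python
# Hedged sketch: create a grant, then use the returned grant token so the
# permissions take effect immediately instead of waiting for propagation.
import boto3

kms = boto3.client("kms")
key_arn = "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE"  # placeholder

grant = kms.create_grant(
    KeyId=key_arn,
    GranteePrincipal="arn:aws:iam::123456789012:role/worker-role",  # placeholder
    Operations=["Encrypt", "Decrypt"],
)

# The grantee passes the grant token to bypass eventual consistency.
kms.encrypt(KeyId=key_arn, Plaintext=b"data",
            GrantTokens=[grant["GrantToken"]])

# Revoke the grant when the temporary permission is no longer needed.
kms.revoke_grant(KeyId=key_arn, GrantId=grant["GrantId"])
```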
Alias
Alias helps provide a friendly name for a KMS key.
can be used to refer to different KMS keys in each AWS Region.
can be used to point to different keys without changing the code.
can allow and deny access to KMS keys based on their aliases without editing policies or managing grants.
aliases are independent resources, not properties of a KMS key, and can be added, changed, and deleted without affecting the associated KMS key.
Encryption & Decryption Process
Use KMS to get a plaintext data key and an encrypted copy of it using the CMK.
Use the plaintext data key to encrypt the data and store the encrypted data key with the data.
Use KMS decrypt to get the plaintext data key and decrypt the data.
Remove the plaintext data key from memory, once the operation is completed.
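The four steps above, sketched with boto3 and local AES-GCM (via the third-party cryptography package); the key alias is a placeholder.

```python
# Hedged sketch of envelope encryption with a KMS data key.
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

# 1. Get a plaintext data key plus its copy encrypted under the CMK.
dk = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")

# 2. Encrypt the data locally and keep the encrypted data key with it.
nonce = os.urandom(12)
ciphertext = AESGCM(dk["Plaintext"]).encrypt(nonce, b"large payload ...", None)
stored = {"key": dk["CiphertextBlob"], "nonce": nonce, "data": ciphertext}

# 3. Drop the plaintext key from memory once done.
del dk

# 4. To decrypt: have KMS decrypt the data key, then decrypt locally.
plain_key = kms.decrypt(CiphertextBlob=stored["key"])["Plaintext"]
plaintext = AESGCM(plain_key).decrypt(stored["nonce"], stored["data"], None)
```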
KMS Working
KMS centrally manages and securely stores the keys.
Keys can be generated or imported from the key management infrastructure (KMI).
Keys can be used from within the applications and supported AWS services to protect the data, but the key never leaves KMS.
Data is submitted to KMS to be encrypted, or decrypted, under keys that you control.
Usage policies can be set on these keys to determine which users can use them to encrypt and decrypt data.
KMS Access Control
Primary way to manage access to AWS KMS keys is with policies.
KMS keys access can be controlled using
Key Policies
are resource-based policies
every KMS key has a key policy
is a primary mechanism for controlling access to a key.
can be used alone to control access to the keys.
IAM policies
use IAM policies in combination with the key policy to control access to keys.
helps manage all of the permissions for your IAM identities in IAM.
Grants
Use grants in combination with the key policy and IAM policies to allow access to keys.
helps allow access to the keys in the key policy, and to allow users to delegate their access to others.
To allow access to a KMS CMK, a key policy MUST be used, either alone or in combination with IAM policies or grants.
IAM policies by themselves are not sufficient to allow access to keys, though they can be used in combination with a key policy.
IAM user who creates a KMS key is not considered to be the key owner and they don’t automatically have permission to use or manage the KMS key that they created.
Rotating KMS or Customer Master Keys
Key rotation changes only the key material, which is the cryptographic secret that is used in encryption operations.
KMS keys can be enabled for automatic key rotation, where KMS generates new cryptographic material for the key every year.
KMS saves all previous versions of the cryptographic material in perpetuity so it can decrypt any data encrypted with that key.
KMS does not delete any rotated key material until you delete the KMS key.
All new encryption requests against a key are encrypted under the newest version of the key.
properties of the KMS key like ID, ARN, region, policies, and permissions do not change.
applications or aliases referring to the key do not need to change
Rotating key material does not affect the use of the KMS key in any AWS service.
Automatic key rotation is supported only on symmetric encryption KMS keys with key material that KMS generates i.e. Origin = AWS_KMS.
Automatic key rotation is not supported for
asymmetric keys,
HMAC keys,
keys in custom key stores, and
keys with imported key material.
AWS managed keys
automatically rotated every year (previously every 3 years)
rotation cannot be enabled or disabled
Customer Managed keys
automatic key rotation is supported but is optional.
automatic key rotation is disabled, by default, and needs to be enabled.
keys can be rotated every year.
CMKs with imported key material or keys generated in a CloudHSM cluster using the KMS custom key store feature
do not support automatic key rotation.
provide flexibility to manually rotate keys as required.
Manual Key Rotation
Manual key rotation can be performed by creating a KMS key and updating the applications or aliases to point to the new key.
does not retain the ID, ARN, and policies of the key.
can help control the rotation frequency, especially if the required rotation interval is shorter than a year.
is also a good solution for KMS keys that are not eligible for automatic key rotation, such as asymmetric keys, HMAC keys, keys in custom key stores, and keys with imported key material.
For manually rotated keys, data has to be re-encrypted depending on the application’s configuration.
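A hedged boto3 sketch of both rotation styles; the key ARN and alias are placeholders.

```python
# Hedged sketch: automatic rotation for an eligible symmetric customer
# managed key, and manual rotation by repointing an alias at a new key.
import boto3

kms = boto3.client("kms")
old_key = "arn:aws:kms:us-east-1:123456789012:key/OLD-EXAMPLE"  # placeholder

# Automatic rotation (symmetric, AWS_KMS-origin keys only).
kms.enable_key_rotation(KeyId=old_key)
print(kms.get_key_rotation_status(KeyId=old_key)["KeyRotationEnabled"])

# Manual rotation: create a new key and point the existing alias at it.
new_key = kms.create_key(Description="manually rotated key")
kms.update_alias(
    AliasName="alias/my-app-key",  # placeholder
    TargetKeyId=new_key["KeyMetadata"]["KeyId"],
)
# New encrypt requests through the alias now use the new key; data encrypted
# under the old key still decrypts with the old key until re-encrypted.
```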
KMS Deletion
KMS key deletion deletes the key material and all metadata associated with the key and is irreversible.
Data encrypted by the deleted key cannot be recovered, once the key is deleted.
AWS recommends disabling the key before deleting it.
AWS Managed and Owned keys cannot be deleted. Only Customer managed keys can be scheduled for deletion.
KMS never deletes the keys unless you explicitly schedule them for deletion and the mandatory waiting period expires.
KMS requires setting a waiting period of 7-30 days for key deletion. During the waiting period, the key state of the KMS key is Pending deletion.
Key pending deletion cannot be used in any cryptographic operations.
Key material of keys that are pending deletion is not rotated.
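A minimal sketch of the deletion lifecycle; the key ARN is a placeholder.

```python
# Hedged sketch: disable first, schedule deletion with a 7-30 day waiting
# period, and cancel/re-enable if the key turns out to still be needed.
import boto3

kms = boto3.client("kms")
key_id = "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE"  # placeholder

kms.disable_key(KeyId=key_id)  # recommended before scheduling deletion
kms.schedule_key_deletion(KeyId=key_id, PendingWindowInDays=30)

# During the waiting period the deletion can still be cancelled.
kms.cancel_key_deletion(KeyId=key_id)
kms.enable_key(KeyId=key_id)
```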
KMS Multi-Region Keys
AWS KMS supports multi-region keys, which are AWS KMS keys in different AWS Regions that can be used interchangeably – as though you had the same key in multiple Regions.
Multi-Region keys have the same key material and key ID, so data can be encrypted in one AWS Region and decrypted in a different AWS Region without re-encrypting or making a cross-Region call to AWS KMS.
Multi-Region keys never leave AWS KMS unencrypted.
Multi-Region keys are not global and each multi-region key needs to be replicated and managed independently.
KMS Features
Create keys with a unique alias and description
Import your own keys
Control which IAM users and roles can manage keys
Control which IAM users and roles can use keys to encrypt & decrypt data
Choose to have AWS KMS automatically rotate keys on an annual basis
Temporarily disable keys so they cannot be used by anyone
Re-enable disabled keys
Delete keys that you no longer use
Audit use of keys by inspecting logs in AWS CloudTrail
Interface VPC endpoint ensures the communication between the VPC and AWS KMS is conducted entirely within the AWS network.
Interface VPC endpoint connects the VPC directly to KMS without an internet gateway, NAT device, VPN, or Direct Connect connection.
Instances in the VPC do not need public IP addresses to communicate with AWS KMS.
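A hedged sketch of creating such an endpoint; all IDs and the Region embedded in the service name are placeholders.

```python
# Hedged sketch: an interface VPC endpoint for KMS keeps KMS traffic on the
# AWS network, with private DNS so the regular KMS hostname resolves to it.
import boto3

ec2 = boto3.client("ec2")
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                # placeholder
    ServiceName="com.amazonaws.us-east-1.kms",    # Region-specific
    SubnetIds=["subnet-0123456789abcdef0"],       # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],    # placeholder
    PrivateDnsEnabled=True,
)
```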
KMS vs CloudHSM
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
Open to further feedback, discussion and correction.
You are designing a personal document-archiving solution for your global enterprise with thousands of employees. Each employee has potentially gigabytes of data to be backed up in this archiving solution. The solution will be exposed to the employees as an application, where they can just drag and drop their files to the archiving system. Employees can retrieve their archives through a web interface. The corporate network has high bandwidth AWS Direct Connect connectivity to AWS. You have regulatory requirements that all data needs to be encrypted before being uploaded to the cloud. How do you implement this in a highly available and cost-efficient way?
Manage encryption keys on-premises in an encrypted relational database. Set up an on-premises server with sufficient storage to temporarily store files and then upload them to Amazon S3, providing a client-side master key. (Storing temporarily increases cost and is not a highly available option)
Manage encryption keys in a Hardware Security Module (HSM) appliance on-premises with sufficient storage to temporarily store, encrypt, and upload files directly into Amazon Glacier. (Not cost-effective)
Manage encryption keys in AWS Key Management Service (KMS), upload to Amazon Simple Storage Service (S3) with client-side encryption using a KMS customer master key ID, and configure Amazon S3 lifecycle policies to store each object using the Amazon Glacier storage tier. (With CSE-KMS the encryption happens at the client side before the object is uploaded to S3, and KMS is cost-effective as well)
Manage encryption keys in an AWS CloudHSM appliance. Encrypt files prior to uploading on the employee desktop and then upload directly into Amazon Glacier. (Not cost-effective)
An AWS customer is deploying an application that is composed of an Auto Scaling group of EC2 instances. The customer's security policy requires that every outbound connection from these instances to any other service within the customer's Virtual Private Cloud must be authenticated using a unique X.509 certificate that contains the specific instance-id. In addition, an X.509 certificate must be signed by the customer's key management service in order to be trusted for authentication. Which of the following configurations will support these requirements?
Configure an IAM Role that grants access to an Amazon S3 object containing a signed certificate and configure the Auto Scaling group to launch instances with this role. Have the instances bootstrap get the certificate from Amazon S3 upon first boot.
Embed a certificate into the Amazon Machine Image that is used by the Auto Scaling group. Have the launched instances generate a certificate signature request with the instance's assigned instance-id to the key management service for signature.
Configure the Auto Scaling group to send an SNS notification of the launch of a new instance to the trusted key management service. Have the Key management service generate a signed certificate and send it directly to the newly launched instance.
Configure the launched instances to generate a new certificate upon first boot. Have the Key management service poll the AutoScaling group for associated instances and send new instances a certificate signature that contains the specific instance-id.
A company has a customer master key (CMK) with imported key materials. Company policy requires that all encryption keys must be rotated every year. What can be done to implement the above policy?
Enable automatic key rotation annually for the CMK.
Use AWS Command Line interface to create an AWS Lambda function to rotate the existing CMK annually.
Import new key material to the existing CMK and manually rotate the CMK.
Create a new CMK, import new key material to it, and point the key alias to the new CMK.
An organization policy states that all encryption keys must be automatically rotated every 12 months. Which AWS Key Management Service (KMS) key type should be used to meet this requirement? (Select TWO)
AWS managed Customer Master Key (CMK) (Now supports every year. It was every 3 years before.)
Customer managed CMK with AWS generated key material
An Identity Provider can be used to grant external user identities permissions to AWS resources without having to be created within your AWS account.
External user identities can be authenticated either through the organization’s authentication system or through a well-known identity provider such as Amazon, Google, etc.
Identity providers help keep the AWS account secure without the need to distribute or embed long-term security credentials in the application.
To use an IdP, an IAM identity provider entity can be created to establish a trust relationship between the AWS account and the IdP.
IAM supports IdPs that are compatible with OpenID Connect (OIDC) or SAML 2.0 (Security Assertion Markup Language 2.0)
Web Identity Federation without Cognito
Mobile or Web Application needs to be configured with the IdP which gives each application a unique ID or client ID (also called audience)
Create an Identity Provider entity for OIDC compatible IdP in IAM.
Create an IAM role and define the
Trust policy – specify the IdP (like Amazon) as the Principal (the trusted entity), and include a Condition that matches the IdP assigned app ID
Permission policy – specify the permissions the application can assume
Application calls the sign-in interface for the IdP to log in
IdP authenticates the user and returns an authentication token (OAuth access token or OIDC ID token) with information about the user to the application
Application then makes an unsigned call to the STS service with the AssumeRoleWithWebIdentity action to request temporary security credentials.
Application passes the IdP’s authentication token along with the Amazon Resource Name (ARN) for the IAM role created for that IdP.
AWS verifies that the token is trusted and valid and if so, returns temporary security credentials (access key, secret access key, session token, expiry time) to the application that has the permissions for the role that you name in the request.
STS response also includes metadata about the user from the IdP, such as the unique user ID that the IdP associates with the user.
Application makes signed requests to AWS using the Temporary credentials
User ID information from the identity provider can distinguish users in the app for e.g., objects can be put into S3 folders that include the user ID as prefixes or suffixes. This lets you create access control policies that lock the folder so only the user with that ID can access it.
Application can cache the temporary security credentials and refresh them before their expiry accordingly. Temporary credentials, by default, are good for an hour.
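A minimal sketch of the STS exchange described above; the role ARN and token are placeholders, and the call itself is unsigned, so no AWS credentials are needed to make it.

```python
# Hedged sketch: exchange an IdP token for temporary AWS credentials via
# AssumeRoleWithWebIdentity, then use them for signed requests.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

sts = boto3.client("sts", config=Config(signature_version=UNSIGNED))
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/web-identity-role",  # placeholder
    RoleSessionName="app-user-session",
    WebIdentityToken="<OIDC ID token or OAuth access token from the IdP>",
    DurationSeconds=3600,
)
creds = resp["Credentials"]  # access key, secret key, session token, expiry

# Signed requests with the temporary credentials, e.g. against S3.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```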
Amazon Cognito as the identity broker is recommended for almost all web identity federation scenarios
Cognito is easy to use and provides additional capabilities like anonymous (unauthenticated) access
Cognito supports anonymous users, MFA and also helps synchronizing user data across devices and providers
SAML 2.0-based Federation
AWS supports identity federation with SAML 2.0 (Security Assertion Markup Language 2.0), an open standard used by many identity providers (IdPs).
SAML 2.0 based federation feature enables federated single sign-on (SSO), so users can log into the AWS Management Console or call the AWS APIs without having to create an IAM user for everyone in the organization
SAML helps simplify the process of configuring federation with AWS by using the IdP’s service instead of writing custom identity proxy code.
This is useful in organizations that have integrated their identity systems (such as Windows Active Directory or OpenLDAP) with software that can produce SAML assertions to provide information about user identity and permissions (such as Active Directory Federation Services or Shibboleth)
Create a SAML provider entity in AWS using the SAML metadata document provided by the organization's IdP to establish a “trust” between your AWS account and the IdP
SAML metadata document includes the issuer name, a creation date, an expiration date, and keys that AWS can use to validate authentication responses (assertions) from your organization.
Create IAM roles which define
Trust policy with the SAML provider as the principal, which establishes a trust relationship between the organization and AWS
Permission policy establishes what users from the organization are allowed to do in AWS
SAML trust is completed by configuring the Organization’s IdP with information about AWS and the role(s) that you want the federated users to use. This is referred to as configuring relying party trust between your IdP and AWS
Application calls the sign-in interface for the organization's IdP to log in
IdP authenticates the user and generates a SAML authentication response which includes assertions that identify the user and include attributes about the user
Application then makes an unsigned call to the STS service with the AssumeRoleWithSAML action to request temporary security credentials.
Application passes the ARN of the SAML provider, the ARN of the role to assume, the SAML assertion about the current user returned by IdP, and the time for which the credentials should be valid. An optional IAM Policy parameter can be provided to further restrict the permissions to the user
AWS verifies that the SAML assertion is trusted and valid and if so, returns temporary security credentials (access key, secret access key, session token, expiry time) to the application that has the permissions for the role named in the request.
STS response also includes metadata about the user from the IdP, such as the unique user ID that the IdP associates with the user.
Using the Temporary credentials, the application makes signed requests to AWS to access the services
Application can cache the temporary security credentials and refresh them before their expiry accordingly. Temporary credentials, by default, are good for an hour.
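The analogous AssumeRoleWithSAML exchange, sketched with placeholders for both ARNs and the base64-encoded assertion.

```python
# Hedged sketch: exchange a SAML assertion for temporary AWS credentials.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

sts = boto3.client("sts", config=Config(signature_version=UNSIGNED))
resp = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/saml-federated-role",     # placeholder
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/corp-idp",  # placeholder
    SAMLAssertion="<base64-encoded SAML assertion from the IdP>",
    DurationSeconds=3600,
)
creds = resp["Credentials"]  # temporary access key, secret key, session token
```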
AWS SSO with SAML
SAML 2.0 based federation can also be used to grant access to the federated users to the AWS Management console.
This requires the use of the AWS SSO endpoint instead of directly calling the AssumeRoleWithSAML API.
The endpoint calls the API for the user and returns a URL that automatically redirects the user’s browser to the AWS Management Console.
User browses the organization’s portal and selects the option to go to the AWS Management Console.
Portal performs the function of the identity provider (IdP) that handles the exchange of trust between the organization and AWS.
Portal verifies the user’s identity in the organization.
Portal generates a SAML authentication response that includes assertions that identify the user and include attributes about the user.
Portal sends this response to the client browser.
Client browser is redirected to the AWS SSO endpoint and posts the SAML assertion.
AWS SSO endpoint handles the call for the AssumeRoleWithSAML API action on the user’s behalf and requests temporary security credentials from STS and creates a console sign-in URL that uses those credentials.
AWS sends the sign-in URL back to the client as a redirect.
Client browser is redirected to the AWS Management Console. If the SAML authentication response includes attributes that map to multiple IAM roles, the user is first prompted to select the role to use for access to the console.
Custom Identity Broker Federation
If the organization doesn't have a SAML-compatible IdP, a Custom Identity Broker can be used to provide the access.
Custom Identity Broker should perform the following steps
Verify that the user is authenticated by the local identity system.
Call the AWS STS AssumeRole (recommended) or GetFederationToken (valid for 12 hours by default, up to a maximum of 36 hours) APIs to obtain temporary security credentials for the user.
Temporary credentials limit the permissions a user has to the AWS resource
Call an AWS federation endpoint and supply the temporary security credentials to get a sign-in token.
Construct a URL for the console that includes the token.
URL that the federation endpoint provides is valid for 15 minutes after it is created.
Give the URL to the user or invoke the URL on the user’s behalf.
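A hedged sketch of the broker's last few steps against the documented federation endpoint; it assumes the user was already authenticated locally, uses the third-party requests package, and every name and URL below is a placeholder.

```python
# Hedged sketch: mint federated credentials, exchange them for a sign-in
# token, and build a console URL (valid for 15 minutes) for the user.
import json
import urllib.parse
import boto3
import requests

sts = boto3.client("sts")

# Scope the federated user's permissions with an inline policy (placeholder).
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"}],
})
creds = sts.get_federation_token(
    Name="federated-user", Policy=policy, DurationSeconds=43200)["Credentials"]

# Exchange the temporary credentials for a sign-in token.
session = json.dumps({
    "sessionId": creds["AccessKeyId"],
    "sessionKey": creds["SecretAccessKey"],
    "sessionToken": creds["SessionToken"],
})
token = requests.get(
    "https://signin.aws.amazon.com/federation",
    params={"Action": "getSigninToken", "Session": session},
).json()["SigninToken"]

# Build the console sign-in URL and hand it (or redirect) to the user.
login_url = (
    "https://signin.aws.amazon.com/federation?Action=login"
    "&Issuer=" + urllib.parse.quote("https://broker.example.com")
    + "&Destination=" + urllib.parse.quote("https://console.aws.amazon.com/")
    + "&SigninToken=" + token
)
```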
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
Open to further feedback, discussion and correction.
A photo-sharing service stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for the Amazon S3 operations?
SAML-based Identity Federation
Cross-Account Access
AWS IAM users
Web Identity Federation
Which technique can be used to integrate AWS IAM (Identity and Access Management) with an on-premise LDAP (Lightweight Directory Access Protocol) directory service?
Use an IAM policy that references the LDAP account identifiers and the AWS credentials.
Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP
Use AWS Security Token Service from an identity broker to issue short-lived AWS credentials. (Refer Link)
Use IAM roles to automatically rotate the IAM credentials when LDAP credentials are updated.
Use the LDAP credentials to restrict a group of users from launching specific EC2 instance types.
You are designing a photo-sharing mobile app. The application will store all pictures in a single Amazon S3 bucket. Users will upload pictures from their mobile device directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3. You want to configure security to handle potentially millions of users in the most secure manner possible. What should your server-side application do when a new user registers on the photo-sharing mobile application? [PROFESSIONAL]
Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3.
Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service 'AssumeRole' function. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
Record the user's information in Amazon DynamoDB. When the user uses their mobile app, create temporary credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
Create an IAM user. Assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3.
Create an IAM user. Update the bucket policy with appropriate permissions for the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3.
Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don't want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs of your NOC members? [PROFESSIONAL]
Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console.
Use Web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console.
Use your on-premises SAML 2.0-compliant identity provider (IdP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.
Use your on-premises SAML 2.0-compliant identity provider (IDP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console
A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user. Which two approaches can satisfy these objectives? (Choose 2 answers) [PROFESSIONAL]
Develop an identity broker that authenticates against IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket. (Needs to authenticate against LDAP and not IAM)
The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket. (Authenticates with LDAP and calls the AssumeRole)
Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket. (Custom Identity broker implementation, with authentication with LDAP and using federated token)
The application authenticates against LDAP; the application then calls the AWS Identity and Access Management (IAM) Security Token Service to log in to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket. (Can't log in to IAM using LDAP credentials)
The application authenticates against the IAM Security Token Service using the LDAP credentials. The application uses those temporary AWS security credentials to access the appropriate S3 bucket. (Need to authenticate with LDAP)
Company B is launching a new game app for mobile devices. Users will log into the game using their existing social media account to streamline data capture. Company B would like to directly save player data and scoring information from the mobile app to a DynamoDB table named Score Data. When a user saves their game, the progress data will be stored to the Game State S3 bucket. What is the best approach for storing data to DynamoDB and S3? [PROFESSIONAL]
Use an EC2 Instance that is launched with an EC2 role providing access to the Score Data DynamoDB table and the GameState S3 bucket that communicates with the mobile app via web services.
Use temporary security credentials that assume a role providing access to the Score Data DynamoDB table and the Game State S3 bucket using web identity federation
Use Login with Amazon allowing users to sign in with an Amazon account providing the mobile app with access to the Score Data DynamoDB table and the Game State S3 bucket.
Use an IAM user with access credentials assigned a role providing access to the Score Data DynamoDB table and the Game State S3 bucket for distribution with the mobile app.
A user has created a mobile application which makes calls to DynamoDB to fetch certain data. The application is using the DynamoDB SDK and root account access/secret access key to connect to DynamoDB from mobile. Which of the below mentioned statements is true with respect to the best practice for security in this scenario?
User should create a separate IAM user for each mobile application and provide DynamoDB access with it
User should create an IAM role with DynamoDB and EC2 access. Attach the role with EC2 and route all calls from the mobile through EC2
The application should use an IAM role with web identity federation which validates calls to DynamoDB with identity providers, such as Google, Amazon, and Facebook
Create an IAM Role with DynamoDB access and attach it with the mobile application
You are managing the AWS account of a big organization. The organization has more than 1,000 employees and wants to provide access to various services for most of the employees. Which of the below mentioned options is the best possible solution in this case?
The user should create a separate IAM user for each employee and provide access to them as per the policy
The user should create an IAM role and attach STS with the role. The user should attach that role to the EC2 instance and setup AWS authentication on that server
The user should create IAM groups as per the organization’s departments and add each user to the group for better access control
Attach an IAM role with the organization’s authentication service to authorize each user for various AWS services
Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose 3 answers) [PROFESSIONAL]
Setting up a federation proxy or identity provider
Using AWS Security Token Service to generate temporary tokens
Tagging each folder in the bucket
Configuring IAM role
Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket
An AWS customer is deploying a web application that is composed of a front-end running on Amazon EC2 and of confidential data that is stored on Amazon S3. The customer's security policy requires that all access operations to this sensitive data must be authenticated and authorized by a centralized access management system that is operated by a separate security team. In addition, the web application team that owns and administers the EC2 web front-end instances is prohibited from having any ability to access the data in a way that circumvents this centralized access management system. Which of the following configurations will support these requirements? [PROFESSIONAL]
Encrypt the data on Amazon S3 using a CloudHSM that is operated by the separate security team. Configure the web application to integrate with the CloudHSM for decrypting approved data access operations for trusted end-users. (S3 doesn’t integrate directly with CloudHSM, also there is no centralized access management system control)
Configure the web application to authenticate end-users against the centralized access management system. Have the web application provision trusted users STS tokens entitling the download of approved data directly from Amazon S3 (Controlled access and admins cannot access the data as it needs authentication)
Have the separate security team create an IAM role that is entitled to access the data on Amazon S3. Have the web application team provision their instances with this role while denying their IAM users access to the data on Amazon S3. (Web team would have access to the data)
Configure the web application to authenticate end-users against the centralized access management system using SAML. Have the end-users authenticate to IAM using their SAML token and download the approved data directly from S3. (Not the way SAML auth works, and not sure if the centralized access management system is SAML compliant)
What is web identity federation?
Use of an identity provider like Google or Facebook to become an AWS IAM User.
Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials.
Use of AWS IAM User tokens to log in as a Google or Facebook user.
Use of AWS STS Tokens to log in as a Google or Facebook user.
Games-R-Us is launching a new game app for mobile devices. Users will log into the game using their existing Facebook account and the game will record player data and scoring information directly to a DynamoDB table. What is the most secure approach for signing requests to the DynamoDB API?
Create an IAM user with access credentials that are distributed with the mobile app to sign the requests
Distribute the AWS root account access credentials with the mobile app to sign the requests
Request temporary security credentials using web identity federation to sign the requests
Establish cross account access between the mobile app and the DynamoDB table to sign the requests
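A minimal sketch of the web identity federation flow behind the correct option above, using the AWS CLI; the role ARN, session name, and token variable are illustrative assumptions.

```sh
# Exchange an identity-provider token (e.g., from Facebook Login) for
# temporary AWS credentials; this call itself requires no AWS credentials.
aws sts assume-role-with-web-identity \
    --role-arn arn:aws:iam::111122223333:role/GameDynamoDBRole \
    --role-session-name player-42 \
    --web-identity-token "$IDP_TOKEN" \
    --duration-seconds 3600
# The returned AccessKeyId/SecretAccessKey/SessionToken are then used to
# sign the DynamoDB requests, so no long-term credentials ship with the app.
```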
You are building a mobile app for consumers to post cat pictures online. You will be storing the images in AWS S3. You want to run the system very cheaply and simply. Which one of these options allows you to build a photo sharing application without needing to worry about scaling expensive upload processes, authentication/authorization, and so forth?
Build the application out using AWS Cognito and web identity federation to allow users to log in using Facebook or Google Accounts. Once they are logged in, the secret token passed to that user is used to directly access resources on AWS, like AWS S3. (Amazon Cognito is a superset of the functionality provided by web identity federation)
Use JWT or SAML compliant systems to build authorization policies. Users log in with a username and password, and are given a token they can use indefinitely to make calls against the photo infrastructure.
Use AWS API Gateway with a constantly rotating API Key to allow access from the client-side. Construct a custom build of the SDK and include S3 access in it.
Create an AWS OAuth Service Domain and grant public signup and access to the domain. During setup, add at least one major social media site as a trusted Identity Provider for users.
The Marketing Director in your company asked you to create a mobile app that lets users post sightings of good deeds known as random acts of kindness in 80-character summaries. You decided to write the application in JavaScript so that it would run on the broadest range of phones, browsers, and tablets. Your application should provide access to Amazon DynamoDB to store the good deed summaries. Initial testing of a prototype shows that there aren’t large spikes in usage. Which option provides the most cost-effective and scalable architecture for this application? [PROFESSIONAL]
Provide the JavaScript client with temporary credentials from the Security Token Service using a Token Vending Machine (TVM) on an EC2 instance to provide signed credentials mapped to an Amazon Identity and Access Management (IAM) user allowing DynamoDB puts and S3 gets. You serve your mobile application out of an S3 bucket enabled as a web site. Your client updates DynamoDB. (A single EC2 instance is not a scalable architecture)
Register the application with a Web Identity Provider like Amazon, Google, or Facebook, create an IAM role for that provider, and set up permissions for the IAM role to allow S3 gets and DynamoDB puts. You serve your mobile application out of an S3 bucket enabled as a web site. Your client updates DynamoDB. (Can work with JavaScript SDK, is scalable and cost effective)
Provide the JavaScript client with temporary credentials from the Security Token Service using a Token Vending Machine (TVM) to provide signed credentials mapped to an IAM user allowing DynamoDB puts. You serve your mobile application out of Apache EC2 instances that are load-balanced and autoscaled. Your EC2 instances are configured with an IAM role that allows DynamoDB puts. Your server updates DynamoDB. (Is Scalable but Not cost effective)
Register the JavaScript application with a Web Identity Provider like Amazon, Google, or Facebook, create an IAM role for that provider, and set up permissions for the IAM role to allow DynamoDB puts. You serve your mobile application out of Apache EC2 instances that are load-balanced and autoscaled. Your EC2 instances are configured with an IAM role that allows DynamoDB puts. Your server updates DynamoDB. (Is Scalable but Not cost effective)
AWS Snow Family helps physically transport up to exabytes of data into and out of AWS.
AWS Snow Family helps customers that need to run operations in austere, non-data center environments, and in locations where there’s a lack of consistent network connectivity.
Snow Family devices are AWS owned & managed and integrate with AWS security, monitoring, storage management, and computing capabilities.
AWS Snow Family, composed of AWS Snowcone, AWS Snowball, and AWS Snowmobile, offers a number of physical devices and capacity points, most with built-in computing capabilities.
AWS Snowcone
AWS Snowcone is a portable, rugged, and secure device that provides edge computing and data transfer capabilities.
Snowcone can be used to collect, process, and move data to AWS, either offline by shipping the device, or online with AWS DataSync.
AWS Snowcone stores data securely in edge locations, and can run edge computing workloads that use AWS IoT Greengrass or EC2 instances.
Snowcone devices are small and weigh 4.5 lbs. (2.1 kg), so you can carry one in a backpack or fit it in tight spaces for IoT, vehicular, or even drone use cases.
AWS Snowball
AWS Snowball is a data migration and edge computing device that comes in two device options:
Compute Optimized
Snowball Edge Compute Optimized devices provide 52 vCPUs, 42 terabytes of usable block or object storage, and an optional GPU for use cases such as advanced machine learning and full-motion video analysis in disconnected environments.
Storage Optimized
Snowball Edge Storage Optimized devices provide 40 vCPUs of compute capacity coupled with 80 terabytes of usable block or S3-compatible object storage.
It is well-suited for local storage and large-scale data transfer.
Customers can use these two options for data collection, machine learning and processing, and storage in environments with intermittent connectivity (such as manufacturing, industrial, and transportation) or in extremely remote locations (such as military or maritime operations) before shipping the device back to AWS.
Snowball devices may also be rack mounted and clustered together to build larger, temporary installations.
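As a rough sketch of how a Snowball job is ordered, the AWS CLI exposes a create-job operation; all IDs and ARNs below are placeholders, and the exact option values (device type, capacity) may vary by region and CLI version.

```sh
# Order a Snowball Edge Storage Optimized device for an import job.
aws snowball create-job \
    --job-type IMPORT \
    --resources 'S3Resources=[{BucketArn=arn:aws:s3:::example-import-bucket}]' \
    --address-id ADID1234ab12-3eec-4eb3-9be6-9374c10eb51b \
    --role-arn arn:aws:iam::111122223333:role/SnowballImportRole \
    --snowball-type EDGE_S \
    --snowball-capacity-preference T80 \
    --shipping-option SECOND_DAY
```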
AWS Snowmobile
AWS Snowmobile moves up to 100 PB of data in a 45-foot long ruggedized shipping container and is ideal for multi-petabyte or exabyte-scale digital media migrations and data center shutdowns.
A Snowmobile arrives at the customer site and appears as a network-attached data store for more secure, high-speed data transfer.
After data is transferred to Snowmobile, it is driven back to an AWS Region where the data is loaded into S3.
Snowmobile is tamper-resistant, waterproof, and temperature controlled with multiple layers of logical and physical security – including encryption, fire suppression, dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an escort security vehicle during transit.
AWS Snow Family Feature Comparison
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
Open to further feedback, discussion and correction.
A company wants to transfer petabyte-scale data to AWS for analytics but is constrained by its internet connectivity. Which AWS service can help it transfer the data quickly?
S3 enhanced uploader
Snowmobile
Snowball
Direct Connect
A company wants to transfer its video library data, which runs in exabytes, to AWS. Which AWS service can help the company transfer the data?
As the adoption of the public cloud grows, more and more people are finding themselves adopting a hybrid cloud model. This adoption is being driven more by necessity than by any architectural design decision. When you start to look closely at what makes a hybrid cloud, it’s increasingly understandable why more and more people are using AWS in this hybrid model.
Of course, when we think about why you would use a hybrid cloud, it’s the mixture of different computing models that is the most obvious reason. With a hybrid cloud model consisting of on-premises infrastructures, private cloud services, and public cloud offerings, and being able to orchestrate the deployment of workloads and components across these, it becomes increasingly easy to distribute an application in a resilient manner.
AWS services
AWS provides so many different services, from IaaS virtual machines and PaaS relational database offerings through to all different types of storage offerings at extremely low costs. When we start to look at how we can leverage AWS in a hybrid cloud world, it becomes extremely obvious that storage is at the forefront of these decisions, but the key driver behind everything in the cloud is agility. It’s a buzzword that has been around for quite some time, but the ability to spin up and destroy workloads on demand, without having to invest a large amount of capital to deliver a new service or capability to a business, is what is driving this hybrid cloud adoption. Of course, when you start to introduce different platforms into your already existing environment, complexities can arise. Overcoming these complexities is what makes a successful hybrid cloud implementation. Considerations around security, authentication, networking, and connectivity must be looked at. In short, these considerations are the same as when a new data center is implemented; it is simply a different platform.
Veeam, AWS, and the Hybrid Cloud
As more and more businesses adopt this hybrid cloud approach, protecting and migrating workloads across these different platforms becomes extremely complex. This is where Veeam can help businesses deliver on the true promise of a hybrid cloud. Veeam offers multiple products that can be used individually in a modular way, to provide data protection and management of individual resources and services or combined to provide a centralized data management solution.
Let’s look at a real-world scenario. In this example, we have some workloads running out in AWS that need to be migrated to our on-premises data center. Maybe we are facing latency issues, or we have a security requirement for this application to be closer to some services running on-premises. Using Veeam, we can easily protect those workloads and migrate those to another data center.
The diagram above shows how simply this can be carried out. By protecting workloads in AWS using Veeam, you can easily move workloads across different platforms. It doesn’t matter which direction you want to move workloads either: you can just as easily take a virtual machine running on VMware vSphere or Microsoft Hyper-V and migrate it to AWS EC2 as an instance. You can move workloads across multiple platforms or hypervisors extremely easily.
Summary
Introducing and implementing a hybrid cloud with AWS may be daunting, but it needn’t be complex. By taking a considered approach to aspects such as networking, connectivity, and migration, leveraging AWS in a hybrid cloud model with your existing on-premises implementation can provide anyone with an agile, simple, quick, and easy approach to delivering new services and capabilities. Combine that with products from companies like Veeam, and implementing a true hybrid cloud data management solution is extremely simple, providing you with the flexibility of moving workloads across multiple platforms, while implementing a reliable service to the end customers of the business.
Elastic File System – EFS provides simple, fully managed, easy-to-set-up, scalable, serverless, and cost-optimized file storage for use with AWS Cloud and on-premises resources.
can automatically scale from gigabytes to petabytes of data without needing to provision storage.
provides managed NFS (network file system) that can be mounted on and accessed by multiple EC2 in multiple AZs simultaneously.
is highly durable, highly scalable, and highly available.
stores data redundantly across multiple AZs in the same region
grows and shrinks automatically as files are added and removed, so there is no need to manage storage procurement or provisioning.
supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol
provides file system access semantics, such as strong data consistency and file locking
is compatible with all Linux-based AMIs for EC2; it is a POSIX-compliant (~Linux) file system with a standard file API
is a shared POSIX file system for Linux systems and does not work for Windows
offers the ability to encrypt data at rest (using KMS) and in transit.
can be accessed from on-premises using an AWS Direct Connect or AWS VPN connection between the on-premises datacenter and VPC.
can be accessed concurrently from servers in the on-premises data center as well as EC2 instances in the VPC
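For illustration, mounting an EFS file system from an EC2 instance typically looks like the following; the file system ID, region, and mount point are placeholder assumptions.

```sh
sudo mkdir -p /mnt/efs

# Option 1: EFS mount helper (amazon-efs-utils package); -o tls enables
# encryption in transit.
sudo mount -t efs -o tls fs-12345678:/ /mnt/efs

# Option 2: plain NFSv4.1 client with the recommended mount options.
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```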
EFS Storage Classes
Standard storage classes
EFS Standard and Standard-Infrequent Access (Standard-IA), offer multi-AZ resilience and the highest levels of durability and availability.
For file systems using Standard storage classes, a mount target can be created in each Availability Zone in the AWS Region.
Standard
regional storage class for frequently accessed data.
offers the highest levels of availability and durability by storing file system data redundantly across multiple AZs in an AWS Region.
ideal for active file system workloads and you pay only for the file system storage you use per month
Standard-Infrequent Access (Standard-IA)
regional, low-cost storage class that is cost-optimized for infrequently accessed files, i.e., files not accessed every day
offers the highest levels of availability and durability by storing file system data redundantly across multiple AZs in an AWS Region
charges a fee to retrieve files, with a lower price to store them
One Zone storage classes
EFS One Zone and One Zone-Infrequent Access (One Zone-IA) offer additional savings by storing the data in a single AZ.
For file systems using One Zone storage classes, only a single mount target, in the same Availability Zone as the file system, needs to be created.
EFS One Zone
For frequently accessed files stored redundantly within a single AZ in an AWS Region.
EFS One Zone-Infrequent Access (One Zone-IA)
A lower-cost storage class for infrequently accessed files stored redundantly within a single AZ in an AWS Region.
EFS Lifecycle Management
EFS lifecycle management automatically manages cost-effective file storage for the file systems.
When enabled, lifecycle management migrates files that haven’t been accessed for a set period of time to an infrequent access storage class, Standard-IA or One Zone-IA
Lifecycle Management automatically moves the data to the EFS IA storage class according to the lifecycle policy; for example, files can be moved into EFS IA automatically fourteen days after they were last accessed (see the CLI sketch after this list).
Lifecycle management uses an internal timer to track when a file was last accessed and not the POSIX file system attribute that is publicly viewable.
Whenever a file in Standard or One Zone storage is accessed, the lifecycle management timer is reset.
After lifecycle management moves a file into one of the IA storage classes, the file remains there indefinitely if EFS Intelligent-Tiering is not enabled.
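A minimal CLI sketch of enabling lifecycle management; the file system ID is a placeholder.

```sh
# Transition files to the IA storage class after 14 days without access.
aws efs put-lifecycle-configuration \
    --file-system-id fs-12345678 \
    --lifecycle-policies TransitionToIA=AFTER_14_DAYS
```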
EFS Performance Modes
General Purpose (Default)
latency-sensitive use cases
ideal for web serving environments, content management systems, home directories, and general file serving, etc.
Max I/O
can scale to higher levels of aggregate throughput and operations per second.
with a tradeoff of slightly higher latencies for file metadata operations
ideal for highly parallelized applications and workloads, such as big data analysis, media processing, and genomic analysis
is not available for file systems using One Zone storage classes.
EFS Throughput Modes
Provisioned Throughput
throughput of the file system (in MiB/s) can be instantly provisioned independent of the amount of data stored.
Bursting Throughput
throughput on EFS scales as the size of the file system in the EFS Standard or One Zone storage class grows
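As a sketch, switching an existing file system to Provisioned Throughput can be done via the CLI; the file system ID and throughput value are illustrative.

```sh
# Provision 128 MiB/s independent of the amount of data stored.
aws efs update-file-system \
    --file-system-id fs-12345678 \
    --throughput-mode provisioned \
    --provisioned-throughput-in-mibps 128
```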
EFS Security
EFS supports authentication, authorization, and encryption capabilities to help meet security and compliance requirements.
EFS supports two forms of encryption for file systems:
Encryption in transit
Encryption in transit can be enabled when mounting the file system (the EFS mount helper uses TLS).
Encryption at rest.
encrypts all the data and metadata
can be enabled only when creating an EFS file system.
to encrypt an existing unencrypted EFS file system, create a new encrypted EFS file system, and migrate the data using AWS DataSync.
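Since encryption at rest can only be chosen at creation time, a new encrypted file system is created along the lines of the following sketch; the creation token and key alias are illustrative, and omitting --kms-key-id uses the AWS-managed key.

```sh
aws efs create-file-system \
    --creation-token my-encrypted-fs \
    --encrypted \
    --kms-key-id alias/aws/elasticfilesystem \
    --performance-mode generalPurpose
```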
NFS client access to EFS is controlled by both AWS IAM policies and network security policies like security groups.
EFS Access Points
EFS access points are application-specific entry points into an EFS file system that make it easier to manage application access to shared datasets.
Access points can enforce a user identity, including the user’s POSIX groups, for all file system requests that are made through the access point.
Access points can enforce a different root directory for the file system so that clients can only access data in the specified directory or its subdirectories.
AWS IAM policies can be used to enforce that a specific application uses a specific access point.
IAM policies with access points provide secure access to specific datasets for the applications.
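A minimal sketch of creating an access point that enforces a POSIX identity and root directory; all values are illustrative assumptions.

```sh
# Every request through this access point acts as UID/GID 1000 and is
# rooted at /app on the file system.
aws efs create-access-point \
    --file-system-id fs-12345678 \
    --posix-user Uid=1000,Gid=1000 \
    --root-directory 'Path=/app,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0755}'
```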
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
Open to further feedback, discussion and correction.
An administrator runs a highly available application in AWS. A file storage layer is needed that can be shared between instances and that makes it easier to scale the platform. The storage should also be POSIX compliant. Which AWS service can perform this action?
AWS CloudFormation helper scripts can be used to install software and start services on an EC2 instance created as a part of the stack
CloudFormation Helper scripts aren’t executed by default and calls must be included in the template to execute specific helper scripts.
CloudFormation helper scripts are preinstalled on Amazon Linux AMI images.
cfn-init
cfn-init can be used to retrieve and interpret resource metadata, install packages, create files, and start services.
cfn-init helper script reads template metadata from the AWS::CloudFormation::Init key and acts accordingly to:
Fetch and parse metadata from CloudFormation
Install packages
Write files to disk
Enable/disable and start/stop services
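A minimal template sketch showing cfn-init driven by AWS::CloudFormation::Init; the resource name, AMI ID, and package choice are illustrative assumptions.

```yaml
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              httpd: []                      # install Apache
          files:
            /var/www/html/index.html:
              content: "Hello from cfn-init"
              mode: "000644"
          services:
            sysvinit:
              httpd:
                enabled: true
                ensureRunning: true          # start Apache and keep it running
    Properties:
      ImageId: ami-12345678                  # placeholder Amazon Linux AMI
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} \
            --resource WebServer --region ${AWS::Region}
```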
cfn-signal
cfn-signal can be used to signal with a CreationPolicy or WaitCondition, so you can synchronize other resources in the stack when the prerequisite resource or application is ready.
cfn-signal script is used in conjunction with a CreationPolicy or an Auto Scaling group with a WaitOnResourceSignals update policy.
When CloudFormation creates or updates resources with those policies, it suspends work on the stack until the resource receives the requisite number of signals or until the timeout period is exceeded.
For each valid signal that CloudFormation receives, CloudFormation publishes the signal to the stack events so that you can track each one.
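A minimal sketch of pairing cfn-signal with a CreationPolicy, so the stack only reaches CREATE_COMPLETE after the instance reports success; the resource name, AMI ID, and install step are illustrative assumptions.

```yaml
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT10M                       # fail the stack if no signal within 10 minutes
    Properties:
      ImageId: ami-12345678                  # placeholder
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -x
          yum install -y httpd               # illustrative software install
          # Report the exit status of the install back to CloudFormation:
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} \
            --resource AppServer --region ${AWS::Region}
```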
Troubleshooting "Failed to receive X resource signal(s) within the specified duration":
The cfn-signal script isn't installed on one or more instances of the AWS CloudFormation stack.
There are syntax errors or incorrect values in the AWS CloudFormation template
Value of the Timeout property for the CreationPolicy attribute is too low.
Check the logs /var/log/cloud-init.log and /var/log/cfn-init.log
Logs can be checked only if the instance is not terminated; set the "Rollback on failure" option of the AWS CloudFormation stack to No so the instance is preserved.
cfn-signal isn’t sent from the EC2 instance.
Verify the instances have internet connectivity
cfn-get-metadata
cfn-get-metadata helper script helps to retrieve metadata for a resource or path to a specific key.
cfn-get-metadata helper script can be used to fetch a metadata block from CloudFormation and print it to standard out.
You can also print a sub-tree of the metadata block if you specify a key.
However, only top-level keys are supported.
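For illustration, cfn-get-metadata is typically invoked on the instance like this; the stack, resource, and region values are placeholders.

```sh
# Print the AWS::CloudFormation::Init metadata block for a resource;
# --key limits output to a top-level key of the metadata.
/opt/aws/bin/cfn-get-metadata --stack my-stack --resource WebServer \
    --region us-east-1 --key AWS::CloudFormation::Init
```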
cfn-hup
Used to check for updates to metadata and execute custom hooks when changes are detected.
cfn-hup helper is a daemon that detects changes in resource metadata and runs user-specified actions when a change is detected.
This allows you to make configuration updates on the running EC2 instances through the UpdateStack API action.
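A minimal sketch of the two configuration files cfn-hup reads; the stack, resource, and region values are illustrative. With this in place, an UpdateStack that changes the resource's AWS::CloudFormation::Init metadata triggers cfn-init to re-run on the instance.

```ini
# /etc/cfn/cfn-hup.conf -- daemon settings
[main]
stack=my-stack
region=us-east-1
# poll for metadata changes every 5 minutes
interval=5

# /etc/cfn/hooks.d/cfn-auto-reloader.conf -- hook run when metadata changes
[cfn-auto-reloader-hook]
triggers=post.update
path=Resources.WebServer.Metadata.AWS::CloudFormation::Init
action=/opt/aws/bin/cfn-init -v --stack my-stack --resource WebServer --region us-east-1
runas=root
```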
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
Open to further feedback, discussion and correction.
Which of these is not a CloudFormation Helper Script?
You are designing a CloudFormation template to create a set of EC2 Instances and install an application package. You need to ensure that the stack is only successful if the software package gets installed successfully. Which of the following would assist in achieving this requirement?
Use the Change sets feature
Use CloudWatch logs to signal the completion
Use CloudTrail to signal the completion
Use the cfn-signal helper script
You are in charge of designing a CloudFormation template, which deploys a LAMP stack. After deploying a stack, you see that the status of the stack is showing as CREATE_COMPLETE, but the apache server is still not up and running and is experiencing issues while starting up. You want to ensure that the stack creation only shows the status of CREATE_COMPLETE after all resources defined in the stack are up and running. How can you achieve this? (Select TWO)
Define a stack policy, which defines that all underlying resources should be up and running before showing a status of CREATE_COMPLETE.
Use lifecycle hooks to mark the completion of the creation and configuration of the underlying resource.
Use the CreationPolicy to ensure it is associated with the EC2 Instance resource.
Use the cfn helper scripts to signal once the resource configuration is complete.
AWS Partnerships That Have Upgraded the Way Businesses Collaborate
Through the AWS Partner Network (APN), brands have been able to reach customers and help improve businesses across the globe. AWS channel chief Doug Yeum explains that the benefits to brands in the APN can be tremendous. In his keynote presentation at the AWS re:Invent 2020 conference, he said, “Companies are looking for AWS partners who can deliver end-to-end solutions, develop cloud-native applications in addition to managing the cloud infrastructure, and have deep specializations across industries, use cases, specific workloads like SAP and AWS services. Partners who can meet these customer requirements and truly understand that speed matters — there will be lots of opportunities.”
Indeed, we’ve seen some great AWS partnerships present innovative ways to solve every business need imaginable. Here are some of the most memorable AWS partnerships to have upgraded the way businesses collaborate:
AliCloud
Having been an AWS Premier Consulting Partner and AWS Managed Services Provider since 2008, AliCloud is at the forefront of AWS partnerships. With the help of AWS resources like training, marketing, and solutions development assistance, the company aims to improve AWS adoption by onboarding new startup and enterprise customers. AliCloud hopes to use its many years of experience to introduce new customers to the wonders of cloud services.
Altium
Altium is a leading provider of electronics design software that aims to streamline the process for both beginning and experienced engineers. In an effort to make the development and realization of printed circuit boards more streamlined, they developed Altium 365, a completely cloud-based design platform that creates seamless collaboration points across the electronics development process. In 2020, Altium selected AWS to host Altium 365, making it more accessible to individual and enterprise clients alike.
Deloitte
Professional services network Deloitte is known for providing support to companies across over 150 countries worldwide, and through its collaboration with AWS, it has developed Smart Factory Fabric. With it, they now empower smart factory transformations at both the plant and enterprise level. The Smart Factory Fabric is a pre-configured suite of cloud-based applications that help industrial enterprises quickly transition to the digital world, improving operational performance and reducing costs.
Infostretch
Offering three services on AWS, Infostretch aims to enable enterprise clients to accelerate their digital initiatives through DevSecOps, Internet of Things (IoT) offerings, data engineering, and data analytics services, among others. Through their “Go, Be, Evolve” digital approach, they assist clients in the digital maturity journey from strategy, planning, migration, and execution, all the way to automation.
Lemongrass
Specializing in providing SAP solutions for enterprises on AWS, Lemongrass became an AWS partner because of the latter’s position as a leader in cloud infrastructure. Eamonn O’Neill, director of Lemongrass Consulting, has stated that the infrastructure of AWS was perfect for their services, as it’s “incredibly resilient, and incredibly well built.”
What’s Next!
But what’s next for AWS? Following amazing partnerships that seek to revolutionize the way we do business, Amazon is rolling out some great features, including the mainstreaming of Graviton2 processors, which promise to streamline cloud computing. AWS is also constantly working to re-evaluate its systems to ensure better cost-savings management for customers. Improvements are also being made to Aurora Serverless, enabling it to support customers who want to continue scaling up.
AWS can be a game-changer for many businesses. With a robust operations hub like AWS Systems Manager, businesses have great control over operational tasks, troubleshooting, and resource and application management. With AWS announcing that it adds 50 new partners to the APN daily, the network has become a great resource for end-to-end solutions and cloud-native application development.