Instance metadata and user data can be used for self-configuration, allowing EC2 instances to answer the questions "Who am I?" and "What should I do?"
Instance metadata and user data can be accessed from within the instance itself
Data is not protected by authentication or cryptographic methods. Anyone who can access the instance can view its metadata, so sensitive data, such as passwords, should not be stored as user data.
Both metadata and user data are available from the link-local IP address 169.254.169.254, with the latest as well as previous versions accessible
Metadata and user data can be retrieved using a simple curl or HTTP GET request, and these requests are not billed
Instance Metadata
Instance metadata is data about the instance and helps answer the "Who am I?" question
It is divided into two categories
Instance metadata
includes metadata about the instance such as instance id, AMI id, hostname, IP address, role, etc
Can be accessed from http://169.254.169.254/latest/meta-data/
Dynamic data
is generated when the instances are launched such as instance identity documents, instance monitoring, etc
Can be accessed from http://169.254.169.254/latest/dynamic/
can be used for managing and configuring running instances
allows access to the user data that was specified when launching the instance
Instance Metadata Access Methods
Instance metadata can be accessed from a running instance using one of the following methods:
Instance Metadata Service Version 2 (IMDSv2) – a session-oriented method
Instance Metadata Service Version 1 (IMDSv1) – a request/response method
By default, either IMDSv1 or IMDSv2, or both can be used.
Instance metadata service distinguishes between IMDSv1 and IMDSv2 requests based on whether, for any given request, the PUT token request or the token header, which are unique to IMDSv2, is present in that request.
Instance metadata service can be configured on each instance so that local code or users must use IMDSv2. When IMDSv2 is enforced, IMDSv1 no longer works.
IMDSv2
IMDSv2 uses session-oriented requests.
With session-oriented requests, a session token that defines the session duration is created; the duration can be a minimum of one second and a maximum of six hours.
During the specified duration, the same session token can be used for subsequent requests.
After the specified duration expires, a new session token to use for future requests must be created.
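The session flow above can be sketched in Python using only the standard library. The token TTL and header names follow the IMDSv2 flow described above; these calls only succeed from within an EC2 instance, so the example usage is left commented.

```python
import urllib.request

# IMDSv2 endpoints on the link-local address documented above
BASE_URL = "http://169.254.169.254"
TOKEN_URL = BASE_URL + "/latest/api/token"
TOKEN_TTL_HEADER = "X-aws-ec2-metadata-token-ttl-seconds"
TOKEN_HEADER = "X-aws-ec2-metadata-token"

def get_session_token(ttl_seconds=21600):
    """PUT request creates a session token; TTL may be 1 second to 6 hours (21600 s)."""
    req = urllib.request.Request(
        TOKEN_URL, method="PUT", headers={TOKEN_TTL_HEADER: str(ttl_seconds)}
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def get_metadata(path, token):
    """GET request presents the session token to read a metadata path."""
    req = urllib.request.Request(
        BASE_URL + "/latest/meta-data/" + path, headers={TOKEN_HEADER: token}
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

# Example (only works from within an EC2 instance):
#   token = get_session_token()
#   print(get_metadata("instance-id", token))
```

The same token can be reused for any number of metadata requests until its TTL expires, after which a fresh PUT is needed.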
User Data
User data can be used for bootstrapping (launching commands when the machine starts) an EC2 instance and helps answer the "What should I do?" question
is supplied when launching an EC2 instance and executed at boot time
can be in the form of parameters or a user-defined script executed when the instance is launched, e.g. to perform software patch updates or to load and update the application from an S3 bucket
can be used to build more generic AMIs, which can then be configured at launch time dynamically
can be retrieved from http://169.254.169.254/latest/user-data
By default, user data scripts and cloud-init directives run only during the first boot cycle when an EC2 instance is launched.
If you stop an instance, modify the user data, and start the instance, the new user data is not executed automatically.
is limited to 16 KB. This limit applies to the data in raw form, not base64-encoded form.
must be base64-encoded before being submitted to the API. EC2 command line tools perform the base64 encoding. The data is decoded before being presented to the instance.
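The size and encoding rules above can be sketched as a small helper: the 16 KB limit applies to the raw form, and the round-trip shows what the instance sees after decoding. The script content is illustrative.

```python
import base64

MAX_USER_DATA_BYTES = 16 * 1024  # the 16 KB limit applies to the raw (decoded) form

def encode_user_data(script: str) -> str:
    """Base64-encode a user data script for the EC2 API, enforcing the raw-size limit."""
    raw = script.encode("utf-8")
    if len(raw) > MAX_USER_DATA_BYTES:
        raise ValueError(f"user data is {len(raw)} bytes; limit is {MAX_USER_DATA_BYTES}")
    return base64.b64encode(raw).decode("ascii")

# A user data script must begin with #! for Cloud-Init to execute it on first boot.
script = "#!/bin/bash\nyum update -y\n"
encoded = encode_user_data(script)
# The data is decoded before being presented to the instance.
assert base64.b64decode(encoded).decode("utf-8") == script
```

The CLI and SDKs perform this encoding for you; the helper only makes the raw-versus-encoded distinction explicit.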
Cloud-Init & EC2Config
Cloud-Init and EC2Config provide the ability to parse the user data script on the instance and run the instructions
Cloud-Init
Amazon Linux AMI supports Cloud-Init, which is an open source application built by Canonical.
is installed on Amazon Linux, Ubuntu and RHEL AMIs
enables using the EC2 UserData parameter to specify actions to run on the instance at boot time
User data is executed on first boot using Cloud-Init, if the user data begins with #!
EC2Config
EC2Config is installed on Windows Server AMIs
User data is executed on first boot (EC2Config parses and runs the instructions) if the user data begins with <script> or <powershell>
EC2Config service is started when the instance is booted. It performs tasks during initial instance startup (once) and each time you stop and start the instance.
It can also perform tasks on demand. Some of these tasks are enabled automatically, while others must be enabled manually.
uses settings files to control its operation
service runs Sysprep, a Microsoft tool that enables the creation of a customized Windows AMI that can be reused.
When EC2Config calls Sysprep, it uses the settings files in EC2ConfigService\Settings to determine which operations to perform.
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
Open to further feedback, discussion and correction.
How can software determine the public and private IP addresses of the Amazon EC2 instance that it is running on?
Query the local instance metadata
Query the appropriate Amazon CloudWatch metric.
Query the local instance userdata.
Use ipconfig or ifconfig command.
The base URI for all requests for instance metadata is ___________
http://254.169.169.254/latest/
http://169.169.254.254/latest/
http://127.0.0.1/latest/
http://169.254.169.254/latest/
Which Amazon Elastic Compute Cloud feature can you query from within the instance to access instance properties?
Instance user data
Resource tags
Instance metadata
Amazon Machine Image
You need to pass a custom script to new Amazon Linux instances created in your Auto Scaling group. Which feature allows you to accomplish this?
User data
EC2Config service
IAM roles
AWS Config
By default, when an EBS volume is attached to a Windows instance, it may show up as any drive letter on the instance. You can change the settings of the _____ Service to set the drive letters of the EBS volumes per your specifications.
EBSConfig Service
AMIConfig Service
EC2Config Service
Ec2-AMIConfig Service
I recently re-certified AWS Certified Security – Specialty (SCS-C01) after first clearing the same in 2019; the format and domains are pretty much the same, however the exam has been enhanced to cover all the latest services.
The AWS Certified Security – Specialty (SCS-C01) exam focuses on the AWS Security and Compliance concepts. It basically validates
An understanding of specialized data classifications and AWS data protection mechanisms.
An understanding of data-encryption methods and AWS mechanisms to implement them.
An understanding of secure Internet protocols and AWS mechanisms to implement them.
A working knowledge of AWS security services and features of services to provide a secure production environment.
Competency gained from two or more years of production deployment experience using AWS security services and features.
The ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements.
An understanding of security operations and risks.
Specialty exams are tough, lengthy, and tiresome. Most of the questions and answers options have a lot of prose and a lot of reading that needs to be done, so be sure you are prepared and manage your time well.
SCS-C01 exam has 65 questions to be solved in 170 minutes which gives you roughly 2 1/2 minutes to attempt each question.
SCS-C01 exam includes two types of questions, multiple-choice and multiple-response.
SCS-C01 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 750.
Specialty exams currently cost $300 + tax.
You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
As always, mark the questions for review and move on and come back to them after you are done with all.
As always, having a rough architecture or mental picture of the setup helps focus on the areas that you need to improve. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check the difference area and that would help you reach the right answer or at least have a 50% chance of getting it right.
AWS exams can be taken either at a test center or online; I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
AWS Certified Security – Specialty (SCS-C01) exam focuses a lot on Security & Compliance concepts involving Data Encryption at rest or in transit, Data protection, Auditing, Compliance and regulatory requirements, and automated remediation.
IAM Roles grant services and users temporary access to AWS services.
IAM Role can be used to give cross-account access and usually involves creating a role within the trusting account with a trust and permission policy and granting the user in the trusted account permissions to assume the trusting account role.
Identity Providers & Federation to grant external user identity (SAML or Open ID compatible IdPs) permissions to AWS resources without having to be created within the AWS account.
IAM Policies help define who has access & what actions can they perform.
Key policies are the primary way to control access to KMS keys. Unless the key policy explicitly allows it, you cannot use IAM policies to allow access to a KMS key.
KMS keys are regional; however, KMS supports multi-region keys, which are KMS keys in different AWS Regions that can be used interchangeably, as though you had the same key in multiple Regions.
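As a sketch of how a key policy delegates to IAM: a default-style key policy contains a statement for the account root principal, and that statement is what allows IAM policies in the account to grant access to the key. The account ID below is a placeholder.

```python
import json

ACCOUNT_ID = "111122223333"  # placeholder account ID

# Default-style key policy: without the root-principal statement, IAM policies
# alone cannot grant access to the KMS key.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnableIAMPolicies",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:root"},
            "Action": "kms:*",
            "Resource": "*",
        }
    ],
}

# The policy is passed to the KMS API as a JSON string, e.g. kms.create_key(Policy=...)
policy_json = json.dumps(key_policy)
```

Removing or narrowing that statement locks IAM policies out of the key entirely, which is why key policies are the primary access control for KMS.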
Amazon GuardDuty is a threat detection service that continuously monitors the AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
supports CloudTrail S3 data events and management event logs, DNS logs, EKS audit logs, and VPC flow logs.
Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in S3.
AWS Artifact is a central resource for compliance-related information that provides on-demand access to AWS’ security and compliance reports and select online agreements
AWS WAF protects from common attack techniques like SQL injection and XSS; conditions can be based on IP addresses, HTTP headers, HTTP body, and URI strings.
integrates with CloudFront, ALB, and API Gateway.
supports Web ACLs and can block traffic based on IPs, Rate limits, and specific countries as well
allows IP match set rule to allow/deny specific IP addresses and rate-based rule to limit the number of requests.
logs can be sent to the CloudWatch Logs log group, an S3 bucket, or Kinesis Data Firehose.
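A rate-based rule in the WAFv2 API takes the shape below; the rule name, priority, and limit are illustrative.

```python
# A rate-based rule: block any source IP that exceeds the request limit in a
# 5-minute window. Name, priority, and limit are illustrative values.
rate_limit_rule = {
    "Name": "rate-limit-per-ip",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,             # requests per 5-minute window, per IP
            "AggregateKeyType": "IP",  # aggregate request counts by source IP
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitPerIP",
    },
}
# This dict would go in the Rules list of a wafv2 create_web_acl / update_web_acl call.
```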
AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
AWS Network Firewall is a stateful, fully managed, network firewall and intrusion detection and prevention service (IDS/IPS) for VPCs.
AWS Resource Access Manager helps you securely share your resources across AWS accounts, within your organization or organizational units (OUs), and with IAM roles and users for supported resource types.
AWS Signer is a fully managed code-signing service to ensure the trust and integrity of your code.
AWS Audit Manager to map your compliance requirements to AWS usage data with prebuilt and custom frameworks and automated evidence collection.
Firewall Manager helps centrally configure and manage firewall rules across the accounts and applications in AWS Organizations which includes a variety of protections, including WAF, Shield Advanced, VPC security groups, Network Firewall, and Route 53 Resolver DNS Firewall.
CloudFront Origin Shield helps improve the cache hit ratio and reduce the load on the origin.
Requests from other regional caches hit Origin Shield rather than the origin.
It should be placed at the regional cache layer and not in the edge cache.
It should be deployed in the region closest to the origin server.
CloudFront provides Encryption at Rest
uses SSDs which are encrypted for edge location points of presence (POPs), and encrypted EBS volumes for Regional Edge Caches (RECs).
Function code and configuration are always stored in an encrypted format on the encrypted SSDs on the edge location POPs, and in other storage locations used by CloudFront.
Restricting access to content
Configure HTTPS connections
Use signed URLs or cookies to restrict access for selected users
Restrict access to content in S3 buckets using origin access identity – OAI, to prevent users from using the direct URL of the file.
Set up field-level encryption for specific content fields
Use AWS WAF web ACLs to create a web access control list (web ACL) to restrict access to your content.
Use Geo-restriction, also known as geoblocking, to prevent users in specific geographic locations from accessing content served through a CloudFront distribution.
Route 53 is a highly available and scalable DNS web service.
Resolver Query logging
logs DNS queries that originate in specified VPCs, from on-premises resources that use an inbound Resolver endpoint, or through an outbound Resolver endpoint, as well as the responses to those queries
can be logged to CloudWatch logs, S3, and Kinesis Data Firehose
Route 53 DNSSEC secures DNS traffic and helps protect a domain from DNS spoofing, a man-in-the-middle attack.
AWS Config rules can be used to alert for any changes and Config can be used to check the history of changes. AWS Config can also help check approved AMIs compliance
allows you to remediate noncompliant resources using AWS Systems Manager Automation documents.
Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management. Does not support secrets rotation. Use Secrets Manager instead
Systems Manager Patch Manager helps select and deploy the operating system and software patches automatically across large groups of EC2 or on-premises instances
Systems Manager Run Command provides safe, secure remote management of your instances at scale without logging into the servers, replacing the need for bastion hosts, SSH, or remote PowerShell
Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
Deletion Policy can prevent deletion of, retain, or back up resources like RDS instances and EBS Volumes
Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update. Stack Policy only applies for Stack updates and not stack deletion.
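A minimal sketch of both protections, expressed as Python dicts (resource names and property values are illustrative):

```python
# Template fragment: DeletionPolicy "Snapshot" takes a final snapshot when the
# resource is removed; "Retain" would keep the resource itself.
template = {
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",
            "Properties": {  # illustrative values
                "DBInstanceClass": "db.t3.micro",
                "AllocatedStorage": 20,
                "Engine": "mysql",
            },
        }
    }
}

# Stack policy: deny all update actions on the database resource. This protects
# against stack *updates* only, not stack deletion.
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
        {"Effect": "Deny", "Action": "Update:*", "Principal": "*",
         "Resource": "LogicalResourceId/AppDatabase"},
    ]
}
```

The deny statement overrides the blanket allow for that one logical resource, which is the usual pattern for shielding a stateful resource during updates.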
S3 Object Lock helps to store objects using a WORM model and can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
S3 Block Public Access provides controls across an entire AWS Account or at the individual S3 bucket level to ensure that objects never have public access, now and in the future.
S3 Access Points simplify data access for any AWS service or customer application that stores data in S3.
S3 Versioning with MFA Delete can be enabled on a bucket to ensure that data in the bucket cannot be accidentally overwritten or deleted.
S3 Access Analyzer monitors the access policies, ensuring that the policies provide only the intended access to your S3 resources.
RDS is a web service that makes it easier to set up, operate, and scale a relational database in the cloud.
supports the same encryption at rest methods as EBS
does not support enabling encryption after creation; you need to create a snapshot, copy the snapshot to an encrypted snapshot, and restore it as an encrypted DB
Compute
EC2 accesses AWS services using an IAM Role, Lambda using the execution role, and ECS using the task role.
Simple Notification Service – SNS is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients.
SNS provides the ability to create a Topic which is a logical access point and communication channel.
Each topic has a unique name that identifies the SNS endpoint for publishers to post messages and subscribers to register for notifications.
Producers communicate asynchronously with consumers (subscribers) by producing and sending a message to a topic.
Producers push messages to a topic they created or have access to, and SNS matches the topic to a list of subscribers who have subscribed to that topic and delivers the message to each of those subscribers.
Subscribers receive all messages published to the topics to which they subscribe, and all subscribers to a topic receive the same messages.
Subscribers (i.e., web servers, email addresses, SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (i.e., SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic.
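The publish/subscribe flow above can be sketched with boto3 (assumed available and configured; the topic name, subject, and message contents are illustrative):

```python
# Sketch of SNS publish with boto3; every subscriber to the topic receives the message.

def build_message_attributes(**attrs):
    """Build the SNS MessageAttributes structure from plain string key/values."""
    return {k: {"DataType": "String", "StringValue": v} for k, v in attrs.items()}

def publish_order_event(sns, topic_arn, order_id):
    """Publish one message to the topic; SNS fans it out to all subscribers."""
    return sns.publish(
        TopicArn=topic_arn,
        Subject="OrderShipped",
        Message=f"Order {order_id} has shipped",
        MessageAttributes=build_message_attributes(event_type="order_shipped"),
    )

# Usage (requires AWS credentials):
#   import boto3
#   sns = boto3.client("sns")
#   topic_arn = sns.create_topic(Name="order-events")["TopicArn"]  # idempotent by name
#   publish_order_event(sns, topic_arn, "12345")
```

Message attributes let subscribers apply filter policies, so a single topic can serve subscribers interested in different event types.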
Accessing SNS
AWS Management Console
The AWS Management Console is the web-based user interface that can be used to manage SNS
AWS Command-line Interface (CLI)
Provides commands for a broad set of AWS products, and is supported on Windows, Mac, and Linux.
AWS Tools for Windows Powershell
Provides commands for a broad set of AWS products for those who script in the PowerShell environment
AWS SNS Query API
The Query API accepts HTTP or HTTPS requests that use the HTTP verbs GET or POST and a query parameter named Action
AWS SDK libraries
AWS provides libraries in various languages which provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses
SNS Supported Transport Protocols
HTTP, HTTPS – Subscribers specify a URL as part of the subscription registration; notifications will be delivered through an HTTP POST to the specified URL.
Email, Email-JSON – Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
SQS – Users can specify an SQS queue as the endpoint; SNS will enqueue a notification message to the specified queue (which subscribers can then process using SQS APIs such as ReceiveMessage, DeleteMessage, etc.)
SMS – Messages are sent to registered phone numbers as SMS text messages
SNS Supported Endpoints
Email Notifications
SNS provides the ability to send Email notifications
Mobile Push Notifications
SNS provides an ability to send push notification messages directly to apps on mobile devices. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts
Supported push notification services
Amazon Device Messaging (ADM)
Apple Push Notification Service (APNS)
Google Cloud Messaging (GCM)
Windows Push Notification Service (WNS) for Windows 8+ and Windows Phone 8.1+
Microsoft Push Notification Service (MPNS) for Windows Phone 7+
Baidu Cloud Push for Android devices in China
SQS Queues
SNS with SQS provides the ability for messages to be delivered to applications that require immediate notification of an event, and also persist in an SQS queue for other applications to process at a later time
SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.
SQS can be used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components, without requiring each component to be concurrently available.
SMS Notifications
SNS provides the ability to send and receive Short Message Service (SMS) notifications to SMS-enabled mobile phones and smart phones
HTTP/HTTPS Endpoints
SNS provides the ability to send notification messages to one or more HTTP or HTTPS endpoints. When you subscribe an endpoint to a topic, you can publish a notification to the topic and Amazon SNS sends an HTTP POST request delivering the contents of the notification to the subscribed endpoint
Lambda
SNS and Lambda are integrated so Lambda functions can be invoked with SNS notifications.
When a message is published to an SNS topic that has a Lambda function subscribed to it, the Lambda function is invoked with the payload of the published message
Kinesis Data Firehose
Deliver events to delivery streams for archiving and analysis purposes.
Through delivery streams, events can be delivered to AWS destinations like S3, Redshift, and OpenSearch Service, or to third-party destinations such as Datadog, New Relic, MongoDB, and Splunk.
AWS Certification Exam Practice Questions
Which of the following notification endpoints or clients does Amazon Simple Notification Service support? Choose 2 answers
Email
CloudFront distribution
File Transfer Protocol
Short Message Service
Simple Network Management Protocol
What happens when you create a topic on Amazon SNS?
The topic is created, and it has the name you specified for it.
An ARN (Amazon Resource Name) is created
You can create a topic on Amazon SQS, not on Amazon SNS.
This question doesn’t make sense.
A user has deployed an application on his private cloud. The user is using his own monitoring tool. He wants to configure it so that whenever there is an error, the monitoring tool notifies him via SMS. Which of the below mentioned AWS services will help in this scenario?
None, because the user infrastructure is in the private cloud.
AWS SNS
AWS SES
AWS SMS
A user wants to configure it so that whenever the CPU utilization of the AWS EC2 instance is above 90%, the red light of his bedroom turns on. Which of the below mentioned AWS services is helpful for this purpose?
AWS CloudWatch + AWS SES
AWS CloudWatch + AWS SNS
It is not possible to configure the light with the AWS infrastructure services
AWS CloudWatch and a dedicated software turning on the light
A user is trying to understand AWS SNS. To which of the below mentioned end points is SNS unable to send a notification?
Email JSON
HTTP
AWS SQS
AWS SES
A user is running a webserver on EC2. The user wants to receive the SMS when the EC2 instance utilization is above the threshold limit. Which AWS services should the user configure in this case?
AWS CloudWatch + AWS SES
AWS CloudWatch + AWS SNS
AWS CloudWatch + AWS SQS
AWS EC2 + AWS CloudWatch
A user is planning to host a mobile game on EC2 which sends notifications to active users on either high score or the addition of new features. The user should get this notification when he is online on his mobile device. Which of the below mentioned AWS services can help achieve this functionality?
AWS Simple Notification Service
AWS Simple Queue Service
AWS Mobile Communication Service
AWS Simple Email Service
You are providing AWS consulting service for a company developing a new mobile application that will be leveraging Amazon SNS push for push notifications. In order to send direct notification messages to individual devices, each device registration identifier or token needs to be registered with SNS; however, the developers are not sure of the best way to do this. You advise them to:
Bulk upload the device tokens contained in a CSV file via the AWS Management Console
Let the push notification service (e.g. Amazon Device Messaging) handle the registration
Implement a token vending service to handle the registration
Call the CreatePlatformEndpoint API function to register multiple device tokens. (Refer documentation)
A company is running a batch analysis every hour on their main transactional DB running on an RDS MySQL instance to populate their central Data Warehouse running on Redshift. During the execution of the batch their transactional applications are very slow. When the batch completes they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required The on-premises system cannot be modified because is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible?
Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard
Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
Which of the following are valid SNS delivery transports? Choose 2 answers.
HTTP
UDP
SMS
DynamoDB
Named Pipes
What is the format of structured notification messages sent by Amazon SNS?
An XML object containing MessageId, UnsubscribeURL, Subject, Message and other values
A JSON object containing MessageId, DuplicateFlag, Message and other values
An XML object containing MessageId, DuplicateFlag, Message and other values
A JSON object containing MessageId, UnsubscribeURL, Subject, Message and other values
Which of the following are valid arguments for an SNS Publish request? Choose 3 answers.
AWS Certified Solutions Architect – Professional (SAP-C02) Exam Learning Path
AWS Certified Solutions Architect – Professional (SAP-C02) exam is the updated version of the previous Solutions Architect – Professional SAP-C01 exam and was released in Nov. 2022.
SAP-C02 is quite similar to SAP-C01 but has included some new services.
AWS Certified Solutions Architect – Professional (SAP-C02) Exam Content
AWS Certified Solutions Architect – Professional (SAP-C02) exam validates the ability to complete tasks within the scope of the AWS Well-Architected Framework
AWS Certified Solutions Architect – Professional (SAP-C02) Exam Summary
Professional exams are tough, lengthy, and tiresome. Most of the questions and answers options have a lot of prose and a lot of reading that needs to be done, so be sure you are prepared and manage your time well.
Each solution involves multiple AWS services.
AWS Certified Solutions Architect – Professional (SAP-C02) exam has 65 questions to be solved in 170 minutes.
SAP-C02 exam includes two types of questions, multiple-choice and multiple-response.
SAP-C02 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 750.
Each question mainly touches multiple AWS services.
Professional exams currently cost $300 + tax.
You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
As always, mark the questions for review and move on and come back to them after you are done with all.
As always, having a rough architecture or mental picture of the setup helps focus on the areas that you need to improve. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check the difference area and that would help you reach the right answer or at least have a 50% chance of getting it right.
AWS exams can be taken either at a test center or online; I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
AWS Certified Solutions Architect – Professional (SAP-C02) Exam Topics
AWS Certified Solutions Architect – Professional (SAP-C02) focuses a lot on concepts and services related to Architecture & Design, Scalability, High Availability, Disaster Recovery, Migration, Security, and Cost Control.
S3 Access Logs enable tracking access requests to an S3 bucket.
S3 supports the S3 Select feature to query selective data from a single object.
S3 Event Notification enables notifications to be triggered when certain events happen in the bucket and support SNS, SQS, and Lambda as the destination.
File Gateway provides a file interface into S3 and allows storing and retrieving of objects in S3 using industry-standard file protocols such as NFS and SMB.
AWS DMS enables quick and secure data migration with minimal to zero downtime
supports Full and Change Data Capture – CDC migration to support continuous replication for zero downtime migration.
supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations (using SCT) between different database platforms, such as Oracle or Microsoft SQL Server to Aurora
The Snow Family is ideal for one-time huge data transfers, usually for use cases with limited bandwidth from on-premises to AWS.
Understand use cases for data transfer using VPN (quick, slow, uses the Internet), Direct Connect (time to set up, private, recurring transfers), Snow Family (moderate time, private, one-time huge data transfers)
Agent-based discovery can be used for Hyper-V and physical servers
Agentless discovery can be used for VMware but does not track processes
AWS Migration Hub provides a central location to collect server and application inventory data for the assessment, planning, and tracking of migrations to AWS and also helps accelerate application modernization following migration.
VPN can provide a cost-effective, quick failover for Direct Connect.
VPN over Direct Connect provides a secure dedicated connection and requires a public virtual interface.
Direct Connect Gateway is a global network device that helps establish connectivity that spans VPCs spread across multiple AWS Regions with a single Direct Connect connection.
Secrets Manager supports random generation and automatic rotation of secrets, which is not provided by SSM Parameter Store.
Costs more than SSM Parameter Store.
Amazon Macie is a data security and data privacy service that uses ML and pattern matching to discover and protect sensitive data in S3.
AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
Lambda running in a VPC requires a NAT Gateway to communicate with external public services
Lambda CPU can be increased only by increasing the memory allocation.
Defining reserved concurrency limits helps reduce the impact on other functions' concurrency
Lambda Alias now supports canary deployments
Lambda supports Docker containers
Reserved Concurrency guarantees the maximum number of concurrent instances for the function
Provisioned Concurrency provides greater control over the performance of serverless applications and helps keep functions initialized and hyper-ready to respond in double-digit milliseconds.
Step Functions helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.
For least privilege, the role should be assigned to the Task.
awsvpc network mode gives ECS tasks the same networking properties as EC2 instances.
Disaster Recovery
The Disaster Recovery whitepaper, although outdated, is still relevant; make sure you understand the differences and implementation of each type, esp. Pilot Light and Warm Standby, w.r.t. RTO and RPO.
Compute
Make components available in an alternate region:
Backup and Restore using either snapshots or AMIs that can be restored.
Use minimal low-scale capacity running, which can be scaled up once the failover happens.
Use fully running compute in an active-active configuration with health checks.
Use CloudFormation to create and scale infra as needed.
Storage
S3 and EFS support cross-region replication
DynamoDB supports Global tables for multi-master, active-active inter-region storage needs.
RDS supports cross-region read replicas which can be promoted to master in case of a disaster. This can be automated using Route 53, CloudWatch, and Lambda functions.
Network
Route 53 failover routing with health checks to failover across regions.
CloudFront Origin Groups support primary and secondary endpoints with failover.
AWS Systems Manager and its various capabilities like Parameter Store, Session Manager, and Patch Manager
Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management. It does not support secrets rotation; use Secrets Manager instead.
Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
Patch Manager helps automate the process of patching managed instances with both security-related and other types of updates.
CloudFormation helps handle Disaster Recovery by automating the infra to replicate the environment across regions.
DeletionPolicy helps prevent data loss by retaining or backing up resources like RDS instances and EBS volumes.
Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update. Stack Policy only applies for Stack updates and not stack deletion.
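A minimal DeletionPolicy sketch (resource names are placeholders, properties elided):

```yaml
Resources:
  Database:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot   # take a final snapshot when the stack is deleted
    Properties:
      # ... instance properties ...
  DataVolume:
    Type: AWS::EC2::Volume
    DeletionPolicy: Retain     # keep the volume when the stack is deleted
    Properties:
      # ... volume properties ...
```

DeletionPolicy applies on stack deletion, complementing the stack policy, which only applies to stack updates.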
StackSets helps to create, update, or delete stacks across multiple accounts and Regions with a single operation.
Trusted Advisor helps with cost optimization and service limits in addition to security, performance, and fault tolerance checks.
Compute Optimizer recommends optimal AWS resources for the workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics.
AWS Budgets to see usage-to-date and current estimated charges from AWS, set limits and provide alerts or notifications.
Cost Allocation Tags can be used to organize AWS resources and track AWS costs at a detailed level.
Cost Explorer helps visualize, understand, manage and forecast the AWS costs and usage over time.
Amazon WorkSpaces provides a virtual workspace for varied worker types, especially hybrid and remote workers.
Amazon Timestream is a fast, scalable, and serverless time-series database service that makes it easier to store and analyze trillions of events per day.
Amazon Connect is an omnichannel cloud contact center.
Amazon Pinpoint is a flexible, scalable marketing communications service that helps connect with customers over email, SMS, push notifications, or voice.
Amazon Rekognition offers pre-trained and customizable computer vision capabilities to extract information and insights from images and videos
I just cleared the AWS Solutions Architect – Associate SAA-C03 exam with a score of 914/1000.
AWS Solutions Architect – Associate SAA-C03 exam is the latest AWS exam released on 30th August 2022 and has replaced the previous AWS Solutions Architect – SAA-C02 certification exam.
It basically validates the ability to design, architect, and deploy secure, cost-effective, and robust applications on AWS technologies.
The exam also validates a candidate’s ability to complete the following tasks:
Design solutions that incorporate AWS services to meet current business requirements and future projected needs
Design architectures that are secure, resilient, high-performing, and cost-optimized
Review existing solutions and determine improvements
SAA-C03 exam consists of 65 questions in 130 minutes, and the time is more than sufficient if you are well-prepared.
SAA-C03 exam includes two types of questions, multiple-choice and multiple-response.
SAA-C03 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 720.
Associate exams currently cost $ 150 + tax.
You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
AWS exams can be taken either at a test center or online; I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
Signed up with AWS for the Free Tier account which provides a lot of Services to be tried for free with certain limits which are more than enough to get things going. Be sure to decommission services beyond the free limits, preventing any surprises 🙂
Also, use QwikLabs for introductory courses which are free
Read the FAQs at least for the important topics, as they cover important points and are good for quick review
SAA-C03 Exam covers the design and architecture aspects in depth, so you must be able to visualize the architecture, even draw it out or prepare a mental picture, to understand how it would work and how different services relate.
SAA-C03 exam concepts cover solutions that fall within AWS Well-Architected framework to cover scalable, highly available, cost-effective, performant, and resilient pillars.
If you had been preparing for the SAA-C02, SAA-C03 is pretty much similar to SAA-C02 except for the addition of some new services Aurora Serverless, AWS Global Accelerator, FSx for Windows, and FSx for Lustre.
Create a VPC from scratch with public, private, and dedicated subnets with proper route tables, security groups, and NACLs.
Understand what a CIDR is and address patterns.
Subnets are public or private depending on whether they can route traffic directly through an Internet gateway
Understand how communication happens between the Internet, Public subnets, Private subnets, NAT, Bastion, etc.
Bastion (also referred to as a Jump server) can be used to securely access instances in the private subnets.
Create two-tier architecture with application in public and database in private subnets
Create three-tier architecture with web servers in public, application, and database servers in private. (hint: focus on security group configuration with least privilege)
VPC Endpoints enable the creation of a private connection between a VPC and supported AWS services and VPC endpoint services powered by PrivateLink, using private IP addresses without needing an Internet or NAT Gateway.
VPC Gateway Endpoints supports S3 and DynamoDB.
VPC Interface Endpoints (PrivateLink) support the other AWS services.
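The two endpoint types are created differently; a sketch with placeholder resource IDs:

```shell
# Gateway endpoint for S3: route-table based, S3 and DynamoDB only
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0abc1234

# Interface endpoint (PrivateLink): an ENI in the chosen subnets
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.sqs \
    --subnet-ids subnet-0abc1234 \
    --security-group-ids sg-0abc1234
```

Note the difference: Gateway endpoints attach to route tables, while Interface endpoints live in subnets and are protected by security groups.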
Multi-Attach EBS feature allows attaching an EBS volume to multiple instances within the same AZ only.
EBS fast snapshot restore feature helps ensure that the EBS volumes created from a snapshot are fully-initialized at creation and instantly deliver all of their provisioned performance.
S3 Client-side encryption encrypts data before storing it in S3
Know the S3 features, including:
S3 provides cost-effective static website hosting. However, it does not support HTTPS endpoints; integrate it with CloudFront for HTTPS, caching, performance, and low-latency access.
S3 versioning provides protection against accidental overwrites and deletions. Use it with the MFA Delete feature for added protection.
S3 Pre-Signed URLs for both upload and download provide access without needing AWS credentials.
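For download, a pre-signed URL can be generated directly from the CLI (bucket and key are placeholders); the URL embeds the signer's credentials and expiry:

```shell
# Generate a time-limited (1 hour) download URL for a private object
aws s3 presign s3://my-bucket/private/report.pdf --expires-in 3600
```

Anyone holding the URL can download the object until it expires, with no AWS credentials of their own.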
EFS provides simple, fully managed, scalable, serverless, and cost-optimized file storage for use with AWS Cloud and on-premises resources.
provides shared volume across multiple EC2 instances, while EBS can be attached to a single instance within the same AZ or EBS Multi-Attach can be attached to multiple instances within the same AZ
supports the NFS protocol, and is compatible with Linux-based AMIs
supports cross-region replication, storage classes for cost.
AWS Config is a fully managed service that provides AWS resource inventory, configuration history, and configuration change notifications to enable security, compliance, and governance.
SES is a fully managed service that provides an email platform with an easy, cost-effective way to send and receive email using your own email addresses and domains.
can be used to send both transactional and promotional emails securely, and globally at scale.
acts as an outbound email server and eliminates the need to support its own software or applications to do the heavy lifting of email transport.
acts as an inbound email server to receive emails that can help develop software solutions such as email autoresponders, email unsubscribe systems, and applications that generate customer support tickets from incoming emails.
An existing email server can also be configured to send outgoing emails through SES without changing any settings in the email clients.
Maximum message size including attachments is 10 MB per message (after base64 encoding).
provides statistics on email deliveries, bounces, feedback loop results, emails opened, etc.
supports DomainKeys Identified Mail (DKIM) and Sender Policy Framework (SPF)
supports flexible deployment: shared, dedicated, and customer-owned IPs
supports attachments with many popular content formats, including documents, images, audio, and video, and scans every attachment for viruses and malware.
integrates with KMS to provide the ability to encrypt the mail that it writes to the S3 bucket.
uses client-side encryption to encrypt the mail before it sends the email to S3.
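SPF and DKIM verification are configured through DNS. A sketch of the records involved (domain and DKIM token are placeholders; SES generates the actual tokens when you enable Easy DKIM):

```
; SPF: authorize Amazon SES to send for a custom MAIL FROM domain
mail.example.com.               TXT    "v=spf1 include:amazonses.com ~all"

; DKIM (Easy DKIM): CNAME records pointing at SES-hosted keys
token1._domainkey.example.com.  CNAME  token1.dkim.amazonses.com.
```

SES verifies the records are in place before signing outgoing mail for the domain.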
Sending Limits
Production SES has a set of sending limits which include
Sending Quota – max number of emails in a 24-hour period
Maximum Send Rate – max number of emails per second
SES automatically adjusts the limits upward as long as emails are of high quality and they are sent in a controlled manner, as any spike in the email sent might be considered to be spam.
Limits can also be raised by submitting a Quota increase request
SES Best Practices
Send high-quality and real production content that the recipients want
Only send to those who have signed up for the mail
Unsubscribe recipients who have not interacted with the business recently
Keep bounce and complaint rates low and remove bounced or complained addresses, using SNS to monitor bounces and complaints and treating them as an opt-out
Monitor the sending activity
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
Open to further feedback, discussion and correction.
What does Amazon SES stand for?
Simple Elastic Server
Simple Email Service
Software Email Solution
Software Enabled Server
Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders such as payment failure. Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders. How can you implement the order fulfillment process while making sure that the emails are delivered reliably? [PROFESSIONAL]
Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.
Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.
I recently certified/recertified for the AWS Certified Advanced Networking – Specialty (ANS-C01). Frankly, Networking is something that I am still diving deep into and I just about managed to get through. So a word of caution: this exam is in line with or tougher than the professional exams, especially because some of the networking concepts covered are not something you can easily get hands-on with.
Specialty exams are tough, lengthy, and tiresome. Most of the questions and answers options have a lot of prose and a lot of reading that needs to be done, so be sure you are prepared and manage your time well.
ANS-C01 exam has 65 questions to be solved in 170 minutes, which gives you roughly 2.5 minutes per question. The 65 questions consist of 50 scored and 15 unscored questions.
ANS-C01 exam includes two types of questions, multiple-choice and multiple-response.
ANS-C01 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 750.
Each question mainly touches multiple AWS services.
Specialty exams currently cost $ 300 + tax.
You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
As always, mark the questions for review and move on and come back to them after you are done with all.
As always, having a rough architecture or mental picture of the setup helps focus on the areas that you need to improve. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check the difference area and that would help you reach the right answer or at least have a 50% chance of getting it right.
AWS exams can be taken either at a test center or online; I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
AWS Certified Networking – Specialty (ANS-C01) exam focuses a lot on Networking concepts involving Hybrid Connectivity with Direct Connect, VPN, Transit Gateway, Direct Connect Gateway, and a bit of VPC, Route 53, ALB, NLB & CloudFront.
VPC Flow Logs help capture information about the IP traffic going to and from network interfaces in the VPC and can help in monitoring the traffic or troubleshooting any connectivity issues.
Understand how the stateless nature of NACLs is reflected in VPC Flow Logs:
An ACCEPT record followed by a REJECT record means the inbound traffic was accepted by the Security Group and NACL, but the response was rejected by the NACL outbound (Security Groups are stateful, NACLs are not).
A single REJECT record means the inbound traffic was rejected by either the Security Group or the NACL.
Use pkt-dstaddr instead of dstaddr to track the destination address as dstaddr refers to the primary ENI address always and not the secondary addresses.
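A custom log format lets you capture pkt-dstaddr alongside dstaddr; a sketch with placeholder ENI and bucket names:

```shell
# Flow log with a custom format including pkt-dstaddr,
# the real packet destination (e.g. a secondary ENI address)
aws ec2 create-flow-logs \
    --resource-type NetworkInterface \
    --resource-ids eni-0abc1234 \
    --traffic-type ALL \
    --log-destination-type s3 \
    --log-destination arn:aws:s3:::my-flow-log-bucket \
    --log-format '${srcaddr} ${dstaddr} ${pkt-dstaddr} ${action}'
```

Comparing dstaddr and pkt-dstaddr in the records shows when traffic was addressed to a secondary address on the ENI.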
VPC Gateway Endpoints for connectivity with S3 & DynamoDB i.e. VPC -> VPC Gateway Endpoints -> S3/DynamoDB.
VPC Interface Endpoints or Private Links for other AWS services and custom hosted services i.e. VPC -> VPC Interface Endpoint OR Private Link -> S3/Kinesis/SQS/CloudWatch/Any custom endpoint.
S3 Gateway Endpoints cannot be accessed through VPC Peering, VPN, or Direct Connect; an HTTP proxy is needed to route the traffic.
S3 PrivateLink (Interface Endpoints) can be accessed through VPC Peering, VPN, or Direct Connect; use the endpoint-specific DNS name.
VPC endpoint policy can be configured to control which S3 buckets can be accessed and the S3 Bucket policy can be used to control which VPC (includes all VPC Endpoints) or VPC Endpoint can access it.
Private Link Patterns
PrivateLink allows connectivity for overlapping CIDRs, which VPC peering would not.
Connections can be initiated in only one direction, i.e., consumer to provider.
Provides fine-grained access control and only the endpoint is shared and nothing else.
NAT Gateway is for HA, scalable, outgoing traffic. It does not support Security Groups or ICMP pings.
times out the connection if it is idle for 350 seconds or more. To prevent the connection from being dropped, initiate more traffic over the connection or enable TCP keepalive on the instance with a value of less than 350 seconds.
supports Private NAT Gateways for internal communication.
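To keep long-lived connections alive through the NAT Gateway's 350-second idle timeout, the instance's TCP keepalive can be tuned below the timeout; a Linux sketch:

```shell
# Send TCP keepalive probes after 300s of idle time,
# below the NAT Gateway's 350-second idle timeout
sudo sysctl -w net.ipv4.tcp_keepalive_time=300
```

Persist the setting in /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/) so it survives reboots.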
supports MACsec which delivers native, near line-rate, point-to-point encryption ensuring that data communications between AWS and the data center, office, or colocation facility remain protected.
BGP prefers the shortest AS PATH to get to the destination. Traffic from the VPC to on-premises uses the primary router. This is because the secondary router advertises a longer AS-PATH.
AS PATH prepending doesn’t work when the Direct Connect connections are in different AWS Regions than the VPC.
AS PATH prepending influences traffic from AWS to on-premises, while Local Preference influences traffic from on-premises to AWS.
Use Local Preference BGP community tags to configure Active/Passive when the connections are from different regions. The higher tag has a higher preference for 7224:7300 > 7224:7100
NO_EXPORT works only for Public VIFs
7224:9100, 7224:9200, and 7224:9300 apply only to public prefixes. Usually used to restrict traffic to regions. Can help control if routes should propagate to the local Region only, all Regions within a continent, or all public Regions.
7224:9100 — Local AWS Region
7224:9200 — All AWS Regions within a continent (North America-wide; Asia Pacific; Europe, the Middle East and Africa)
7224:9300 — Global (all public AWS Regions)
7224:8100 — Routes that originate from the same AWS Region in which the AWS Direct Connect point of presence is associated.
7224:8200 — Routes that originate from the same continent with which the AWS Direct Connect point of presence is associated.
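As an illustrative sketch only (Cisco-IOS-style syntax; the peer IP and local ASN are placeholders, and vendor syntax varies), tagging routes advertised to AWS with a Local Preference community looks like:

```
! Prefer this link for traffic from AWS: tag advertised routes
! with the high Local Preference community (7224:7300)
route-map TO-AWS permit 10
 set community 7224:7300
!
router bgp 65000
 neighbor 169.254.255.1 remote-as 7224
 neighbor 169.254.255.1 route-map TO-AWS out
 neighbor 169.254.255.1 send-community
```

On the secondary link, the same route-map would set 7224:7100 instead, making AWS prefer the 7224:7300-tagged path.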
Route 53 provides a highly available and scalable DNS web service.
Understand Routing Policies and their use cases; focus on Weighted, Latency, and Failover routing policies.
supports Alias resource record sets, which enables routing of queries to a CloudFront distribution, Elastic Beanstalk, ELB, an S3 bucket configured as a static website, or another Route 53 resource record set.
ALB provides Content, Host, and Path-based Routing while NLB provides the ability to have a static IP address
Maintain original Client IP to the backend instances using X-Forwarded-for and Proxy Protocol
ALB/NLB do not support TLS renegotiation or mutual TLS authentication (mTLS). For implementing mTLS, use NLB with TCP listener on port 443 and terminate on the instances.
NLB
also provides local zonal endpoints to keep the traffic within AZ
can front Private Link endpoints and provide static IPs.
ALB supports Forward Secrecy, through Security Policies, that provide additional safeguards against the eavesdropping of encrypted data, through the use of a unique random session key.
Supports sticky session feature (session affinity) to enable the LB to bind a user’s session to a specific target. This ensures that all requests from the user during the session are sent to the same target. Sticky Sessions is configured on the target groups.
AWS Shield Advanced provides 24×7 access to the AWS Shield Response Team (SRT), protection against DDoS-related spike, and DDoS cost protection to safeguard against scaling charges.
AWS WAF helps protect web applications from attacks by allowing configuration of rules that allow, block, or monitor (count) web requests based on defined conditions.
AWS WAF integrates with CloudFront, ALB, and API Gateway to dynamically detect and prevent attacks.
Amazon Inspector is a vulnerability management service that continuously scans the AWS workloads for vulnerabilities.
Monitoring & Management Tools
Understand AWS CloudFormation esp. in terms of Network creation.
Custom resources can be used to handle activities not natively supported by CloudFormation.
While configuring VPN connections, use the DependsOn attribute on route resources to define a dependency on the VPC-gateway attachment, as VPN gateway route propagation depends on a VPC-gateway attachment when you have a VPN gateway.
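This route-propagation dependency can be sketched in a CloudFormation template (the referenced VPC, VPNGateway, and PrivateRouteTable resources are placeholders):

```yaml
Resources:
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      VpnGatewayId: !Ref VPNGateway
  RoutePropagation:
    Type: AWS::EC2::VPNGatewayRoutePropagation
    DependsOn: GatewayAttachment   # propagation requires the attachment to exist first
    Properties:
      RouteTableIds:
        - !Ref PrivateRouteTable
      VpnGatewayId: !Ref VPNGateway
```

Without the explicit DependsOn, CloudFormation may try to enable propagation before the gateway is attached and the stack creation fails intermittently.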
fully managed service that provides AWS resource inventory, configuration history, and configuration change notifications to enable security, compliance, and governance.
can be used to monitor resource changes e.g. Security Groups and invoke Systems Manager Automation scripts for remediation.
AWS Certified Solutions Architect – Professional (SAP-C01) exam is the upgraded pattern of the previous Solutions Architect – Professional exam, which was released in 2018 and is due to be upgraded this year (Nov. 2022).
I recently recertified the existing pattern and the difference is quite a lot between the previous pattern and the latest pattern. The amount of overlap between the associates and professional exams and even the Solutions Architect and DevOps has drastically reduced.
AWS Certified Solutions Architect – Professional (SAP-C01) exam basically validates
Design and deploy dynamically scalable, highly available, fault-tolerant, and reliable applications on AWS
Select appropriate AWS services to design and deploy an application based on given requirements
Migrate complex, multi-tier applications on AWS
Design and deploy enterprise-wide scalable operations on AWS
AWS Certified Solutions Architect – Professional (SAP-C01) Exam Summary
AWS Certified Solutions Architect – Professional (SAP-C01) exam was for a total of 170 minutes and it had 75 questions.
AWS Certified Solutions Architect – Professional (SAP-C01) focuses a lot on concepts and services related to Architecture & Design, Scalability, High Availability, Disaster Recovery, Migration, Security and Cost Control.
Each question mainly touches multiple AWS services.
Questions and answers options have a lot of prose and a lot of reading that needs to be done, so be sure you are prepared and manage your time well.
As always, mark the questions for review and move on and come back to them after you are done with all.
As always, having a rough architecture or mental picture of the setup helps focus on the areas that you need to improve. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check the difference area and that would help you reach the right answer or at least have a 50% chance of getting it right.
AWS Certified Solutions Architect – Professional (SAP-C01) Exam Topics
Aurora Global Database consists of one primary AWS Region where the data is mastered, and up to five read-only, secondary AWS Regions. It is not a multi-master setup but can be used for disaster recovery.
DMS enables quick and secure data migration with minimal to zero downtime.
DMS supports Full Load and Change Data Capture (CDC) migration to support continuous replication for zero-downtime migration.
DMS supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations (using SCT) between different database platforms, such as Oracle or Microsoft SQL Server to Aurora.
Hint: Elasticsearch is not supported as a target by DMS
Agent-based discovery can be used for Hyper-V and physical servers.
Agentless discovery can be used for VMware but does not track processes.
Disaster Recovery
The Disaster Recovery whitepaper, although outdated, is still relevant; make sure you understand the difference between each type, esp. Pilot Light and Warm Standby, w.r.t. RTO and RPO.
Compute
Make components available in an alternate region:
either as AMIs that can be restored,
with CloudFormation to create infra as needed,
as partially running capacity which can be scaled once the failover happens,
or as fully running compute in an active-active configuration with health checks.
Storage
S3 and EFS support cross-region replication
DynamoDB supports Global tables for multi-master, active-active inter-region storage needs.
Aurora Global Database is not a multi-master setup but can be used for disaster recovery.
RDS supports cross-region read replicas which can be promoted to master in case of a disaster. This can be automated using Route 53, CloudWatch, and Lambda functions.
Network
Route 53 failover routing with health checks to failover across regions.
Understand VPC Peering to enable communication between VPCs within the same or different regions. (hint: VPC peering does not support transitive routing)
VPN can provide a cost-effective, quick failover for Direct Connect.
VPN over Direct Connect provides a secure dedicated connection and requires a public virtual interface.
Direct Connect Gateway is a global network device that helps establish connectivity that spans VPCs spread across multiple AWS Regions with a single Direct Connect connection.
AWS WAF protects from common attack techniques like SQL injection and Cross-Site Scripting (XSS); conditions can be based on IP addresses, HTTP headers, HTTP body, and URI strings.
integrates with CloudFront, ALB, and API Gateway.
supports Web ACLs and can block traffic based on IPs, Rate limits, and specific countries as well.
AWS Systems Manager and its various capabilities like Parameter Store, Session Manager, and Patch Manager
Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management. It does not support secrets rotation; use Secrets Manager instead.
Session Manager helps manage EC2 instances through an interactive one-click browser-based shell or through the AWS CLI without opening ports or creating bastion hosts.
Patch Manager helps automate the process of patching managed instances with both security-related and other types of updates.
CloudFormation helps handle Disaster Recovery by automating the infra to replicate the environment across regions.
DeletionPolicy helps prevent data loss by retaining or backing up resources like RDS instances and EBS volumes.
Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update. Stack Policy only applies for Stack updates and not stack deletion.
StackSets helps to create, update, or delete stacks across multiple accounts and Regions with a single operation.
Trusted Advisor helps with cost optimization and service limits in addition to security, performance, and fault tolerance checks.
Compute Optimizer recommends optimal AWS resources for the workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics.
AWS Budgets to see usage-to-date and current estimated charges from AWS, set limits and provide alerts or notifications.
Cost Allocation Tags can be used to organize AWS resources and track AWS costs at a detailed level.
Cost Explorer helps visualize, understand, manage and forecast the AWS costs and usage over time.
Google Cloud – Professional Cloud DevOps Engineer Certification learning path
Continuing on the Google Cloud Journey, glad to have passed the 8th certification with the Professional Cloud DevOps Engineer certification. Google Cloud – Professional Cloud DevOps Engineer certification exam focuses on almost all of the Google Cloud DevOps services with Cloud Developer tools, Operations Suite, and SRE concepts.
Google Cloud -Professional Cloud DevOps Engineer Certification Summary
Had 50 questions to be answered in 2 hours.
Covers a wide range of Google Cloud services mainly focusing on DevOps toolset including Cloud Developer tools, Operations Suite with a focus on monitoring and logging, and SRE concepts.
The exam has been updated to use
Cloud Operations, Cloud Monitoring & Logging and does not refer to Stackdriver in any of the questions.
Artifact Registry instead of Container Registry.
There are no case studies for the exam.
As mentioned for all the exams, hands-on is a MUST; if you have not worked on GCP before, make sure you do lots of labs, else you would be absolutely clueless about some of the questions and commands.
I did Coursera and ACloud Guru, which are really vast, but hands-on or practical knowledge is a MUST.
Google Cloud – Professional Cloud DevOps Engineer Certification Resources
Cloud Build integrates with Cloud Source Repository, Github, and Gitlab and can be used for Continuous Integration and Deployments.
Cloud Build can import source code, execute build to the specifications, and produce artifacts such as Docker containers or Java archives
Cloud Build can trigger builds on source commits in Cloud Source Repositories or other git repositories.
Cloud Build's build config file specifies the instructions to perform, with steps defined for each task like test, build, and deploy.
Cloud Build step specifies an action to be performed and is run in a Docker container.
Cloud Build supports custom images as well for the steps
Cloud Build integrates with Pub/Sub to publish messages on build’s state changes.
Cloud Build can trigger the Spinnaker pipeline through Cloud Pub/Sub notifications.
Cloud Build should use a Service Account with a Container Developer role to perform deployments on GKE
Cloud Build uses a directory named /workspace as a working directory and the assets produced by one step can be passed to the next one via the persistence of the /workspace directory.
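A minimal cloudbuild.yaml sketch tying these points together (the repository path, image name, and test directory are placeholders; each step runs in its own container, sharing /workspace):

```yaml
# Illustrative cloudbuild.yaml: run tests, then build and push an image
steps:
  - name: 'python'                       # public Python image used as a test step
    entrypoint: 'python'
    args: ['-m', 'pytest', 'tests/']
  - name: 'gcr.io/cloud-builders/docker' # official Docker builder
    args: ['build', '-t',
           'us-docker.pkg.dev/$PROJECT_ID/my-repo/app:$SHORT_SHA', '.']
images:
  - 'us-docker.pkg.dev/$PROJECT_ID/my-repo/app:$SHORT_SHA'
```

Artifacts checked out or produced by the first step persist in /workspace and are visible to the Docker build in the second step.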
Binary Authorization provides software supply-chain security for container-based applications. It enables you to configure a policy that the service enforces when an attempt is made to deploy a container image on one of the supported container-based platforms.
Binary Authorization uses attestations to verify that an image was built by a specific build system or continuous integration (CI) pipeline.
Vulnerability scanning helps scan images for vulnerabilities by Container Analysis.
Hint: For Security and compliance reasons if the image deployed needs to be trusted, use Binary Authorization
Google Artifact Registry supports all types of artifacts as compared to Container Registry which was limited to container images
Container Registry is not referred to in the exam
Artifact Registry supports both regional and multi-regional repositories
Google Cloud Code
Cloud Code helps write, debug, and deploy the cloud-based applications for IntelliJ, VS Code, or in the browser.
Google Cloud Client Libraries
Google Cloud Client Libraries provide client libraries and SDKs in various languages for calling Google Cloud APIs.
If the language is not supported, Cloud Rest APIs can be used.
Deployment Techniques
Recreate deployment – fully scale down the existing application version before you scale up the new application version.
Rolling update – update a subset of running application instances instead of simultaneously updating every application instance
Blue/Green deployment – (also known as a red/black deployment), you perform two identical deployments of your application
GKE supports Rolling and Recreate deployments.
Rolling deployments support maxSurge (new pods would be created) and maxUnavailable (existing pods would be deleted)
Managed Instance Groups support rolling deployments using the maxSurge (new instances to be created) and maxUnavailable (existing instances that can be unavailable) configurations.
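For GKE, the rolling-update parameters are set on the Deployment's strategy; a minimal sketch (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired count
      maxUnavailable: 1    # at most 1 pod down during the rollout
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

maxSurge: 0 with maxUnavailable > 0 trades capacity for zero extra cost during the rollout; the reverse keeps full capacity at the cost of surge instances.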
Testing Strategies
Canary testing – partially roll out a change and then evaluate its performance against a baseline deployment
A/B testing – test a hypothesis by using variant implementations. A/B testing is used to make business decisions (not only predictions) based on the results derived from data.
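A canary rollout boils down to weighted traffic splitting; a minimal sketch of the routing decision (the 5% canary weight is an arbitrary illustration):

```python
import random

def route(canary_weight: float, rng: random.Random) -> str:
    # Send a small fraction of traffic to the canary version,
    # the rest to the stable baseline.
    return "canary" if rng.random() < canary_weight else "stable"

rng = random.Random(42)
counts = {"stable": 0, "canary": 0}
for _ in range(10_000):
    counts[route(0.05, rng)] += 1
# Roughly 5% of requests hit the canary; its SLIs are then compared
# against the baseline before the weight is widened.
```

In practice the split is handled by the platform (e.g., App Engine traffic splitting or a service mesh) rather than application code.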
Cloud Monitoring helps gain visibility into the performance, availability, and health of your applications and infrastructure.
Cloud Monitoring Agent/Ops Agent helps capture additional metrics like Memory utilization, Disk IOPS, etc.
Cloud Logging supports log exports, where logs can be routed via sinks to Cloud Storage, Pub/Sub, BigQuery, or an external destination like Splunk.
Cloud Monitoring API supports pushing or exporting custom metrics.
Uptime checks help verify that a resource responds. They can check the availability of any public service, e.g., on a VM, App Engine, a URL, GKE, or an AWS load balancer.
Process health checks can be used to check whether a given process is healthy.
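An uptime check is essentially a probe that issues a request and verifies the response; a self-contained sketch against a throwaway local server (Cloud Monitoring runs the equivalent from probes around the world):

```python
import http.server
import threading
import urllib.request

class HealthHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # A healthy endpoint answers with HTTP 200.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence request logging

# Throwaway local server standing in for the monitored resource.
server = http.server.HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "uptime check": request the endpoint and verify the status code.
url = f"http://127.0.0.1:{server.server_address[1]}/"
with urllib.request.urlopen(url, timeout=5) as resp:
    healthy = resp.status == 200
server.shutdown()
```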
Cloud Logging provides real-time log management and analysis
Cloud Logging allows ingestion of custom log data from any source
Logs can be exported by configuring log sinks to BigQuery, Cloud Storage, or Pub/Sub.
Cloud Logging Agent can be installed for logging and capturing application logs.
Cloud Logging Agent uses fluentd, and fluentd filters can be applied to filter or modify logs before they are pushed to Cloud Logging.
VPC Flow Logs help record network flows sent from and received by VM instances.
Cloud Logging Log-based metrics can be used to create alerts on logs.
Hint: If the logs from a VM do not appear in Cloud Logging, check that the agent is installed and running and that it has proper permissions to write logs to Cloud Logging.
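As an example of the fluentd filtering mentioned above, a grep filter in the agent's configuration can drop noisy records before they are shipped (the tag and pattern are illustrative):

```
<filter app.**>
  @type grep
  <exclude>
    key severity
    pattern /DEBUG/
  </exclude>
</filter>
```

This drops DEBUG-severity records matched by the app.** tag before they reach Cloud Logging.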
Cloud Debugger is a feature of Google Cloud that lets you inspect the state of a running application in real time, without stopping or slowing it down.
Debug Logpoints allow logging injection into running services without restarting or interfering with the normal function of the service
Debug Snapshots help capture local variables and the call stack at a specific line location in your app’s source code
Compute Services
Compute services like Google Compute Engine and Google Kubernetes Engine are covered lightly, mostly from the security aspect.
Google Compute Engine
Google Compute Engine is the best IaaS option for computing and provides fine-grained control
Preemptible VMs and their use cases. HINT – use them for short-term, fault-tolerant workloads.
Committed Use Discounts (CUD) provide cost benefits for long-term, stable, and predictable usage.
Managed Instance Groups can help scale VMs as per demand. They also provide auto-healing and high availability with health checks, in case an application fails.
Vertical Pod Autoscaler helps scale Pods by adjusting their resource requests as resource needs grow.
Horizontal Pod Autoscaler helps scale Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload’s CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics from sources outside of your cluster.
Kubernetes Secrets can be used to store secrets (although they are just base64 encoded values)
Kubernetes supports rolling and recreate deployment strategies.
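Because Kubernetes Secret values are only base64-encoded, anyone who can read the Secret object can trivially decode them; a quick sketch:

```python
import base64

# Kubernetes stores Secret values base64-encoded, not encrypted.
encoded = base64.b64encode(b"s3cr3t-password").decode()
decoded = base64.b64decode(encoded).decode()
print(encoded)  # readable by anyone with access to the Secret object
assert decoded == "s3cr3t-password"
```

For genuinely sensitive material, Secret Manager or envelope encryption with Cloud KMS is the safer choice.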
SRE is a DevOps implementation and focuses on increasing reliability and observability, collaboration, and reducing toil using automation.
SLOs help specify a target level for the reliability of your service using SLIs which provide actual measurements.
SLI Types
Availability
Freshness
Latency
Quality
SLOs – Choosing the measurement method
Synthetic clients to measure user experience
Client-side instrumentation
Application and Infrastructure metrics
Logs processing
SLOs help define the Error Budget and Error Budget Policy, which need to be aligned with all stakeholders and help plan releases, balancing features vs. reliability.
SRE focuses on Reducing Toil – Identifying repetitive tasks and automating them.
Production Readiness Review – PRR
Applications should be performance tested for volumes before being deployed to production
SLOs should not be modified/adjusted to facilitate production deployments. Teams should work to make the applications SLO compliant before they are deployed to production.
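The error-budget arithmetic behind an SLO is simple; a sketch for an availability SLO over a 30-day window:

```python
def error_budget_minutes(slo_target: float, window_minutes: int) -> float:
    # The error budget is the allowed unreliability: 1 - SLO target.
    return (1 - slo_target) * window_minutes

WINDOW = 30 * 24 * 60  # 30-day window in minutes

# A 99.9% availability SLO allows roughly 43.2 minutes of downtime per month.
budget = error_budget_minutes(0.999, WINDOW)

# If 30 minutes have already been burned, the remaining budget informs
# whether to ship features or focus on reliability.
remaining = budget - 30
```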
Google Cloud – Cloud Digital Leader Certification Learning Path
Continuing on the Google Cloud journey, glad to have passed the seventh certification with the Cloud Digital Leader certification. Google Cloud was missing an initial entry-level certification similar to the AWS Cloud Practitioner certification, a gap now filled by the Cloud Digital Leader certification. Cloud Digital Leader focuses on general cloud knowledge and knowledge of Google Cloud with its products and services.
Google Cloud – Cloud Digital Leader Certification Summary
Had 59 questions (somewhat odd !!) to be answered in 90 minutes.
Covers a wide range of General Cloud and Google Cloud services and products knowledge.
This exam does not require much hands-on experience; theoretical knowledge is good enough to clear it.
Google Cloud – Cloud Digital Leader Certification Resources
Sustained-use discounts [SUD] are automatic discounts for running specific resources for a significant portion of the billing month
Committed use discounts [CUD] help with committed use contracts in return for deeply discounted prices for VM usage
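Sustained-use discounts apply increasing discounts to incremental usage as a VM runs for more of the billing month; a sketch with illustrative tier multipliers (treat the exact rates as assumptions, not published pricing):

```python
MONTH_HOURS = 730  # approximate billing month

# Incremental discount tiers: (fraction of month, multiplier on base rate).
# The multipliers below are illustrative, not authoritative pricing.
TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

def sustained_use_cost(hours_run: float, hourly_rate: float) -> float:
    # Each tier's hours are billed at that tier's discounted rate.
    cost, remaining = 0.0, hours_run
    for fraction, multiplier in TIERS:
        tier_hours = min(remaining, fraction * MONTH_HOURS)
        cost += tier_hours * hourly_rate * multiplier
        remaining -= tier_hours
    return cost

# Under these tiers, running a full month costs 70% of the on-demand price:
full = sustained_use_cost(MONTH_HOURS, 1.0)  # 730 * 0.70 = 511.0
```

The key contrast with CUDs: sustained-use discounts are automatic and require no commitment.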
Describe Google Cloud’s geographical segmentation strategy. Considerations include:
Regions are collections of zones. Zones have high-bandwidth, low-latency network connections to other zones in the same region. Regions help design fault-tolerant and highly available solutions.
Zones are deployment areas within a region and provide the lowest latency, usually less than 10 ms.
Regional resources are accessible by any resources within the same region
Zonal resources are hosted in a single zone and are also called per-zone resources.
Multiregional resources or Global resources are accessible by any resource in any zone within the same project.
Define Google Cloud support options. Considerations include:
Distinguish between billing support, technical support, role-based support, and enterprise support
Role-Based Support provides more predictable rates and a flexible configuration. Although they are legacy, the exam does cover these.
Enterprise Support provides the fastest case response times and a dedicated Technical Account Management (TAM) contact who helps you execute a Google Cloud strategy.
Recognize a variety of Service Level Agreement (SLA) applications
Google Cloud products and services
Describe the benefits of Google Cloud virtual machine (VM)-based compute options. Considerations include:
Compute Engine provides virtual machines (VMs) hosted on Google’s infrastructure.
Google Cloud VMware Engine helps easily lift and shift VMware-based applications to Google Cloud without changes to the apps, tools, or processes.
Bare Metal lets businesses run specialized workloads, such as Oracle databases, close to Google Cloud while lowering overall costs and reducing the risks associated with migration.
Preemptible VMs are instances that can be created and run at a much lower price than normal instances.
Identify and evaluate container-based compute options. Considerations include:
Define the function of a container registry
Container Registry is a single place to manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control.
Cloud SQL provides fully managed, relational SQL databases and offers MySQL, PostgreSQL, and MSSQL databases as a service.
Cloud Spanner provides fully managed, relational SQL databases with joins and secondary indexes.
Cloud Bigtable provides a scalable, fully managed, non-relational NoSQL wide-column analytical big data database service suitable for low-latency single-point lookups and precalculated analytics.
BigQuery provides a fully managed, no-ops, OLAP, enterprise data warehouse (EDW) with SQL and fast ad-hoc queries.
Distinguish between ML/AI offerings. Considerations include:
Describe the differences and benefits of Google Cloud’s hardware accelerators (e.g., Vision API, AI Platform, TPUs)
Identify when to train your own model, use a Google Cloud pre-trained model, or build on an existing model
Vision API provides out-of-the-box pre-trained models to extract data from images
AutoML provides the ability to train models
BigQuery Machine Learning provides support for limited models and SQL interface
Differentiate between data movement and data pipelines. Considerations include:
Describe Google Cloud’s data pipeline offerings
Cloud Pub/Sub provides reliable, many-to-many, asynchronous messaging between applications. By decoupling senders and receivers, Google Cloud Pub/Sub allows developers to communicate between independently written applications.
Cloud Dataflow is a fully managed service for strongly consistent, parallel data-processing pipelines.
Cloud Data Fusion is a fully managed, cloud-native, enterprise data integration service for quickly building and managing data pipelines.
BigQuery is a fully managed, highly scalable data analysis service that enables businesses to analyze Big Data.
Looker provides an enterprise platform for business intelligence, data applications, and embedded analytics.
Define data ingestion options
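Pub/Sub's decoupling of senders and receivers can be illustrated with a minimal in-process analogue (the real service adds durability, scaling, and delivery guarantees; the Broker class here is just a sketch):

```python
from collections import defaultdict

class Broker:
    """In-process publish/subscribe: publishers and subscribers never
    reference each other, only a shared topic name."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Fan out: every subscriber on the topic gets the message.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("orders", received.append)
broker.subscribe("orders", lambda m: received.append(m.upper()))
broker.publish("orders", "order-1")
# received == ["order-1", "ORDER-1"]
```

Both subscribers were added independently of the publisher, which is the decoupling that lets independently written applications communicate.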
Apply use cases to a high-level Google Cloud architecture. Considerations include:
Define Google Cloud’s offerings around the Software Development Life Cycle (SDLC)
Describe solutions for migrating workloads to Google Cloud. Considerations include:
Identify data migration options
Differentiate when to use Migrate for Compute Engine versus Migrate for Anthos
Migrate for Compute Engine provides fast, flexible, and safe migration to Google Cloud
Migrate for Anthos and GKE makes it fast and easy to modernize traditional applications away from virtual machines and into native containers. This significantly reduces the cost and labor that would be required for a manual application modernization project.
Distinguish between lift and shift versus application modernization
Lift and shift involves migration with zero to minimal changes and is usually performed under time constraints.
Application modernization requires a redesign of infra and applications and takes time. It can include moving legacy monolithic architecture to microservices architecture, building CI/CD pipelines for automated builds and deployments, frequent releases with zero downtime, etc.
Describe networking to on-premises locations. Considerations include:
Define Software-Defined WAN (SD-WAN) – did not have any questions regarding the same.
Private Google Access provides access from VM instances to Google-provided services like Cloud Storage or third-party provided services.
Define identity and access features. Considerations include:
Cloud Identity & Access Management (Cloud IAM) provides administrators the ability to manage cloud resources centrally by controlling who can take what action on specific resources.
Google Cloud Directory Sync enables administrators to synchronize users, groups, and other data from an Active Directory/LDAP service to their Google Cloud domain directory.