Black Friday & Cyber Monday Deals

Udemy – 19th to 26th Nov

DolfinEd Courses

Black Friday Deal – 50% Off

Braincert – Black Friday Sale – ends 26th Nov

Use Coupon Code – BLACK_FRIDAY_2021

AWS Certifications

GCP Certifications

Whizlabs – Black Friday Sale – 19th Nov to 2nd Dec

Coursera

Limited-time offer: Get Coursera Plus Monthly for $1!

CNCF – Cyber Monday

AWS Snow Family

  • AWS Snow Family helps physically transport up to exabytes of data into and out of AWS.
  • AWS Snow Family helps customers that need to run operations in austere, non-data center environments, and in locations where there’s a lack of consistent network connectivity.
  • Snow Family devices are owned and managed by AWS and integrate with AWS security, monitoring, storage management, and computing capabilities.
  • The AWS Snow Family, comprising AWS Snowcone, AWS Snowball, and AWS Snowmobile, offers a number of physical devices and capacity points, most with built-in computing capabilities.

AWS Snowcone

  • AWS Snowcone is a portable, rugged, and secure device that provides edge computing and data transfer capabilities.
  • Snowcone can be used to collect, process, and move data to AWS, either offline by shipping the device, or online with AWS DataSync.
  • AWS Snowcone stores data securely in edge locations, and can run edge computing workloads that use AWS IoT Greengrass or EC2 instances.
  • Snowcone devices are small and weigh 4.5 lbs. (2.1 kg), so you can carry one in a backpack or fit it in tight spaces for IoT, vehicular, or even drone use cases.

AWS Snowball

  • AWS Snowball is a data migration and edge computing device that comes in two device options:
    • Compute Optimized
      • Snowball Edge Compute Optimized devices provide 52 vCPUs, 42 terabytes of usable block or object storage, and an optional GPU for use cases such as advanced machine learning and full-motion video analysis in disconnected environments.
    • Storage Optimized
      • Snowball Edge Storage Optimized devices provide 40 vCPUs of compute capacity coupled with 80 terabytes of usable block or S3-compatible object storage.
      • It is well-suited for local storage and large-scale data transfer.
  • Customers can use these two options for data collection, machine learning and processing, and storage in environments with intermittent connectivity (such as manufacturing, industrial, and transportation) or in extremely remote locations (such as military or maritime operations) before shipping it back to AWS.
  • Snowball devices may also be rack mounted and clustered together to build larger, temporary installations.
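
Snowball jobs are usually created from the console, but the same flow can be scripted. Below is a minimal boto3 sketch, assuming placeholder values for the address ID, role ARN, and bucket; these are illustrative only and must already exist in your account (the address is created beforehand, e.g. via create_address).

```python
import boto3

# Sketch only: AddressId, RoleARN, and the bucket ARN below are placeholders,
# not real resources. The address is created beforehand via create_address().
snowball = boto3.client("snowball", region_name="us-east-1")

response = snowball.create_job(
    JobType="IMPORT",  # ship data into AWS; EXPORT moves data out
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-import-bucket"}]},
    Description="Bulk import of on-premises archive data",
    AddressId="ADID-example",                      # from snowball.create_address(...)
    RoleARN="arn:aws:iam::123456789012:role/SnowballImportRole",
    SnowballType="EDGE",                           # Snowball Edge device family
    ShippingOption="SECOND_DAY",
)
print("Created Snowball job:", response["JobId"])
```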

AWS Snowmobile

  • AWS Snowmobile moves up to 100 PB of data in a 45-foot-long ruggedized shipping container and is ideal for multi-petabyte or exabyte-scale digital media migrations and data center shutdowns.
  • A Snowmobile arrives at the customer site and appears as a network-attached data store for more secure, high-speed data transfer.
  • After data is transferred to Snowmobile, it is driven back to an AWS Region where the data is loaded into S3.
  • Snowmobile is tamper-resistant, waterproof, and temperature controlled with multiple layers of logical and physical security – including encryption, fire suppression, dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an escort security vehicle during transit.

AWS Snow Family Feature Comparison

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated at the same pace as AWS updates, so even if the underlying feature has changed, the question might not reflect it.
  • Open to further feedback, discussion, and correction.
  1. A company wants to transfer petabyte-scale data to AWS for analytics but is constrained by its internet connectivity. Which AWS service can help it transfer the data quickly?
    1. S3 enhanced uploader
    2. Snowmobile
    3. Snowball
    4. Direct Connect
  2. A company wants to transfer its video library data, which runs in exabytes, to AWS. Which AWS service can help the company transfer the data?
    1. Snowmobile
    2. Snowball
    3. S3 upload
    4. S3 enhanced uploader

References

AWS_Snow

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Learning Path

I recently recertified for the AWS Certified SysOps Administrator – Associate (SOA-C02) exam. SOA-C02 is the updated version of the SOA-C01 exam and is the first AWS exam to include hands-on labs.

SOA-C02 basically validates the ability to

  • Deploy, manage, and operate workloads on AWS
  • Support and maintain AWS workloads according to the AWS Well-Architected Framework
  • Perform operations by using the AWS Management Console and the AWS CLI
  • Implement security controls to meet compliance requirements
  • Monitor, log, and troubleshoot systems
  • Apply networking concepts (for example, DNS, TCP/IP, firewalls)
  • Implement architectural requirements (for example, high availability, performance, capacity)
  • Perform business continuity and disaster recovery procedures
  • Identify, classify, and remediate incidents

Refer to the AWS Certified SysOps – Associate (SOA-C02) Exam Guide (Sep 18).

SOA-C02 Exam Domains

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Summary

  • SOA-C02 is the first AWS exam that includes 2 sections
    • Objective questions
    • Hands-on labs
  • The SOA-C02 exam runs for 190 minutes, with 51 (a somewhat odd count!) objective-type questions and 3 hands-on labs.
  • Labs are performed in a separate environment. Copy-paste works, so make sure you copy the exact names when creating resources.
  • Labs are pretty easy if you have worked on AWS.
  • NOTE: Once you complete a section and click Next, you cannot go back to it. The same applies to the labs: once a lab is completed, you cannot return to it.
  • Practice the Sample Lab provided when you book the exam; it gives you a feel for how the hands-on portion actually works.

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Resources

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Topics

Practice Labs

  • Create IAM users, IAM roles with specific limited policies.
  • Create a private S3 bucket (see the sketch after this list)
    • enable versioning
    • enable default encryption
    • enable lifecycle policies to transition and expire the objects
    • enable same region replication
  • Create a public S3 bucket with static website hosting
  • Set up a VPC with public and private subnets with Routes, SGs, NACLs.
  • Set up a VPC with public and private subnets and enable communication from private subnets to the Internet using NAT gateway
  • Create EC2 instance, create a Snapshot and restore it as a new instance.
  • Set up Security Groups for ALB and Target Groups, and create ALB, Launch Template, Auto Scaling Group, and target groups with sample applications. Test the flow.
  • Create a Multi-AZ RDS instance and force an instance failover.
  • Set up an SNS topic. Use CloudWatch Metrics to create a CloudWatch alarm on specific thresholds and send notifications to the SNS topic.
  • Set up an SNS topic. Use CloudWatch Logs to create a CloudWatch alarm on log patterns and send notifications to the SNS topic.
  • Update a CloudFormation template and re-run the stack and check the impact.
  • Use AWS Data Lifecycle Manager to define snapshot lifecycle.
  • Use AWS Backup to define EFS backup with hourly and daily backup rules.
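
As a starting point for the private S3 bucket lab above, here is a minimal boto3 sketch covering versioning, default encryption, and a lifecycle rule; the bucket name, region, and rule values are placeholder choices, not prescriptions.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-private-lab-bucket"  # placeholder; bucket names are globally unique

s3.create_bucket(Bucket=bucket)  # outside us-east-1, pass CreateBucketConfiguration

# Versioning protects against accidental deletes and overwrites.
s3.put_bucket_versioning(
    Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
)

# Default encryption (SSE-S3) for all new objects.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Transition objects to STANDARD_IA after 30 days and expire them after 365.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "lab-rule",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 365},
        }]
    },
)
```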

Management & Governance Tools

  • CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, and visualizes it.
    • EC2 default metrics cover disk, network, CPU, and status checks, but do not capture metrics like memory, disk swap, disk storage, etc.
    • The CloudWatch unified agent can be used to gather custom metrics like memory, disk swap, disk storage, etc.
    • CloudWatch Alarm actions can be configured to perform actions based on various metrics, e.g. CPU below 5%.
    • A CloudWatch alarm can monitor the StatusCheckFailed_System status on an EC2 instance and automatically recover the instance if it becomes impaired due to an underlying hardware failure or a problem that requires AWS involvement to repair (see the sketch after this list).
    • Know ELB monitoring
      • Load Balancer metrics SurgeQueueLength and SpilloverCount
      • HealthyHostCount, UnHealthyHostCount determines the number of healthy and unhealthy instances registered with the load balancer.
      • Reasons for 4XX and 5XX errors
  • Understand CloudTrail for audit and governance
    • CloudTrail log file integrity validation can be used to check whether a log file was modified, deleted, or unchanged after being delivered.
  • Understand AWS CloudFormation as an Infrastructure as a Code service
    • Know templates, stacks, nested stacks.
    • The DependsOn attribute can specify the resource creation order and ensure that the creation of a specific resource follows another.
    • Deletion policies help control deletion behavior (delete, retain, snapshot) for the resources.
    • Nested stacks can separate out reusable, common components into dedicated templates, letting you mix and match templates while still creating a single, unified stack.
    • Change Sets presents a summary or preview of the proposed changes that CloudFormation will make when a stack is updated
    • Drift detection enables you to detect whether a stack’s actual configuration differs, or has drifted, from its expected configuration.
    • Termination protection helps prevent a stack from being accidentally deleted.
    • Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update.
    • StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and Regions with a single operation.
    • Know how to wait for resource setup to complete before proceeding, esp. using cfn-signal.
  • AWS Config helps to assess, audit, and evaluate the configurations of the AWS resources
    • AWS Config can monitor and detect deviations from desired configurations, and it can also be used together with other services, such as AWS Systems Manager, to automatically remediate such deviations when they are detected
  • AWS Systems Manager is the operations hub
    • Patch Manager automates the process of patching managed instances with both security-related and other types of updates.
    • Session Manager helps manage EC2 instances, on-premises instances, and VMs through a browser-based shell or through the AWS CLI without requiring ssh keys, ports to be opened, or bastion hosts.
  • AWS Trusted Advisor provides recommendations that help follow AWS best practices. Trusted Advisor evaluates your account by using checks.
  • AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet.
  • Personal Health Dashboard provides alerts and guidance for AWS events that might affect your environment & the Service Health Dashboard shows the general status of AWS services.
  • Data Lifecycle Manager to automate the creation, retention, and deletion of EBS snapshots and EBS-backed AMIs. 
  • AWS DataSync automates moving data between on-premises storage and S3 or Elastic File System (EFS).
  • AWS Control Tower provides the easiest way to set up and govern a secure, multi-account AWS environment, called a landing zone
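
To make the EC2 recovery alarm mentioned in the CloudWatch bullets concrete, here is a minimal boto3 sketch; the instance ID is a placeholder, and the recover action uses the arn:aws:automate:<region>:ec2:recover action ARN.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Recover the instance automatically when the underlying hardware is impaired.
# i-0123456789abcdef0 is a placeholder instance ID.
cloudwatch.put_metric_alarm(
    AlarmName="recover-on-system-check-failure",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```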

Networking & Content Delivery

  • VPC – Virtual Private Cloud is a virtual network in AWS
    • Understand Public Subnet (has access to the Internet) vs Private Subnet (no access to the Internet)
    • Route table defines rules, termed as routes, which determine where network traffic from the subnet would be routed
    • Internet Gateway enables access to the internet
    • Bastion host – allows access to instances in the private subnet without directly exposing them to the internet.
    • NAT helps route traffic from private subnets to the internet
    • NAT instance vs NAT Gateway
    • Virtual Private Gateway – Connectivity between on-premises and VPC
    • Egress-Only Internet Gateway – relevant to IPv6 only to allow egress traffic from private subnet to internet, without allowing ingress traffic
    • VPC Flow Logs enables you to capture information about the IP traffic going to and from network interfaces in the VPC and can help in monitoring the traffic or troubleshooting any connectivity issues
    • Security Groups vs NACLs esp. Security Groups are stateful and NACLs are stateless.
    • VPC Peering provides a connection between two VPCs that enables routing of traffic between them using private IP addresses.
    • VPC Endpoints enable the creation of a private connection between the VPC and supported AWS services and VPC endpoint services powered by PrivateLink, using private IP addresses.
    • Be able to debug networking issues, e.g. an EC2 instance that is not accessible or reachable, or not able to communicate with other instances or the Internet.
  • Route 53 provides a scalable DNS system
    • supports the ALIAS record type, which helps map zone apex records to ELB, CloudFront, and S3 endpoints.
    • Understand Routing Policies and their use cases
      • Failover routing policy helps to configure active-passive failover.
      • Geolocation routing policy helps route traffic based on the location of the users.
      • Geoproximity routing policy helps route traffic based on the location of the resources and, optionally, shift traffic from resources in one location to resources in another.
      • Latency routing policy helps route traffic to the Region that provides the lowest latency (shortest round-trip time) when resources are in multiple AWS Regions.
      • Weighted routing policy helps route traffic to multiple resources in specified proportions.
    • Focus on the Weighted and Latency routing policies (a weighted-routing sketch follows this list)
  • Understand ELB, ALB, and NLB and the features they provide, like
    • Understand the key differences between ELB vs ALB vs NLB
    • ALB provides content-based and path-based routing
    • NLB provides the ability to give static IPs to the load balancer esp. if there is a requirement to whitelist IPs.
    • LB access logs provide the source IP address
    • supports Sticky sessions to enable the load balancer to bind a user’s session to a specific target.
  • Understand CloudFront and use cases
    • CloudFront can be used with S3 to expose static data and website
  • Know VPN and Direct Connect to provide AWS to on-premises connectivity. Not covered in detail.
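
As referenced in the Route 53 bullets above, here is a minimal boto3 sketch of a weighted routing setup that splits traffic 70/30 between two endpoints; the hosted zone ID, record name, and targets are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Placeholder hosted zone and endpoints; each weighted record needs a unique
# SetIdentifier, and traffic is split in proportion to the Weight values.
for identifier, weight, target in [
    ("primary", 70, "primary-elb.example.com"),
    ("secondary", 30, "secondary-elb.example.com"),
]:
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": identifier,
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": target}],
                },
            }]
        },
    )
```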

Compute

  • Understand EC2 in depth
    • Understand EC2 instance types and use cases.
    • Understand EC2 purchase options, esp. Spot Instances and the improved Reserved Instance options.
    • Understand EC2 metadata & user data (see the IMDSv2 sketch after this list).
    • Understand EC2 security
      • Use IAM roles with EC2 instances to access services
      • An IAM role can now be attached to stopped and running instances
    • AMIs provide the information required to launch an instance, which is a virtual server in the cloud.
      • AMIs are regional and can be shared publicly or with other accounts
      • Only AMIs with unencrypted volumes or volumes encrypted with a customer-managed key (CMK) can be shared.
      • The best practice is to use prebaked or golden images to reduce startup time for the applications. Leverage EC2 Image Builder.
    • Troubleshooting EC2 issues
      • RequestLimitExceeded
      • InstanceLimitExceeded – Concurrent running instance limit, default is 20, has been reached in a region. Request increase in limits.
      • InsufficientInstanceCapacity – AWS does not currently have enough available capacity to service the request. Change AZ or Instance Type.
    • Monitoring EC2 instances
      • System status checks failure – Stop and Start
      • Instance status checks failure – Reboot
    • EC2 supports Instance Recovery where the recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata.
    • EC2 Image Builder can be used to pre-bake images with software to speed up boot and launch time.
  • Understand Placement groups
    • Cluster Placement Group provides low latency and High-Performance Computing via the logical grouping of instances within a single AZ
    • Spread Placement Group is a group of instances that are each placed on distinct underlying hardware i.e. each instance on a distinct rack across AZs
    • Partition Placement Group is a group of instances spread across partitions i.e. groups of instances spread across racks across AZs
  • Understand Auto Scaling
    • Auto Scaling can be configured with multiple AZs for high availability to launch instances across multiple AZs
    • Auto Scaling attempts to distribute instances evenly between the AZs that are enabled for the Auto Scaling group
    • Auto Scaling supports
      • Dynamic scaling, which allows you to scale automatically in response to the changing demand
      • Scheduled scaling, which allows you to scale the application in response to predictable load changes
      • Manual scaling can be performed by changing the desired capacity or adding and removing instances
    • Auto Scaling life cycle hooks can be used to perform activities before instance termination.
  • Understand Lambda and its use cases
    • Lambda functions can be hosted in a VPC with internet access controlled by a NAT instance.
    • RDS Proxy acts as an intermediary between the application and an RDS database. RDS Proxy establishes and manages the necessary connection pools to the database so that the application creates fewer database connections.
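
As referenced in the EC2 bullets above, instance metadata and user data are fetched from inside the instance. Here is a minimal sketch of the IMDSv2 token flow using the third-party requests package; it only works when run on an EC2 instance.

```python
import requests  # third-party package; must run on the EC2 instance itself

# IMDSv2: request a session token first, then pass it on every metadata call.
token = requests.put(
    "http://169.254.169.254/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    timeout=2,
).text

headers = {"X-aws-ec2-metadata-token": token}
instance_id = requests.get(
    "http://169.254.169.254/latest/meta-data/instance-id",
    headers=headers, timeout=2,
).text
user_data = requests.get(
    "http://169.254.169.254/latest/user-data",
    headers=headers, timeout=2,
).text

print(instance_id)
```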

Storage

  • S3 provides object storage service
    • Understand storage classes with lifecycle policies
    • S3 data protection provides encryption at rest and encryption in transit
      • S3 default encryption can be used to encrypt the data with S3 bucket policies to prevent or reject unencrypted object uploads.
    • Multi-part handling for fault-tolerant and performant large file uploads
    • static website hosting, CORS
    • S3 Versioning can help recover from accidental deletes and overwrites.
    • Pre-Signed URLs for both upload and download (see the sketch after this list)
    • S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between the client and an S3 bucket using globally distributed edge locations in CloudFront.
  • Understand Glacier as archival storage. Glacier does not provide immediate access to the data, even with expedited retrievals.
  • Understand EBS storage option
  • Storage Gateway allows storage of data in the AWS cloud for scalable and cost-effective storage while maintaining data security.
    • Gateway cached volumes store data in S3 and retain a copy of recently read data locally for low-latency access to frequently accessed data
    • Gateway stored volumes maintain the entire data set locally to provide low-latency access
  • EFS is a cost-optimized, serverless, scalable, and fully managed file storage for use with AWS Cloud and on-premises resources.
    • supports encryption at rest only at creation time. After creation, the file system cannot be encrypted in place; the data must be copied over to a new encrypted file system.
    • supports General Purpose and Max I/O performance modes.
    • If hitting the PercentIOLimit metric, move to the Max I/O performance mode.
  • FSx makes it easy and cost-effective to launch, run, and scale feature-rich, high-performance file systems in the cloud
  • FSx for Windows supports SMB protocol and a Multi-AZ file system to provide high availability across multiple AZs.
  • AWS Backup can be used to automate backup for EC2 instances and EFS file systems
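
A minimal boto3 sketch of pre-signed URLs for both download and upload, as referenced in the S3 bullets above; bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Time-limited download URL (GET); anyone holding the URL can fetch the object.
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "reports/2021.pdf"},
    ExpiresIn=3600,  # seconds
)

# Time-limited upload URL (PUT) for clients without AWS credentials.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "example-bucket", "Key": "uploads/incoming.csv"},
    ExpiresIn=900,
)
print(download_url, upload_url, sep="\n")
```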

Databases

  • RDS provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.
    • Understand RDS Multi-AZ vs Read Replicas and use cases
    • Multi-AZ deployment provides high availability, durability, and failover support
    • Read replicas enable increased scalability and database availability in the case of an AZ failure.
    • Automated backups and database change logs enable point-in-time recovery of the database during the backup retention period, up to the last five minutes of database usage.
  • Aurora is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine
    • Backtracking “rewinds” the DB cluster to the specified time and performs in-place restore and does not create a new instance.
    • Automated Backups that help restore the DB as a new instance
  • Know ElastiCache use cases, mainly for caching performance
    • Understand ElastiCache Redis vs Memcached
    • Redis provides Multi-AZ support for high availability across AZs and online resharding to scale dynamically.
    • ElastiCache can be used as a caching layer for RDS.
  • Know DynamoDB. Not covered in detail

Security

  • IAM provides Identity and Access Management services.
  • S3 Encryption supports data at rest and in transit encryption
    • Understand S3 with SSE, SSE-C, SSE-KMS
    • S3 default encryption can help encrypt objects; however, it does not encrypt objects that existed before the setting was enabled. You can use S3 Inventory to list those objects and S3 Batch Operations to encrypt them.
  • Understand KMS for key management and envelope encryption
    • KMS with imported customer key material does not support rotation and has to be done manually.
  • AWS WAF – Web Application Firewall helps protect the applications against common web exploits like XSS or SQL Injection and bots that may affect availability, compromise security, or consume excessive resources
  • AWS GuardDuty is a threat detection service that continuously monitors the AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
  • AWS Secrets Manager helps store, retrieve, and rotate credentials securely.
    • Secrets Manager integrates with Lambda and supports credentials rotation
  • AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS
  • Amazon Inspector
    • is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.
    • automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
  • AWS Certificate Manager (ACM) handles the complexity of creating, storing, and renewing public and private SSL/TLS X.509 certificates and keys that protect the AWS websites and applications.
  • Know AWS Artifact as on-demand access to compliance reports

Analytics

  • Amazon Athena can be used to query S3 data without duplicating the data and using SQL queries
  • Elasticsearch Service is a distributed search and analytics engine built on Apache Lucene.
    • A production Elasticsearch setup would be 3 AZs, 3 dedicated master nodes, and 6 data nodes with two replicas in each AZ.

Integration Tools

  • Understand SQS as message queuing service and SNS as pub/sub notification service
    • Focus on SQS as a decoupling service
    • Understand SQS FIFO, make sure you know the differences between standard and FIFO
  • Understand CloudWatch integration with SNS for notification

Financial Management

  • Know AWS Organizations
    • Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization centrally.
  • Consolidated billing enables consolidating payments from multiple AWS accounts and includes combined usage and volume discounts including sharing of Reserved Instances across accounts.
  • Understand how to set up Billing Alerts using CloudWatch
  • Cost allocation tags can be used to differentiate resource costs and analyzed using Cost Explorer or on a Cost Allocation report.

All the Best

AWS S3 Security

  • AWS S3 Security is a shared responsibility between AWS and the Customer
  • As a managed service, S3 is protected by the AWS global network security procedures
  • AWS handles basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery.
  • Security and compliance of S3 is assessed by third-party auditors as part of multiple AWS compliance programs including SOC, PCI DSS, HIPAA, etc.
  • AWS S3 provides several other features to handle security, which are the customer's responsibility.

S3 Data Protection

Refer blog post @ S3 Data Protection

S3 Encryption

Refer blog post @ S3 Encryption

S3 Permissions

Refer blog post @ S3 Permissions

S3 Object Lock

  • S3 Object Lock helps to store objects using a write-once-read-many (WORM) model.
  • S3 Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
  • S3 Object Lock can help meet regulatory requirements that require WORM storage or add an extra layer of protection against object changes and deletion.
  • Object Lock can be enabled only on new buckets; for an existing bucket, contact AWS Support (see the sketch after this list).
  • Enabling Object Lock automatically enables versioning for the bucket.
  • Once Object Lock is enabled, you can’t disable Object Lock or suspend versioning for the bucket.
  • S3 Object Lock provides two retention modes that apply different levels of protection to the objects
    • Governance mode
      • Users can’t overwrite or delete an object version or alter its lock settings unless they have special permissions.
      • Objects can be protected from being deleted by most users, but some users can still be granted permission to alter the retention settings or delete the object if necessary.
      • Can be used to test retention-period settings before creating a compliance-mode retention period.
    • Compliance mode
      • A protected object version can’t be overwritten or deleted by any user, including the root user in your AWS account.
      • When an object is locked in compliance mode, its retention mode can’t be changed, and its retention period can’t be shortened.
      • Compliance mode helps ensure that an object version can’t be overwritten or deleted for the duration of the retention period.
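
A minimal boto3 sketch of enabling Object Lock, as referenced above; the bucket name, retention mode, and retention period are placeholders.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-worm-bucket"  # placeholder

# Object Lock must be enabled at bucket creation; this also enables versioning.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Default retention: governance mode for 30 days. Switch to "COMPLIANCE" once
# the retention-period settings have been validated.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)
```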

S3 VPC Gateway Endpoint

  • A VPC endpoint enables connections between a VPC and supported services without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
  • The VPC is not exposed to the public internet.
  • A Gateway Endpoint is a gateway that serves as a target for a route in the route table, used for traffic destined for either S3 or DynamoDB.
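
A minimal boto3 sketch of creating an S3 gateway endpoint; the VPC ID, route table ID, and region are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The gateway endpoint adds a route for the S3 prefix list to the given route
# tables, keeping S3 traffic off the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```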

S3 Security Best Practices

S3 Preventative Security Best Practices

  • Ensure S3 buckets use the correct policies and are not publicly accessible
    • Use S3 Block Public Access (see the sketch after this list)
    • Identify Bucket policies and ACLs that allow public access
    • Use AWS Trusted Advisor to inspect the S3 implementation.
  • Implement least privilege access
  • Use IAM roles for applications and AWS services that require S3 access
  • Enable Multi-factor authentication (MFA) Delete to help prevent accidental bucket deletions
  • Consider Data at Rest Encryption
  • Enforce Data in Transit Encryption
  • Consider S3 Object Lock to store objects using a “Write Once Read Many” (WORM) model.
  • Enable versioning to easily recover from both unintended user actions and application failures.
  • Consider S3 Cross-Region replication
  • Consider VPC endpoints for S3 access to provide private S3 connectivity and help prevent traffic from potentially traversing the open internet.
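
A minimal boto3 sketch of the S3 Block Public Access setting referenced above; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Turn on all four Block Public Access settings for a bucket.
s3.put_public_access_block(
    Bucket="example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```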

S3 Monitoring and Auditing Best Practices

  • Identify and audit all S3 buckets to have visibility of all the S3 resources, assess their security posture, and take action on potential areas of weakness.
  • Implement monitoring using AWS monitoring tools
  • Enable S3 server access logging, which provides detailed records of the requests made to a bucket, useful for security and access audits (see the sketch after this list)
  • Use AWS CloudTrail, which provides a record of actions taken by a user, a role, or an AWS service in S3.
  • Enable AWS Config, which enables you to assess, audit, and evaluate the configurations of the AWS resources
  • Consider using Amazon Macie with S3 to automatically discover, classify, and protect sensitive data in AWS.
  • Regularly check the security advisories posted in Trusted Advisor for the AWS account.
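
A minimal boto3 sketch of enabling the server access logging referenced above; both bucket names are placeholders, and the target bucket must grant the S3 log delivery service permission to write.

```python
import boto3

s3 = boto3.client("s3")

# Deliver access logs for example-bucket into a separate logging bucket.
s3.put_bucket_logging(
    Bucket="example-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-logging-bucket",
            "TargetPrefix": "access-logs/example-bucket/",
        }
    },
)
```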

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated at the same pace as AWS updates, so even if the underlying feature has changed, the question might not reflect it.
  • Open to further feedback, discussion, and correction.

References

AWS_S3_Security

AWS S3 Encryption

  • AWS S3 Encryption supports both data at rest and data in transit encryption.
  • Data in-transit
    • S3 allows protection of data in transit by enabling communication via SSL or using client-side encryption
  • Data at Rest
    • Server-Side Encryption
      • S3 encrypts the object before saving it on disks in its data centers and decrypts it when the object is downloaded
    • Client-Side Encryption
      • data is encrypted at the client-side and uploaded to S3.
      • the encryption process, the encryption keys, and related tools are managed by the user.

S3 Server-Side Encryption

  • Server-side encryption is about data encryption at rest
  • Server-side encryption encrypts only the object data.
  • Any object metadata is not encrypted.
  • S3 handles the encryption (as it writes to disks) and decryption (when objects are accessed) of the data objects
  • There is no difference in the access mechanism for encrypted and unencrypted objects; encryption is handled transparently by S3

Server-Side Encryption with S3-Managed Keys – SSE-S3

  • Each object is encrypted with a unique data key employing strong multi-factor encryption.
  • SSE-S3 encrypts the data key with a master key that is regularly rotated.
  • S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt the data.
  • Whether objects are encrypted with SSE-S3 can't be enforced when they are uploaded using pre-signed URLs, because the only way server-side encryption can be specified is through the AWS Management Console or through an HTTP request header
  • Must set the header x-amz-server-side-encryption to AES256
  • To enforce server-side encryption for all objects stored in a bucket, use a bucket policy that denies permission to upload an object unless the request includes the x-amz-server-side-encryption header requesting server-side encryption (see the sketch after this list).
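
A minimal boto3 sketch of both points above: an SSE-S3 upload (boto3 sends the x-amz-server-side-encryption header via the ServerSideEncryption parameter) and a bucket policy denying unencrypted uploads. The bucket name is a placeholder.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # placeholder

# boto3 sends the x-amz-server-side-encryption: AES256 header for you.
s3.put_object(Bucket=bucket, Key="data.txt", Body=b"hello",
              ServerSideEncryption="AES256")

# Deny any PutObject request that does not ask for SSE-S3; a missing header
# also matches StringNotEquals, so unencrypted uploads are rejected.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        },
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```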

Server-Side Encryption with AWS KMS-Managed Keys – SSE-KMS

  • SSE-KMS is similar to SSE-S3, but it uses AWS Key Management Services (KMS) which provides additional benefits along with additional charges
    • KMS is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud.
    • KMS uses customer master keys (CMKs) to encrypt the S3 objects.
    • The master key is never made available
    • KMS enables you to centrally create encryption keys, define the policies that control how keys can be used
    • Allows audit of keys used to prove they are being used correctly, by inspecting logs in AWS CloudTrail
    • Allows keys to be temporarily disabled and re-enabled
    • Allows keys to be rotated regularly
    • Security controls in AWS KMS can help meet encryption-related compliance requirements.
  • SSE-KMS enables separate permissions for the use of an envelope key (that is, a key that protects the data’s encryption key) that provides added protection against unauthorized access of the objects in S3.
  • SSE-KMS provides the option to create and manage encryption keys yourself, or use a default customer master key (CMK) that is unique to you, the service you’re using, and the region you’re working in.
  • Creating and Managing CMK gives more flexibility, including the ability to create, rotate, disable, and define access controls, and to audit the encryption keys used to protect the data.
  • Data keys used to encrypt the data are also encrypted and stored alongside the data they protect and are unique to each object.
  • Process flow (sketched in code after this list)
    • An application or AWS service client requests an encryption key to encrypt data and passes a reference to a master key under the account.
    • Client requests are authenticated based on whether they have access to use the master key.
    • A new data encryption key is created, and a copy of it is encrypted under the master key.
    • Both the data key and encrypted data key are returned to the client.
    • Data key is used to encrypt customer data and then deleted as soon as is practical.
    • Encrypted data key is stored for later use and sent back to AWS KMS when the source data needs to be decrypted.
  • S3 only supports symmetric keys and not asymmetric keys.
  • Must set header x-amz-server-side-encryption to aws:kms
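
A minimal sketch of the envelope-encryption process flow above, using boto3 for the KMS calls and the third-party cryptography package for the local AES step; the key alias is a placeholder.

```python
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

# Steps 1-4: request a data key under a master key (alias is a placeholder);
# KMS returns the plaintext key plus a copy encrypted under the master key.
resp = kms.generate_data_key(KeyId="alias/example-key", KeySpec="AES_256")
plaintext_key, encrypted_key = resp["Plaintext"], resp["CiphertextBlob"]

# Step 5: encrypt the data locally, then discard the plaintext key.
nonce = os.urandom(12)
ciphertext = AESGCM(plaintext_key).encrypt(nonce, b"customer data", None)
del plaintext_key

# Step 6: store encrypted_key (and nonce) alongside the ciphertext; to decrypt
# later, send the encrypted key back to KMS and unwrap it.
decrypted_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
plaintext = AESGCM(decrypted_key).decrypt(nonce, ciphertext, None)
```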

Server-Side Encryption with Customer-Provided Keys – SSE-C

  • Encryption keys can be managed and provided by the customer, and S3 manages the encryption as it writes to disks and the decryption when you access the objects (see the sketch after this list)
  • When you upload an object, the encryption key is provided as part of the request; S3 uses that encryption key to apply AES-256 encryption to the data and then removes the encryption key from memory.
  • When you download an object, the same encryption key must be provided as part of the request. S3 first verifies the encryption key, and if it matches, the object is decrypted before being returned.
  • As each object and each object’s version can be encrypted with a different key, you are responsible for maintaining the mapping between the object and the encryption key used.
  • SSE-C requests must be done through HTTPS and S3 will reject any requests made over HTTP when using SSE-C.
  • For security considerations, AWS recommends considering any key sent erroneously using HTTP to be compromised and discarded or rotated
  • S3 does not store the encryption key provided. Instead, a randomly salted HMAC value of the encryption key is stored which can be used to validate future requests. The salted HMAC value cannot be used to decrypt the contents of the encrypted object or to derive the value of the encryption key. That means, if you lose the encryption key, you lose the object.
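
A minimal boto3 sketch of an SSE-C upload and download; the bucket name is a placeholder, and boto3 computes the key MD5 header for you.

```python
import os

import boto3

s3 = boto3.client("s3")
customer_key = os.urandom(32)  # 256-bit key you manage; losing it loses the object

# boto3 sends the x-amz-server-side-encryption-customer-* headers (and the
# key's MD5) for you; requests must go over HTTPS.
s3.put_object(
    Bucket="example-bucket",  # placeholder
    Key="secret.txt",
    Body=b"hello",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# The same key must be supplied again on download.
obj = s3.get_object(
    Bucket="example-bucket",
    Key="secret.txt",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
print(obj["Body"].read())
```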

Client-Side Encryption

Client-side encryption refers to encrypting data before sending it to S3 and decrypting the data after downloading it

AWS KMS-managed Customer Master Key – CMK

  • The customer can maintain the encryption CMK with AWS KMS and provide the CMK ID to the client to encrypt the data
  • Uploading Object
    • AWS S3 encryption client first sends a request to AWS KMS for the key to encrypt the object data
    • AWS KMS returns a randomly generated data encryption key in two versions: a plaintext version for encrypting the data and a cipher blob to be uploaded with the object as object metadata
    • Client obtains a unique data encryption key for each object it uploads.
    • AWS S3 encryption client uploads the encrypted data and the cipher blob with object metadata
  • Downloading Object
    • AWS Client first downloads the encrypted object along with the cipher blob version of the data encryption key stored as object metadata
    • AWS Client then sends the cipher blob to AWS KMS to get the plain text version of the same, so that it can decrypt the object data.

Client-Side Master Key

  • Encryption master keys are maintained completely on the client side
  • Uploading Object
    • The S3 encryption client (e.g. AmazonS3EncryptionClient in the AWS SDK for Java) locally generates a random one-time-use symmetric key (also known as a data encryption key or data key); see the sketch after this list.
    • Client encrypts the data encryption key using the customer-provided master key
    • Client uses this data encryption key to encrypt the data of a single S3 object (for each object, the client generates a separate data key).
    • Client then uploads the encrypted data to S3 and also saves the encrypted data key and its material description as object metadata (x-amz-meta-x-amz-key) in S3 by default
  • Downloading Object
    • Client first downloads the encrypted object from S3 along with the object metadata.
    • Using the material description in the metadata, the client first determines which master key to use to decrypt the encrypted data key.
    • Using that master key, the client decrypts the data key and uses it to decrypt the object
  • Client-side master keys and your unencrypted data are never sent to AWS
  • If the master key is lost the data cannot be decrypted
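
A minimal sketch of the client-side flow above, hand-rolled with the third-party cryptography package rather than an SDK encryption client (a client such as AmazonS3EncryptionClient would normally handle this bookkeeping); the bucket name and metadata key are illustrative only.

```python
import base64
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

s3 = boto3.client("s3")
master_key = os.urandom(32)  # client-side master key; never sent to AWS

# Generate a one-time data key, encrypt the object with it, then wrap the
# data key under the master key and store it as object metadata.
data_key, obj_nonce, key_nonce = os.urandom(32), os.urandom(12), os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(obj_nonce, b"document body", None)
wrapped_key = AESGCM(master_key).encrypt(key_nonce, data_key, None)

s3.put_object(
    Bucket="example-bucket",  # placeholder
    Key="doc.enc",
    Body=obj_nonce + ciphertext,
    Metadata={"wrapped-key": base64.b64encode(key_nonce + wrapped_key).decode()},
)

# Download: unwrap the data key with the master key, then decrypt the body.
obj = s3.get_object(Bucket="example-bucket", Key="doc.enc")
blob = base64.b64decode(obj["Metadata"]["wrapped-key"])
data_key = AESGCM(master_key).decrypt(blob[:12], blob[12:], None)
body = obj["Body"].read()
print(AESGCM(data_key).decrypt(body[:12], body[12:], None))
```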

Enforcing S3 Encryption

  • S3 Default Encryption
    • helps set the default encryption behavior for an S3 bucket so that all new objects are encrypted when they are stored in the bucket.
    • Objects are encrypted using SSE with either S3-managed keys (SSE-S3) or AWS KMS keys stored in AWS KMS (SSE-KMS).
  • S3 Bucket Policy
    • can be applied that denies permissions to upload an object unless the request includes x-amz-server-side-encryption header to request server-side encryption.
    • is not required if S3 default encryption is enabled
    • are evaluated before the default encryption.

S3 Bucket Policy Enforce Encryption
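
A minimal boto3 sketch of enabling S3 default encryption; the bucket name and KMS key alias are placeholders, and SSEAlgorithm can be AES256 for SSE-S3 instead.

```python
import boto3

s3 = boto3.client("s3")

# Default-encrypt all new objects with an AWS KMS key.
s3.put_bucket_encryption(
    Bucket="example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/example-key",
            }
        }]
    },
)
```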

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated at the same pace as AWS updates, so even if the underlying feature has changed, the question might not reflect it.
  • Open to further feedback, discussion, and correction.
  1. A company is storing data on Amazon Simple Storage Service (S3). The company’s security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? Choose 3 answers
    1. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys
    2. Use Amazon S3 server-side encryption with customer-provided keys
    3. Use Amazon S3 server-side encryption with EC2 key pair.
    4. Use Amazon S3 bucket policies to restrict access to the data at rest.
    5. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key
    6. Use SSL to encrypt the data while in transit to Amazon S3.
  2. A user has enabled versioning on an S3 bucket. The user is using server side encryption for data at Rest. If the user is supplying his own keys for encryption (SSE-C) which of the below mentioned statements is true?
    1. The user should use the same encryption key for all versions of the same object
    2. It is possible to have different encryption keys for different versions of the same object
    3. AWS S3 does not allow the user to upload his own keys for server side encryption
    4. The SSE-C does not work when versioning is enabled
  3. A storage admin wants to encrypt all the objects stored in S3 using server side encryption. The user does not want to use the AES 256 encryption key provided by S3. How can the user achieve this?
    1. The admin should upload his secret key to the AWS console and let S3 decrypt the objects
    2. The admin should use CLI or API to upload the encryption key to the S3 bucket. When making a call to the S3 API mention the encryption key URL in each request
    3. S3 does not support client supplied encryption keys for server side encryption
    4. The admin should send the keys and encryption algorithm with each API call
  4. A user has enabled versioning on an S3 bucket. The user is using server side encryption for data at rest. If the user is supplying his own keys for encryption (SSE-C), what is recommended to the user for the purpose of security?
    1. User should not use his own security key as it is not secure
    2. Configure S3 to rotate the user’s encryption key at regular intervals
    3. Configure S3 to store the user’s keys securely with SSL
    4. Keep rotating the encryption key manually at the client side
  5. A system admin is planning to encrypt all objects being uploaded to S3 from an application. The system admin does not want to implement his own encryption algorithm; instead, he is planning to use server side encryption by supplying his own key (SSE-C). Which parameter is not required while making a call for SSE-C?
    1. x-amz-server-side-encryption-customer-key-AES-256
    2. x-amz-server-side-encryption-customer-key
    3. x-amz-server-side-encryption-customer-algorithm
    4. x-amz-server-side-encryption-customer-key-MD5
  6. You are designing a personal document-archiving solution for your global enterprise with thousands of employees. Each employee has potentially gigabytes of data to be backed up in this archiving solution. The solution will be exposed to the employees as an application, where they can just drag and drop their files to the archiving system. Employees can retrieve their archives through a web interface. The corporate network has high-bandwidth AWS Direct Connect connectivity to AWS. You have regulatory requirements that all data needs to be encrypted before being uploaded to the cloud. How do you implement this in a highly available and cost-efficient way?
    1. Manage encryption keys on-premise in an encrypted relational database. Set up an on-premises server with sufficient storage to temporarily store files and then upload them to Amazon S3, providing a client-side master key. (Storing temporary increases cost and not a high availability option)
    2. Manage encryption keys in a Hardware Security Module(HSM) appliance on-premise server with sufficient storage to temporarily store, encrypt, and upload files directly into amazon Glacier. (Not cost effective)
    3. Manage encryption keys in amazon Key Management Service (KMS), upload to amazon simple storage service (s3) with client-side encryption using a KMS customer master key ID and configure Amazon S3 lifecycle policies to store each object using the amazon glacier storage tier. (with CSE-KMS the encryption happens at client side before the object is upload to S3 and KMS is cost effective as well)
    4. Manage encryption keys in an AWS CloudHSM appliance. Encrypt files prior to uploading on the employee desktop and then upload directly into amazon glacier (Not cost effective)
  7. A user has enabled server side encryption with S3. The user downloads the encrypted object from S3. How can the user decrypt it?
    1. S3 does not support server side encryption
    2. S3 provides a server side key to decrypt the object
    3. The user needs to decrypt the object using their own private key
    4. S3 manages encryption and decryption automatically
  8. When uploading an object, what request header can be explicitly specified in a request to Amazon S3 to encrypt object data when saved on the server side?
    1. x-amz-storage-class
    2. Content-MD5
    3. x-amz-security-token
    4. x-amz-server-side-encryption
  9. A company must ensure that any objects uploaded to an S3 bucket are encrypted. Which of the following actions should the SysOps Administrator take to meet this requirement? (Select TWO.)
    1. Implement AWS Shield to protect against unencrypted objects stored in S3 buckets.
    2. Implement Object access control list (ACL) to deny unencrypted objects from being uploaded to the S3 bucket.
    3. Implement Amazon S3 default encryption to make sure that any object being uploaded is encrypted before it is stored.
    4. Implement Amazon Inspector to inspect objects uploaded to the S3 bucket to make sure that they are encrypted.
    5. Implement S3 bucket policies to deny unencrypted objects from being uploaded to the buckets.

References

AWS_S3_Encryption

Using AWS in a Hybrid Environment

As the adoption of the public cloud grows, more and more people are finding themselves adopting a hybrid cloud model. This adoption is being driven out of necessity more than any architectural design decision. When you start to look closely at what makes a hybrid cloud, it’s increasingly understandable why more and more people are using AWS in this hybrid model.

Of course, when we think about why you would use a hybrid cloud, it’s the mixture of different computing models that is the most obvious reason. With a hybrid cloud model consisting of on-premises infrastructures, private cloud services, and public cloud offerings, and being able to orchestrate the deployment of workloads and components across these, it becomes increasingly easy to distribute an application in a resilient manner.

AWS services

AWS provides many different services, from IaaS virtual machines and PaaS relational database offerings through to all different types of storage offerings at extremely low cost. When we start to look at how we can leverage AWS in a hybrid cloud world, it becomes obvious that storage is at the forefront of these decisions, but the key driver behind everything in the cloud is agility. It's a buzzword that has been around for quite some time, but the ability to spin up and destroy workloads on demand, without having to invest a large amount of capital to deliver a new service or capability to a business, is what is driving this hybrid cloud adoption.

Of course, when you start to introduce different platforms into your existing environment, complexities can arise. Overcoming these complexities is what makes a successful hybrid cloud implementation. Considerations around security, authentication, networking, and connectivity must be looked at. In short, these considerations are the same as when a new data center is implemented; it is simply a different platform.

Veeam, AWS, and the Hybrid Cloud

As more and more businesses adopt this hybrid cloud approach, protecting and migrating workloads across these different platforms becomes extremely complex. This is where Veeam can help businesses deliver on the true promise of a hybrid cloud. Veeam offers multiple products that can be used individually in a modular way, to provide data protection and management of individual resources and services or combined to provide a centralized data management solution.

Let’s look at a real-world scenario. In this example, we have some workloads running out in AWS that need to be migrated to our on-premises data center. Maybe we are facing latency issues, or we have a security requirement for this application to be closer to some services running on-premises. Using Veeam, we can easily protect those workloads and migrate them to another data center.

Using AWS in a Hybrid Environment – Veeam

The diagram above shows how simply this can be carried out. By protecting workloads in AWS with Veeam, you can move them across different platforms. It doesn’t matter which direction you want to move workloads either; you can just as easily take a virtual machine running on VMware vSphere or Microsoft Hyper-V and migrate it to AWS EC2 as an instance. You can move workloads across multiple platforms or hypervisors extremely easily.

Summary

Introducing and implementing a hybrid cloud with AWS may be daunting, but it needn’t be complex. By taking a considered approach to aspects such as networking, connectivity, and migration, leveraging AWS in a hybrid cloud model with your existing on-premises implementation can provide anyone with an agile, simple, quick, and easy approach to delivering new services and capabilities. Combine that with products from companies like Veeam, and implementing a true hybrid cloud data management solution is extremely simple, providing you with the flexibility of moving workloads across multiple platforms, while implementing a reliable service to the end customers of the business.

For more information on Veeam, please visit the Veeam Backup for AWS page. Also, don’t forget to check out the “Choose Your Cloud Adventure” interactive e-book to learn how to manage your AWS data like a hero.

AWS Elastic File System – EFS

  • Elastic File System – EFS provides simple, fully managed, easy-to-set-up, scalable, serverless, and cost-optimized file storage for use with AWS Cloud and on-premises resources.
  • EFS can automatically scale from gigabytes to petabytes of data without needing to provision storage
  • EFS provides a managed NFS (network file system) that can be mounted on and accessed by multiple EC2 instances in multiple AZs simultaneously
  • EFS is highly durable, highly scalable, and highly available.
    • stores data redundantly across multiple Availability Zones
    • grows and shrinks automatically as files are added and removed, so there is no need to manage storage procurement or provisioning.
  • EFS supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol
  • EFS provides file system access semantics, such as strong data consistency and file locking
  • EFS is compatible with all Linux-based AMIs for EC2; it is a POSIX file system (~Linux) with a standard file API
  • EFS is a shared POSIX system for Linux systems and does not work for Windows
  • EFS offers the ability to encrypt data at rest using KMS and in transit.
  • EFS can be accessed from on-premises using an AWS Direct Connect or AWS VPN connection between the on-premises datacenter and VPC.
  • EFS can be accessed concurrently from servers in the on-premises datacenter as well as EC2 instances in the VPC
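
A minimal boto3 sketch of creating an encrypted EFS file system and one mount target; the subnet and security group IDs are placeholders, and one mount target per AZ would be created for Standard storage classes.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Create an encrypted, General Purpose file system.
fs = efs.create_file_system(
    CreationToken="example-efs",     # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,                  # at-rest encryption, only at creation time
)

# One mount target per AZ; the security group must allow NFS (TCP 2049).
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)
```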

EFS Storage Classes

Standard storage classes

  • EFS Standard and Standard–Infrequent Access (Standard–IA) offer multi-AZ resilience and the highest levels of durability and availability.
  • For file systems using Standard storage classes, a mount target can be created in each Availability Zone in the AWS Region.
  • Standard
    • regional storage class for frequently accessed data.
    • offers the highest levels of availability and durability by storing file system data redundantly across multiple AZs in an AWS Region.
    • ideal for active file system workloads and you pay only for the file system storage you use per month
  • Standard-Infrequent Access (Standard-IA)
    • regional, low-cost storage class that is cost-optimized for infrequently accessed files, i.e. files not accessed every day
    • offers the highest levels of availability and durability by storing file system data redundantly across multiple AZs in an AWS Region
    • charges a cost to retrieve files, with a lower price to store

EFS Regional

One Zone storage classes

  • EFS One Zone and EFS One Zone–Infrequent Access (EFS One Zone–IA), which offer customers the choice of additional savings by choosing to save their data in a single AZ.
  • For file systems using One Zone storage classes, you create only a single mount target that is in the same Availability Zone as the file system
  • EFS One Zone
    • For frequently accessed files stored redundantly within a single AZ in an AWS Region.
  • EFS One Zone-IA (One Zone-IA)
    • A lower-cost storage class for infrequently accessed files stored redundantly within a single AZ in an AWS Region.

EFS Zonal

EFS Lifecycle Management

  • EFS lifecycle management automatically manages cost-effective file storage for the file systems.
  • When enabled, lifecycle management migrates files that haven't been accessed for a set period of time to an infrequent access storage class, Standard-IA or One Zone-IA
  • Lifecycle management automatically moves the data to the EFS IA storage class according to the lifecycle policy, e.g. files can be moved into EFS IA fourteen days after they were last accessed (see the sketch after this list).
  • EFS lifecycle management uses an internal timer to track when a file was last accessed, not the POSIX file system attributes that are publicly viewable.
  • Whenever a file in Standard or One Zone storage is accessed, the lifecycle management timer is reset.
  • After lifecycle management moves a file into one of the IA storage classes, the file remains there indefinitely if EFS Intelligent-Tiering is not enabled.
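
A minimal boto3 sketch of the lifecycle policy described above; the file system ID is a placeholder.

```python
import boto3

efs = boto3.client("efs")

# Move files to the IA storage class after 14 days without access.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[{"TransitionToIA": "AFTER_14_DAYS"}],
)
```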

EFS Performance Modes

General Purpose (default)

  • latency-sensitive use cases
  • ideal for web serving environments, content management systems, home directories, and general file serving, etc.

Max I/O

  • can scale to higher levels of aggregate throughput and operations per second
  • with a tradeoff of slightly higher latencies for file metadata operations
  • ideal for highly parallelized applications and workloads, such as big data analysis, media processing, and genomic analysis
  • is not available for file systems using One Zone storage classes.

EFS Throughput Modes

Provisioned Throughput

  • throughput of the file system (in MiB/s) can be instantly provisioned independent of the amount of data stored.

Bursting Throughput

  • throughput on EFS scales as the size of the file system in the EFS Standard or One Zone storage class grows

EFS Security

  • EFS supports authentication, authorization, and encryption capabilities to help you meet the security and compliance requirements.
  • EFS supports two forms of encryption for file systems,
    • Encryption in transit
      • Encryption in Transit can be enabled when you mount the file system.
    • Encryption at rest.
      • can be enabled only when creating an EFS file system.
      • All the data and metadata are encrypted.
  • NFS client access to EFS is controlled by both AWS IAM policies and network security policies like security groups

EFS Access Points

  • EFS access points are application-specific entry points into an EFS file system that make it easier to manage application access to shared datasets
  • Access points can enforce a user identity, including the user’s POSIX groups, for all file system requests that are made through the access point.
  • Access points can also enforce a different root directory for the file system so that clients can only access data in the specified directory or its subdirectories.
  • AWS IAM policies can be used to enforce that a specific application uses a specific access point (see the sketch after this list).
  • IAM policies with access points provide secure access to specific datasets for the applications.
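
A minimal boto3 sketch of an access point that enforces a POSIX identity and a root directory; the file system ID and all values are placeholders.

```python
import boto3

efs = boto3.client("efs")

# Access point that forces all requests to POSIX user 1001 and restricts
# clients to /app/data.
efs.create_access_point(
    FileSystemId="fs-0123456789abcdef0",
    PosixUser={"Uid": 1001, "Gid": 1001},
    RootDirectory={
        "Path": "/app/data",
        "CreationInfo": {"OwnerUid": 1001, "OwnerGid": 1001, "Permissions": "750"},
    },
)
```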

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated at the same pace as AWS updates, so even if the underlying feature has changed, the question might not reflect it.
  • Open to further feedback, discussion, and correction.
  1. An administrator runs a highly available application in AWS. A file storage layer is needed that can be shared between instances and scale the platform more easily. The storage should also be POSIX compliant. Which AWS service can perform this action?
    1. Amazon EBS
    2. Amazon S3
    3. Amazon EFS
    4. Amazon EC2 Instance store

References

AWS_Elastic_File_Store_EFS

AWS EC2 Image Builder

  • EC2 Image Builder is a fully managed AWS service that makes it easier to automate the creation, management, and deployment of customized, secure, and up-to-date server images that are pre-installed and pre-configured with software and settings to meet specific IT standards
  • EC2 Image Builder simplifies the building, testing, and deployment of Virtual Machine and container images for use on AWS or on-premises.
  • Image Builder significantly reduces the effort of keeping images up-to-date and secure by providing a simple graphical interface, built-in automation, and AWS-provided security settings.
  • Image Builder removes the manual steps for updating an image, and you do not have to build your own automation pipeline.
  • Image Builder provides a one-stop-shop to build, secure, and test up-to-date Virtual Machine and container images using common workflows.
  • Image Builder allows you to easily validate your images for functionality, compatibility, and security compliance with AWS-provided tests and your own tests before using them in production
  • Image Builder is offered at no cost, other than the cost of the underlying AWS resources used to create, store, and share the images.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated at the same pace as AWS updates, so even if the underlying feature has changed, the question might not reflect it.
  • Open to further feedback, discussion, and correction.

References

AWS_EC2_Image_Builder

AWS CloudWatch Agent

  • CloudWatch Agent helps collect metrics and logs and push them to CloudWatch
  • Default namespace for metrics collected by the CloudWatch agent is CWAgent, although a different namespace can be configured
  • Logs collected by the unified CloudWatch agent are processed and stored in CloudWatch Logs
  • CloudWatch agent helps to
    • Collect internal system-level metrics from EC2 instances across operating systems. The metrics can include in-guest metrics, in addition to the metrics for EC2 instances.
    • Collect system-level metrics from on-premises servers. These can include servers in a hybrid environment as well as servers not managed by AWS.
    • Retrieve custom metrics from the applications or services using the StatsD and collectd protocols. StatsD is supported on both Linux servers and servers running Windows Server. collectd is supported only on Linux servers.
    • Collect logs from EC2 instances and on-premises servers, running either Linux or Windows Server.
    • Collect metrics for individual processes using the procstat plugins, which are stored in the procstat namespace.
  • The CloudWatch agent can be installed on Amazon Linux 2 and on all supported operating systems, either manually or using AWS Systems Manager
  • The CloudWatch agent needs permission to write metrics to CloudWatch: assign an IAM role to EC2 instances or an IAM user for on-premises servers (a sample agent configuration follows this list).
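
Below is a minimal sketch of an agent configuration file (on Linux, commonly /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json) that adds the memory, swap, and disk metrics mentioned above plus one log file; the log group name is a placeholder.

```json
{
  "metrics": {
    "namespace": "CWAgent",
    "metrics_collected": {
      "mem":  {"measurement": ["mem_used_percent"]},
      "swap": {"measurement": ["swap_used_percent"]},
      "disk": {"measurement": ["used_percent"], "resources": ["*"]}
    }
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [{
          "file_path": "/var/log/messages",
          "log_group_name": "example-system-logs",
          "log_stream_name": "{instance_id}"
        }]
      }
    }
  }
}
```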

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated at the same pace as AWS updates, so even if the underlying feature has changed, the question might not reflect it.
  • Open to further feedback, discussion, and correction.

References

AWS_CloudWatch_Agent