AWS CloudTrail enables governance, compliance, operational auditing, and risk auditing of the AWS account.
CloudTrail provides a history of AWS API calls and related events for the AWS account.
CloudTrail records actions taken by a user, role, or AWS service.
CloudTrail tracking includes calls made by using the AWS Management Console, AWS SDKs, command-line tools (CLI), APIs, and higher-level AWS services (such as AWS CloudFormation).
CloudTrail helps to identify which users and accounts called AWS, the source IP address the calls were made from, and when the calls occurred.
CloudTrail is enabled on your AWS account when you create it.
CloudTrail is per AWS account and per region for all the supported services.
CloudTrail AWS API call history enables security analysis, resource change tracking, and compliance auditing.
CloudTrail event history provides a viewable, searchable, and downloadable record of the past 90 days of CloudTrail events.
CloudTrail log files are encrypted by default using S3 server-side encryption (SSE-S3) and can optionally be encrypted with SSE-KMS.
CloudTrail log file integrity validation can be used to check whether a log file was modified, deleted, or unchanged after CloudTrail delivered it.
CloudTrail integrates with AWS Organizations and provides an organization trail that enables the delivery of events in the management account, delegated administrator account, and all member accounts in an organization to the same S3 bucket, CloudWatch Logs, and CloudWatch Events.
CloudTrail Insights can be enabled on a trail to help identify and respond to unusual activity.
CloudTrail Lake helps run fine-grained SQL-based queries on events.
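A minimal boto3 sketch of a CloudTrail Lake query, assuming an event data store already exists (the event data store ID and the SQL statement below are placeholders):

```python
import time
import boto3

cloudtrail = boto3.client("cloudtrail")

# Placeholder event data store ID -- replace with your own.
EDS_ID = "EXAMPLE-f852-4e8f-8bd1-bcf6cb78EXAMPLE"

# SQL-based query against CloudTrail Lake; the FROM clause names the event data store.
query = (
    f"SELECT eventSource, eventName, COUNT(*) AS calls "
    f"FROM {EDS_ID} "
    f"GROUP BY eventSource, eventName ORDER BY calls DESC"
)

query_id = cloudtrail.start_query(QueryStatement=query)["QueryId"]

# Poll until the query finishes, then print the result rows.
while cloudtrail.describe_query(QueryId=query_id)["QueryStatus"] in ("QUEUED", "RUNNING"):
    time.sleep(2)

for row in cloudtrail.get_query_results(QueryId=query_id)["QueryResultRows"]:
    print(row)
```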
How CloudTrail Works
AWS CloudTrail captures AWS API calls and related events made by or on behalf of an AWS account and delivers log files to a specified S3 bucket.
S3 lifecycle rules can be applied to archive or delete log files automatically.
Log files from all the regions can be delivered to a single S3 bucket and are encrypted, by default, using S3 server-side encryption (SSE). Encryption can be configured with AWS KMS.
CloudTrail publishes new log files multiple times an hour, usually about every 5 mins, and typically delivers log files within 15 mins of an API call.
CloudTrail can be configured, optionally, to deliver events to a log group to be monitored by CloudWatch Logs.
SNS notifications can be configured to be sent each time a log file is delivered to your bucket.
A Trail is a configuration that enables logging of the AWS API activity and delivery of events to a specified S3 bucket.
Trail can be created with CloudTrail console, AWS CLI, or CloudTrail API.
Events in a trail can also be delivered and analyzed with CloudWatch Logs and EventBridge.
A Trail can be applied to all regions or a single region
A trail that applies to all regions
When a trail is created that applies to all regions, CloudTrail creates the same trail in each region, records the log files in each region, and delivers the log files to the specified single S3 bucket (and optionally to the CloudWatch Logs log group).
Default setting when a trail is created using the CloudTrail console.
A single SNS topic for notifications and CloudWatch Logs log group for events would suffice for all regions.
Advantages
configuration settings for the trail apply consistently across all regions.
manage trail configuration for all regions from one location.
immediately receive events from a new region
receive log files from all regions in a single S3 bucket and optionally in a CloudWatch Logs log group.
create trails in regions not used often to monitor for unusual activity.
A trail that applies to one region
An S3 bucket can be specified that receives events only from that region and it can be in any region that you specify.
If additional individual trails are created that apply to specific regions, those trails can still deliver event logs to a single S3 bucket.
Turning on a trail means creating the trail and starting logging.
CloudTrail supports five trails per region. A trail that applies to all regions counts as one trail in every region.
As a best practice, create a trail that applies to all regions of the AWS partition in which you are working, e.g., aws for all standard AWS Regions or aws-cn for China (a minimal creation sketch follows this list).
IAM can control which AWS users can create, configure, or delete trails, start and stop logging, and access the buckets containing log information.
Log file integrity validation can be enabled to verify that log files have remained unchanged since CloudTrail delivered them.
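A minimal boto3 sketch of the best-practice setup above: create an all-regions trail with global service events and log file validation enabled, then start logging. The trail and bucket names are placeholders, and the S3 bucket is assumed to already exist with a bucket policy that allows CloudTrail to write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Placeholder names -- the S3 bucket must already exist with a bucket policy
# that allows CloudTrail to deliver log files to it.
trail = cloudtrail.create_trail(
    Name="management-events-trail",
    S3BucketName="my-cloudtrail-logs-bucket",
    IsMultiRegionTrail=True,            # apply the trail to all regions
    IncludeGlobalServiceEvents=True,    # capture IAM, STS, CloudFront events
    EnableLogFileValidation=True,       # deliver hourly digest files
)

# A trail does not record events until logging is started.
cloudtrail.start_logging(Name=trail["Name"])
```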
CloudTrail with AWS Organizations
With AWS Organizations, an Organization trail can be created that will log all events for all AWS accounts in that organization.
Organization trails can apply to all AWS Regions or one Region.
Organization trails must be created in the management account, and when specified as applying to an organization, are automatically applied to all member accounts in the organization.
Member accounts will be able to see the organization trail, but cannot modify or delete it.
By default, member accounts will not have access to the log files for the organization trail in the S3 bucket.
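A minimal sketch of creating an organization trail, assuming it is run from the organization's management account (or a delegated administrator) with CloudTrail trusted access enabled in AWS Organizations; the names are placeholders.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Must be called from the management (or delegated administrator) account;
# the bucket is a placeholder and must allow CloudTrail delivery.
cloudtrail.create_trail(
    Name="org-trail",
    S3BucketName="org-cloudtrail-logs-bucket",
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,   # replicate the trail into all member accounts
)
cloudtrail.start_logging(Name="org-trail")
```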
CloudTrail Events
An event in CloudTrail is the record of activity in an AWS account.
CloudTrail events provide a history of both API and non-API account activity made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.
CloudTrail has the following event types
Management Events
Management events provide information about management or control plane operations that are performed on resources.
Includes resource creation, modification, and deletion events.
By default, trails log all management events for the AWS account.
Data Events
Data events provide information about the resource or data plane operations performed on or in a resource.
Includes data events like reading and writing of objects in S3 or items in DynamoDB.
By default, trails don’t log data events for the AWS account (see the sketch after this list).
CloudTrail Insights Event
CloudTrail Insights events capture unusual API call rate or error rate activity in the AWS account.
An Insights event is a record of unusual levels of write management API activity, or unusual levels of errors returned on management API activity.
By default, trails don’t log CloudTrail Insights events.
When enabled, CloudTrail detects unusual activity, and Insights events are logged to a different folder or prefix in the destination S3 bucket for the trail.
Insights events provide relevant information, such as the associated API, error code, incident time, and statistics, that help you understand and act on unusual activity.
Unlike other types of events captured in a CloudTrail trail, Insights events are logged only when CloudTrail detects changes in the account’s API usage or error rate logging that differ significantly from the account’s typical usage patterns.
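A minimal boto3 sketch of enabling S3 data events and Insights events on an existing trail; the trail and bucket names are placeholders.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Log S3 object-level (data plane) operations for one bucket, in addition
# to the management events the trail already records.
cloudtrail.put_event_selectors(
    TrailName="management-events-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    "Values": ["arn:aws:s3:::my-app-bucket/"],  # placeholder bucket
                }
            ],
        }
    ],
)

# Enable Insights events for unusual API call rates and error rates.
cloudtrail.put_insight_selectors(
    TrailName="management-events-trail",
    InsightSelectors=[
        {"InsightType": "ApiCallRateInsight"},
        {"InsightType": "ApiErrorRateInsight"},
    ],
)
```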
Global Services Option
For most services, events are sent to the region where the action happened.
For global services such as IAM, AWS STS, and CloudFront, events are delivered to any trail that has the Include global services option enabled.
AWS OpsWorks and Route 53 actions are logged in the US East (N. Virginia) region.
To avoid receiving duplicate global service events, remember
Global service events are always delivered to trails that have the Apply trail to all regions option enabled.
Events are delivered from a single region to the bucket for the trail. This setting cannot be changed.
If you have a single region trail, you should enable the Include global services option.
If you have multiple single region trails, you should enable the Include global services option in only one of the trails.
In short, if you have a trail with the Apply trail to all regions option enabled and also have multiple single-region trails, you do not need to enable the Include global services option for the single-region trails; global service events are delivered for the all-regions trail.
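A minimal sketch of enabling the Include global services option on an existing single-region trail (the trail name is a placeholder); per the notes above, enable it on only one single-region trail to avoid duplicate global service events.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Enable global service events (IAM, STS, CloudFront) on one
# single-region trail only, to avoid duplicate events.
cloudtrail.update_trail(
    Name="us-east-1-only-trail",          # placeholder single-region trail
    IncludeGlobalServiceEvents=True,
)
```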
CloudTrail Log File Integrity
Validated log files are invaluable in security and forensic investigations.
CloudTrail log file integrity validation can be used to check whether a log file was modified, deleted, or unchanged after CloudTrail delivered it.
The validation feature is built using industry-standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing which makes it computationally infeasible to modify, delete or forge CloudTrail log files without detection.
When log file integrity validation is enabled
CloudTrail creates a hash for every log file that it delivers.
Every hour, CloudTrail also creates and delivers a digest file that references the log files for the last hour and contains a hash of each.
CloudTrail signs each digest file using the private key of a public and private key pair.
After delivery, the public key can be used to validate the digest file.
CloudTrail uses different key pairs for each AWS region.
Digest files are delivered to the same S3 bucket associated with the trail as the log files, but to a separate folder.
The separation of digest files and log files enables the enforcement of granular security policies and permits existing log processing solutions to continue to operate without modification.
Each digest file also contains the digital signature of the previous digest file if one exists.
Signature for the current digest file is in the metadata properties of the digest file S3 object.
Log files and digest files can be stored in S3 or Glacier securely, durably and inexpensively for an indefinite period of time.
To enhance the security of the digest files stored in S3, S3 MFA Delete can be enabled.
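A minimal sketch of enabling log file integrity validation on an existing trail (the trail name is a placeholder); once digest files are being delivered, the AWS CLI command aws cloudtrail validate-logs can verify that the delivered log files are unchanged.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Turn on hourly digest file delivery for the trail; CloudTrail signs each
# digest with a per-region private key so delivered logs can be validated later.
cloudtrail.update_trail(
    Name="management-events-trail",     # placeholder trail name
    EnableLogFileValidation=True,
)
```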
CloudTrail Enabled Use Cases
Track changes to AWS resources
Can be used to track creation, modification or deletion of AWS resources
Compliance Aid
easier to demonstrate compliance with internal policy and regulatory standards
Troubleshooting Operational Issues
identify the recent changes or actions to troubleshoot any issues
Security Analysis
use log files as inputs to log analysis tools to perform security analysis and to detect user behavior patterns
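As an example of the use cases above, a minimal boto3 sketch that queries the 90-day event history for recent EC2 TerminateInstances calls; the event name and time window are just illustrative.

```python
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up TerminateInstances calls from the last 7 days of event history.
paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"},
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
)

for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event.get("Username", "-"), event["EventName"])
```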
CloudTrail Processing Library (CPL)
CloudTrail Processing Library (CPL) helps build applications to take immediate action on events in CloudTrail log files
CPL helps to
read CloudTrail notification messages from an SQS queue subscribed to the trail’s SNS topic
download and read the log files from S3 continuously
deserialize the events into POJOs
apply custom logic for processing the events
CPL is fault tolerant and supports multi-threading
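CPL itself is a Java library; purely as an illustration of the flow it implements, here is a minimal Python sketch that reads the trail’s SQS-delivered notifications, downloads the gzipped log files from S3, and hands each record to custom logic. The queue URL and handler are assumptions, not part of CPL.

```python
import gzip
import json

import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

# Placeholder queue subscribed to the trail's SNS topic.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/cloudtrail-notifications"


def process_record(record):
    # Custom logic goes here, e.g. alert on specific eventName values.
    print(record["eventTime"], record["eventSource"], record["eventName"])


while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        # With SNS-to-SQS delivery (non-raw), the CloudTrail notification is nested in "Message".
        notification = json.loads(body["Message"]) if "Message" in body else body
        bucket = notification["s3Bucket"]
        for key in notification["s3ObjectKey"]:
            obj = s3.get_object(Bucket=bucket, Key=key)
            # CloudTrail log files are gzipped JSON with a top-level "Records" array.
            records = json.loads(gzip.decompress(obj["Body"].read()))["Records"]
            for record in records:
                process_record(record)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```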
AWS CloudTrail vs AWS Config
AWS Config reports on WHAT has changed, whereas CloudTrail reports on WHO made the change, WHEN, and from WHICH location.
AWS Config focuses on the configuration of the AWS resources and reports with detailed snapshots on HOW the resources have changed, whereas CloudTrail focuses on the events, or API calls, that drive those changes. It focuses on the user, application, and activity performed on the system.
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
Open to further feedback, discussion and correction.
You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend?
Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. (Single new bucket with global services option for IAM and MFA Delete for confidentiality)
Create a new CloudTrail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs. (Missing Global Services for IAM)
Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. (Existing bucket prevents confidentiality)
Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs (3 buckets not needed, missing global services option)
Which of the following are true regarding AWS CloudTrail? Choose 3 answers
CloudTrail is enabled globally (it can be enabled for all regions and also on a per-region basis)
CloudTrail is enabled by default (was not enabled by default earlier; however, it is now enabled by default as per the latest AWS enhancements)
CloudTrail is enabled on a per-region basis (it can be enabled for all regions and also on a per-region basis)
CloudTrail is enabled on a per-service basis (once enabled it is applicable for all the supported services, service can’t be selected)
Logs can be delivered to a single Amazon S3 bucket for aggregation
CloudTrail is enabled for all available services within a region. (is enabled only for CloudTrail supported services)
Logs can only be processed and delivered to the region in which they are generated. (can be logged to bucket in any region)
An organization has configured the custom metric upload with CloudWatch. The organization has given permission to its employees to upload data using CLI as well as SDK. How can the user track the calls made to CloudWatch?
The user can enable logging with CloudWatch which logs all the activities
Use CloudTrail to monitor the API calls
Create an IAM user and allow each user to log the data using the S3 bucket
Enable detailed monitoring with CloudWatch
A user is trying to understand the CloudWatch metrics for the AWS services. It is required that the user should first understand the namespace for the AWS services. Which of the below mentioned is not a valid namespace for the AWS services?
Your CTO thinks your AWS account was hacked. What is the only way to know for certain if there was unauthorized access and what they did, assuming your hackers are very sophisticated AWS engineers and doing everything they can to cover their tracks?
Use CloudTrail Log File Integrity Validation. (Refer link)
Use AWS Config SNS Subscriptions and process events in real time.
Use CloudTrail backed up to AWS S3 and Glacier.
Use AWS Config Timeline forensics.
Your CTO has asked you to make sure that you know what all users of your AWS account are doing to change resources at all times. She wants a report of who is doing what over time, reported to her once per week, for as broad a resource type group as possible. How should you do this?
Create a global AWS CloudTrail Trail. Configure a script to aggregate the log data delivered to S3 once per week and deliver this to the CTO.
Use CloudWatch Events Rules with an SNS topic subscribed to all AWS API calls. Subscribe the CTO to an email type delivery on this SNS Topic.
Use AWS IAM credential reports to deliver a CSV of all uses of IAM User Tokens over time to the CTO.
Use AWS Config with an SNS subscription on a Lambda, and insert these changes over time into a DynamoDB table. Generate reports based on the contents of this table.
AWS Organizations is an account management service that enables consolidating multiple AWS accounts into an organization that can be created and centrally managed.
AWS Organizations enables you to
Automate AWS account creation and management, and provision resources with AWS CloudFormation StackSets
Maintain a secure environment with policies and management of AWS security services
Govern access to AWS services, resources, and regions
Centrally manage policies across multiple AWS accounts
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources
Resources can be updated, deleted, and modified in an orderly, controlled and predictable fashion, in effect applying version control to the AWS infrastructure the way it is done for software code
A CloudFormation Template is an architectural diagram, in JSON or YAML format, and a Stack is the end result of that diagram, i.e. what actually gets provisioned (a minimal sketch follows this block)
A template can be used to set up the resources consistently and repeatedly across multiple regions and consists of
List of AWS resources and their configuration values
An optional template file format version number
An optional list of template parameters (input values supplied at stack creation time)
An optional list of output values like public IP address using the Fn::GetAtt function
An optional list of data tables used to look up static configuration values, e.g., AMI names per AZ
supports Chef & Puppet integration to deploy and configure right down to the application layer
supports Bootstrap scripts to install packages, files, and services on the EC2 instances by simply describing them in the CF template
the automatic rollback on error feature is enabled by default, which deletes all the AWS resources that CF created successfully for a stack, up to the point where the error occurred
provides a WaitCondition resource to block the creation of other resources until a completion signal is received from an external source
allows DeletionPolicy attribute to be defined for resources in the template
retain to preserve resources like S3 even after stack deletion
snapshot to backup resources like RDS after stack deletion
DependsOn attribute to specify that the creation of a specific resource follows another
Service role is an IAM role that allows AWS CloudFormation to make calls to resources in a stack on the user’s behalf
Nested stacks separate out reusable, common components into dedicated templates; different templates can then be mixed and matched and combined via nested stacks into a single, unified stack
Change Sets presents a summary or preview of the proposed changes that CloudFormation will make when a stack is updated
Drift detection enables you to detect whether a stack’s actual configuration differs, or has drifted, from its expected configuration.
Termination protection helps prevent a stack from being accidentally deleted.
Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update.
StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and Regions with a single operation.
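A minimal sketch tying the template ideas above together (Resources, DeletionPolicy, Fn::GetAtt in Outputs, termination protection), launched via boto3; the stack contents and names are placeholders, not a recommended production template.

```python
import json

import boto3

# A tiny template: one S3 bucket retained on stack deletion, with its ARN exposed as an output.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LogsBucket": {
            "Type": "AWS::S3::Bucket",
            "DeletionPolicy": "Retain",   # keep the bucket even after the stack is deleted
        }
    },
    "Outputs": {
        "LogsBucketArn": {"Value": {"Fn::GetAtt": ["LogsBucket", "Arn"]}}
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="demo-logs-stack",                 # placeholder stack name
    TemplateBody=json.dumps(template),
    EnableTerminationProtection=True,            # guard against accidental deletion
)
```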
AWS Elastic Beanstalk makes it easier for developers to quickly deploy and manage applications in the AWS cloud.
automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling and application health monitoring
CloudFormation supports ElasticBeanstalk
provisions resources to support
a web application that handles HTTP(S) requests or
a web application that handles background-processing (worker) tasks
supports Out Of the Box
Apache Tomcat for Java applications
Apache HTTP Server for PHP applications
Apache HTTP server for Python applications
Nginx or Apache HTTP Server for Node.js applications
Passenger for Ruby applications
Microsoft IIS 7.5 for .NET applications
Single and Multi Container Docker
supports custom AMI to be used
is designed to support multiple running environments such as one for Dev, QA, Pre-Prod and Production.
supports versioning and stores and tracks application versions over time allowing easy rollback to prior version
can provision an RDS DB instance, with connectivity information exposed to the application through environment variables; however, this is NOT recommended for a production setup because the RDS instance is tied to the Elastic Beanstalk environment lifecycle and would be deleted along with it
AWS OpsWorks is a configuration management service that helps to configure and operate applications in a cloud enterprise by using Chef
helps deploy and monitor applications in stacks with multiple layers
supports preconfigured layers for Applications, Databases, Load Balancers, Caching
A key OpsWorks Stacks feature is its set of lifecycle events – Setup, Configure, Deploy, Undeploy, and Shutdown – which automatically run a specified set of recipes at the appropriate time on each instance
Layers depend on Chef recipes to handle tasks such as installing packages on instances, deploying apps, running scripts, and so on
OpsWorks Stacks runs the recipes for each layer, even if the instance belongs to multiple layers
supports Auto Healing and Auto Scaling to monitor instance health, and provision new instances
Amazon CloudWatch allows monitoring of AWS resources and applications in real time, collecting and tracking preconfigured or custom metrics, and configuring alarms to send notifications or make resource changes based on defined rules
does not aggregate data across regions
stores the log data indefinitely, and the retention can be changed for each log group at any time
alarm history is stored for only 14 days
can be used as an alternative to S3 to store logs, with the ability to configure alarms and generate metrics; however, logs cannot be made public
Alarms exist only in the created region and the Alarm actions must reside in the same region as well
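A minimal boto3 sketch of the custom metric and alarm flow described above; the namespace, metric, and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a custom metric data point (placeholder namespace and metric name).
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{"MetricName": "QueueDepth", "Value": 42, "Unit": "Count"}],
)

# Alarm in the same region when the metric breaches the threshold.
cloudwatch.put_metric_alarm(
    AlarmName="queue-depth-high",
    Namespace="MyApp",
    MetricName="QueueDepth",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
```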
AWS CloudTrail records API calls for the AWS account made from the AWS Management Console, SDKs, CLI, and higher-level AWS services
supports many AWS services and tracks who did what, from where, and when
can be enabled on a per-region basis; a region can include global services (like IAM, STS, etc.), and it applies to all the supported services within that region
log files from different regions can be sent to the same S3 bucket
can be integrated with SNS to notify about log file availability, and with a CloudWatch Logs log group for notifications when specific API events occur
call history enables security analysis, resource change tracking, troubleshooting and compliance auditing