Amazon CloudWatch

  • CloudWatch monitors AWS resources and applications in real time.
  • CloudWatch can be used to collect and track metrics, which are the variables to be measured for resources and applications.
  • CloudWatch is basically a metrics repository where the metrics can be inserted and statistics retrieved based on those metrics.
  • In addition to the built-in metrics that come with AWS services, custom metrics can also be monitored.
  • CloudWatch provides system-wide visibility into resource utilization, application performance, and operational health.
  • By default, CloudWatch stores the log data indefinitely, and the retention can be changed for each log group at any time.
  • CloudWatch alarms can be configured
    • to send notifications or
    • to automatically make changes to the resources based on defined rules
  • CloudWatch dashboards are customizable home pages in the CloudWatch console used to monitor the resources in a single view, even those resources that are spread across different Regions.
  • CloudWatch Agent helps collect metrics and logs from EC2 instances and on-premises servers and push them to CloudWatch.

CloudWatch Architecture

  • CloudWatch collects metrics from various AWS resources and applications
  • These metrics are available to the user as statistics through the Console, CLI, API, and SDKs
  • CloudWatch allows the creation of alarms with defined rules
    • to perform Auto Scaling actions or to stop, start, or terminate instances
    • to send notifications using SNS actions on your behalf

CloudWatch Concepts

Namespaces

  • CloudWatch namespaces are containers for metrics.
  • Metrics in different namespaces are isolated from each other, so that metrics from different applications are not mistakenly aggregated into the same statistics.
  • AWS namespaces all follow the convention AWS/<service>, e.g. AWS/EC2 and AWS/ELB
  • Namespace names must be fewer than 256 characters in length.
  • There is no default namespace. Each data element put into CloudWatch must specify a namespace.

Metrics

  • A metric is the fundamental concept in CloudWatch.
  • A metric is uniquely defined by a name, a namespace, and zero or more dimensions.
  • Represents a time-ordered set of data points published to CloudWatch.
  • Each data point has a time stamp, and (optionally) a unit of measure.
  • Data points can be either custom metrics or metrics from other services in AWS.
  • Statistics can be retrieved about those data points as an ordered set of time-series data that occur within a specified time window.
  • When the statistics are requested, the returned data stream is identified by namespace, metric name, dimension, and (optionally) the unit.
  • Metrics exist only in the region in which they are created.
  • CloudWatch retains metric data for up to 15 months
  • Metrics cannot be deleted, but they automatically expire after 15 months if no new data is published to them.
  • Metric retention is as follows
    • Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution custom metrics.
    • Data points with a 60 secs (1 min) period are available for 15 days
    • Data points with a 300 secs (5 min) period are available for 63 days
    • Data points with a 3600 secs (1 hour) period are available for 455 days (15 months)
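
As an illustration, the aggregated statistics for a built-in metric can be retrieved over a time window with the get-metric-statistics command; the instance ID and time range below are placeholders:

  aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
    --start-time 2023-01-01T00:00:00Z \
    --end-time 2023-01-01T03:00:00Z \
    --period 300 \
    --statistics Average Maximum

Each returned data point is an aggregate over one 300-second period.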

Dimensions

  • A dimension is a name/value pair that uniquely identifies a metric.
  • Every metric has specific characteristics that describe it, and you can think of dimensions as categories for those characteristics.
  • Dimensions help design a structure for the statistics plan.
  • Dimensions are part of the unique identifier for a metric; whenever a unique name/value pair is added to one of the metrics, a new metric is created.
  • Dimensions can be used to filter the result sets that a CloudWatch query returns.
  • Up to ten dimensions can be assigned to a metric.
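
As a sketch of how dimensions create distinct metrics, publishing the same metric name under two different dimension sets results in two separate metrics (the MyApp namespace and server names are hypothetical):

  # Two distinct metrics: same name and namespace, different Server dimension value
  aws cloudwatch put-metric-data --namespace MyApp --metric-name RequestLatency \
    --unit Milliseconds --value 87 --dimensions Server=web01,Stage=prod
  aws cloudwatch put-metric-data --namespace MyApp --metric-name RequestLatency \
    --unit Milliseconds --value 112 --dimensions Server=web02,Stage=prod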

Time Stamps

  • Each metric data point must be marked with a time stamp to identify the data point on a time series.
  • Timestamp can be up to two weeks in the past and up to two hours into the future.
  • If no timestamp is provided, a time stamp based on the time the data element was received is created.
  • All times reflect the UTC time zone when statistics are retrieved

Resolution

  • Each metric is one of the following:
    • Standard resolution, with data having a one-minute granularity
    • High resolution, with data at a granularity of one second

Units

  • Units represent the statistic’s unit of measure, e.g. Count, Bytes, Percent, etc.

Statistics

  • Statistics are metric data aggregations over specified periods of time
  • Aggregations are made using the namespace, metric name, dimensions, and the data point unit of measure, within the specified time period

Periods

  • Period is the length of time associated with a specific statistic.
  • Each statistic represents an aggregation of the metrics data collected for a specified period of time.
  • Periods are expressed in seconds; the minimum period is one minute for standard-resolution metrics, while high-resolution metrics support sub-minute periods of 1, 5, 10, or 30 seconds.

Aggregation

  • CloudWatch aggregates statistics according to the period length specified in calls to GetMetricStatistics.
  • Multiple data points can be published with the same or similar time stamps. CloudWatch aggregates them by period length when the statistics about those data points are requested.
  • Aggregated statistics are only available when using detailed monitoring.
  • Instances that use basic monitoring are not included in the aggregates
  • CloudWatch does not aggregate data across regions.

Alarms

  • Alarms can automatically initiate actions on behalf of the user, based on specified parameters.
  • An alarm watches a single metric over a specified time period and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods.
  • Alarms invoke actions for sustained state changes only, i.e. the state must have changed and been maintained for a specified number of periods.
  • An action can be
    • an SNS notification
    • an Auto Scaling policy
    • an EC2 action – stop, terminate, reboot, or recover the instance
  • After an alarm invokes an action due to a change in state, its subsequent behavior depends on the type of action associated with the alarm.
    • For Auto Scaling policy notifications, the alarm continues to invoke the action for every period that the alarm remains in the new state.
    • For SNS notifications, no additional actions are invoked.
  • An alarm has three possible states:
    • OK—The metric is within the defined threshold
    • ALARM—The metric is outside of the defined threshold
    • INSUFFICIENT_DATA—Alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state
  • Alarms exist only in the region in which they are created.
  • Alarm actions must reside in the same region as the alarm
  • Alarm history is available for the last 14 days.
  • Alarm can be tested by setting it to any state using the SetAlarmState API (mon-set-alarm-state command). This temporary state change lasts only until the next alarm comparison occurs.
  • Alarms can be disabled and enabled using the DisableAlarmActions and EnableAlarmActions APIs (mon-disable-alarm-actions and mon-enable-alarm-actions commands).
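
As a sketch (alarm name, instance ID, and SNS topic ARN are placeholders), an alarm on EC2 CPU utilization with an SNS action can be created, tested, and temporarily suppressed from the CLI:

  # Alarm when average CPU stays above 75% for two consecutive 5-minute periods
  aws cloudwatch put-metric-alarm \
    --alarm-name cpu-high \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 75 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts

  # Simulate the ALARM state to test the SNS action; the state reverts on the next evaluation
  aws cloudwatch set-alarm-state --alarm-name cpu-high --state-value ALARM \
    --state-reason "Testing alarm actions"

  # Temporarily suppress and later re-enable the alarm actions
  aws cloudwatch disable-alarm-actions --alarm-names cpu-high
  aws cloudwatch enable-alarm-actions --alarm-names cpu-high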

Regions

  • CloudWatch does not aggregate data across regions. Therefore, metrics are completely separate between regions.

Custom Metrics

  • CloudWatch allows publishing custom metrics with the put-metric-data CLI command (or its Query API equivalent PutMetricData); see the CLI sketch after this list
  • CloudWatch creates a new metric if put-metric-data is called with a new metric name, otherwise it associates the data with the specified existing metric
  • put-metric-data publishes a single data point per call when the --value option is used; multiple data points or an aggregated statistic set can be sent with the --metric-data or --statistic-values options
  • CloudWatch stores data about a metric as a series of data points and each data point has an associated time stamp
  • After a new metric is created using the put-metric-data command, it can take up to two minutes before statistics can be retrieved for it using the get-metric-statistics command, and up to fifteen minutes before the new metric appears in the list of metrics returned by the list-metrics command.
  • CloudWatch allows publishing
    • Single data point
      • Data points can be published with time stamps as granular as one-thousandth of a second; however, CloudWatch aggregates the data to a minimum granularity of one minute
      • CloudWatch records the average (sum of all items divided by number of items) of the values received for every 1-minute period, as well as number of samples, maximum value, and minimum value for the same time period
      • CloudWatch uses one-minute boundaries when aggregating data points
    • Aggregated set of data points called a statistics set
      • Data can also be aggregated before being published to CloudWatch
      • Aggregating data minimizes the number of calls, reducing them to a single call per minute carrying the statistic set of data
      • Statistics include Sum, Average, Minimum, Maximum, SampleCount
  • If the application produces sporadic data and has periods with no associated data, either the value zero (0) or no value at all can be published
  • However, it can be helpful to publish zero instead of no value
    • to monitor the health of your application, e.g. an alarm can be configured to send a notification if no metrics are published every 5 minutes
    • to track the total number of data points
    • to have statistics such as minimum and average to include data points with the value 0.
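
A minimal sketch of both publishing styles with put-metric-data, assuming a hypothetical MyApp namespace:

  # Publish a single data point
  aws cloudwatch put-metric-data --namespace MyApp --metric-name QueueDepth \
    --unit Count --value 42

  # Publish a pre-aggregated statistic set covering one minute of samples
  aws cloudwatch put-metric-data --namespace MyApp --metric-name QueueDepth \
    --unit Count --statistic-values Sum=300,Minimum=1,Maximum=80,SampleCount=60

Publishing the statistic set replaces sixty individual calls with a single call per minute while still allowing Sum, Average, Minimum, Maximum, and SampleCount statistics to be retrieved.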

CloudWatch Dashboards

  • CloudWatch dashboards are customizable home pages in the CloudWatch console used to monitor the resources in a single view, even those resources that are spread across different Regions.
  • Dashboards can be used to create customized views of the metrics and alarms for the AWS resources.
  • Dashboards can help to create
    • A single view for selected metrics and alarms to help assess the health of the resources and applications across one or more Regions.
    • An operational playbook that provides guidance for team members during operational events about how to respond to specific incidents.
    • A common view of critical resource and application measurements that can be shared by team members for faster communication flow during operational events.
  • CloudWatch cross-account observability helps monitor and troubleshoot applications that span multiple accounts within a Region.
  • Cross-account observability includes monitoring and source accounts
    • A monitoring account is a central AWS account that can view and interact with observability data generated from source accounts.
    • A source account is an individual AWS account that generates observability data for the resources that reside in it.
    • Source accounts share their observability data with the monitoring account which can include the following types of telemetry:
      • Metrics in CloudWatch
      • Log groups in CloudWatch Logs
      • Traces in AWS X-Ray
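
As a sketch, a simple dashboard with one metric widget can be created from the CLI; the dashboard name, region, and instance ID are placeholders, and the body follows the CloudWatch dashboard body JSON structure:

  aws cloudwatch put-dashboard \
    --dashboard-name OpsOverview \
    --dashboard-body '{
      "widgets": [
        {
          "type": "metric",
          "x": 0, "y": 0, "width": 12, "height": 6,
          "properties": {
            "metrics": [["AWS/EC2", "CPUUtilization", "InstanceId", "i-1234567890abcdef0"]],
            "period": 300,
            "stat": "Average",
            "region": "us-east-1",
            "title": "EC2 CPU"
          }
        }
      ]
    }'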

CloudWatch Agent

  • CloudWatch Agent helps collect metrics and logs from EC2 instances and on-premises servers and push them to CloudWatch.
  • Logs collected by the unified agent are processed and stored in CloudWatch Logs.
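
A trimmed sketch of what an agent configuration file might look like, collecting a memory metric, a disk metric, and one log file; the measurements, file path, and log group name are illustrative:

  {
    "metrics": {
      "metrics_collected": {
        "mem":  { "measurement": ["mem_used_percent"] },
        "disk": { "measurement": ["used_percent"], "resources": ["/"] }
      }
    },
    "logs": {
      "logs_collected": {
        "files": {
          "collect_list": [
            {
              "file_path": "/var/log/messages",
              "log_group_name": "system-messages",
              "log_stream_name": "{instance_id}"
            }
          ]
        }
      }
    }
  }

The agent is then started or reloaded with the amazon-cloudwatch-agent-ctl helper, pointing it at this configuration file.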

CloudWatch Logs

Refer blog post @ CloudWatch Logs

CloudWatch Supported Services

Refer blog post @ CloudWatch Supported Services

Accessing CloudWatch

  • CloudWatch can be accessed using
    • AWS CloudWatch console
    • CloudWatch CLI
    • AWS CLI
    • CloudWatch API
    • AWS SDKs

 

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company needs to monitor the read and write IOPs metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? Choose 2 answers
    1. Amazon Simple Email Service (Cannot be integrated with CloudWatch directly)
    2. Amazon CloudWatch
    3. Amazon Simple Queue Service
    4. Amazon Route 53
    5. Amazon Simple Notification Service
  2. A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements?
    1. Enable AWS CloudTrail for the load balancer.
    2. Enable access logs on the load balancer. (Refer link)
    3. Install the Amazon CloudWatch Logs agent on the load balancer.
    4. Enable Amazon CloudWatch metrics on the load balancer (does not provide Client connection information)
  3. A user is running a batch process on EBS backed EC2 instances. The batch process starts a few instances to process Hadoop Map reduce jobs, which can run between 50 – 600 minutes or sometimes for more time. The user wants to configure that the instance gets terminated only when the process is completed. How can the user configure this with CloudWatch?
    1. Setup the CloudWatch action to terminate the instance when the CPU utilization is less than 5%
    2. Setup the CloudWatch with Auto Scaling to terminate all the instances
    3. Setup a job which terminates all instances after 600 minutes
    4. It is not possible to terminate instances automatically
  4. A user has two EC2 instances running in two separate regions. The user is running an internal memory management tool, which captures the data and sends it to CloudWatch in US East, using a CLI with the same namespace and metric. Which of the below mentioned options is true with respect to the above statement?
    1. The setup will not work as CloudWatch cannot receive data across regions
    2. CloudWatch will receive and aggregate the data based on the namespace and metric
    3. CloudWatch will give an error since the data will conflict due to two sources
    4. CloudWatch will take the data of the server, which sends the data first
  5. A user is sending the data to CloudWatch using the CloudWatch API. The user is sending data 90 minutes in the future. What will CloudWatch do in this case?
    1. CloudWatch will accept the data
    2. It is not possible to send data of the future
    3. It is not possible to send the data manually to CloudWatch
    4. The user cannot send data for more than 60 minutes in the future
  6. A user is having data generated randomly based on a certain event. The user wants to upload that data to CloudWatch. It may happen that event may not have data generated for some period due to randomness. Which of the below mentioned options is a recommended option for this case?
    1. For the period when there is no data, the user should not send the data at all
    2. For the period when there is no data the user should send a blank value
    3. For the period when there is no data the user should send the value as 0 (Refer User Guide)
    4. The user must upload the data to CloudWatch as having no data for some period will cause an error at CloudWatch monitoring
  7. A user has a weighing plant. The user measures the weight of some goods every 5 minutes and sends data to AWS CloudWatch for monitoring and tracking. Which of the below mentioned parameters is mandatory for the user to include in the request list?
    1. Value
    2. Namespace (refer put-metric request)
    3. Metric Name
    4. Timezone
  8. A user has a refrigerator plant. The user is measuring the temperature of the plant every 15 minutes. If the user wants to send the data to CloudWatch to view the data visually, which of the below mentioned statements is true with respect to the information given above?
    1. The user needs to use AWS CLI or API to upload the data
    2. The user can use the AWS Import Export facility to import data to CloudWatch
    3. The user will upload data from the AWS console
    4. The user cannot upload data to CloudWatch since it is not an AWS service metric
  9. A user has launched an EC2 instance. The user is planning to setup the CloudWatch alarm. Which of the below mentioned actions is not supported by the CloudWatch alarm?
    1. Notify the Auto Scaling launch config to scale up
    2. Send an SMS using SNS
    3. Notify the Auto Scaling group to scale down
    4. Stop the EC2 instance
  10. A user has a refrigerator plant. The user is measuring the temperature of the plant every 15 minutes. If the user wants to send the data to CloudWatch to view the data visually, which of the below mentioned statements is true with respect to the information given above?
    1. The user needs to use AWS CLI or API to upload the data
    2. The user can use the AWS Import Export facility to import data to CloudWatch
    3. The user will upload data from the AWS console
    4. The user cannot upload data to CloudWatch since it is not an AWS service metric
  11. A user is trying to aggregate all the CloudWatch metric data of the last 1 week. Which of the below mentioned statistics is not available for the user as a part of data aggregation?
    1. Aggregate
    2. Sum
    3. Sample data
    4. Average
  12. A user has setup a CloudWatch alarm on an EC2 action when the CPU utilization is above 75%. The alarm sends a notification to SNS on the alarm state. If the user wants to simulate the alarm action how can he achieve this?
    1. Run activities on the CPU such that its utilization reaches above 75%
    2. From the AWS console change the state to ‘Alarm’
    3. The user can set the alarm state to ‘Alarm’ using CLI
    4. Run the SNS action manually
  13. A user is publishing custom metrics to CloudWatch. Which of the below mentioned statements will help the user understand the functionality better?
    1. The user can use the CloudWatch Import tool
    2. The user should be able to see the data in the console after around 15 minutes
    3. If the user is uploading the custom data, the user must supply the namespace, timezone, and metric name as part of the command
    4. The user can view as well as upload data using the console, CLI and APIs
  14. An application that you are managing has EC2 instances and DynamoDB tables deployed to several AWS Regions. In order to monitor the performance of the application globally, you would like to see two graphs 1) Avg CPU Utilization across all EC2 instances and 2) Number of Throttled Requests for all DynamoDB tables. How can you accomplish this? [PROFESSIONAL]
    1. Tag your resources with the application name, and select the tag name as the dimension in the CloudWatch Management console to view the respective graphs (CloudWatch metrics are regional)
    2. Use the CloudWatch CLI tools to pull the respective metrics from each regional endpoint. Aggregate the data offline & store it for graphing in CloudWatch.
    3. Add SNMP traps to each instance and DynamoDB table. Leverage a central monitoring server to capture data from each instance and table. Put the aggregate data into CloudWatch for graphing (Can’t add SNMP traps to DynamoDB as it is a managed service)
    4. Add a CloudWatch agent to each instance and attach one to each DynamoDB table. When configuring the agent set the appropriate application name & view the graphs in CloudWatch. (Can’t add agents to DynamoDB as it is a managed service)
  15. You have set up Individual AWS accounts for each project. You have been asked to make sure your AWS Infrastructure costs do not exceed the budget set per project for each month. Which of the following approaches can help ensure that you do not exceed the budget each month? [PROFESSIONAL]
    1. Consolidate your accounts so you have a single bill for all accounts and projects (Consolidation will not help limit per account)
    2. Set up auto scaling with CloudWatch alarms using SNS to notify you when you are running too many Instances in a given account (many instances do not directly map to cost and would not give exact cost)
    3. Set up CloudWatch billing alerts for all AWS resources used by each project, with a notification occurring when the amount for each resource tagged to a particular project matches the budget allocated to the project. (as each project already has a account, no need for resource tagging)
    4. Set up CloudWatch billing alerts for all AWS resources used by each account, with email notifications when it hits 50%. 80% and 90% of its budgeted monthly spend
  16. You meet once per month with your operations team to review the past month’s data. During the meeting, you realize that 3 weeks ago, your monitoring system which pings over HTTP from outside AWS recorded a large spike in latency on your 3-tier web service API. You use DynamoDB for the database layer, ELB, EBS, and EC2 for the business logic tier, and SQS, ELB, and EC2 for the presentation layer. Which of the following techniques will NOT help you figure out what happened?
    1. Check your CloudTrail log history around the spike’s time for any API calls that caused slowness.
    2. Review CloudWatch Metrics graphs to determine which component(s) slowed the system down. (Metrics data was available for 2 weeks before, however it has been extended now)
    3. Review your ELB access logs in S3 to see if any ELBs in your system saw the latency.
    4. Analyze your logs to detect bursts in traffic at that time.
  17. You have a high security requirement for your AWS accounts. What is the most rapid and sophisticated setup you can use to react to AWS API calls to your account?
    1. Subscription to AWS Config via an SNS Topic. Use a Lambda Function to perform in-flight analysis and reactivity to changes as they occur.
    2. Global AWS CloudTrail setup delivering to S3 with an SNS subscription to the delivery notifications, pushing into a Lambda, which inserts records into an ELK stack for analysis.
    3. Use a CloudWatch Rule ScheduleExpression to periodically analyze IAM credential logs. Push the deltas for events into an ELK stack and perform ad-hoc analysis there.
    4. CloudWatch Events Rules, which trigger based on all AWS API calls, submitting all events to an AWS Kinesis Stream for arbitrary downstream analysis. (CloudWatch Events allow subscription to AWS API calls, and direction of these events into Kinesis Streams. This allows a unified, near real-time stream for all API calls, which can be analyzed with any tool(s). Refer link)
  18. To monitor API calls against our AWS account by different users and entities, we can use ____ to create a history of calls in bulk for later review, and use ____ for reacting to AWS API calls in real-time.
    1. AWS Config; AWS Inspector
    2. AWS CloudTrail; AWS Config
    3. AWS CloudTrail; CloudWatch Events (CloudTrail is a batch API call collection service, CloudWatch Events enables real-time monitoring of calls through the Rules object interface. Refer link)
    4. AWS Config; AWS Lambda
  19. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it’s very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO? [PROFESSIONAL]
    1. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues. (is not fast in search and introduces delay)
    2. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform adhoc MapReduce analysis and write new queries when needed. (is not fast in search and introduces delay)
    3. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues. (is not fast in search and introduces delay)
    4. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (ELK – Elasticsearch, Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)
  20. Your EC2-based multi-tier application includes a monitoring instance that periodically makes application-level read-only requests of various application components, and if any of those fail more than three times in 30 seconds it calls CloudWatch to fire an alarm, and the alarm notifies your operations team by email and SMS of a possible application health problem. However, you also need to watch the watcher – the monitoring instance itself – and be notified if it becomes unhealthy. Which of the following is a simple way to achieve that goal? [PROFESSIONAL]
    1. Run another monitoring instance that pings the monitoring instance and fires a CloudWatch alarm that notifies your operations team should the primary monitoring instance become unhealthy.
    2. Set a CloudWatch alarm based on EC2 system and instance status checks and have the alarm notify your operations team of any detected problem with the monitoring instance.
    3. Set a CloudWatch alarm based on the CPU utilization of the monitoring instance and have the alarm notify your operations team if the CPU usage exceeds 50% for more than one minute; then have your monitoring application go into a CPU-bound loop should it detect any application problems.
    4. Have the monitoring instance post messages to an SQS queue and dequeue those messages on another instance; should the queue cease to have new messages, the second instance should first terminate the original monitoring instance, start another backup monitoring instance, and assume the role of the previous monitoring instance, beginning to add messages to the SQS queue.

AWS CloudWatch Logs

  • CloudWatch Logs can be used to monitor, store, and access log files from EC2 instances, CloudTrail, Route 53, and other sources
  • CloudWatch Logs uses the log data for monitoring only, so no code changes are required
  • CloudWatch Logs requires the CloudWatch Logs agent to be installed on the EC2 instances and on-premises servers.
  • CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off of a host and into the log service.
  • A VPC endpoint can be configured to keep traffic between the VPC and CloudWatch Logs from leaving the Amazon network. It doesn’t require an IGW, NAT, VPN connection, or Direct Connect connection
  • CloudWatch Logs allows exporting log data from the log groups to an S3 bucket, which can then be used for custom processing and analysis, or to load onto other systems.
  • Log data is encrypted while in transit and while it is at rest
  • Log data can be encrypted using an AWS KMS customer master key (CMK).
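
For example, log data for a time range can be exported to S3 with the create-export-task command; the log group, bucket, and millisecond timestamps below are placeholders:

  aws logs create-export-task \
    --log-group-name my-app-logs \
    --from 1672531200000 \
    --to 1675209600000 \
    --destination my-log-archive-bucket \
    --destination-prefix exported-logs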

Required Mainly for SysOps Associate & DevOps Professional Exam

CloudWatch Logs Concepts

Log Events

  • A log event is a record of some activity recorded by the application or resource being monitored.
  • Log event record contains two properties: the timestamp of when the event occurred, and the raw event message

Log Streams

  • A log stream is a sequence of log events that share the same source, e.g. log events from an Apache access log on a specific host.

Log Groups

  • Log groups define groups of log streams that share the same retention, monitoring, and access control settings, e.g. Apache access logs from each host can be grouped through log streams into a single log group
  • Each log stream has to belong to one log group
  • There is no limit on the number of log streams that can belong to one log group.

Metric Filters

  • Metric filters can be used to extract metric observations from ingested events and transform them to data points in a CloudWatch metric.
  • Metric filters are assigned to log groups, and all of the filters assigned to a log group are applied to their log streams.
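
A sketch of a metric filter that counts log lines containing the term ERROR in a hypothetical log group and publishes them as a custom metric:

  aws logs put-metric-filter \
    --log-group-name my-app-logs \
    --filter-name error-count \
    --filter-pattern "ERROR" \
    --metric-transformations metricName=ErrorCount,metricNamespace=MyApp,metricValue=1,defaultValue=0

An alarm can then be set on the resulting MyApp/ErrorCount metric just like on any other CloudWatch metric.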

Retention Settings

  • Retention settings can be used to specify how long log events are kept in CloudWatch Logs.
  • Expired log events get deleted automatically.
  • Retention settings are assigned to log groups, and the retention assigned to a log group is applied to their log streams.
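
For example, a 30-day retention can be applied to a log group as follows (the log group name is a placeholder):

  aws logs put-retention-policy \
    --log-group-name my-app-logs \
    --retention-in-days 30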

CloudWatch Logs Use cases

Monitor Logs from EC2 Instances in Real-time

  • can help monitor applications and systems using log data
  • can help track the number of errors, e.g. 404, 500, or even specific literal terms such as “NullReferenceException”, occurring in the applications, which can then be matched against a threshold to send a notification

Monitor AWS CloudTrail Logged Events

  • can be used to monitor particular API activity as captured by CloudTrail by creating alarms in CloudWatch and receive notifications

Archive Log Data

  • can help store the log data in highly durable storage, an alternative to S3
  • log retention setting can be modified, so that any log events older than this setting are automatically deleted.

Log Route 53 DNS Queries

  • can help log information about the DNS queries that Route 53 receives.

Real-time Processing of Log Data with Subscriptions

  • Subscriptions can help get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as a Kinesis stream, a Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems
  • A subscription filter defines the filter pattern to use for filtering which log events get delivered to the AWS resource, as well as information about where to send matching log events to.
  • A CloudWatch Logs log group can also be configured to stream data to an Elasticsearch Service cluster in near real-time
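
As a sketch, a subscription filter that streams every event from a hypothetical log group to a Kinesis stream; the ARNs are placeholders, and the IAM role must allow CloudWatch Logs to put records into the stream:

  aws logs put-subscription-filter \
    --log-group-name my-app-logs \
    --filter-name stream-to-kinesis \
    --filter-pattern "" \
    --destination-arn arn:aws:kinesis:us-east-1:123456789012:stream/log-events \
    --role-arn arn:aws:iam::123456789012:role/CWLtoKinesisRole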

Searching and Filtering

  • CloudWatch Logs allows searching and filtering the log data by creating one or more metric filters.
  • Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs.
  • CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that can be put as graph or set an alarm on.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Once we have our logs in CloudWatch, we can do a number of things such as: Choose 3. Choose the 3 correct answers:[CDOP]
    1. Send the log data to AWS Lambda for custom processing or to load into other systems
    2. Stream the log data to Amazon Kinesis
    3. Stream the log data into Amazon Elasticsearch in near real-time with CloudWatch Logs subscriptions.
    4. Record API calls for your AWS account and delivers log files containing API calls to your Amazon S3 bucket
  2. You have decided to set the threshold for errors on your application to a certain number and once that threshold is reached you need to alert the Senior DevOps engineer. What is the best way to do this? Choose 3. Choose the 3 correct answers: [CDOP]
    1. Set the threshold your application can tolerate in a CloudWatch Logs group and link a CloudWatch alarm on that threshold.
    2. Use CloudWatch Logs agent to send log data from the app to CloudWatch Logs from Amazon EC2 instances
    3. Pipe data from EC2 to the application logs using AWS Data Pipeline and CloudWatch
    4. Once a CloudWatch alarm is triggered, use SNS to notify the Senior DevOps Engineer.
  3. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it’s very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO? [CDOP]
    1. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues. (is not fast in search and introduces delay)
    2. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform adhoc MapReduce analysis and write new queries when needed. (is not fast in search and introduces delay)
    3. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues. (is not fast in search and introduces delay)
    4. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (ELK – Elasticsearch, Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)
  4. You use Amazon CloudWatch as your primary monitoring system for your web application. After a recent software deployment, your users are getting Intermittent 500 Internal Server Errors when using the web application. You want to create a CloudWatch alarm, and notify an on-call engineer when these occur. How can you accomplish this using AWS services? (Choose three.) [CDOP]
    1. Deploy your web application as an AWS Elastic Beanstalk application. Use the default Elastic Beanstalk CloudWatch metrics to capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.
    2. Install a CloudWatch Logs Agent on your servers to stream web application logs to CloudWatch.
    3. Use Amazon Simple Email Service to notify an on-call engineer when a CloudWatch alarm is triggered.
    4. Create a CloudWatch Logs group and define metric filters that capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.
    5. Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered.
    6. Use AWS Data Pipeline to stream web application logs from your servers to CloudWatch.

References

AWS_CloudWatch_Logs_User_Guide

CloudWatch Monitoring Supported AWS Services

  • CloudWatch offers either basic or detailed monitoring for supported AWS services.
  • Basic monitoring means that a service sends data points to CloudWatch every five minutes.
  • Detailed monitoring means that a service sends data points to CloudWatch every minute.
  • If an AWS service supports both basic and detailed monitoring, basic monitoring is enabled by default and detailed monitoring needs to be enabled explicitly for detailed metrics
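
For EC2, for example, detailed monitoring can be switched on and off per instance from the CLI (the instance ID is a placeholder):

  # Enable detailed (1-minute) monitoring; additional charges apply
  aws ec2 monitor-instances --instance-ids i-1234567890abcdef0

  # Revert to basic (5-minute) monitoring
  aws ec2 unmonitor-instances --instance-ids i-1234567890abcdef0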

AWS Services with Monitoring support

  • Auto Scaling
    • By default, basic monitoring is enabled when the launch configuration is created using the AWS Management Console, and detailed monitoring is enabled when the launch configuration is created using the AWS CLI or an API
    • Auto Scaling sends data to CloudWatch every 5 minutes by default when created from Console.
    • For an additional charge, you can enable detailed monitoring for Auto Scaling, which sends data to CloudWatch every minute.
  • Amazon CloudFront
    • Amazon CloudFront sends data to CloudWatch every minute by default.
  • Amazon CloudSearch
    • Amazon CloudSearch sends data to CloudWatch every minute by default.
  • Amazon CloudWatch Events
    • Amazon CloudWatch Events sends data to CloudWatch every minute by default.
  • Amazon CloudWatch Logs
    • Amazon CloudWatch Logs sends data to CloudWatch every minute by default.
  • Amazon DynamoDB
    • Amazon DynamoDB sends data to CloudWatch every minute for some metrics and every 5 minutes for other metrics.
  • Amazon EC2 Container Service
    • Amazon EC2 Container Service sends data to CloudWatch every minute.
  • Amazon ElastiCache
    • Amazon ElastiCache sends data to CloudWatch every minute.
  • Amazon Elastic Block Store
    • Amazon Elastic Block Store sends data to CloudWatch every 5 minutes.
    • Provisioned IOPS SSD (io1) volumes automatically send one-minute metrics to CloudWatch.
  • Amazon Elastic Compute Cloud
    • Amazon EC2 sends data to CloudWatch every 5 minutes by default. For an additional charge, you can enable detailed monitoring for Amazon EC2, which sends data to CloudWatch every minute.
  • Elastic Load Balancing
    • Elastic Load Balancing sends data to CloudWatch every minute.
  • Amazon Elastic MapReduce
    • Amazon Elastic MapReduce sends data to CloudWatch every 5 minutes.
  • Amazon Elasticsearch Service
    • Amazon Elasticsearch Service sends data to CloudWatch every minute.
  • Amazon Kinesis Streams
    • Amazon Kinesis Streams sends data to CloudWatch every minute.
  • Amazon Kinesis Firehose
    • Amazon Kinesis Firehose sends data to CloudWatch every minute.
  • AWS Lambda
    • AWS Lambda sends data to CloudWatch every minute.
  • Amazon Machine Learning
    • Amazon Machine Learning sends data to CloudWatch every 5 minutes.
  • AWS OpsWorks
    • AWS OpsWorks sends data to CloudWatch every minute.
  • Amazon Redshift
    • Amazon Redshift sends data to CloudWatch every minute.
  • Amazon Relational Database Service
    • Amazon Relational Database Service sends data to CloudWatch every minute.
  • Amazon Route 53
    • Amazon Route 53 sends data to CloudWatch every minute.
  • Amazon Simple Notification Service
    • Amazon Simple Notification Service sends data to CloudWatch every 5 minutes.
  • Amazon Simple Queue Service
    • Amazon Simple Queue Service sends data to CloudWatch every 5 minutes.
  • Amazon Simple Storage Service
    • Amazon Simple Storage Service sends data to CloudWatch once a day.
  • Amazon Simple Workflow Service
    • Amazon Simple Workflow Service sends data to CloudWatch every 5 minutes.
  • AWS Storage Gateway
    • AWS Storage Gateway sends data to CloudWatch every 5 minutes.
  • AWS WAF
    • AWS WAF sends data to CloudWatch every minute.
  • Amazon WorkSpaces
    • Amazon WorkSpaces sends data to CloudWatch every 5 minutes.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. What is the minimum time Interval for the data that Amazon CloudWatch receives and aggregates?
    1. One second
    2. Five seconds
    3. One minute
    4. Three minutes
    5. Five minutes
  2. In the ‘Detailed’ monitoring data available for your Amazon EBS volumes, Provisioned IOPS volumes automatically send _____ minute metrics to Amazon CloudWatch.
    1. 3
    2. 1
    3. 5
    4. 2
  3. Using Amazon CloudWatch’s Free Tier, what is the frequency of metric updates, which you receive?
    1. 5 minutes
    2. 500 milliseconds.
    3. 30 seconds
    4. 1 minute
  4. What is the type of monitoring data (for Amazon EBS volumes) which is available automatically in 5-minute periods at no charge called?
    1. Basic
    2. Primary
    3. Detailed
    4. Local
  5. A user has created an Auto Scaling group using CLI. The user wants to enable CloudWatch detailed monitoring for that group. How can the user configure this?
    1. When the user sets an alarm on the Auto Scaling group, it automatically enables detail monitoring
    2. By default detailed monitoring is enabled for Auto Scaling (Detailed monitoring is enabled when you create the launch configuration using the AWS CLI or an API)
    3. Auto Scaling does not support detailed monitoring
    4. Enable detail monitoring from the AWS console
  6. A user is trying to understand the detailed CloudWatch monitoring concept. Which of the below mentioned services provides detailed monitoring with CloudWatch without charging the user extra?
    1. AWS Auto Scaling
    2. AWS Route 53
    3. AWS EMR
    4. AWS SNS
  7. A user is trying to understand the detailed CloudWatch monitoring concept. Which of the below mentioned services does not provide detailed monitoring with CloudWatch?
    1. AWS EMR
    2. AWS RDS
    3. AWS ELB
    4. AWS Route53
  8. A user has enabled detailed CloudWatch monitoring with the AWS Simple Notification Service. Which of the below mentioned statements helps the user understand detailed monitoring better?
    1. SNS will send data every minute after configuration
    2. There is no need to enable since SNS provides data every minute
    3. AWS CloudWatch does not support monitoring for SNS
    4. SNS cannot provide data every minute
  9. A user has configured an Auto Scaling group with ELB. The user has enabled detailed CloudWatch monitoring on Auto Scaling. Which of the below mentioned statements will help the user understand the functionality better?
    1. It is not possible to setup detailed monitoring for Auto Scaling
    2. In this case, Auto Scaling will send data every minute and will charge the user extra
    3. Detailed monitoring will send data every minute without additional charges
    4. Auto Scaling sends data every minute only and does not charge the user

AWS ELB Monitoring

  • Elastic Load Balancing publishes data points to CloudWatch about the load balancers and back-end instances
  • Elastic Load Balancing reports metrics to CloudWatch only when requests are flowing through the load balancer.
    • If there are requests flowing through the load balancer, Elastic Load Balancing measures and sends its metrics in 60-second intervals.
    • If there are no requests flowing through the load balancer or no data for a metric, the metric is not reported.

CloudWatch Metrics

  • HealthyHostCount, UnHealthyHostCount
    • Number of healthy and unhealthy instances registered with the load balancer.
    • Most useful statistics are average, min, and max
  • RequestCount
    • Number of requests completed or connections made during the specified interval (1 or 5 minutes).
    • Most useful statistic is sum
  • Latency
    • Time elapsed, in seconds, after the request leaves the load balancer until the headers of the response are received.
    • Most useful statistic is average
  • SurgeQueueLength
    • Total number of requests that are pending routing.
    • Load balancer queues a request if it is unable to establish a connection with a healthy instance in order to route the request.
    • Maximum size of the queue is 1,024. Additional requests are rejected when the queue is full.
    • Most useful statistic is max, because it represents the peak of queued requests.
  • SpilloverCount
    • The total number of requests that were rejected because the surge queue is full. Should ideally be 0
    • Most useful statistic is sum.
  • HTTPCode_ELB_4XX, HTTPCode_ELB_5XX
    • Client and server error codes generated by the load balancer
    • Most useful statistic is sum.
  • HTTPCode_Backend_2XX, HTTPCode_Backend_3XX, HTTPCode_Backend_4XX, HTTPCode_Backend_5XX
    • Number of HTTP response codes generated by registered instances
    • Most useful statistic is sum.

Elastic Load Balancer Access Logs

  • Elastic Load Balancing provides access logs that capture detailed information about all requests sent to your load balancer.
  • Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses.
  • Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket
  • Access logging is disabled by default and can be enabled without any additional charge. You are only charged for S3 storage
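
A sketch of enabling access logs on a Classic Load Balancer with a 5-minute emit interval; the load balancer and bucket names are placeholders, and the S3 bucket policy must allow Elastic Load Balancing to write to it:

  aws elb modify-load-balancer-attributes \
    --load-balancer-name my-load-balancer \
    --load-balancer-attributes '{
      "AccessLog": {
        "Enabled": true,
        "S3BucketName": "my-elb-access-logs",
        "S3BucketPrefix": "prod",
        "EmitInterval": 5
      }
    }'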

CloudTrail Logs

  • AWS CloudTrail can be used to capture all calls to the Elastic Load Balancing API made by or on behalf of the AWS account, whether made directly using the Elastic Load Balancing API or indirectly through the AWS Management Console or AWS CLI
  • CloudTrail stores the information as log files in an Amazon S3 bucket that you specify.
  • Logs collected by CloudTrail can be used to monitor the activity of your load balancers and determine what API call was made, what source IP address was used, who made the call, when it was made, and so on

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. An admin is planning to monitor the ELB. Which of the below mentioned services does not help the admin capture the monitoring information about the ELB activity
    1. ELB Access logs
    2. ELB health check
    3. CloudWatch metrics
    4. ELB API calls with CloudTrail
  2. A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements?
    1. Enable AWS CloudTrail for the load balancer.
    2. Enable access logs on the load balancer.
    3. Install the Amazon CloudWatch Logs agent on the load balancer.
    4. Enable Amazon CloudWatch metrics on the load balancer
  3. Your supervisor has requested a way to analyze traffic patterns for your application. You need to capture all connection information from your load balancer every 10 minutes. Pick a solution from below. Choose the correct answer:
    1. Enable access logs on the load balancer
    2. Create a custom metric CloudWatch filter on your load balancer
    3. Use a CloudWatch Logs Agent
    4. Use AWS CloudTrail with your load balancer

References

Elastic Load Balancing Developer Guide