Amazon CloudWatch

  • CloudWatch monitors AWS resources and applications in real time.
  • CloudWatch can be used to collect and track metrics, which are the variables to be measured for resources and applications.
  • CloudWatch is essentially a metrics repository: metrics are put into it, and statistics are retrieved based on those metrics.
  • In addition to the built-in metrics published by AWS services, custom metrics can also be monitored.
  • CloudWatch provides system-wide visibility into resource utilization, application performance, and operational health.
  • By default, CloudWatch stores the log data indefinitely, and the retention can be changed for each log group at any time.
  • CloudWatch alarms can be configured
    • to send notifications or
    • to automatically make changes to the resources based on defined rules
  • CloudWatch dashboards are customizable home pages in the CloudWatch console used to monitor the resources in a single view, even those resources that are spread across different Regions.
  • CloudWatch Agent helps collect metrics and logs from EC2 instances and on-premises servers and push them to CloudWatch.

CloudWatch Architecture

  • CloudWatch collects metrics from various AWS resources and applications
  • These metrics are available to the user, as statistics, through the Console, CLI, APIs, and SDKs
  • CloudWatch allows the creation of alarms with defined rules
    • to perform Auto Scaling actions or to stop, start, or terminate instances
    • to send notifications using SNS actions on your behalf

CloudWatch Concepts

Namespaces

  • CloudWatch namespaces are containers for metrics.
  • Metrics in different namespaces are isolated from each other, so that metrics from different applications are not mistakenly aggregated into the same statistics.
  • AWS namespaces all follow the convention AWS/<service>, e.g. AWS/EC2 and AWS/ELB (see the listing example after this list)
  • Namespace names must be fewer than 256 characters in length.
  • There is no default namespace. Each data element put into CloudWatch must specify a namespace.
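
A minimal CLI sketch of working with namespaces (assumes the AWS CLI is configured with credentials and a region; the custom namespace MyApp is hypothetical):

    # List the metrics AWS publishes for EC2 in the current region
    aws cloudwatch list-metrics --namespace AWS/EC2

    # List metrics published under a custom namespace
    aws cloudwatch list-metrics --namespace MyApp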

Metrics

  • Metric is the fundamental concept in CloudWatch.
  • Uniquely defined by a name, a namespace, and one or more dimensions.
  • Represents a time-ordered set of data points published to CloudWatch.
  • Each data point has a time stamp, and (optionally) a unit of measure.
  • Data points can be either custom metrics or metrics from other AWS services.
  • Statistics can be retrieved about those data points as an ordered set of time-series data that occur within a specified time window.
  • When the statistics are requested, the returned data stream is identified by namespace, metric name, dimension, and (optionally) the unit.
  • Metrics exist only in the region in which they are created.
  • CloudWatch retains metric data for up to 15 months (metric retention was previously limited to two weeks)
  • Metrics cannot be deleted, but they automatically expire after 15 months if no new data is published to them.
  • Metric retention is as follows
    • Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution custom metrics.
    • Data points with a 60 secs (1 min) period are available for 15 days
    • Data points with a 300 secs (5 min) period are available for 63 days
    • Data points with a 3600 secs (1 hour) period are available for 455 days (15 months)
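
A minimal retrieval sketch (assumes the AWS CLI is configured; the instance ID and time window are illustrative):

    # 5-minute average and maximum CPU utilization for a single instance
    aws cloudwatch get-metric-statistics \
      --namespace AWS/EC2 --metric-name CPUUtilization \
      --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
      --statistics Average Maximum --period 300 \
      --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T06:00:00Z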

Dimensions

  • A dimension is a name/value pair that forms part of a metric’s unique identity.
  • Every metric has specific characteristics that describe it, and you can think of dimensions as categories for those characteristics.
  • Dimensions help design a structure for the statistics plan.
  • Dimensions are part of the unique identifier for a metric; whenever a unique name/value pair is added to a metric, a new variation of that metric is created.
  • Dimensions can be used to filter result sets that CloudWatch query returns.
  • Up to ten dimensions can be assigned to a metric (see the publishing example below).
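
A minimal publishing sketch showing how each distinct dimension combination becomes its own metric (the MyApp namespace and dimension names are hypothetical):

    # Two distinct Server values under the same metric name create two metrics
    aws cloudwatch put-metric-data --namespace MyApp --metric-name RequestLatency \
      --dimensions Server=web-1,Stage=prod --value 42 --unit Milliseconds
    aws cloudwatch put-metric-data --namespace MyApp --metric-name RequestLatency \
      --dimensions Server=web-2,Stage=prod --value 37 --unit Milliseconds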

Time Stamps

  • Each metric data point must be marked with a time stamp to identify the data point on a time series.
  • Timestamp can be up to two weeks in the past and up to two hours into the future.
  • If no timestamp is provided, CloudWatch creates one based on the time the data point was received.
  • All times reflect the UTC time zone when statistics are retrieved.

Resolution

  • Each metric is one of the following:
    • Standard resolution, with data having a one-minute granularity
    • High resolution, with data at a granularity of one second
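
A minimal sketch for publishing a high-resolution custom metric (the namespace and metric name are hypothetical):

    # --storage-resolution 1 stores the metric at 1-second granularity
    aws cloudwatch put-metric-data --namespace MyApp --metric-name Latency \
      --value 12 --unit Milliseconds --storage-resolution 1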

Units

  • Units represent the statistic’s unit of measure, e.g. Count, Bytes, Percent.

Statistics

  • Statistics are metric data aggregations over specified periods of time
  • Aggregations are made using the namespace, metric name, dimensions, and the data point unit of measure, within the specified time period

Periods

  • Period is the length of time associated with a specific statistic.
  • Each statistic represents an aggregation of the metrics data collected for a specified period of time.
  • Periods are expressed in seconds; for standard-resolution metrics the minimum period is 60 seconds (1 minute), while high-resolution metrics also support periods of 1, 5, 10, or 30 seconds.

Aggregation

  • CloudWatch aggregates statistics according to the period length specified in calls to GetMetricStatistics.
  • Multiple data points can be published with the same or similar time stamps. CloudWatch aggregates them by period length when the statistics about those data points are requested.
  • For EC2, aggregated statistics across instances are only available when detailed monitoring is enabled.
  • Instances that use basic monitoring are not included in the aggregates (see the sketch after this list).
  • CloudWatch does not aggregate data across regions.
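
A minimal sketch of retrieving statistics aggregated across all instances by omitting the InstanceId dimension (the time window is illustrative; per the note above, this assumes detailed monitoring):

    # No --dimensions filter: statistics roll up across all EC2 instances in the region
    aws cloudwatch get-metric-statistics \
      --namespace AWS/EC2 --metric-name CPUUtilization \
      --statistics Average --period 300 \
      --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T01:00:00Z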

Alarms

  • Alarms can automatically initiate actions on behalf of the user, based on specified parameters.
  • Alarm watches a single metric over a specified time period, and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods.
  • Alarms invoke actions for sustained state changes only i.e. the state must have changed and been maintained for a specified number of periods.
  • An action can be
    • an SNS notification
    • an Auto Scaling policy
    • an EC2 action – stop, terminate, reboot, or recover the instance
  • After an alarm invokes an action due to a change in state, its subsequent behavior depends on the type of action associated with the alarm.
    • For Auto Scaling policies, the alarm continues to invoke the action for every period that the alarm remains in the new state.
    • For SNS notifications, no additional actions are invoked.
  • An alarm has three possible states:
    • OK—The metric is within the defined threshold
    • ALARM—The metric is outside of the defined threshold
    • INSUFFICIENT_DATA—Alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state
  • Alarms exist only in the region in which they are created.
  • Alarm actions must reside in the same region as the alarm
  • Alarm history is available for the last 14 days.
  • Alarm can be tested by setting it to any state using the SetAlarmState API (mon-set-alarm-state command). This temporary state change lasts only until the next alarm comparison occurs.
  • Alarms can be disabled and enabled using the DisableAlarmActions and EnableAlarmActions APIs (mon-disable-alarm-actions and mon-enable-alarm-actions commands).
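
A minimal sketch of creating and testing an alarm (the instance ID and SNS topic ARN are illustrative):

    # Alarm when average CPU stays above 75% for two consecutive 5-minute periods
    aws cloudwatch put-metric-alarm --alarm-name cpu-high \
      --namespace AWS/EC2 --metric-name CPUUtilization \
      --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
      --statistic Average --period 300 --evaluation-periods 2 \
      --threshold 75 --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts

    # Temporarily force the alarm state to test the action (reverts at the next evaluation)
    aws cloudwatch set-alarm-state --alarm-name cpu-high \
      --state-value ALARM --state-reason "Testing alarm actions"

    # Disable or re-enable the alarm's actions
    aws cloudwatch disable-alarm-actions --alarm-names cpu-high
    aws cloudwatch enable-alarm-actions --alarm-names cpu-high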

Regions

  • CloudWatch does not aggregate data across regions. Therefore, metrics are completely separate between regions.

Custom Metrics

  • CloudWatch allows publishing custom metrics with put-metric-data CLI command (or its Query API equivalent PutMetricData)
  • CloudWatch creates a new metric if put-metric-data is called with a new metric name; otherwise, it associates the data with the specified existing metric
  • The basic form of the put-metric-data command (with --value) publishes one data point per call; multiple data points can be batched in a single call with the --metric-data parameter
  • CloudWatch stores data about a metric as a series of data points and each data point has an associated time stamp
  • After a new metric is created with the put-metric-data command, it can take up to two minutes before statistics can be retrieved with the get-metric-statistics command, and up to fifteen minutes before the new metric appears in the list returned by the list-metrics command.
  • CloudWatch allows publishing
    • Single data point
      • Data points can be published with time stamps as granular as one-thousandth of a second, but CloudWatch aggregates the data to a minimum granularity of one minute
      • CloudWatch records the average (sum of all items divided by number of items) of the values received for every 1-minute period, as well as number of samples, maximum value, and minimum value for the same time period
      • CloudWatch uses one-minute boundaries when aggregating data points
    • Aggregated set of data points called a statistics set
      • Data can also be aggregated before being published to CloudWatch
      • Aggregating data before publishing minimizes the number of calls, reducing it to a single call per minute carrying the statistic set
      • A statistic set includes the Sum, Minimum, Maximum, and SampleCount (the Average is derived from Sum and SampleCount)
  • If the application produces sporadic data and has periods with no associated data, either the value zero (0) or no value at all can be published
  • However, it can be helpful to publish zero instead of no value
    • to monitor the health of the application, e.g. an alarm can be configured to notify you if no metrics are published in a 5-minute window
    • to track the total number of data points
    • to have statistics such as minimum and average to include data points with the value 0.
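
A minimal publishing sketch covering a single value, a pre-aggregated statistic set, and an explicit zero (the MyApp namespace and all values are illustrative):

    # Single data point
    aws cloudwatch put-metric-data --namespace MyApp --metric-name PageLatency \
      --value 27 --unit Milliseconds

    # Pre-aggregated statistic set for one minute of samples
    aws cloudwatch put-metric-data --namespace MyApp --metric-name PageLatency \
      --statistic-values Sum=830,Minimum=4,Maximum=61,SampleCount=30 --unit Milliseconds

    # Explicit zero for a period with no activity
    aws cloudwatch put-metric-data --namespace MyApp --metric-name OrdersProcessed \
      --value 0 --unit Count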

CloudWatch Dashboards

  • CloudWatch dashboards are customizable home pages in the CloudWatch console used to monitor the resources in a single view, even those resources that are spread across different Regions.
  • Dashboards can be used to create customized views of the metrics and alarms for the AWS resources.
  • Dashboards can help to create
    • A single view for selected metrics and alarms to help assess the health of the resources and applications across one or more Regions.
    • An operational playbook that provides guidance for team members during operational events about how to respond to specific incidents.
    • A common view of critical resource and application measurements that can be shared by team members for faster communication flow during operational events.
  • CloudWatch cross-account observability helps monitor and troubleshoot applications that span multiple accounts within a Region.
  • Cross-account observability includes monitoring and source accounts
    • A monitoring account is a central AWS account that can view and interact with observability data generated from source accounts.
    • A source account is an individual AWS account that generates observability data for the resources that reside in it.
    • Source accounts share their observability data with the monitoring account which can include the following types of telemetry:
      • Metrics in CloudWatch
      • Log groups in CloudWatch Logs
      • Traces in AWS X-Ray
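
A minimal sketch of creating a dashboard from the CLI (the dashboard name, instance ID, and region are illustrative; the widget JSON follows the CloudWatch dashboard body structure):

    # Create or update a dashboard with a single metric widget
    aws cloudwatch put-dashboard --dashboard-name MyAppHealth --dashboard-body '{
      "widgets": [
        {
          "type": "metric", "x": 0, "y": 0, "width": 12, "height": 6,
          "properties": {
            "metrics": [["AWS/EC2", "CPUUtilization", "InstanceId", "i-1234567890abcdef0"]],
            "region": "us-east-1", "stat": "Average", "period": 300,
            "title": "Web tier CPU"
          }
        }
      ]
    }'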

CloudWatch Agent

  • CloudWatch Agent helps collect metrics and logs from EC2 instances and on-premises servers and push them to CloudWatch.
  • Logs collected by the unified agent are processed and stored in CloudWatch Logs.
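
A minimal sketch of an agent configuration and the command that loads it on an EC2 instance (the file path, log group name, and metric choices are illustrative):

    # /opt/aws/amazon-cloudwatch-agent/etc/config.json (sketch):
    # {
    #   "metrics": { "metrics_collected": { "mem": { "measurement": ["mem_used_percent"] } } },
    #   "logs": { "logs_collected": { "files": { "collect_list": [
    #     { "file_path": "/var/log/app.log", "log_group_name": "my-app-logs" } ] } } }
    # }

    # Fetch the config and (re)start the agent
    sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
      -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json -s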

CloudWatch Logs

Refer blog post @ CloudWatch Logs

CloudWatch Supported Services

Refer blog post @ CloudWatch Supported Services

Accessing CloudWatch

  • CloudWatch can be accessed using
    • AWS CloudWatch console
    • CloudWatch CLI
    • AWS CLI
    • CloudWatch API
    • AWS SDKs

 

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company needs to monitor the read and write IOPS metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? Choose 2 answers
    1. Amazon Simple Email Service (Cannot be integrated with CloudWatch directly)
    2. Amazon CloudWatch
    3. Amazon Simple Queue Service
    4. Amazon Route 53
    5. Amazon Simple Notification Service
  2. A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements?
    1. Enable AWS CloudTrail for the load balancer.
    2. Enable access logs on the load balancer. (Refer link)
    3. Install the Amazon CloudWatch Logs agent on the load balancer.
    4. Enable Amazon CloudWatch metrics on the load balancer (does not provide Client connection information)
  3. A user is running a batch process on EBS-backed EC2 instances. The batch process starts a few instances to process Hadoop MapReduce jobs, which can run between 50 and 600 minutes, or sometimes longer. The user wants the instance to be terminated only when the process is completed. How can the user configure this with CloudWatch?
    1. Setup the CloudWatch action to terminate the instance when the CPU utilization is less than 5%
    2. Setup the CloudWatch with Auto Scaling to terminate all the instances
    3. Setup a job which terminates all instances after 600 minutes
    4. It is not possible to terminate instances automatically
  4. A user has two EC2 instances running in two separate regions. The user is running an internal memory management tool, which captures the data and sends it to CloudWatch in US East, using a CLI with the same namespace and metric. Which of the below mentioned options is true with respect to the above statement?
    1. The setup will not work as CloudWatch cannot receive data across regions
    2. CloudWatch will receive and aggregate the data based on the namespace and metric
    3. CloudWatch will give an error since the data will conflict due to two sources
    4. CloudWatch will take the data of the server, which sends the data first
  5. A user is sending the data to CloudWatch using the CloudWatch API. The user is sending data 90 minutes in the future. What will CloudWatch do in this case?
    1. CloudWatch will accept the data
    2. It is not possible to send data of the future
    3. It is not possible to send the data manually to CloudWatch
    4. The user cannot send data for more than 60 minutes in the future
  6. A user is having data generated randomly based on a certain event. The user wants to upload that data to CloudWatch. It may happen that event may not have data generated for some period due to randomness. Which of the below mentioned options is a recommended option for this case?
    1. For the period when there is no data, the user should not send the data at all
    2. For the period when there is no data the user should send a blank value
    3. For the period when there is no data the user should send the value as 0 (Refer User Guide)
    4. The user must upload the data to CloudWatch as having no data for some period will cause an error at CloudWatch monitoring
  7. A user has a weighing plant. The user measures the weight of some goods every 5 minutes and sends data to AWS CloudWatch for monitoring and tracking. Which of the below mentioned parameters is mandatory for the user to include in the request list?
    1. Value
    2. Namespace (refer put-metric request)
    3. Metric Name
    4. Timezone
  8. A user has a refrigerator plant. The user is measuring the temperature of the plant every 15 minutes. If the user wants to send the data to CloudWatch to view the data visually, which of the below mentioned statements is true with respect to the information given above?
    1. The user needs to use AWS CLI or API to upload the data
    2. The user can use the AWS Import Export facility to import data to CloudWatch
    3. The user will upload data from the AWS console
    4. The user cannot upload data to CloudWatch since it is not an AWS service metric
  9. A user has launched an EC2 instance. The user is planning to setup the CloudWatch alarm. Which of the below mentioned actions is not supported by the CloudWatch alarm?
    1. Notify the Auto Scaling launch config to scale up
    2. Send an SMS using SNS
    3. Notify the Auto Scaling group to scale down
    4. Stop the EC2 instance
  10. A user has a refrigerator plant. The user is measuring the temperature of the plant every 15 minutes. If the user wants to send the data to CloudWatch to view the data visually, which of the below mentioned statements is true with respect to the information given above?
    1. The user needs to use AWS CLI or API to upload the data
    2. The user can use the AWS Import Export facility to import data to CloudWatch
    3. The user will upload data from the AWS console
    4. The user cannot upload data to CloudWatch since it is not an AWS service metric
  11. A user is trying to aggregate all the CloudWatch metric data of the last 1 week. Which of the below mentioned statistics is not available for the user as a part of data aggregation?
    1. Aggregate
    2. Sum
    3. Sample data
    4. Average
  12. A user has setup a CloudWatch alarm on an EC2 action when the CPU utilization is above 75%. The alarm sends a notification to SNS on the alarm state. If the user wants to simulate the alarm action how can he achieve this?
    1. Run activities on the CPU such that its utilization reaches above 75%
    2. From the AWS console change the state to ‘Alarm’
    3. The user can set the alarm state to ‘Alarm’ using CLI
    4. Run the SNS action manually
  13. A user is publishing custom metrics to CloudWatch. Which of the below mentioned statements will help the user understand the functionality better?
    1. The user can use the CloudWatch Import tool
    2. The user should be able to see the data in the console after around 15 minutes
    3. If the user is uploading the custom data, the user must supply the namespace, timezone, and metric name as part of the command
    4. The user can view as well as upload data using the console, CLI and APIs
  14. An application that you are managing has EC2 instances and DynamoDB tables deployed to several AWS Regions. In order to monitor the performance of the application globally, you would like to see two graphs 1) Avg CPU Utilization across all EC2 instances and 2) Number of Throttled Requests for all DynamoDB tables. How can you accomplish this? [PROFESSIONAL]
    1. Tag your resources with the application name, and select the tag name as the dimension in the CloudWatch Management console to view the respective graphs (CloudWatch metrics are regional)
    2. Use the CloudWatch CLI tools to pull the respective metrics from each regional endpoint. Aggregate the data offline & store it for graphing in CloudWatch.
    3. Add SNMP traps to each instance and DynamoDB table. Leverage a central monitoring server to capture data from each instance and table. Put the aggregate data into CloudWatch for graphing (Can’t add SNMP traps to DynamoDB as it is a managed service)
    4. Add a CloudWatch agent to each instance and attach one to each DynamoDB table. When configuring the agent set the appropriate application name & view the graphs in CloudWatch. (Can’t add agents to DynamoDB as it is a managed service)
  15. You have set up Individual AWS accounts for each project. You have been asked to make sure your AWS Infrastructure costs do not exceed the budget set per project for each month. Which of the following approaches can help ensure that you do not exceed the budget each month? [PROFESSIONAL]
    1. Consolidate your accounts so you have a single bill for all accounts and projects (Consolidation will not help limit per account)
    2. Set up auto scaling with CloudWatch alarms using SNS to notify you when you are running too many Instances in a given account (many instances do not directly map to cost and would not give exact cost)
    3. Set up CloudWatch billing alerts for all AWS resources used by each project, with a notification occurring when the amount for each resource tagged to a particular project matches the budget allocated to the project. (as each project already has an account, no need for resource tagging)
    4. Set up CloudWatch billing alerts for all AWS resources used by each account, with email notifications when it hits 50%. 80% and 90% of its budgeted monthly spend
  16. You meet once per month with your operations team to review the past month’s data. During the meeting, you realize that 3 weeks ago, your monitoring system which pings over HTTP from outside AWS recorded a large spike in latency on your 3-tier web service API. You use DynamoDB for the database layer, ELB, EBS, and EC2 for the business logic tier, and SQS, ELB, and EC2 for the presentation layer. Which of the following techniques will NOT help you figure out what happened?
    1. Check your CloudTrail log history around the spike’s time for any API calls that caused slowness.
    2. Review CloudWatch Metrics graphs to determine which component(s) slowed the system down. (Metrics data was available for 2 weeks before, however it has been extended now)
    3. Review your ELB access logs in S3 to see if any ELBs in your system saw the latency.
    4. Analyze your logs to detect bursts in traffic at that time.
  17. You have a high security requirement for your AWS accounts. What is the most rapid and sophisticated setup you can use to react to AWS API calls to your account?
    1. Subscription to AWS Config via an SNS Topic. Use a Lambda Function to perform in-flight analysis and reactivity to changes as they occur.
    2. Global AWS CloudTrail setup delivering to S3 with an SNS subscription to deliver notifications, pushing into a Lambda, which inserts records into an ELK stack for analysis.
    3. Use a CloudWatch Rule ScheduleExpression to periodically analyze IAM credential logs. Push the deltas for events into an ELK stack and perform ad-hoc analysis there.
    4. CloudWatch Events Rules, which trigger based on all AWS API calls, submitting all events to an AWS Kinesis Stream for arbitrary downstream analysis. (CloudWatch Events allow subscription to AWS API calls, and direction of these events into Kinesis Streams. This allows a unified, near real-time stream for all API calls, which can be analyzed with any tool(s). Refer link)
  18. To monitor API calls against our AWS account by different users and entities, we can use ____ to create a history of calls in bulk for later review, and use ____ for reacting to AWS API calls in real-time.
    1. AWS Config; AWS Inspector
    2. AWS CloudTrail; AWS Config
    3. AWS CloudTrail; CloudWatch Events (CloudTrail is a batch API call collection service, CloudWatch Events enables real-time monitoring of calls through the Rules object interface. Refer link)
    4. AWS Config; AWS Lambda
  19. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it’s very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO? [PROFESSIONAL]
    1. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues. (is not fast in search and introduces delay)
    2. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform adhoc MapReduce analysis and write new queries when needed. (is not fast in search and introduces delay)
    3. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues. (is not fast in search and introduces delay)
    4. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (ELK – Elasticsearch, Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)
  20. Your EC2-based multi-tier application includes a monitoring instance that periodically makes application-level read-only requests of various application components and, if any of those fail more than three times in 30 seconds, calls CloudWatch to fire an alarm, and the alarm notifies your operations team by email and SMS of a possible application health problem. However, you also need to watch the watcher (the monitoring instance itself) and be notified if it becomes unhealthy. Which of the following is a simple way to achieve that goal? [PROFESSIONAL]
    1. Run another monitoring instance that pings the monitoring instance and fires a CloudWatch alarm that notifies your operations team should the primary monitoring instance become unhealthy.
    2. Set a CloudWatch alarm based on EC2 system and instance status checks and have the alarm notify your operations team of any detected problem with the monitoring instance.
    3. Set a CloudWatch alarm based on the CPU utilization of the monitoring instance and have the alarm notify your operations team if the CPU usage exceeds 50% for more than one minute; then have your monitoring application go into a CPU-bound loop should it detect any application problems.
    4. Have the monitoring instance post messages to an SQS queue and then dequeue those messages on another instance; should the queue cease to have new messages, the second instance should first terminate the original monitoring instance, start another backup monitoring instance, and assume the role of the previous monitoring instance, beginning to add messages to the SQS queue.

51 thoughts on “Amazon CloudWatch”

  1. Based on the following, I think the answer to question 2 should be b.

    “Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues.” and “You can specify a publishing interval of either 5 minutes or 60 minutes when you enable the access log for your load balancer.” where as CloudWatch captures information every 60 seconds, which is not required in this case. “Elastic Load Balancing reports metrics to CloudWatch only when requests are flowing through the load balancer. If there are requests flowing through the load balancer, Elastic Load Balancing measures and sends its metrics in 60-second intervals.”

    http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html

  2. Hi Jayendra,
    Q7
    A user has a weighing plant. The user measures the weight of some goods every 5 minutes and sends data to AWS CloudWatch for monitoring and tracking. Which of the below mentioned parameters is mandatory for the user to include in the request list?
    Value
    Namespace
    Metric Name
    Timezone

    Isn’t Metric name also mandatory?

    Cheers
    Satish

      1. Is this still correct?

        I’ve got:

        ╰─$ aws cloudwatch put-metric-data --namespace MyNameSpace

        Parameter validation failed:
        Missing required parameter in input: “MetricData”

  3. Hi Jayendra,

    When I created ASG, the detailed monitoring is enabled by default. Can you double check?

    Thanks,

    Jacky

    A user has created an Auto Scaling group using CLI. The user wants to enable CloudWatch detailed monitoring for that group. How can the user configure this?
    When the user sets an alarm on the Auto Scaling group, it automatically enables detail monitoring
    By default detailed monitoring is enabled for Auto Scaling
    Auto Scaling does not support detailed monitoring
    Enable detail monitoring from the AWS console

    1. Thanks Jacky for the Correction, it is enabled by default when the ASG is created by CLI as targeted by the question. For ASG created from Console, by default, basic monitoring is enabled.

  4. Can you help me answering below question?

    a user has created a launch configuration for auto scaling where cloudwatch detailed monitoring is disabled the user wants to now enable detailed monitoring. how can the user achieve this?

    A- Update the launch config with CLI to set instancemonitoring.enabled=true
    B- Update the launch config with CLI to set instancemonitoringDisabled =true
    C-The user should change the auto scaling group from the aws console to enable the detailed monitoring
    D- create new launch config with detail monitoring enabled and update the auto scaling group

    1. The launch configuration once created cannot be updated, so the answer should be #D. You create a new launch configuration with detail monitoring enabled and then update the auto scaling group to use it.

  5. Hi Jayendra,

    The answer for Question 4 cannot be right, as CloudWatch cannot aggregate data across regions. I am still not clear about the correct answer. Please advise.

    1. Hi Uma, as per my understanding, since the metrics are being pushed through the CLI to CloudWatch, it should be able to collect and aggregate them.

        1. Hi Sherief,

          No, not sure where specifically in the document it says you can collect metrics from across the regions? Document only says can collect across instances.
          And question to Jay – with the CLI your metrics may be sent to CloudWatch but should be within the region and not across regions for CloudWatch itself.

          So, I think answer ‘a’ is correct?

          1. when sending custom metrics, i presume i can point to regional endpoint CloudWatch and send metrics and that should work. But I frankly haven’t tried so need to check does it allow or not.

      1. Hi Jayendrapatil,
        First of all, thank you very much for this resource.
        From the link below, it says:

        “You can monitor AWS resources in multiple regions using a single CloudWatch dashboard. For example, you can create a dashboard that shows CPU utilization for an EC2 instance located in the us-west-2 region with your billing metrics, which are located in the us-east-1 region. ”

        But you are saying CloudWatch does not aggregate data across regions.

        Can you clarify this? I am very confused.

        http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cross_region_dashboard.html

        1. Hi Ugur, CloudWatch by default does not aggregate across region. However, when you are manually inserting custom metrics, I don’t see it a reason to aggregate the same. But I haven’t tried it frankly.

          1. Confirmed- if you send custom metrics to CloudWatch in 1 region from EC2 instances in different regions using the same namespace or dimension, you can aggregate the metrics in the CloudWatch console.

          2. Hi Jay, Thanks again for the material and the time you are putting into this blog.
            I followed your Solution architect associate one and passed by 916 score last week. THANK YOU.

            Question here:
            How can you even publish from an EC2 in one region to CloudWatch in different region. I don’t see any parameter in put-metric-data command to define the target region.

          3. Congrats fariemami, that’s a great score.
            I remember checking on publishing metrics across regions before marking the option. Let me check the reference again.

  6. Regarding question 2.

    A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements?
    Isn’t A the correct answer as well ?

    Elastic Load Balancing is integrated with AWS CloudTrail, which captures APIs calls and delivers log files to an Amazon S3 bucket that you specify. There is no cost to use CloudTrail. However, the standard rates for Amazon S3 apply.

    CloudTrail logs calls to the AWS APIs, including the Elastic Load Balancing API, whenever you use them directly or indirectly through the AWS Management Console. You can use the information collected by CloudTrail to determine what API call was made, what source IP address was used, who made the call, when it was made, and so on. You can use AWS CloudTrail to capture information about the calls to the Elastic Load Balancing API made by or on behalf of your AWS account

    Elastic Load Balancing Event Information

    The log files from CloudTrail contain event information in JSON format. An event record represents a single AWS API call and includes information about the requested action, such as the user that requested the action, the date and the time of the request, the request parameters, and the response elements.

    http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/ELB-API-Logs.html

    http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-cloudtrail-logs.html

    1. Here the question is to capture the client connection information which can be tracked using ELB access logs.
      CloudTrail would not help capture client connection, but only calls made to AWS API services using SDKs, CLI or Console and can be useful for auditing.

  7. Hi Jayendra!

    I need your help in this question.

    A user has setup a web application on EC2. The user is generating a log of the application performance at every second. There are multiple entries for each second. If the user wants to send that data to CloudWatch every minute, what should he do?

    A. The user should send only the data of the 60th second as CloudWatch will map the receive data timezone with the sent data timezone
    B. It is not possible to send the custom metric to CloudWatch every minute
    C. Give CloudWatch the Min, Max, Sum, and SampleCount of a number of every minute
    D. Calculate the average of one minute and send the data to CloudWatch

    thanks.

    1. Should be C. Refer CloudWatch Concepts
      For large datasets, you can insert a pre-aggregated dataset called a statistic set. With statistic sets, you give CloudWatch the Min, Max, Sum, and SampleCount for a number of data points. This is commonly used when you need to collect data many times in a minute. For example, suppose you have a metric for the request latency of a webpage. It doesn't make sense to publish data with every webpage hit. We suggest that you collect the latency of all hits to that webpage, aggregate them once a minute, and send that statistic set to CloudWatch.

  8. Hi, Jayendra!

    In this question I am in doubt between A and D.

    A user has configured an Auto Scaling group with ELB. The user has enabled detailed CloudWatch monitoring on Elastic Load balancing. Which of the below mentioned statements will help the user understand this functionality better?
    A. ELB sends data to CloudWatch every minute only and does not charge the user
    B. ELB will send data every minute and will charge the user extra
    C. ELB is not supported by CloudWatch
    D. It is not possible to setup detailed monitoring for ELB

    1. Monitoring done every minute is called detailed monitoring. Detailed monitoring is enabled by default on ELB and metrics will be sent to cloudwatch as long as traffic is flowing through the load balancer in that 1 minute.

      Answer: A

      A. ELB sends data to CloudWatch every minute only and does not charge the user(correct)
      B. ELB will send data every minute and will charge the user extra (not extra charge. Included as part of base ELB charges)
      C. ELB is not supported by CloudWatch (it is supported)
      D. It is not possible to setup detailed monitoring for ELB (it is possible and available by default)

  9. Hi,

    Question no.9

    A user has launched an EC2 instance. The user is planning to setup the CloudWatch alarm. Which of the below mentioned actions is not supported by the CloudWatch alarm?

    1) Notify the Auto Scaling launch config to scale up
    2) Send an SMS using SNS
    3) Notify the Auto Scaling group to scale down
    4) Stop the EC2 instance

    I think the first answer, “Notify the Auto Scaling launch config to scale up”, is supported. Please help me clarify what the right answer for this could be. Reference doc: https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-cloudwatch/

    1. It’s supported, which is correct. However, the question asks for the action that is not supported.

  10. Hi Sir,

    Please help me with this.

    A user has configured an Auto Scaling group with ELB. The user has enabled detailed CloudWatch monitoring on Elastic Load balancing. Which of the below mentioned statements will help the user understand this functionality better?

    A. ELB sends data to CloudWatch every minute only and does not charge the user
    B. ELB will send data every minute and will charge the user extra
    C. ELB is not supported by CloudWatch
    D. It is not possible to setup detailed monitoring for ELB

  11. Edit to my comment above… as it’s ELB, it provides 13(?) free metrics, so the answer is A

  12. Hi jay,
    Question 17:
    17. You have a high security requirement for your AWS accounts. What is the most rapid and sophisticated setup you can use to react to AWS API calls to your account?

    Would you explain why D? Considering that “Read-only APIs, such as those that begin with List, Get, or Describe are not supported by CloudWatch Events” and we still have to build a consumer for the Kinesis stream.
    Isn’t the B a better choice?

    Thanks,
    farzin

  13. Hi Jayendra,
    Thanks for all your efforts. It seems these questions have not been updated in a long time. Please update them as per the new SOA-C01 format, if you have time. Thanks in advance.

  14. Hi Jayendra, Thanks for this post. A small typo can be corrected towards the end of the section where the list of statistics mentions ‘Data Sample’ as one of them, i think you meant SampleCount there. Cheers.

  15. Hi jayendrapatil,

    Pls answer this.

    An application performs read-heavy operations on an Amazon Aurora DB instance. The SysOps Administrator monitors the CPUUtilization CloudWatch metric and has recently seen it increase to 90%. The Administrator would like to understand what is driving the CPU surge. Which of the following should the Administrator additionally monitor to understand the CPU surge?
    A. FreeableMemory and DatabaseConnections to understand the amount of available RAM and number of connections to DB instance.
    B. FreeableMemory and EngineUptime to understand the amount of available RAM and the amount of time the instance has been up and running.
    C. DatabaseConnections and AuroraReplicaLag for the number of connections to the DB instance and the amount of lag when replicating updates from the primary instance.
    D. DatabaseConnections and InsertLatency for the number of connections to the DB instance and latency for insert queries.

  16. Hi jayendrapatil,
    Pls answer this.

    A company has centralized all its logs into one Amazon CloudWatch Logs log group. The SysOps Administrator needs to alert different teams of any issues relevant to them. What is the MOST efficient approach to accomplish this?

    A. Write a AWS lambda function that will query the logs every minute and contain the logic of which team to notify on which patterns and issues.
    B. Set up different metric filters for each team based on patterns and alerts. Each alarm will notify the appropriate notification list.
    C. Redesign the aggregation of logs so that each team’s relevant parts are sent to a separate log group, then subscribe each team to its respective log group.
    D. Create an AWS Auto Scaling group of Amazon EC2 instances that will scale based on the amount of ingested log entries. This group will pull streams, look for patterns, and send notifications to relevant teams.

  17. Hi sir,
    pls, answer this.

    A user has launched 10 instances from the same AMI ID using Auto Scaling. The user is trying to see the average CPU utilization across all instances of the last 2 weeks under the CloudWatch console. How can the user achieve this?
    A. View the Auto Scaling CPU metrics
    B. Aggregate the data over the instance AMI ID
    C. The user has to use the CloudWatch analyser to find the average data across instances
    D. It is not possible to see the average CPU utilization of the same AMI ID since the instance ID is different

Comments are closed.