Terraform Cheat Sheet

  • An open-source, declarative provisioning tool based on the Infrastructure as Code (IaC) paradigm
  • designed around immutable infrastructure principles
  • Written in Go and uses its own syntax – HCL (HashiCorp Configuration Language) – but also supports JSON
  • Helps evolve the infrastructure safely and predictably
  • Applies graph theory to IaC and provides Automation, Versioning and Reusability
  • Terraform is a multipurpose composition tool:
    ○ Composes multiple tiers (SaaS/PaaS/IaaS)
    ○ A plugin-based architecture model
  • Terraform is a cloud-agnostic tool: it embraces all major Cloud Providers and provides a common language to orchestrate the infrastructure resources
  • Terraform is not a configuration management tool; tools like Chef and Ansible exist in the market for that purpose.

Terraform Architecture

Terraform Providers (Plugins)

  • provide an abstraction above the upstream API and are responsible for understanding API interactions and exposing resources.
  • Invoke only upstream APIs for the basic CRUD operations
  • Providers are unaware of anything related to configuration loading, graph theory, etc.
  • support multiple provider instances using alias, e.g. multiple AWS providers with different regions (see the sketch after this list)
  • can be integrated with any API using the provider plugin framework
  • Most providers configure a specific infrastructure platform (either cloud or self-hosted).
  • can also offer local utilities for tasks like generating random numbers for unique resource names.
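A minimal sketch of using provider aliases for multiple AWS regions (region names and resource names here are illustrative):

```hcl
# Default AWS provider instance
provider "aws" {
  region = "us-east-1"
}

# Additional provider instance, referenced via its alias
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# Uses the default provider
resource "aws_vpc" "primary" {
  cidr_block = "10.0.0.0/16"
}

# Explicitly pinned to the aliased provider
resource "aws_vpc" "secondary" {
  provider   = aws.west
  cidr_block = "10.1.0.0/16"
}
```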

Terraform Provisioners

  • run code locally or remotely on resource creation (see the sketch after this list)
    • local-exec executes code on the machine running Terraform
    • remote-exec
      • runs on the provisioned resource
      • supports ssh and winrm
      • requires an inline list of commands or a script
  • should be used as a last resort
  • are defined within the resource block.
  • support two types – creation-time and destroy-time provisioners
    • if a creation-time provisioner fails, the resource is marked as tainted by default (it will be re-created on the next apply)
    • this behavior can be overridden by setting on_failure to continue, which means ignore the error and continue
    • if a destroy-time provisioner fails, the resource is not removed
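A hedged sketch of creation-time and destroy-time provisioners on an EC2 instance (the AMI ID, key pair, key path, and commands are placeholders):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.micro"
  key_name      = "deployer"              # placeholder key pair name

  # local-exec runs on the machine running terraform
  provisioner "local-exec" {
    command = "echo ${self.private_ip} >> private_ips.txt"
  }

  # remote-exec runs on the provisioned resource over SSH (or WinRM)
  provisioner "remote-exec" {
    inline = [
      "sudo yum -y install nginx",
      "sudo systemctl start nginx",
    ]
    on_failure = continue # default is fail, which taints the resource

    connection {
      type        = "ssh"
      user        = "ec2-user"
      host        = self.public_ip
      private_key = file("~/.ssh/id_rsa") # placeholder key path
    }
  }

  # destroy-time provisioner; if it fails, the resource is not removed
  provisioner "local-exec" {
    when    = destroy
    command = "echo 'instance is being destroyed'"
  }
}
```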

Terraform Workspaces

  • helps manage multiple distinct sets of infrastructure resources or environments with the same code (see the sketch after this list)
  • just create the needed workspaces and switch between them, instead of maintaining a separate directory for each environment
  • state files for each workspace are stored in the directory terraform.tfstate.d
  • terraform workspace new dev creates a new workspace and switches to it as well
  • terraform workspace select dev helps select workspace
  • terraform workspace list lists the workspaces and shows the current active one with *
  • does not provide strong separation as it uses the same backend
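A small sketch of varying configuration per workspace by interpolating terraform.workspace (the AMI ID and sizing logic are illustrative):

```hcl
# terraform.workspace holds the name of the currently selected workspace
resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro"

  tags = {
    Name        = "app-${terraform.workspace}"
    Environment = terraform.workspace
  }
}
```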

Terraform Workflow

init

  • initializes a working directory containing Terraform configuration files.
  • performs
    • backend initialization – storage for the Terraform state file
    • module installation – modules are downloaded from the Terraform Registry or other sources to a local path
    • provider plugin installation – plugins are downloaded into a sub-directory of the present working directory at the path .terraform/plugins
  • supports -upgrade to update all previously installed plugins to the newest version that complies with the configuration’s version constraints
  • is safe to run multiple times, to bring the working directory up to date with changes in the configuration
  • does not delete the existing configuration or state

validate

  • validates the Terraform files for syntax and correctness.
  • verifies whether a configuration is syntactically valid and internally consistent, regardless of any provided variables or existing state.
  • a syntax check is done on all the Terraform files in the directory, and an error is displayed if any of the files doesn’t validate.

plan

  • creates an execution plan
  • traverses each vertex of the resource graph and queries each provider in parallel
  • calculates the difference between the last-known (refreshed) state and the desired configuration, and presents this difference as the output of the terraform plan operation to the user in the terminal
  • does not modify the infrastructure or state.
  • allows a user to see which actions Terraform will perform prior to making any changes to reach the desired state
  • will scan all *.tf  files in the directory and create the plan
  • will perform refresh for each resource and might hit rate limiting issues as it calls provider APIs
  • the refresh of all resources can be disabled or limited using
    • -refresh=false or
    • -target=xxxx or
    • breaking resources into different directories.
  • supports -out to save the plan

apply

  • applies the changes required to reach the desired state.
  • scans the current directory for the configuration and applies the changes appropriately.
  • can be provided with an explicit plan file, saved using -out from terraform plan
  • If no explicit plan file is given on the command line, terraform apply will create a new plan automatically and prompt for approval to apply it
  • will modify the infrastructure and the state.
  • if a resource successfully creates but fails during provisioning,
    • Terraform will error and mark the resource as “tainted”.
    • A resource that is tainted has been physically created, but can’t be considered safe to use since provisioning failed.
    • Terraform also does not automatically roll back and destroy the resource during the apply when the failure happens, because that would go against the execution plan: the execution plan would’ve said a resource will be created, but does not say it will ever be deleted.
  • does not import any resource.
  • supports -auto-approve to apply the changes without asking for a confirmation
  • supports -target to apply a specific module

refresh

  • used to reconcile the state Terraform knows about (via its state file) with the real-world infrastructure
  • does not modify infrastructure, but does modify the state file

destroy

  • destroys the infrastructure and all managed resources
  • modifies both state and infrastructure
  • terraform destroy -target can be used to destroy targeted resources
  • terraform plan -destroy allows creation of destroy plan

import

  • helps import already-existing external resources, not managed by Terraform, into the Terraform state and allows Terraform to manage those resources
  • Terraform is not able to auto-generate configurations for those imported resources, for now, and requires you to first write the resource definition in Terraform and then import the resource (see the sketch below)
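A sketch of the import flow (the AMI and instance ID are placeholders): write the resource block first, then run terraform import against it.

```hcl
# 1. Write the resource definition first – Terraform will not generate it
resource "aws_instance" "imported" {
  ami           = "ami-0abcdef1234567890" # placeholder; should match the real instance
  instance_type = "t3.micro"
}

# 2. Then bring the existing instance under management:
#      terraform import aws_instance.imported i-1234567890abcdef0
```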

taint

  • marks a Terraform-managed resource as tainted, forcing it to be destroyed and recreated on the next apply.
  • will not modify infrastructure, but does modify the state file in order to mark a resource as tainted. Infrastructure and state are changed in next apply.
  • can be used to taint a resource within a module

fmt

  • formats and lints the code into a standard, canonical format

console

  • command provides an interactive console for evaluating expressions.

Terraform Modules

  • enables code reuse
  • supports versioning to maintain compatibility
  • stores code remotely
  • enables easier testing
  • enables encapsulation with all the separate resources under one configuration block
  • modules can be nested inside other modules, allowing you to quickly spin up whole separate environments.
  • can be referenced using the source attribute (see the sketch after this list)
  • supports Local and Remote modules
    • Local modules are stored alongside the Terraform configuration (in a separate directory, outside of each environment but in the same repository) with source path ./ or ../
    • Remote modules are stored externally in a separate repository, and supports versioning
  • supports the following module sources
    • Local paths
    • Terraform Registry
    • GitHub
    • Bitbucket
    • Generic Git, Mercurial repositories
    • HTTP URLs
    • S3 buckets
    • GCS buckets
  • Module requirements
    • must be on GitHub and must be a public repo, if using public registry.
    • must be named terraform-<PROVIDER>-<NAME>, where <NAME> reflects the type of infrastructure the module manages and <PROVIDER> is the main provider where it creates that infrastructure. for e.g. terraform-google-vault or terraform-aws-ec2-instance.
    • must maintain x.y.z tags for releases to identify module versions. Release tag names must be a semantic version, which can optionally be prefixed with a v for example, v1.0.4 and 0.9.2. Tags that don’t look like version numbers are ignored.
    • must maintain a Standard module structure, which allows the registry to inspect the module and generate documentation, track resource usage, parse submodules and examples, and more.
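A sketch of consuming a local module and a registry module via the source attribute (the module sources, version constraint, and inputs are illustrative):

```hcl
# Local module referenced by relative path
module "network" {
  source = "../modules/network" # hypothetical local module

  cidr_block = "10.0.0.0/16"    # input variable exposed by that module
}

# Remote module from the public Terraform Registry, pinned to a version
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws" # illustrative registry source
  version = "~> 3.0"

  name = "demo-vpc"
  cidr = "10.1.0.0/16"
}
```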

Terraform Read and Write Configuration

  • Resources
    • resource is the most important element in the Terraform language; it describes one or more infrastructure objects, such as compute instances (a combined configuration sketch follows this list)
    • the resource type and local name together serve as an identifier for a given resource and must be unique within a module, e.g. aws_instance.local_name
  • Data Sources
    • data sources allow data to be fetched or computed for use elsewhere in the Terraform configuration
    • allows a Terraform configuration to make use of information defined outside of Terraform, or defined by another separate Terraform configuration
  • Variables
    • variables serve as parameters for a Terraform module and act like function arguments
    • allow aspects of the module to be customized without altering the module’s own source code, and allow modules to be shared between different configurations
    • can be defined through multiple ways
      • command line, e.g. -var="image_id=ami-abc123"
      • variable definition files .tfvars or .tfvars.json. By default, Terraform automatically loads
        • files named exactly terraform.tfvars or terraform.tfvars.json
        • any files with names ending in .auto.tfvars or .auto.tfvars.json
        • a file can also be passed explicitly with -var-file
      • environment variables, using the format TF_VAR_name
    • Terraform loads variables in the following order, with later sources taking precedence over earlier ones:
      • environment variables
      • the terraform.tfvars file, if present
      • the terraform.tfvars.json file, if present
      • any *.auto.tfvars or *.auto.tfvars.json files, processed in lexical order of their filenames
      • any -var and -var-file options on the command line, in the order they are provided
  • Local Values
    • a local value assigns a name to an expression, allowing it to be used multiple times within a module without repeating it.
    • are like a function’s temporary local variables.
    • helps to avoid repeating the same values or expressions multiple times in a configuration.
  • Output
    • are like function return values.
    • output can be marked as containing sensitive material using the optional sensitive argument, which prevents Terraform from showing its value in the list of outputs. However, they are still stored in the state as plain text.
    • In a parent module, outputs of child modules are available in expressions as module.<MODULE NAME>.<OUTPUT NAME>.
  • Named Values
    • is an expression that references the associated value for e.g. aws_instance.local_name, data.aws_ami.centos, var.instance_type etc.
    • support Local named values for e.g count.index
  • Dependencies
    • implicit dependencies are identified automatically – Terraform infers when one resource depends on another by studying the resource attributes used in interpolation expressions, e.g. an aws_eip referencing an aws_instance
    • explicit dependencies can be defined using depends_on, for dependencies between resources that are not visible to Terraform
  • Data Types
    • supports primitive data types of
      • string, number and bool
      • Terraform language will automatically convert number and bool values to string values when needed
    • supports complex data types of
      • list – a sequence of values identified by consecutive whole numbers starting with zero.
      • map – a collection of values where each is identified by a string label.
      • set –  a collection of unique values that do not have any secondary identifiers or ordering.
    • supports structural data types of
      • object – a collection of named attributes that each have their own type
      • tuple – a sequence of elements identified by consecutive whole numbers starting with zero, where each element has its own type.
  • Built-in Functions
    • includes a number of built-in functions that can be called from within expressions to transform and combine values for e.g. min, max, file, concat, element, index, lookup etc.
    • does not support user-defined functions
  • Dynamic Blocks
    • acts much like a for expression, but produces nested blocks instead of a complex typed value. It iterates over a given complex value, and generates a nested block for each element of that complex value.
  • Terraform Comments
    • supports three different syntaxes for comments:
      • #
      • //
      • /* and */
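A combined sketch tying the elements above together – variables, locals, a data source, a resource with a dynamic block, implicit dependencies, and an output (all names, AMI filters, and ports are illustrative):

```hcl
variable "instance_type" {
  type        = string
  default     = "t3.micro"
  description = "EC2 instance type"
}

variable "ingress_ports" {
  type    = list(number)
  default = [22, 80, 443]
}

locals {
  common_tags = {
    Project = "demo"
    Owner   = "platform-team"
  }
}

# Data source – information defined outside this configuration
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Dynamic block – generates one ingress block per element of var.ingress_ports
resource "aws_security_group" "web" {
  name = "web-sg"

  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  tags = local.common_tags
}

# Implicit dependencies – the instance references the AMI data source and the security group
resource "aws_instance" "web" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.web.id]

  tags = local.common_tags
}

# Output – like a function return value
output "instance_public_ip" {
  value = aws_instance.web.public_ip
}
```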

Terraform Backends

  • determines how state is loaded and how an operation such as apply is executed
  • are responsible for storing state and providing an API for optional state locking
  • needs to be initialized
  • when switching backends, Terraform provides an option to migrate the existing state to the new backend during initialization
  • helps
    • collaboration and working as a team, with the state maintained remotely and state locking
    • can provide enhanced security for sensitive data
    • support remote operations
  • supports local vs remote backends
    • local (default) backend stores state in a local JSON file on disk
    • remote backend stores state remotely like S3, OSS, GCS, Consul and support features like remote operation, state locking, encryption, versioning etc.
  • supports partial configuration with the remaining configuration arguments provided as part of the initialization process (see the sketch after this list)
  • Backend configuration doesn’t support interpolations.
  • GitHub is not a supported backend type in Terraform.
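A sketch of an S3 backend using partial configuration, with the remaining arguments supplied at init time (the bucket, key, and region values are placeholders):

```hcl
terraform {
  backend "s3" {
    key = "network/terraform.tfstate" # placeholder state key
    # bucket and region intentionally omitted (partial configuration);
    # they are supplied during initialization, e.g.:
    #   terraform init \
    #     -backend-config="bucket=my-tf-state-bucket" \
    #     -backend-config="region=us-east-1"
  }
}
```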

Terraform State Management

  • state helps keep track of the infrastructure Terraform manages
  • stored locally in the terraform.tfstate file by default
  • recommended not to edit the state manually
  • Use terraform state command
    • mv – to move/rename modules
    • rm – to safely remove a resource from the state (the remote object is retained, not destroyed)
    • pull – to view the current remote state
    • list & show – to inspect resources and their attributes, useful when writing/debugging modules

State Locking

  • happens for all operations that could write state, if supported by backend
  • prevents others from acquiring the lock & potentially corrupting the state
  • backends which support state locking are
    • azurerm
    • HashiCorp Consul
    • Tencent Cloud Object Storage (COS)
    • etcdv3
    • Google Cloud Storage (GCS)
    • HTTP endpoints
    • Kubernetes Secret, with locking done using a Lease resource
    • AliCloud Object Storage (OSS), with locking via TableStore
    • PostgreSQL
    • AWS S3, with locking via DynamoDB (see the sketch after this list)
    • Terraform Enterprise
  • Backends which do not support state locking are
    • artifactory
    • etcd
  • can be disabled for most commands with the -lock=false flag
  • use the terraform force-unlock command to manually unlock the state if automatic unlocking failed
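A sketch of an S3 backend with state locking via DynamoDB (bucket and table names are placeholders; the table is assumed to have a LockID partition key):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state-bucket"    # placeholder bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                    # encrypt state at rest in S3
    dynamodb_table = "terraform-locks"       # placeholder table with a LockID partition key
  }
}
```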

State Security

  • can contain sensitive data, depending on the resources in use, e.g. passwords and keys
  • using local state, data is stored in plain-text JSON files
  • using remote state, state is held in memory when used by Terraform. It may be encrypted at rest, if supported by backend for e.g. S3, OSS

Terraform Logging

  • debugging can be controlled using TF_LOG, which can be configured for different levels TRACE, DEBUG, INFO, WARN or ERROR, with TRACE being the most verbose.
  • the log path can be controlled with TF_LOG_PATH; TF_LOG also needs to be specified for logging to be enabled.

Terraform Cloud and Terraform Enterprise

  • Terraform Cloud provides Cloud Infrastructure Automation as a Service. It is offered as a multi-tenant SaaS platform and is designed to suit the needs of smaller teams and organizations. Its smaller plans default to one run at a time, which prevents users from executing multiple runs concurrently.
  • Terraform Enterprise is a private install for organizations who prefer to self-manage. It is designed to suit the needs of organizations with specific requirements for security, compliance and custom operations.
  • Terraform Cloud provides features
    • Remote Terraform Execution – supports Remote Operations for Remote Terraform execution which helps provide consistency and visibility for critical provisioning operations.
    • Workspaces – organizes infrastructure with workspaces instead of directories. Each workspace contains everything necessary to manage a given collection of infrastructure, and Terraform uses that content whenever it executes in the context of that workspace.
    • Remote State Management – acts as a remote backend for the Terraform state. State storage is tied to workspaces, which helps keep state associated with the configuration that created it.
    • Version Control Integration – is designed to work directly with the version control system (VCS) provider.
    • Private Module Registry – provides a private and central library of versioned & validated modules to be used within the organization
    • Team based Permission System – can define groups of users that match the organization’s real-world teams and assign them only the permissions they need
    • Sentinel Policies – embeds the Sentinel policy-as-code framework, which lets you define and enforce granular policies for how the organization provisions infrastructure. Helps eliminate provisioned resources that don’t follow security, compliance, or operational policies.
    • Cost Estimation – can display an estimate of its total cost, as well as any change in cost caused by the proposed updates
    • Security – encrypts state at rest and protects it with TLS in transit.
  • Terraform Enterprise features
    • includes all the Terraform Cloud features with
    • Audit – supports detailed audit logging and tracks the identity of the user requesting state and maintains a history of state changes.
    • SSO/SAML – SAML for SSO provides the ability to govern user access to your applications.
  • Terraform Enterprise currently supports running under the following operating systems for a Clustered deployment:
    • Ubuntu 16.04.3 – 16.04.5 / 18.04
    • Red Hat Enterprise Linux 7.4 through 7.7
    • CentOS 7.4 – 7.7
    • Amazon Linux
    • Oracle Linux
    • Clusters currently don’t support other Linux variants.
  • Terraform Cloud currently supports the following VCS providers
    • GitHub.com
    • GitHub.com (OAuth)
    • GitHub Enterprise
    • GitLab.com
    • GitLab EE and CE
    • Bitbucket Cloud
    • Bitbucket Server
    • Azure DevOps Server
    • Azure DevOps Services
  • A Terraform Enterprise install that is provisioned on a network that does not have Internet access is generally known as an air-gapped install. These types of installs require you to pull updates, providers, etc. from external sources vs. being able to download them directly.

AWS Certification – Content Delivery – Cheat Sheet

CloudFront

  • provides low latency and high data transfer speeds for distribution of static, dynamic web or streaming content to web users
  • delivers the content through a worldwide network of data centers called Edge Locations
  • keeps persistent connections with the origin servers so that the files can be fetched from the origin servers as quickly as possible.
  • dramatically reduces the number of network hops that users’ requests must pass through
  • supports multiple origin server options, like AWS hosted services, e.g. S3, EC2, ELB, or an on-premises server, which stores the original, definitive version of the objects
  • a single distribution can have multiple origins, and the Path pattern in a cache behavior determines which requests are routed to which origin
  • supports Web Download distribution and RTMP Streaming distribution
    • Web distribution supports static, dynamic web content, on demand using progressive download & HLS and live streaming video content
    • RTMP supports streaming of media files using Adobe Media Server and the Adobe Real-Time Messaging Protocol (RTMP) ONLY
  • supports HTTPS using either
    • a dedicated IP address, which is expensive as a dedicated IP address is assigned at each CloudFront edge location
    • Server Name Indication (SNI), which is free but supported only by modern browsers, with the domain name available in the request header
  • For E2E HTTPS connection,
    • Viewers -> CloudFront needs either self signed certificate, or certificate issued by CA or ACM
    • CloudFront -> Origin needs certificate issued by ACM for ELB and by CA for other origins
  •  Security
    • Origin Access Identity (OAI) can be used to restrict the content from the S3 origin to be accessible from CloudFront only (see the Terraform sketch after this list)
    • supports Geo restriction (Geo-Blocking) to whitelist or blacklist countries that can access the content
    • Signed URLs 
      • for RTMP distribution, as signed cookies aren’t supported
      • to restrict access to individual files, e.g. an installation download for your application
      • for users on a client, e.g. a custom HTTP client, that doesn’t support cookies
    • Signed Cookies
      • provide access to multiple restricted files, e.g. video part files in HLS format or all of the files in the subscribers’ area of a website
      • useful when you don’t want to change the current URLs
    • integrates with AWS WAF, a web application firewall that helps protect web applications from attacks by allowing rules configured based on IP addresses, HTTP headers, and custom URI strings
  • supports GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE to get object & object headers, add, update, and delete objects
    • only caches responses to GET and HEAD requests and, optionally, OPTIONS requests
    • does not cache responses to PUT, POST, PATCH, DELETE request methods and these requests are proxied back to the origin
  • object removal from cache
    • would be removed upon expiry (TTL) from the cache, by default 24 hrs
    • can be invalidated explicitly, but this has an associated cost; users might continue to see the old version until it expires from those caches
    • objects can be invalidated only for Web distribution
    • change the object name (versioning) to serve a different version
  • supports adding or modifying custom headers before the request is sent to origin which can be used to
    • validate if user is accessing the content from CDN
    • identifying CDN from which the request was forwarded from, in case of multiple CloudFront distribution
    • for viewers not supporting CORS to return the Access-Control-Allow-Origin header for every request
  • supports Partial GET requests using range header to download object in smaller units improving the efficiency of partial downloads and recovery from partially failed transfers
  • supports compression to compress and serve compressed files when viewer requests include Accept-Encoding: gzip in the request header
  • supports different price classes: to include all regions, to include only the least expensive regions, or to exclude the most expensive regions
  • supports access logs which contain detailed information about every user request for both web and RTMP distribution
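A hedged Terraform sketch of an S3-backed CloudFront distribution using an OAI, geo restriction, and a price class (the bucket domain, locations, and identifiers are placeholders):

```hcl
# Origin Access Identity so the S3 content is reachable only through CloudFront
resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "OAI for the static site bucket"
}

resource "aws_cloudfront_distribution" "cdn" {
  enabled     = true
  price_class = "PriceClass_100" # least expensive edge locations only

  origin {
    domain_name = "my-static-site.s3.amazonaws.com" # placeholder bucket domain
    origin_id   = "s3-static-site"

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
    }
  }

  default_cache_behavior {
    target_origin_id       = "s3-static-site"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD", "OPTIONS"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US", "CA", "GB"] # placeholder country list
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```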

AWS Certification – Analytics Services – Cheat Sheet

Kinesis Data Streams – KDS

  • enables real-time processing of streaming data at massive scale
  • provides ordering of records per shard
  • provides an ability to read and/or replay records in the same order
  • allows multiple applications to consume the same data
  • data is replicated across three data centers within a region
  • data is preserved for 24 hours, by default, and can be extended to 7 days (see the sketch after this list)
  • once data is inserted into Kinesis, it can’t be deleted (immutability); it only expires
  • streams can be scaled using multiple shards, based on the partition key
  • each shard provides the capacity of 1MB/sec data input and 2MB/sec data output with 1000 PUT requests per second
  • Kinesis vs SQS
    • real-time processing of streaming big data vs reliable, highly scalable hosted queue for storing messages
    • ordered records, as well as the ability to read and/or replay records in the same order vs no guarantee on data ordering (with the standard queues before the FIFO queue feature was released)
    • data storage of 24 hours by default, extendable to 7 days vs. retention from 1 minute up to 14 days, with messages cleared once deleted by the consumer
    • supports multiple consumers vs single consumer at a time and requires multiple queues to deliver message to multiple consumers
  • Kinesis Producer
    • API
      • PutRecord and PutRecords are synchronous
      • PutRecords uses batching and increases throughput
      • might experience ProvisionedThroughputExceeded exceptions when sending more data; use retries with backoff, resharding, or a different partition key
    • KPL
      • producer supports synchronous or asynchronous use cases
      • supports inbuilt batching and retry mechanism
    • Kinesis Agent can help monitor log files and send them to KDS
    • supports third party libraries like Spark, Flume, Kafka connect etc.
  • Kinesis Consumers
    • Kinesis SDK
      • Records are polled by consumers from a shard
    • Kinesis Client Library (KCL)
      • Read records from Kinesis produced with the KPL (de-aggregation)
      • supports checkpointing feature to keep track of the application’s state and resume progress using DynamoDB table
      • if KDS application receives provisioned-throughput exceptions, increase the provisioned throughput for the DynamoDB table
    • Kinesis Connector Library – can be replaced using Firehose or Lambda
    • Third party libraries: Spark, Log4J Appenders, Flume, Kafka Connect…
    • Kinesis Firehose, AWS Lambda
    • Kinesis Consumer Enhanced Fan-Out
      • supports  Multiple Consumer applications for the same Stream
      • provides Low Latency ~70ms
      • Higher costs
      • Default limit of 5 consumers using enhanced fan-out per data stream
  • Kinesis Security
    • allows access / authorization control using IAM policies
    • supports Encryption in flight using HTTPS endpoints
    • supports Data encryption at rest either using client side encryption before pushing the data to data streams or server side encryption
    • supports VPC Endpoints to access within VPC
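A small Terraform sketch of a Kinesis data stream with extended retention and server-side encryption (the stream name and shard count are illustrative):

```hcl
resource "aws_kinesis_stream" "events" {
  name             = "events-stream" # placeholder name
  shard_count      = 2               # each shard: 1 MB/sec in, 2 MB/sec out, 1000 PUTs/sec
  retention_period = 48              # hours; default is 24

  # server-side encryption at rest using the AWS-managed Kinesis KMS key
  encryption_type = "KMS"
  kms_key_id      = "alias/aws/kinesis"
}
```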

Kinesis Data  Firehose – KDF

  • data transfer solution for delivering real-time streaming data to destinations such as S3, Redshift, Elasticsearch Service, and Splunk.
  • is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration
  • is Near Real Time (min. 60 secs) as it buffers incoming streaming data to a certain size or for a certain period of time before delivering it
  • supports batching, compression, and encryption of the data before loading it, minimizing the amount of storage used at the destination and increasing security
  • supports data compression, minimizing the amount of storage used at the destination. It currently supports GZIP, ZIP, and SNAPPY compression formats. Only GZIP is supported if the data is further loaded to Redshift (see the sketch after this list).
  • supports out of box data transformation as well as custom transformation using Lambda function to transform incoming source data and deliver the transformed data to destinations
  • uses at least once semantics for data delivery.
  • supports multiple producers as datasource, which include Kinesis data stream, KPL, Kinesis Agent, or the Kinesis Data Firehose API using the AWS SDK, CloudWatch Logs, CloudWatch Events, or AWS IoT
  • does NOT support consumers like Spark and KCL
  • supports interface VPC endpoint to keep traffic between the VPC and Kinesis Data Firehose from leaving the Amazon network.
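A minimal Terraform sketch of a Firehose delivery stream to S3 with GZIP compression (the IAM role and bucket ARNs are placeholders assumed to exist):

```hcl
resource "aws_kinesis_firehose_delivery_stream" "to_s3" {
  name        = "events-to-s3" # placeholder name
  destination = "extended_s3"

  extended_s3_configuration {
    role_arn           = "arn:aws:iam::123456789012:role/firehose-delivery" # placeholder IAM role ARN
    bucket_arn         = "arn:aws:s3:::my-firehose-archive"                 # placeholder bucket ARN
    compression_format = "GZIP"                                             # only GZIP if loading onward to Redshift
  }
}
```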

Kinesis Data Streams vs Kinesis Data Firehose

Kinesis Data Analytics

  • helps analyze streaming data, gain actionable insights, and respond to the business and customer needs in real time.
  • reduces the complexity of building, managing, and integrating streaming applications with other AWS service

Redshift

  • Redshift is a fast, fully managed data warehouse
  • provides simple and cost-effective solution to analyze all the data using standard SQL and the existing Business Intelligence (BI) tools.
  • manages the work needed to set up, operate, and scale a data warehouse, from provisioning the infrastructure capacity to automating ongoing administrative tasks such as backups, and patching.
  • automatically monitors your nodes and drives to help you recover from failures.
  • only supports Single-AZ deployments.
  • replicates all the data within the data warehouse cluster when it is loaded and also continuously backs up your data to S3.
  • attempts to maintain at least three copies of your data (the original and replica on the compute nodes and a backup in S3).
  • supports cross-region snapshot replication to another region for disaster recovery (a provisioning sketch follows this list)
  • Redshift supports four distribution styles: AUTO, EVEN, KEY, or ALL.
    • KEY distribution uses a single column as distribution key (DISTKEY) and helps place matching values on the same node slice
    • Even distribution distributes the rows across the slices in a round-robin fashion, regardless of the values in any particular column
    • ALL distribution replicates whole table in every compute node.
    • AUTO distribution lets Redshift assign an optimal distribution style based on the size of the table data
  • Redshift supports Compound and Interleaved sort keys
    • Compound key
      • is made up of all of the columns listed in the sort key definition, in the order they are listed, and is more efficient when query predicates use a prefix of the sort key, i.e. when filters and joins apply conditions on a leading subset of the sort key columns in order.
    • Interleaved sort key
      • gives equal weight to each column in the sort key, so query predicates can use any subset of the columns that make up the sort key, in any order.
      • Not ideal for monotonically increasing attributes
  • Column encodings CANNOT be changed once created.
  • supports query queues for Workload Management, in order to manage concurrency and resource planning. It is a best practice to have separate queues for long running resource-intensive queries and fast queries that don’t require big amounts of memory and CPU
  • Supports Enhanced VPC routing
  • Import/Export Data
    • UNLOAD helps copy data from Redshift table to S3
    • COPY command
      • helps copy data from S3 to Redshift
      • also supports EMR, DynamoDB, remote hosts using SSH
      • parallelized and efficient
      • can decrypt data as it is loaded from S3
      • DON’T use multiple concurrent COPY commands to load one table from multiple files as Redshift is forced to perform a serialized load, which is much slower.
      • supports data decryption when loading data, if data encrypted
      • supports decompressing data, if data is  compressed.
    • Split the Load Data into Multiple Files
    • Load the data in sort key order to avoid needing to vacuum.
    • Use a Manifest File
      • provides Data consistency, to avoid S3 eventual consistency issues
      • helps specify different S3 locations in a more efficient way than with the use of S3 prefixes.
  • Redshift Spectrum
    • helps query and retrieve structured and semistructured data from files in S3 without having to load the data into Redshift tables
    • Redshift Spectrum external tables are read-only. You can’t COPY or INSERT to an external table.
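A hedged Terraform sketch of provisioning a Redshift cluster with encryption and automated snapshots (the identifiers, node type, and the password variable are assumptions):

```hcl
resource "aws_redshift_cluster" "dw" {
  cluster_identifier = "analytics-dw"        # placeholder identifier
  node_type          = "dc2.large"
  cluster_type       = "multi-node"
  number_of_nodes    = 2
  database_name      = "analytics"
  master_username    = "admin"
  master_password    = var.redshift_password # assumes a sensitive variable defined elsewhere
  encrypted          = true

  automated_snapshot_retention_period = 7    # days of automated snapshots
  skip_final_snapshot                 = true
}
```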

EMR

  • is a web service that utilizes a hosted Hadoop framework running on the web-scale infrastructure of EC2 and S3
  • launches all nodes for a given cluster in the same Availability Zone, which improves performance as it provides higher data access rate
  • seamlessly supports Reserved, On-Demand and Spot Instances
  • consists of a Master node for management and Slave nodes, which consist of Core nodes that hold data and Task nodes that only perform tasks
  • is fault tolerant for slave node failures and continues job execution if a slave node goes down
  • does not automatically provision another node to take over failed slaves
  • supports Persistent and Transient cluster types
    • Persistent which continue to run
    • Transient which terminates once the job steps are completed
  • supports EMRFS which allows S3 to be used as a durable HA data storage

Glue

  •  fully-managed ETL service that automates the time-consuming steps of data preparation for analytics
  • is serverless and supports pay-as-you-go model.
  • recommends and generates ETL code to transform the source data into target schemas, and runs the ETL jobs on a fully managed, scale-out Apache Spark environment to load your data into its destination.
  • helps setup, orchestrate, and monitor complex data flows.
  • natively supports RDS, Redshift, S3 and databases on EC2 instances.
  • supports server side encryption for data at rest and SSL for data in motion.
  • provides development endpoints to edit, debug, and test the code it generates.
  • AWS Glue Data Catalog
    • is a central repository to store structural and operational metadata for all the data assets.
    • automatically discovers and profiles the data
    • automatically discover both structured and semi-structured data stored in the data lake on S3, Redshift, and other databases
    • provides a unified view of the data that is available for ETL, querying and reporting using services like Athena, EMR, and Redshift Spectrum.
    • Each AWS account has one AWS Glue Data Catalog per region.
  • AWS Glue crawler
    • connects to a data store, progresses through a prioritized list of classifiers to extract the schema of the data and other statistics, and then populates the Glue Data Catalog with this metadata
    • can be scheduled to run periodically so that the metadata is always up-to-date and in sync with the underlying data (see the sketch after this list).
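A small Terraform sketch of a Glue catalog database and a scheduled crawler over an S3 path (the names, IAM role ARN, and S3 path are placeholders):

```hcl
# Catalog database the crawler will populate
resource "aws_glue_catalog_database" "lake" {
  name = "data_lake_db" # placeholder name
}

resource "aws_glue_crawler" "s3_crawler" {
  name          = "s3-data-lake-crawler"                        # placeholder name
  role          = "arn:aws:iam::123456789012:role/glue-crawler" # placeholder IAM role ARN
  database_name = aws_glue_catalog_database.lake.name
  schedule      = "cron(0 2 * * ? *)"                           # run daily to keep metadata in sync

  s3_target {
    path = "s3://my-data-lake-bucket/raw/" # placeholder S3 path
  }
}
```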

QuickSight

  • is a very fast, easy-to-use, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from their data, anytime, on any device.
  • delivers fast and responsive query performance by using a robust in-memory engine (SPICE).
    • “SPICE” stands for a Super-fast, Parallel, In-memory Calculation Engine
    • can also be  configured to keep the data in SPICE up-to-date as the data in the underlying sources change.
    • automatically replicates data for high availability and enables QuickSight to scale to support users to perform simultaneous fast interactive analysis across a wide variety of AWS data sources.
  • supports
    • Excel files and flat files like CSV, TSV, CLF, ELF
    • on-premises databases like PostgreSQL, SQL Server and MySQL
    • SaaS applications like Salesforce
    • and AWS data sources such as Redshift, RDS, Aurora, Athena, and S3
  • supports various functions to format and transform the data.
  • supports assorted visualizations that facilitate different analytical approaches:
    • Comparison and distribution – Bar charts (several assorted variants)
    • Changes over time – Line graphs, Area line charts
    • Correlation – Scatter plots, Heat maps
    • Aggregation – Pie graphs, Tree maps
    • Tabular – Pivot tables

Data Pipeline

  • orchestration service that helps define data-driven workflows to automate and schedule regular data movement and data processing activities
  • integrates with on-premises and cloud-based storage systems
  • allows scheduling, retry, and failure logic for the workflows

Elasticsearch

  • Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud.
  • Elasticsearch provides
    • real-time, distributed search and analytics engine
    • ability to provision all the resources for Elasticsearch cluster and launches the cluster
    • easy to use cluster scaling options. Scaling Elasticsearch Service domain by adding or modifying instances, and storage volumes is an online operation that does not require any downtime.
    • provides self-healing clusters, which automatically detects and replaces failed Elasticsearch nodes, reducing the overhead associated with self-managed infrastructures
    • domain snapshots to back up and restore ES domains and replicate domains across AZs
    • enhanced security with IAM, Network, Domain access policies, and fine-grained access control
    • storage volumes for the data using EBS volumes
    • ability to span cluster nodes across multiple AZs in the same region, known as zone awareness,  for high availability and redundancy.  Elasticsearch Service automatically distributes the primary and replica shards across instances in different AZs.
    • dedicated master nodes to improve cluster stability
    • data visualization using the Kibana tool
    • integration with CloudWatch for monitoring ES domain metrics
    • integration with CloudTrail for auditing configuration API calls to ES domains
    • integration with S3, Kinesis, and DynamoDB for loading streaming data
    • ability to handle structured and unstructured data
    • supports encryption at rest through KMS, node-to-node encryption over TLS, and the ability to require clients to communicate over HTTPS (see the sketch after this list)
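A hedged Terraform sketch of an Elasticsearch Service domain with zone awareness, EBS storage, and encryption enabled (the domain name, version, and sizing are illustrative):

```hcl
resource "aws_elasticsearch_domain" "logs" {
  domain_name           = "app-logs" # placeholder name
  elasticsearch_version = "7.10"

  cluster_config {
    instance_type          = "r5.large.elasticsearch"
    instance_count         = 2
    zone_awareness_enabled = true # spread data nodes across AZs

    zone_awareness_config {
      availability_zone_count = 2
    }
  }

  ebs_options {
    ebs_enabled = true
    volume_size = 20 # GiB per data node
  }

  encrypt_at_rest {
    enabled = true
  }

  node_to_node_encryption {
    enabled = true
  }

  domain_endpoint_options {
    enforce_https = true
  }
}
```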