AWS IAM Best Practices

AWS recommends the following AWS Identity and Access Management (IAM) best practices to secure AWS resources

Root Account – Don’t use it & lock away access keys

  • Do not use the AWS Root account which has full access to all the AWS resources and services including the Billing information.
  • Permissions associated with the AWS Root account cannot be restricted.
  • Do not generate access keys for the root account, if not required
  • If already generated and not needed, delete the access keys.
  • If access keys are needed, rotate (change) the access key regularly
  • Never share the Root account credentials or access keys; instead, create IAM users or roles to grant granular access
  • Enable AWS multifactor authentication (MFA) on the AWS account

User – Create individual IAM users

  • Don’t use the AWS root account credentials to access AWS, and don’t share the credentials with anyone else.
  • Start by creating an IAM user with administrator permissions, which has access to all resources as the root account does, except the account’s security credentials.
  • Create individual users for anyone who needs access to your AWS account, give each user unique credentials, and grant different permissions.
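
As a rough sketch of this practice, the snippet below creates an individual user with boto3; the user name and temporary password are placeholders, not values from the original text:

    import boto3

    iam = boto3.client("iam")

    # Create an individual IAM user instead of sharing credentials.
    iam.create_user(UserName="alice")

    # Optionally give the user console access with a one-time password
    # that must be reset at first sign-in.
    iam.create_login_profile(
        UserName="alice",
        Password="TempPassw0rd!ChangeMe",  # placeholder; use a generated secret
        PasswordResetRequired=True,
    )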

Groups – Use groups to assign permissions to IAM users

  • Instead of defining permissions for individual IAM users, create groups and define the relevant permissions for each group as per the job function, and then associate IAM users to those groups.
  • Users in an IAM group inherit the permissions assigned to the group, and a user can belong to multiple groups
  • It is much easier to add new users, remove users and modify the permissions of a group of users.
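
A minimal boto3 sketch of the group pattern; the group name and the AWS managed ReadOnlyAccess policy are illustrative choices:

    import boto3

    iam = boto3.client("iam")

    # Define permissions once at the group level...
    iam.create_group(GroupName="developers")
    iam.attach_group_policy(
        GroupName="developers",
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # AWS managed policy
    )

    # ...then associate users with the group so they inherit the permissions.
    iam.add_user_to_group(GroupName="developers", UserName="alice")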

Permission – Grant least privilege

  • An IAM user, by default, is created with no permissions
  • Users should be granted LEAST PRIVILEGE as required to perform a task.
  • Starting with minimal permissions and adding permissions as required to perform the job function is far better than granting all access and then trying to tighten it down.
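
For illustration, a least-privilege identity policy grants only the actions a task needs; the bucket name and policy name below are hypothetical:

    import boto3
    import json

    iam = boto3.client("iam")

    # Grant only the actions the job function actually requires.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",      # placeholder bucket
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }],
    }

    iam.create_policy(
        PolicyName="ReportsReadOnly",
        PolicyDocument=json.dumps(policy_document),
    )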

Passwords – Enforce strong password policy for users

  • Require users to create strong passwords and enforce periodic password rotation.
  • Enable a strong password policy that defines requirements such as minimum length, at least one capital letter and one number, and how frequently passwords must be rotated.
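
A hedged sketch of enforcing such a policy account-wide with boto3; the specific thresholds are illustrative, not prescribed values:

    import boto3

    iam = boto3.client("iam")

    # Account-wide password policy; the values below are examples.
    iam.update_account_password_policy(
        MinimumPasswordLength=14,
        RequireUppercaseCharacters=True,
        RequireLowercaseCharacters=True,
        RequireNumbers=True,
        RequireSymbols=True,
        MaxPasswordAge=90,            # force rotation every 90 days
        PasswordReusePrevention=24,   # remember the last 24 passwords
        AllowUsersToChangePassword=True,
    )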

MFA – Enable MFA for privileged users

  • For extra security, enable multi-factor authentication (MFA) for privileged IAM users, who are allowed access to sensitive resources or APIs.
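
MFA can also be enforced in policy via the aws:MultiFactorAuthPresent condition key; the statement below is an illustrative sketch, not the only pattern:

    # Illustrative policy statement: allow stopping instances only when the
    # caller authenticated with MFA.
    statement = {
        "Effect": "Allow",
        "Action": "ec2:StopInstances",
        "Resource": "*",
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
    }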

Role – Use temporary credentials with IAM roles

  • Use roles for workloads instead of creating IAM users and hardcoding credentials, which can compromise access and are hard to rotate.
  • Roles have specific permissions and do not have a permanent set of credentials.
  • Roles provide a way to access AWS by relying on dynamically generated & automatically rotated temporary security credentials.
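
A sketch of consuming a role’s temporary credentials through STS with boto3 (the role ARN is a placeholder):

    import boto3

    sts = boto3.client("sts")

    # Exchange the caller's identity for short-lived role credentials.
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/app-workload-role",  # placeholder
        RoleSessionName="app-session",
        DurationSeconds=3600,
    )
    creds = resp["Credentials"]  # expire automatically; nothing to hardcode

    # Use the temporary credentials for subsequent calls.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )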

Sharing – Delegate using roles

  • Use IAM roles to delegate access: users from the same AWS account, from another AWS account, or externally authenticated users (through a corporate authentication service or through Google, Facebook, etc.) can assume a role and receive the permissions it specifies
  • A role can be defined that specifies what permissions the IAM users in the other account are granted, and which AWS accounts’ IAM users are allowed to assume the role
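
A hedged sketch of the delegation setup: a role whose trust policy lets identities in another AWS account (the account ID is a placeholder) assume it:

    import boto3
    import json

    iam = boto3.client("iam")

    # Trust policy: who may assume the role (here, another AWS account).
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::210987654321:root"},  # placeholder
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="cross-account-readonly",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # Permissions policy: what the role allows once assumed.
    iam.attach_role_policy(
        RoleName="cross-account-readonly",
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
    )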

Rotation – Rotate credentials regularly

  • Change your own passwords and access keys regularly, and enforce this through a strong password policy, so that even if a password or access key is compromised without your knowledge, you limit how long the credentials can be used to access your resources
  • IAM allows two active access keys at the same time for a user, which can be used to rotate the keys without downtime
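
Since two keys can be active at once, rotation can proceed without downtime. A rough boto3 outline (the user name is a placeholder, and it assumes the user currently has exactly one key):

    import boto3

    iam = boto3.client("iam")
    user = "alice"  # placeholder

    old_key_id = iam.list_access_keys(UserName=user)["AccessKeyMetadata"][0]["AccessKeyId"]

    # 1. Create the second (new) key and deploy it to the application.
    new_key = iam.create_access_key(UserName=user)["AccessKey"]

    # 2. Once the new key is confirmed working, deactivate the old one...
    iam.update_access_key(UserName=user, AccessKeyId=old_key_id, Status="Inactive")

    # 3. ...and delete it after a safe observation period.
    iam.delete_access_key(UserName=user, AccessKeyId=old_key_id)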

Track & Review – Remove unnecessary credentials

  • Remove IAM users and credentials (that is, passwords and access keys) that are not needed.
  • Use the IAM credential report, which lists all IAM users in the account and the status of their various credentials (passwords, access keys, and MFA devices) along with usage patterns, to figure out what can be removed
  • Passwords and access keys that have not been used recently might be good candidates for removal.
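
A sketch of pulling the credential report with boto3; the report is returned as CSV, and real code would poll until generation completes:

    import boto3
    import csv
    import io

    iam = boto3.client("iam")

    # Kick off report generation, then download it (production code should
    # retry get_credential_report until the report is ready).
    iam.generate_credential_report()
    report = iam.get_credential_report()

    rows = csv.DictReader(io.StringIO(report["Content"].decode("utf-8")))
    for row in rows:
        # e.g. inspect password_last_used / access_key_1_last_used_date
        print(row["user"], row.get("password_last_used"))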

Conditions – Use policy conditions for extra security

  • Define conditions under which IAM policies allow access to a resource.
  • Conditions help provide finer-grained access control to AWS services and resources, e.g. access limited to a specific IP range, or allowing only encrypted uploads to S3 buckets
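
Two illustrative condition blocks (the CIDR range and bucket name are placeholders): one limits access to a source IP range, the other denies unencrypted S3 uploads:

    # Restrict the allowed actions to requests from a corporate IP range.
    ip_statement = {
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},  # placeholder
    }

    # Deny S3 uploads that do not request server-side encryption.
    sse_statement = {
        "Effect": "Deny",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",  # placeholder
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        },
    }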

Auditing – Monitor activity in the AWS account

  • Enable the logging features provided by AWS services such as CloudTrail, S3, and CloudFront to determine the actions users have taken in the account and the resources that were used.
  • Log files show the time and date of actions, the source IP for an action, which actions failed due to inadequate permissions, and more.
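
For example, recent API activity for a given user can be queried from CloudTrail with boto3 (the user name is a placeholder):

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Look up the latest management events recorded for one user.
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "alice"}],
        MaxResults=10,
    )
    for event in events["Events"]:
        print(event["EventTime"], event["EventName"], event.get("EventSource"))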

Use IAM Access Analyzer

  • IAM Access Analyzer analyzes the services and actions that the IAM roles use, and then generates a least-privilege policy that you can use.
  • Access Analyzer helps preview and analyze public and cross-account access for supported resource types by reviewing the generated findings.
  • IAM Access Analyzer helps to validate the policies created to ensure that they adhere to the IAM policy language (JSON) and IAM best practices.
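
A sketch of policy validation through the Access Analyzer API; the policy document here is a deliberately trivial placeholder:

    import boto3
    import json

    analyzer = boto3.client("accessanalyzer")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}],
    }

    # Checks the document against IAM policy grammar and best practices.
    response = analyzer.validate_policy(
        policyDocument=json.dumps(policy),
        policyType="IDENTITY_POLICY",
    )
    for finding in response["findings"]:
        print(finding["findingType"], finding["findingDetails"])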

Use Permissions Boundaries

  • Use IAM Permissions Boundaries to delegate permissions management within an account
  • IAM permissions boundaries help set the maximum permissions that an identity-based policy can grant to an IAM entity (user or role).
  • A permissions boundary does not grant permissions on its own.
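
A minimal sketch of attaching a boundary with boto3, assuming an existing boundary policy; the managed PowerUserAccess policy is used purely as an example:

    import boto3

    iam = boto3.client("iam")

    # Cap what alice's identity policies can ever grant; the boundary
    # itself grants nothing on its own.
    iam.put_user_permissions_boundary(
        UserName="alice",  # placeholder
        PermissionsBoundary="arn:aws:iam::aws:policy/PowerUserAccess",
    )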

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your organization is preparing for a security assessment of your use of AWS. In preparation for this assessment, which two IAM best practices should you consider implementing? Choose 2 answers
    1. Create individual IAM users for everyone in your organization (may not be needed, as Roles can be used as well)
    2. Configure MFA on the root account and for privileged IAM users
    3. Assign IAM users and groups configured with policies granting least privilege access
    4. Ensure all users have been assigned and are frequently rotating a password, access ID/secret key, and X.509 certificate (Must be assigned only if using console or through command line)
  2. What are the recommended best practices for IAM? (Choose 3 answers)
    1. Grant least privilege
    2. Use the AWS account (root) for regular use
    3. Use Multi-Factor Authentication (MFA)
    4. Store access key/private key in git
    5. Rotate credentials regularly
  3. Which of the below mentioned options is not a best practice to securely manage the AWS access credentials?
    1. Enable MFA for privileged users
    2. Create individual IAM users
    3. Keep rotating your secure access credentials at regular intervals
    4. Create strong access key and secret access key and attach to the root account
  4. Your CTO is very worried about the security of your AWS account. How best can you prevent hackers from completely hijacking your account?
    1. Use short but complex password on the root account and any administrators.
    2. Use AWS IAM Geo-Lock and disallow anyone from logging in except for in your city.
    3. Use MFA on all users and accounts, especially on the root account. (For increased security, it is recommended to configure MFA to help protect AWS resources)
    4. Don’t write down or remember the root account password after creating the AWS account.
  5. Fill in the blanks: ____ helps us track AWS API calls and transitions, ____ helps to understand what resources we have now, and ____ allows auditing credentials and logins.
    1. AWS Config, CloudTrail, IAM Credential Reports
    2. CloudTrail, IAM Credential Reports, AWS Config
    3. CloudTrail, AWS Config, IAM Credential Reports
    4. AWS Config, IAM Credential Reports, CloudTrail


Google Cloud Spanner

  • Cloud Spanner is a fully managed, mission-critical relational database service
  • Cloud Spanner provides a scalable online transaction processing (OLTP) database with high availability and strong consistency at a global scale.
  • Cloud Spanner provides traditional relational semantics like schemas, ACID transactions, and a SQL interface
  • Cloud Spanner provides Automatic, Synchronous replication within and across regions for high availability (99.999%)
  • Cloud Spanner benefits
    • OLTP (Online Transactional Processing)
    • Global scale
    • Relational data model
    • ACID/Strong or External consistency
    • Low latency
    • Fully managed and highly available
    • Automatic replication

Cloud Spanner Architecture

Instance

  • Cloud Spanner Instance determines the location and the allocation of resources
  • Instance creation includes two important choices
    • Instance configuration
      • determines the geographic placement, i.e. the location and replication of the databases
      • Location can be regional or multi-regional
      • cannot be changed once selected at creation
    • Node count
      • determines the amount of the instance’s serving and storage resources
      • can be updated
  • Cloud Spanner distributes an instance across zones of one or more regions to provide high performance and high availability
  • Cloud Spanner instances have:
    • At least three read-write replicas of the database each in a different zone
    • Each zone is a separate isolation fault domain
    • Paxos distributed consensus protocol used for writes/transaction commits
    • Synchronous replication of writes to all zones across all regions
    • Database is available even if one zone fails (99.999% availability SLA for multi-region and 99.99% availability SLA for regional)
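
As a rough sketch, an instance can be created with the Python client; the project, instance ID, and display name below are placeholders:

    from google.cloud import spanner

    client = spanner.Client(project="my-project")  # placeholder project

    # The instance configuration fixes geographic placement; the node count
    # can be changed later, the configuration cannot.
    config = f"projects/{client.project}/instanceConfigs/regional-us-central1"
    instance = client.instance(
        "demo-instance",
        configuration_name=config,
        display_name="Demo instance",
        node_count=1,
    )

    operation = instance.create()   # long-running operation
    operation.result(timeout=120)   # block until the instance is ready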

Regional vs Multi-Regional

  • Regional Configuration
    • Cloud Spanner maintains 3 read-write replicas, each within a different Google Cloud zone in that region.
    • Each read-write replica contains a full copy of the operational database that is able to serve read-write and read-only requests.
    • Cloud Spanner uses replicas in different zones so that if a single-zone failure occurs, the database remains available.
    • Every Cloud Spanner mutation requires a write quorum that’s composed of a majority of voting replicas. Write quorums are formed from two out of the three replicas in regional configurations.
    • Provides 99.99% availability
  • Multi-Regional Configuration
    • Multi-region configurations allow replicating the database’s data not just in multiple zones, but in multiple zones across multiple regions
    • Additional replicas enable reading data with low latency from multiple locations close to or within the regions in the configuration.
    • As the quorum (read-write) replicas are spread across more than one region, additional network latency is incurred when these replicas communicate with each other to vote on writes.
    • Multi-region configurations enable the application to achieve faster reads in more places at the cost of a small increase in write latency.
    • Provides 99.999% availability
    • Multi-regional configurations make use of Paxos-based replication, TrueTime, and leader election to provide global consistency and higher availability


Replication

  • Cloud Spanner automatically gets replication at the byte level from the underlying distributed filesystem.
  • Cloud Spanner also performs data replication to provide global availability and geographic locality, with fail-over between replicas being transparent to the client.
  • Cloud Spanner creates multiple copies, or “replicas,” of the rows, then stores these replicas in different geographic areas.
  • Cloud Spanner uses a synchronous, Paxos distributed consensus protocol, in which voting replicas take a vote on every write request to ensure transactions are available in sufficient replicas before being committed.
  • Globally synchronous replication gives the ability to read the most up-to-date data from any Cloud Spanner read-write or read-only replica.
  • Cloud Spanner creates replicas of each database split
  • A split holds a range of contiguous rows, where the rows are ordered by the primary key.
  • All of the data in a split is physically stored together in the replica, and Cloud Spanner serves each replica out of an independent failure zone.
  • A set of splits is stored and replicated using Paxos.
  • Within each Paxos replica set, one replica is elected to act as the leader.
  • Leader replicas are responsible for handling writes, while any read-write or read-only replica can serve a read request without communicating with the leader (though if a strong read is requested, the leader will typically be consulted to ensure that the read-only replica has received all recent mutations)
  • Cloud Spanner automatically reshards data into splits and automatically migrates data across machines (even across datacenters) to balance load, and in response to failures.
  • Spanner’s sharding considers the parent-child relationships in interleaved tables, and related data is migrated together to preserve query performance

Cloud Spanner Data Model

  • A Cloud Spanner Instance can contain one or more databases
  • A Cloud Spanner database can contain one or more tables
  • Tables look like relational database tables in that they are structured with rows, columns, and values, and they contain primary keys
  • Every table must have a primary key, and that primary key can be composed of zero or more columns of that table
  • Parent-child relationships in Cloud Spanner
    • Table Interleaving
      • Table interleaving is a good choice for many parent-child relationships where the child table’s primary key includes the parent table’s primary key columns
      • Child rows are colocated with the parent rows significantly improving the performance
      • Primary key column(s) of the parent table must be the prefix of the primary key of the child table (see the DDL sketch after this list)
    • Foreign Keys
      • Foreign keys are similar to traditional databases.
      • They are not limited to primary key columns, and tables can have multiple foreign key relationships, both as a parent in some relationships and a child in others.
      • The foreign key relationship does not guarantee data co-location
  • Cloud Spanner automatically creates an index for each table’s primary key
  • Secondary indexes can be created for other columns
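
A minimal sketch of this data model as DDL, using the canonical Singers/Albums example from the Spanner documentation and applied through the Python client (IDs are placeholders):

    from google.cloud import spanner

    client = spanner.Client(project="my-project")   # placeholder
    instance = client.instance("demo-instance")     # placeholder

    # Albums is interleaved in Singers: its primary key is prefixed with the
    # parent key, so child rows are stored together with their parent row.
    database = instance.database(
        "example-db",
        ddl_statements=[
            """CREATE TABLE Singers (
                 SingerId  INT64 NOT NULL,
                 FirstName STRING(1024),
                 LastName  STRING(1024),
               ) PRIMARY KEY (SingerId)""",
            """CREATE TABLE Albums (
                 SingerId   INT64 NOT NULL,
                 AlbumId    INT64 NOT NULL,
                 AlbumTitle STRING(MAX),
               ) PRIMARY KEY (SingerId, AlbumId),
                 INTERLEAVE IN PARENT Singers ON DELETE CASCADE""",
            # Secondary index on a non-key column.
            "CREATE INDEX AlbumsByAlbumTitle ON Albums(AlbumTitle)",
        ],
    )
    operation = database.create()
    operation.result(timeout=120)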

Cloud Spanner Scaling

  • Increase the compute capacity of the instance to scale up the server and storage resources in the instance.
  • Each node allows for an additional 2TB of data storage
  • Nodes provide additional compute resources to increase throughput
  • Increasing compute capacity does not increase the replica count but gives each replica more CPU and RAM, which increases the replica’s throughput (that is, more reads and writes per second can occur).
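
A hedged sketch of scaling compute capacity by updating the node count with the Python client (IDs and the new count are placeholders):

    from google.cloud import spanner

    client = spanner.Client(project="my-project")  # placeholder

    # Re-declare the instance with the new node count, then push the update.
    instance = client.instance(
        "demo-instance",
        display_name="Demo instance",
        node_count=3,  # illustrative new capacity
    )
    operation = instance.update()
    operation.result(timeout=120)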

Cloud Spanner Backup & PITR

  • Cloud Spanner Backup and Restore helps create backups of Cloud Spanner databases on demand, and restore them to provide protection against operator and application errors that result in logical data corruption.
  • Backups are highly available, encrypted, and can be retained for up to a year from the time they are created.
  • Cloud Spanner point-in-time recovery (PITR) provides protection against accidental deletion or writes.
  • PITR works by letting you configure a database’s version_retention_period to retain all versions of data and schema, from a minimum of 1 hour up to a maximum of 7 days.
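
For illustration, the retention period can be set with a database DDL statement, shown here via the Python client (IDs are placeholders):

    from google.cloud import spanner

    client = spanner.Client(project="my-project")   # placeholder
    database = client.instance("demo-instance").database("example-db")

    # Retain old data versions for 7 days to enable point-in-time recovery.
    operation = database.update_ddl(
        ["ALTER DATABASE `example-db` SET OPTIONS (version_retention_period = '7d')"]
    )
    operation.result(timeout=120)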

Cloud Spanner Best Practices

  • Design a schema that prevents hotspots and other performance issues.
  • For optimal write latency, place compute resources for write-heavy workloads within or close to the default leader region.
  • For optimal read performance outside of the default leader region, use staleness of at least 15 seconds (see the read sketch after this list).
  • To avoid single-region dependency for the workloads, place critical compute resources in at least two regions.
  • Provision enough compute capacity to keep high priority total CPU utilization under
    • 65% in each region for regional configuration
    • 45% in each region for multi-regional configuration
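
A minimal sketch of a 15-second stale read with the Python client; the query and IDs are placeholders:

    from datetime import timedelta
    from google.cloud import spanner

    client = spanner.Client(project="my-project")   # placeholder
    database = client.instance("demo-instance").database("example-db")

    # Stale reads can be served by a nearby replica without consulting the
    # leader, trading freshness for latency.
    with database.snapshot(exact_staleness=timedelta(seconds=15)) as snapshot:
        for row in snapshot.execute_sql("SELECT SingerId, FirstName FROM Singers"):
            print(row)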

GCP Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • GCP services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • GCP exam questions are not updated to keep pace with GCP updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your customer has implemented a solution that uses Cloud Spanner and notices some read latency-related performance issues on one table. This table is accessed only by their users using a primary key. The table schema is shown below. You want to resolve the issue. What should you do?
    1. Remove the profile_picture field from the table.
    2. Add a secondary index on the person_id column.
    3. Change the primary key to not have monotonically increasing values.
    4. Create a secondary index using the following Data Definition Language (DDL): CREATE INDEX person_id_ix ON Persons (person_id, firstname, lastname) STORING (profile_picture)
  2. You are building an application that stores relational data from users. Users across the globe will use this application. Your CTO is concerned about the scaling requirements because the size of the user base is unknown. You need to implement a database solution that can scale with your user growth with minimum configuration changes. Which storage solution should you use?
    1. Cloud SQL
    2. Cloud Spanner
    3. Cloud Firestore
    4. Cloud Datastore
  3. A financial organization wishes to develop a global application to store transactions happening from different parts of the world. The storage system must provide low-latency transaction support and horizontal scaling. Which GCP service is appropriate for this use case?
    1. Bigtable
    2. Datastore
    3. Cloud Storage
    4. Cloud Spanner


AWS EC2 Best Practices

AWS recommends the following best practices to get maximum benefit and satisfaction from EC2

Security & Network

  • Implement the least permissive rules for the security group.
  • Regularly patch, update, and secure the operating system and applications on the instance
  • Manage access to AWS resources and APIs using identity federation, IAM users, and IAM roles
  • Establish credential management policies and procedures for creating, distributing, rotating, and revoking AWS access credentials
  • Launch the instances into a VPC instead of EC2-Classic (newly created AWS accounts use a VPC by default)
  • Encrypt EBS volumes and snapshots.
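
A sketch of a least-permissive security group with boto3; the VPC ID and CIDR range are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Create a security group and open only what the workload needs:
    # HTTPS from a known range, nothing else.
    sg = ec2.create_security_group(
        GroupName="web-tier",
        Description="HTTPS only from corporate range",
        VpcId="vpc-0123456789abcdef0",  # placeholder
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "corporate range"}],
        }],
    )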

Storage

  • EC2 supports instance store and EBS volumes, so it’s best to understand the implications of the root device type for data persistence, backup, and recovery
  • Use separate Amazon EBS volumes for the operating system (root device) versus your data.
  • Ensure that the data volume (with the data) persists after instance termination.
  • Use the instance store available for the instance to only store temporary data. Remember that the data stored in the instance store is deleted when an instance is stopped or terminated.
  • If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance.

Resource Management

  • Use instance metadata and custom resource tags to track and identify your AWS resources
  • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you’ll need them.
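
For example, resources can be tagged for tracking with boto3 (the instance ID and tag values are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Tags make ownership and cost attribution visible across resources.
    ec2.create_tags(
        Resources=["i-0123456789abcdef0"],  # placeholder instance ID
        Tags=[
            {"Key": "Environment", "Value": "production"},
            {"Key": "Owner", "Value": "platform-team"},
        ],
    )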

Backup & Recovery

  • Regularly back up the instance using Amazon EBS snapshots (not done automatically) or a backup tool.
  • Use Data Lifecycle Manager (DLM) to automate the creation, retention, and deletion of snapshots taken to back up the EBS volumes
  • Create an Amazon Machine Image (AMI) from the instance to save the configuration as a template for launching future instances.
  • Implement High Availability by deploying critical components of the application across multiple Availability Zones, and replicate the data appropriately
  • Monitor and respond to events.
  • Design the applications to handle dynamic IP addressing when the instance restarts.
  • Implement failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance
  • Regularly test the process of recovering your instances and Amazon EBS volumes if they fail.
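
A minimal sketch of on-demand backups with boto3; the volume and instance IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Snapshot a data volume (EBS snapshots are not taken automatically).
    ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",   # placeholder
        Description="nightly backup of data volume",
    )

    # Save the instance configuration as an AMI template for relaunching.
    ec2.create_image(
        InstanceId="i-0123456789abcdef0",   # placeholder
        Name="app-server-golden-2024-01-01",
        NoReboot=True,  # avoid a reboot at the cost of crash consistency
    )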
