SRE – Site Reliability Engineering Best Practices

SRE implements DevOps. The goal of SRE is to accelerate product development teams and keep services running in a reliable and continuous way.

SRE Concepts

  • Remove Silos and help increase sharing and collaboration between the Development and Operations teams
  • Accidents are Normal. It is more profitable to focus on speeding recovery than preventing accidents.
  • Focus on small and gradual changes. This strategy, coupled with automatic testing of smaller changes and reliable rollback of bad changes, leads to approaches to change management like CI/CD.
  • Measurement is Crucial.

SRE Foundations

  • SLIs, SLOs, and SLAs
  • Monitoring
  • Alerting
  • Toil reduction
  • Simplicity

SLIs, SLOs, and SLAs

  • SRE does not attempt to give everything 100% availability
  • SLIs – Service Level Indicators
    • “A carefully defined quantitative measure of some aspect of the level of service that is provided”
    • SLIs define what to measure
    • SLIs are metrics over time – specific to a user journey such as request/response, data processing – which shows how well the service is performing.
    • An SLI is the ratio between two numbers: the good and the total:
      • Success Rate = No. of successful HTTP requests / total HTTP requests
      • Throughput Rate = No. of consumed jobs in a queue / total number of jobs in a queue
    • An SLI is divided into specification and implementation, e.g.
      • Specification: the ratio of requests loaded in < 100 ms
      • Implementation: a way to measure the specification, e.g. based on a) server logs or b) client code instrumentation
    • SLI ranges from 0% to 100%, where 0% means nothing works, and 100% means nothing is broken
    • Types of SLIs
      • Availability – The proportion of requests which result in a successful state
      • Latency – The proportion of requests below some time threshold
      • Freshness – The proportion of data updated more recently than some time threshold, e.g. for replication or a data pipeline
      • Correctness – The proportion of input that produces correct output
      • Durability – The proportion of records written that can be successfully read
  • SLO – Service Level Objective
    • “SLOs specify a target level for the reliability of your service.”
    • SLO is a goal that the service provider wants to reach.
    • SLOs are tools to help determine what engineering work to prioritize.
    • An SLO is a target percentage based on SLIs and can be a single target value or a range of values, e.g. SLI >= target, or lower bound <= SLI <= upper bound
    • SLOs also define the concept of error budget.
    • The Product and SRE team should select an appropriate availability target for the service and its user base, and the service is managed to that SLO.
  • Error Budget
    • Error budgets are a tool for balancing reliability with other engineering work, and a great way to decide which projects will have the most impact.
    • An Error budget is 100% minus the SLO
    • If an Error budget is exhausted, a team can declare an emergency with high-level approval to deprioritize all external demands until the service meets SLOs and exit criteria.
  • SLOs & Error budget approach
    • SLOs are agreed and approved by all stakeholders
    • It is possible to meet the SLO under normal conditions
    • The organization is committed to using the error budget for decision making and prioritizing
    • An error budget policy should cover what happens when the error budget is exhausted.
  • SLO and SLI in practice
    • The strategy for implementing SLOs and SLIs in a company is to start small.
    • Consider the following aspects when working on the first SLO.
      • Choose one application for which you want to define SLOs
      • Decide on a few key SLIs specs that matter to your service and users
      • Consider common ways and tasks through which your users interact with service
      • Draw a high-level architecture diagram of your system, showing key components, the request flow, and the data flow
    • The result is a narrow and focused proof of concept that helps make the benefits of SLOs and SLIs concise and clear.
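The ratio and error-budget arithmetic above can be sketched in a few lines of Python (the request counts and the 99.9% target are made-up illustration values, not recommendations):

```python
# Sketch of the SLI / SLO / error-budget arithmetic described above.
# All inputs are hypothetical example numbers, not real service data.

def sli(good_events: int, total_events: int) -> float:
    """SLI as the ratio of good events to total events, in percent."""
    return 100.0 * good_events / total_events

def error_budget(slo_percent: float) -> float:
    """The error budget is 100% minus the SLO."""
    return 100.0 - slo_percent

# Success-rate SLI: successful HTTP requests / total HTTP requests
availability = sli(good_events=999_500, total_events=1_000_000)  # 99.95%

slo = 99.9                       # target availability, in percent
budget = error_budget(slo)       # 0.1% of requests may fail

meets_slo = availability >= slo  # True: the service is within its SLO
```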

Monitoring

  • Monitoring allows you to gain visibility into a system, which is a core requirement for judging service health and diagnosing the service when things go wrong
  • From an SRE perspective, monitoring is used to:

    • Alert on conditions that requires attention
    • Investigate and diagnose issues
    • Display information about the system visually
    • Gain insight into system health and resource usage for long-term planning
    • Compare the behavior of the system before and after a change, or between two control groups
  • Monitoring features that might be relevant
    • Speed of data retrieval and freshness of data.
    • Data retention and calculations
    • Interfaces: graphs, tables, charts. High level or low level.
    • Alerts: multiple categories, notifications flow, suppress functionality.
  • Monitoring sources
    • Metrics are numerical measurements representing attributes and events, typically harvested via many data points at regular time intervals.
    • Logs are an append-only record of events.
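As a minimal illustration of the difference between the two sources, the following sketch derives a per-minute request metric from an append-only log (the log format and timestamps are invented for the example):

```python
# Illustrative sketch: deriving a metric (requests per minute) from an
# append-only log of events. Log format and timestamps are made up.
from collections import Counter

log = [  # append-only record of events: (timestamp, HTTP status)
    ("2024-01-01T10:00:05", 200),
    ("2024-01-01T10:00:42", 500),
    ("2024-01-01T10:01:13", 200),
]

# Metric: a numerical measurement aggregated at a regular interval (per minute)
requests_per_minute = Counter(ts[:16] for ts, _ in log)
# Counter({'2024-01-01T10:00': 2, '2024-01-01T10:01': 1})
```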

Alerting

  • Alerting helps ensure alerts are triggered for a significant event, i.e. an event that consumes a large fraction of the error budget.
  • Alerting should be configured to notify an on-caller only when there are actionable, specific threats to the error budget.
  • Alerting considerations
    • Precision – The proportion of events detected that were significant.
    • Recall – The proportion of significant events detected.
    • Detection time – How long it takes to send notification in various conditions. Long detection time negatively impacts the error budget.
    • Reset time – How long alerts continue to fire after an issue is resolved
  • Ways to alert
    • The recommendation is to combine several strategies to enhance your alert quality from different directions.
    • Target error rate ≥ SLO threshold.
      • Choose a small time window (for example, 10 minutes) and alert if the error rate over that window exceeds the SLO.
      • Upsides: Short detection time, Fast recall time
      • Downsides: Precision is low
    • Increased Alert Windows.
      • By increasing the window size, you spend a larger amount of the budget before triggering an alert, e.g. to alert only once an event has consumed 5% of the 30-day error budget, use a 36-hour window.
      • Upsides: good detection time, better precision
      • Downside: poor reset time
    • Increment Alert Duration.
      • Require the alert condition to hold for some duration before the alert is considered significant.
      • Upsides: Higher precision.
      • Downside: poor recall and poor detection time
    • Alert on Burn Rate.
      • How fast, relative to SLO, the service consumes an error budget.
      • Example: alert when 5% of the error budget is consumed over a 1-hour period.
      • Upside: Good precision, short time window, good detection time.
      • Downside: low recall, long reset time
    • Multiple Burn Rate Alerts.
      • Burn rate is how fast, relative to the SLO, the service consumes the error budget
      • Depending on the burn rate, determine the severity of the alert, which leads to either a page notification or a ticket
      • Upsides: good recall, good precision
      • Downsides: More parameters to manage, long reset time
    • Multi-window, multi-burn-rate alerts.
      • Upsides: Flexible alert framework, good precision, good recall
      • Downside: even harder to manage, lots of parameters to specify
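A hedged sketch of the burn-rate idea described above: burn rate is the observed error rate divided by the error rate the SLO allows, and different burn-rate thresholds map to different severities. The 14.4x/6x thresholds below are common example values, not a prescription:

```python
# Hedged sketch of a multiple-burn-rate alerting decision.
# Burn rate = (observed error rate) / (error rate allowed by the SLO).
# Thresholds are illustrative example values only.

def burn_rate(error_rate: float, slo: float) -> float:
    """How fast the service is consuming its error budget, relative to the SLO."""
    budget_rate = 1.0 - slo          # e.g. SLO of 99.9% -> 0.001 allowed
    return error_rate / budget_rate

def alert_severity(rate: float, page_threshold: float = 14.4,
                   ticket_threshold: float = 6.0) -> str:
    """Map a burn rate to a severity: page, ticket, or none."""
    if rate >= page_threshold:
        return "page"                # urgent: budget would burn in days
    if rate >= ticket_threshold:
        return "ticket"              # important but not urgent
    return "none"

rate = burn_rate(error_rate=0.02, slo=0.999)   # 2% errors vs 0.1% budget -> ~20x
severity = alert_severity(rate)                # "page"
```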

Toil Reduction

It’s better to fix root causes when possible; if only the symptom is fixed, there is no incentive to fix the root cause.

  • Toil is a repetitive, predictable, constant stream of tasks related to maintaining a service.
  • Any time spent on operational tasks means time not spent on project work, and project work is how we make our services more reliable and scalable.
  • Toil can be defined using following characteristics
    • Manual. When the tmp directory on a web server reaches 95% utilization, you need to log in and free up space
    • Repetitive. A full tmp directory is unlikely to be a one-time event
    • Automatable. If the instructions are well defined then it’s better to automate the problem detection and remediation
    • Reactive. When you receive too many alerts of “disks full”, they distract more than help. So, potentially high-severity alerts could be missed
    • Lacks enduring value. The satisfaction of completing the task is short-lived because the task does not prevent the issue from recurring
    • Grows at least as fast as its source. The growing popularity of the service will require more infrastructure and more toil work
  • Potential benefits of toil automation
    • Engineering work might reduce toil in the future
    • Increased team morale and reduced burnout
    • Less context switching for interrupts, which raises team productivity
    • Increased process clarity and standardization
    • Enhanced technical skills and career growth for team members
    • Reduced training time
    • Fewer outages attributable to human errors
    • Improved security
    • Shorter response times for user requests
  • Toil Measurement

    • Identify it.
    • Measure the amount of human effort applied to this toil
    • Track these measurements before, during, and after toil reduction efforts
  • Toil categorization
    • Business processes. The most common source of toil.
    • Production interrupts. The key tasks to keep the system running.
    • Product releases. Depending on the tooling and release size they could generate toil (release requests, rollbacks, hotfixes, and repetitive manual configuration changes)
    • Migrations. Large-scale migration or even small database structure change is likely done manually as a one-time effort. Such thinking is a mistake because this work is repetitive.
    • Cost engineering and capacity planning. Ensure a cost-effective baseline. Prepare for critical high traffic events.
    • Troubleshooting
  • Toil management strategies in practices
    • Identify and measure
    • Engineer toil out of the system
    • Reject the toil
    • Use SLO to reduce toil
    • Organizational:
      • Start with human-backed interfaces. For complex business problems, start with a partially automated approach.
      • Get support from management and colleagues. Toil reduction is a worthwhile goal.
      • Promote toil reduction as a feature. Create a strong business case for toil reduction.
      • Start small and then improve
    • Standardization and automation:
      • Increase uniformity. Lean toward standard tools, equipment, and processes.
      • Assess risk within automation. Automation with admin-level privileges should have a safety mechanism that checks automation actions against the system; this prevents outages caused by bugs in automation tools.
      • Automate toil response. Think about how to approach toil automation; it shouldn’t eliminate human understanding of what’s going on.
      • Use open-source and third-party tools.
    • Use feedback to improve. Seek feedback from users who interact with your tools, workflows, and automation.

Simplicity

  • Simple software breaks less often and is easier and faster to fix when it does break.
  • Simple systems are easier to understand, easier to maintain, and easier to test
  • Measure complexity
    • Training time. How long does it take for a new engineer to get up to full speed?
    • Explanation time. The time it takes to provide a view on system internals.
    • Administrative diversity. How many ways are there to configure similar settings
    • Diversity of deployed configuration
    • Age. How old is the system
  • SRE work on simplicity
    • SREs understand the system as a whole to prevent and fix sources of complexity
    • SREs should be involved in design, system architecture, configuration, deployment processes, and elsewhere.
    • SRE leadership empowers SRE teams to push for simplicity and to explicitly reward these efforts.

SRE Practices

SRE practices apply software engineering solutions to operational problems.

  • SRE teams are responsible for the day-to-day functioning of the systems they support, so their engineering work often focuses on automating operational work and improving reliability.

Incident Management & Response

  • Incident Management involves coordinating the efforts of responding teams in an efficient manner and ensuring that communication flows both between the responders and those interested in the incident’s progress.
  • Incident management means responding to an incident in a structured way.
  • Incident Response involves mitigating the impact and/or restoring the service to its previous condition.
  • Basic principles of incident response include the following:
    • Maintain a clear line of command.
    • Designate clearly defined roles.
    • Keep a working record of debugging and mitigation as you go.
    • Declare incidents early and often.
  • Key roles in an Incident Response
    • Incident Commander (IC)
      • the person who declares the incident typically steps into the IC role and directs the high-level state of the incident
      • Commands and coordinates the incident response, delegating roles as needed.
      • By default, the IC assumes all roles that have not been delegated yet.
      • Communicates effectively.
      • Stays in control of the incident response.
      • Works with other responders to resolve the incident.
      • Removes roadblocks that prevent Ops from working most effectively.
    • Communications Lead (CL)
      • CL is the public face of the incident response team.
      • The CL’s main duties include providing periodic updates to the incident response team and stakeholders and managing inquiries about the incident.
    • Operations or Ops Lead (OL)
      • OL works to respond to the incident by applying operational tools to mitigate or resolve the incident.
      • The operations team should be the only group modifying the system during an incident.
  • Live Incident State Document
    • Live Incident State Document can live in a wiki, but should ideally be editable by several people concurrently.
    • This living doc can be messy, but must be functional; it is not usually shared with stakeholders.
    • Maintaining the Live Incident State Document is the Incident Commander’s most important responsibility.
    • Using a template makes generating this documentation easier, and keeping the most important information at the top makes it more usable.
    • Retain this documentation for postmortem analysis and, if necessary, meta analysis.
  • Incident Management Best Practices
    • Prioritize – Stop the bleeding, restore service, and preserve the evidence for root-causing.
    • Prepare – Develop and document your incident management procedures in advance, in consultation with incident participants.
    • Trust – Give full autonomy within the assigned role to all incident participants.
    • Introspect – Pay attention to your emotional state while responding to an incident. If you start to feel panicky or overwhelmed, solicit more support.
    • Consider alternatives – Periodically consider your options and re-evaluate whether it still makes sense to continue what you’re doing or whether you should be taking another tack in incident response.
    • Practice – Use the process routinely so it becomes second nature.
    • Change it around – Were you incident commander last time? Take on a different role this time. Encourage every team member to acquire familiarity with each role.

Postmortem

  • A postmortem is a written record of an incident, its impact, the actions taken to mitigate or resolve it, the root cause(s), and the follow-up actions to prevent the incident from recurring.
  • Postmortems are expected after any significant undesirable event.
  • The primary goals of writing a postmortem are to ensure that the incident is documented, that all contributing root cause(s) are well understood, and, especially, that effective preventive actions are put in place to reduce the likelihood and/or impact of recurrence.
  • Writing a postmortem is not punishment – it is a learning opportunity for the entire company.
  • Postmortem Best Practices
    • Blameless
      • Postmortems should be Blameless.
      • It must focus on identifying the contributing causes of the incident without indicting any individual or team for bad or inappropriate behavior.
    • Collaborate and Share Knowledge
      • Postmortems should be used to collaborate and share knowledge. They should be shared broadly, typically with the larger engineering team or on an internal mailing list.
      • The goal should be to share postmortems to the widest possible audience that would benefit from the knowledge or lessons imparted.
    • No Postmortem Left Unreviewed
      • An unreviewed postmortem might as well never have existed.
    • Ownership
      • Declaring official ownership results in accountability, which leads to action.
      • It’s better to have a single owner and multiple collaborators.


  • SRE practices require a significant amount of time and skilled SRE people to implement correctly
  • A lot of tools are involved in day-to-day SRE work
  • SRE processes are one of the keys to the success of a tech company


Google Cloud Operations

Google Cloud Operations provides integrated monitoring, logging, and tracing managed services for applications and systems running on Google Cloud and beyond.

Cloud Monitoring

  • Cloud Monitoring collects measurements of key aspects of the service and of the Google Cloud resources used.
  • Cloud Monitoring provides tools to visualize and monitor this data.
  • Cloud Monitoring helps gain visibility into the performance, availability, and health of the applications and infrastructure.
  • Cloud Monitoring collects metrics, events, and metadata from Google Cloud, AWS, hosted uptime probes, and application instrumentation.

Cloud Logging

  • Cloud Logging is a service for storing, viewing and interacting with logs.
  • Answers the questions “Who did what, where and when” within the GCP projects
  • Maintains tamper-proof audit logs for each project and organization
  • Log buckets are a regional resource, which means the infrastructure that stores, indexes, and searches the logs is located in a specific geographical location.

Error Reporting

  • Error Reporting aggregates and displays errors produced in the running cloud services.
  • Error Reporting provides a centralized error management interface, to help find the application’s top or new errors so that they can be fixed faster.

Cloud Profiler

  • Cloud Profiler helps with continuous profiling of CPU, heap, and other parameters to improve performance and reduce costs.
  • Cloud Profiler is a continuous profiling tool that is designed for applications running on Google Cloud:
    • It’s a statistical, or sampling, profiler that has low overhead and is suitable for production environments.
    • It supports common languages and collects multiple profile types.
  • Cloud Profiler consists of the profiling agent, which collects the data, and a console interface on Google Cloud, which lets you view and analyze the data collected by the agent.

Cloud Trace

  • Cloud Trace is a distributed tracing system that collects latency data from the applications and displays it in the Google Cloud Console.
  • Cloud Trace helps understand how long it takes the application to handle incoming requests from users or applications, and how long it takes to complete operations like RPC calls performed when handling the requests.
  • Cloud Trace can track how requests propagate through the application and provide detailed near real-time performance insights.
  • Cloud Trace automatically analyzes all of the application’s traces to generate in-depth latency reports that surface performance degradations, and can capture traces from all VMs, containers, or App Engine services.

Cloud Debugger

  • Cloud Debugger helps inspect the state of an application, at any code location, without stopping or slowing down the running app.
  • Cloud Debugger makes it easier to view the application state without adding logging statements.
  • Cloud Debugger adds less than 10ms to the request latency only when the application state is captured. In most cases, this is not noticeable by users.
  • Debugger can be used with or without access to your app’s source code.

Debug Snapshots

  • Debug Snapshots capture local variables and the call stack at a specific line location in the app’s source code without stopping or slowing it down.
  • Certain conditions and locations can be specified to return a snapshot of the app’s data.
  • Debug Snapshots support canarying wherein the debugger agent tests the snapshot on a subset of the instances.

Debug Logpoints

  • Debug Logpoints allow you to inject logging into running services without restarting or interfering with the normal function of the service.
  • Debug Logpoints are useful for debugging production issues without having to add log statements and redeploy.
  • Debug Logpoints remain active for 24 hours after creation, or until they are deleted or the service is redeployed.
  • If a logpoint is placed on a line that receives lots of traffic, the Debugger throttles the logpoint to reduce its impact on the application.
  • Debug Logpoints support canarying wherein the debugger agent tests the logpoints on a subset of the instances.



Google Cloud CI/CD – Continuous Integration & Continuous Deployment

Google Cloud CI/CD provides various tools for continuous integration and deployment and also integrates seamlessly with third-party solutions.

Google Cloud Source Repositories – CSR

  • Cloud Source Repositories are fully-featured, private Git repositories hosted on Google Cloud.
  • Cloud Source Repositories can be used for collaborative, version-controlled development of any app or service, including those that run on App Engine and Compute Engine.
  • Cloud Source Repositories can connect to an existing GitHub or Bitbucket repository. Connected repositories are synchronized with Cloud Source Repositories automatically.
  • Cloud Source Repositories automatically send logs on repository activity to Cloud Logging to help track and troubleshoot data access.
  • Cloud Source Repositories offer security key detection to block git push transactions that contain sensitive information which helps improve the security of the source code.
  • Cloud Source Repositories provide built-in integrations with other GCP tools like Cloud Build, Cloud Debugger, Cloud Operations, Cloud Logging, Cloud Functions, and others that let you automatically build, test, deploy, and debug code within minutes.
  • Cloud Source Repositories publishes messages about the repository to a Pub/Sub topic.
  • Cloud Source Repositories provide a search feature to search for specific files or code snippets.
  • Cloud Source Repositories allow permissions to be controlled at the project (all projects) or at the repo level.

Cloud Build

  • Cloud Build is a fully-managed, serverless service that executes builds on Google Cloud Platform’s infrastructure.
  • Cloud Build can pull/import source code from a variety of repositories or cloud storage spaces, execute a build to produce containers or artifacts, and push them to the artifact registry.
  • Cloud Build executes the build as a series of build steps, where each build step specifies an action to be performed and is run in a Docker container.
  • Build steps can be provided by Cloud Build and the Cloud Build community or can be custom as well.
  • The build config file contains instructions for Cloud Build to perform tasks based on your specifications, e.g. the build config file can contain instructions to build, package, and push Docker images.
  • Builds can be started either manually or using build triggers.
  • Cloud Build uses build triggers to enable CI/CD automation.
  • Build triggers can listen for incoming events, such as when a new commit is pushed to a repository or when a pull request is initiated, and then automatically execute a build when new events come in.
  • Cloud Build publishes messages on a Pub/Sub topic called cloud-builds when the build’s state changes, such as when the build is created, when the build transitions to a working state, and when the build completes.
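For illustration, a minimal build config file might look like the following sketch (the Artifact Registry repository `my-repo` and image name `my-app` are hypothetical; `$PROJECT_ID` and `$COMMIT_SHA` are standard Cloud Build substitutions):

```yaml
# Hypothetical minimal cloudbuild.yaml: each step runs in a Docker container.
steps:
  # Build a container image from the Dockerfile in the source root
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA', '.']
# Push the built image to Artifact Registry when the build succeeds
images:
  - 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA'
```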

Container Registry

  • Container Registry is a private container image registry that supports Docker Image Manifest V2 and OCI image formats.
  • Container Registry provides a subset of Artifact Registry features.
  • Container Registry stores its tags and layer files for container images in a Cloud Storage bucket in the same project as the registry.
  • Access to the bucket is configured using Cloud Storage’s identity and access management (IAM) settings.
  • Container Registry integrates seamlessly with Google Cloud services.
  • Container Registry works with popular continuous integration and continuous delivery systems including Cloud Build and third-party tools such as Jenkins.

Artifact Registry

  • Artifact Registry is a fully-managed service with support for both container images and non-container artifacts; Artifact Registry extends the capabilities of Container Registry.
  • Artifact Registry is the recommended service for container image storage and management on Google Cloud.
  • Artifact Registry comes with fine-grained access control via Cloud IAM. This enables scoping permissions as granularly as possible, for example to specific regions or environments as necessary.
  • Artifact Registry supports the creation of regional repositories

Container Registry vs Artifact Registry

Google Cloud Container Registry Vs Artifact Registry

GCP Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • GCP services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • GCP exam questions are not updated to keep up the pace with GCP updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.


Google Cloud Container Registry Vs Artifact Registry

Container Registry

  • Container Registry is a private container image registry that supports Docker Image Manifest V2 and OCI image formats.
  • provides a subset of Artifact Registry features.
  • stores its tags and layer files for container images in a Cloud Storage bucket in the same project as the registry.
  • does not support fine-grained IAM access control. Access to the bucket is configured using Cloud Storage’s permissions.
  • integrates seamlessly with Google Cloud services and works with popular continuous integration and continuous delivery systems including Cloud Build and third-party tools such as Jenkins.
  • is used to store only Docker images and does not support language or OS packages.
  • is only multi-regional and does not support regional repositories.
  • supports a single repository within a project and automatically creates the repository in a multi-region.
  • uses hosts.
  • uses gcloud container images commands.
  • supports CMEK(Customer-Managed encryption keys) to encrypt the storage buckets that contain the images.
  • supports several authentication methods for pushing and pulling images with a third-party client.
  • caches the most frequently requested Docker Hub images on
  • supports VPC-Service Controls and can be added to a service perimeter.
  • hosts Google provided images on
  • publishes changes to the gcr topic.
  • images can be viewed and managed from the Container registry section of Cloud Console.
  • pricing is based on Cloud Storage usage, including storage and network egress.

Artifact Registry

  • Artifact Registry is a fully-managed service with support for both container images and non-container artifacts, Artifact Registry extends the capabilities of Container Registry.
  • Artifact Registry is the recommended service for container image storage and management on Google Cloud. It is considered the successor of the Container Registry.
  • Artifact Registry comes with fine-grained access control via Cloud IAM using Artifact Registry permissions. This enables scoping permissions as granularly as possible, e.g. to specific regions or environments as necessary.
  • supports multi-regional or regional repositories.
  • uses hosts.
  • uses gcloud artifacts docker commands.
  • supports CMEK(Customer-Managed encryption keys) to encrypt individual repositories.
  • supports multiple repositories within the project and the repository should be manually created before pushing any images.
  • supports multiple artifact formats, including Container images, Java packages, and Node.js modules.
  • supports the same authentication method as Container Registry.
  • continues to cache frequently requested images from Docker Hub.
  • supports VPC-Service Controls and can be added to a service perimeter.
  • hosts Google provided images on
  • publishes changes to the gcr topic.
  • Artifact Registry and Container Registry repositories can be viewed from the Artifact Registry section of Cloud Console.
  • pricing is based on storage and network egress.




Artifact Registry vs Container Registry Feature Comparison

Certified Kubernetes Security Specialist CKS Learning Path

Certified Kubernetes Security Specialist Certificate

With the Certified Kubernetes Security Specialist (CKS) certification, I have completed the triad of Kubernetes certifications. After learning how to use and administer Kubernetes, the last piece was to understand the security intricacies, and CKS preparation does provide a deep dive into it.

  • CKS focuses on securing container-based applications and Kubernetes platforms during build, deployment, and runtime
  • CKS focuses more on hands-on experience and is an open book test, where you have access to the official Kubernetes documentation as well as some of the products documentation.
  • Unlike AWS and GCP certifications, you are required to solve and debug actual problems and provision resources on a Kubernetes cluster
  • Even though it is an open book test, you need to know where the information is and what to use.

CKS Exam Pattern

  • CKS exam curriculum includes these general domains and their weights on the exam:
    • Cluster Setup – 10%
    • Cluster Hardening – 15%
    • System Hardening – 15%
    • Minimize Microservice Vulnerabilities – 20%
    • Supply Chain Security – 20%
    • Monitoring, Logging and Runtime Security – 20%
  • CKS requires you to solve 15 questions in 2 hours.
  • CKS was already upgraded to use the k8s 1.22 version.
  • You are allowed to open another browser tab with the official Kubernetes documentation or other product documentation like Falco. Do not open any other windows.
  • Exam questions can be attempted in any order and don’t have to be sequential. So be sure to move ahead and come back later.

CKS Exam Preparation and Tips

  • I used the courses from KodeKloud for practicing, and they are good enough to cover what is required for the exam.
  • When you book the exam, two exam simulator sessions are provided. These mock exams are VERY tough compared to the actual exam, as they mention, but they do provide a great learning experience. Do not get demotivated if you flunk badly on time on this one :).
  • Time was surely a constraint on the actual exam; I was able to complete the 15 questions with only 15 mins left, which left little time to review, and I could only get through half of them.
  • Each exam question carries a weight, so attempt the questions with higher weights before focusing on the lower ones, and target those with quicker solutions, like the debugging ones.
  • The exam environment provides 6-8 different preconfigured K8s clusters. Each question refers to a different Kubernetes cluster, and the context needs to be switched. Be sure to execute the kubectl config use-context command, which is provided with every question and just needs to be copy-pasted.
  • Check for the namespace mentioned in the question, both to find resources and to create resources. Use the -n <namespace> flag.
  • You would be performing most of the interaction from the client node. However, pay attention to the node (master or worker) on which you need to execute the commands, and make sure you return to the base node.
  • With CKS, it is important to move to the master node for any changes to the cluster kube-apiserver.
  • SSH to nodes and gaining root access is allowed if needed.
  • Carefully read the information provided within the questions. It gives very useful hints for addressing the question and saves time, e.g. which namespaces to look into, or, for a failed pod, what has already been created (ConfigMaps, Secrets, Network Policies) so that you do not recreate the same.
  • Make sure you know the imperative commands to create resources, as you won’t have much time to create and edit YAML files.
  • If you need to edit further, use --dry-run=client -o yaml to get a head start with the YAML spec file and edit the same.
  • I personally use alias kk=kubectl to avoid typing kubectl
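
For example, the imperative + dry-run approach above can be sketched as follows (resource names and images are illustrative):

```shell
# Generate a starting YAML spec without creating the resource
kubectl run web --image=nginx --dry-run=client -o yaml > pod.yaml

# Same idea for a deployment; edit the file afterwards as needed
kubectl create deployment web --image=nginx --replicas=2 \
  --dry-run=client -o yaml > deploy.yaml
```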

CKS Resources

CKS Key Topics

Cluster Setup – 10%

Cluster Hardening – 15%

System Hardening – 15%

  • Practice CKS Exercises – System Hardening
  • Minimize host OS footprint (reduce attack surface)
    • Control access using SSH, disable root and password-based logins
    • Remove unwanted packages and ports
  • Minimize IAM roles
    • IAM roles usually apply to cloud providers and relate to the least-privilege access principle.
  • Minimize external access to the network
    • External access can be controlled using Network Policies through egress policies.
  • Appropriately use kernel hardening tools such as AppArmor, seccomp
    • Runtime classes provided by gvisor and kata containers can help provide further isolation of the containers
    • Secure Computing – Seccomp tool helps control syscalls made by containers
    • AppArmor can be configured for any application to reduce its potential host attack surface and provide greater defense in depth.
    • PodSecurityPolicies – PSP enables fine-grained authorization of pod creation and updates.
  • General host hardening practices include:
    • Apply host updates
    • Install the minimal required OS footprint
    • Identify and address open ports
    • Remove unnecessary packages
    • Protect access to data with permissions
    • Exam tip: Know how to load AppArmor profiles and enable them for pods. AppArmor is in beta and needs to be enabled using the annotation container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>
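
The AppArmor annotation takes the following shape in a pod spec (the profile name k8s-apparmor-example-deny-write is illustrative and must already be loaded on the node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo
  annotations:
    # beta annotation: container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>
    container.apparmor.security.beta.kubernetes.io/app: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```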

Minimize Microservice Vulnerabilities – 20%

  • Practice CKS Exercises – Minimize Microservice Vulnerabilities
  • Setup appropriate OS-level security domains e.g. using PSP, OPA, security contexts.
    • Pod Security Contexts help define security for pods and containers at the pod or at the container level. Capabilities can be added at the container level only.
    • Pod Security Policies enable fine-grained authorization of pod creation and updates and are implemented as an optional admission controller.
    • Open Policy Agent helps enforce custom policies on Kubernetes objects without recompiling or reconfiguring the Kubernetes API server.
    • Admission controllers
      • can be used for validating configurations as well as mutating the configurations.
      • Mutating controllers are triggered before validating controllers.
      • Allows extension by adding custom controllers using MutatingAdmissionWebhook and ValidatingAdmissionWebhook.
    • Exam tip: Know how to configure Pod Security Context, Pod Security Policies
  • Manage Kubernetes secrets
    • Exam Tip: Know how to read secret values, create secrets and mount the same on the pods.
  • Use container runtime sandboxes in multi-tenant environments (e.g. gvisor, kata containers)
    • Exam tip: Know how to create a Runtime and associate it with a pod using runtimeClassName
  • Implement pod to pod encryption by use of mTLS
    • Practice manage TLS certificates in a Cluster
    • Service Mesh Istio can be used to establish mTLS for pod-to-pod communication.
    • Istio automatically configures workload sidecars to use mutual TLS when calling other workloads. By default, Istio configures the destination workloads using PERMISSIVE mode. When PERMISSIVE mode is enabled, a service can accept both plain text and mutual TLS traffic. In order to only allow mutual TLS traffic, the configuration needs to be changed to STRICT mode.
    • Exam tip: No questions related to mTLS appeared in the exam
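
A minimal sketch combining a Pod Security Context with a sandboxed runtime, per the points above (the RuntimeClass name gvisor is an assumption; gVisor must be installed on the nodes for it to work):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc              # gVisor's CRI handler
---
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  runtimeClassName: gvisor  # associate the pod with the sandboxed runtime
  securityContext:          # pod-level: applies to all containers
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: app
    image: nginx
    securityContext:        # container-level only: capabilities
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```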

Supply Chain Security – 20%

  • Practice CKS Exercises – Supply Chain Security
  • Minimize base image footprint
    • Remove unnecessary tools. Remove shells, package managers, and tools like vi.
    • Use slim/minimal images with required packages only. Do not include unnecessary software like build tools and utilities, troubleshooting, and debug binaries.
    • Build the smallest image possible – To reduce the size of the image, install only what is strictly needed
    • Use distroless, Alpine, or relevant base images for the app.
    • Use official images from verified sources only.
  • Secure your supply chain: whitelist allowed registries, sign and validate images
  • Use static analysis of user workloads (e.g.Kubernetes resources, Docker files)
    • Tools like Kubesec can be used to perform a static security risk analysis of the configuration files.
  • Scan images for known vulnerabilities
    • Aqua Security Trivy & Anchore can be used for scanning vulnerabilities in the container images.
    • Exam Tip: Know how to use the Trivy tool to scan images for vulnerabilities. Also, remember to use the --severity flag, e.g. --severity=CRITICAL, to filter a specific category.
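
A Trivy invocation of the kind mentioned above might look like this (the image name is illustrative):

```shell
# Scan an image and show only CRITICAL vulnerabilities
trivy image --severity CRITICAL nginx:1.21

# Multiple severities can be combined
trivy image --severity HIGH,CRITICAL nginx:1.21
```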

Monitoring, Logging and Runtime Security – 20%

  • Practice CKS Exercises – Monitoring, Logging, and Runtime Security
  • Perform behavioral analytics of syscall process and file activities at the host and container level to detect malicious activities
  • Detect threats within a physical infrastructure, apps, networks, data, users, and workloads
  • Detect all phases of attack regardless of where it occurs and how it spreads
  • Perform deep analytical investigation and identification of bad actors within the environment
    • Tools like strace and Aqua Security Tracee can be used to check the syscalls. However, with a large number of processes, it would be tough to track and monitor all of them, and these tools do not provide alerting.
    • Tools like Falco & Sysdig provide deep, process-level visibility into dynamic, distributed production environments and can be used to define rules to track, monitor, and alert on activities when a certain rule is violated.
    • Exam Tip: Know how to use Falco, define new rules, enable logging. Make use of the falco_rules.local.yaml file for overrides. (I did not get questions for Falco in my exam).
  • Ensure immutability of containers at runtime
    • Immutability prevents any changes from being made to the container or to the underlying host through the container.
    • It is recommended to create new images and perform a rolling deployment instead of modifying the existing running containers.
    • Launch the container in read-only mode using the --read-only flag from the docker run or by using the readOnlyRootFilesystem option in Kubernetes.
    • PodSecurityContext and PodSecurityPolicy can be used to define and enforce container immutability
      • ReadOnlyRootFilesystem – Requires that containers must run with a read-only root filesystem (i.e. no writable layer).
      • Privileged – determines if any container in a pod can enable privileged mode. This allows the container nearly all the same access as processes running on the host.
    • Task @ Configure Pod Container Security Context
    • Exam Tip: Know how to define a PodSecurityPolicy to enforce rules. Remember, Cluster Roles and Role Bindings need to be configured to provide access to the PSP for it to work.
  • Use Audit Logs to monitor access
    • Kubernetes auditing is handled by the kube-apiserver which requires defining an audit policy file.
    • Auditing captures the stages as RequestReceived -> ResponseStarted (only for long-running requests such as watch) -> ResponseComplete (for success) OR Panic (for failures)
    • Exam Tip: Know how to configure audit policies and enable audit on the kube-apiserver. Make sure the kube-apiserver is up and running.
    • Task @ Kubernetes Auditing
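
A minimal audit policy of the kind referenced above might look like this (the file path and rules are illustrative); it is passed to the kube-apiserver via --audit-policy-file, with --audit-log-path for the log destination:

```yaml
# /etc/kubernetes/audit-policy.yaml (path is an assumption)
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"        # skip logging at this stage
rules:
  # Log Secret access at Metadata level only (no request/response payload)
  - level: Metadata
    resources:
    - group: ""
      resources: ["secrets"]
  # Ignore everything else
  - level: None
```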

CKS Articles

CKS General information and practices

  • The exam can be taken online from anywhere.
  • Make sure you have prepared your workspace well before the exams.
  • Make sure you have a valid government-issued ID card as it would be checked.
  • You are not allowed to have anything around you and no one should enter the room.
  • The exam proctor will be watching you always, so refrain from doing any other activities. Your screen is also always shared.
  • Copy + Paste works fine.
  • You will have an online notepad in the corner to take notes. I hardly used it, but it can be useful to type and modify text instead of using the VI editor.

All the Best …

Kubernetes Resources



  • Namespaces provide a mechanism for isolating groups of resources within a single cluster.
  • Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc).
  • Names of resources need to be unique within a namespace, but not across namespaces.
  • Kubernetes starts with four initial namespaces:
    • default – default namespace for objects with no other namespace.
    • kube-system – namespace for objects created by the Kubernetes system.
    • kube-public – namespace is created automatically and is readable by all users (including those not authenticated).
    • kube-node-lease – namespace holds Lease objects associated with each node. Node leases allow the kubelet to send heartbeats so that the control plane can detect node failure.
  • Resource Quotas can be defined for each namespace to limit the resources consumed.
  • Resources within the namespaces can refer to each other with their service names.
  • Resources across namespaces can be reached using the fully qualified DNS name <service_name>.<namespace_name>.svc.cluster.local
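
As a sketch of the cross-namespace DNS form above (the service and namespace names db and prod are illustrative):

```shell
# Create a namespace and expose a service inside it
kubectl create namespace prod
kubectl create deployment db --image=redis -n prod
kubectl expose deployment db --port=6379 -n prod

# Pods in the prod namespace can reach the service simply as "db";
# from any other namespace, use: db.prod.svc.cluster.local
```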

Practice Namespace Exercises


  • A Kubernetes pod is a group of containers and is the smallest unit that Kubernetes administers.
  • Pods have a single IP address applied to every container within the pod.
  • Pods are always co-located and co-scheduled and run in a shared context.
  • Containers in a pod share the same resources such as memory and storage.
  • Shared context allows the individual Linux containers inside a pod to be treated collectively as a single application as if all the containerized processes were running together on the same host in more traditional workloads.
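
A minimal Pod manifest, as a sketch of the above (name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80    # port exposed by the container
```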

Practice Pod Exercises


  • A ReplicaSet maintains a stable set of replica Pods running at any given time. It helps guarantee the availability of a specified number of identical Pods.
  • ReplicaSet includes the pod definition template, a selector to match the pods, and a number of replicas.
  • ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired replica number using the Pod template.
  • It is recommended to use Deployments instead of directly using ReplicaSets, as they help manage ReplicaSets and provide declarative updates to Pods.
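
The three parts named above (replica count, selector, pod template) map onto the manifest like this (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3              # desired number of identical pods
  selector:
    matchLabels:
      app: web             # must match the pod template labels below
  template:                # pod definition template
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
```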

Practice ReplicaSet Exercises


  • Deployment provides declarative updates for Pods and ReplicaSets.
  • Deployments describe the number of desired identical pod replicas to run and the preferred update strategy used when updating the deployment.
  • A Deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.
  • Deployments represent a set of multiple, identical Pods with no unique identities.
  • Deployments are well-suited for stateless applications that use ReadOnlyMany or ReadWriteMany volumes mounted on multiple replicas but are not well-suited for workloads that use ReadWriteOnce volumes. Use StatefulSets instead.
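
A Deployment sketch showing the desired replica count and the preferred update strategy described above (names, image, and surge values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate    # replace pods gradually on updates
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
```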

Deploy Container Resources

Practice Deployment Exercises


  • Service is an abstraction over the pods, and essentially, the only interface the various application consumers interact with.
  • The lifetime of an individual pod cannot be relied upon; everything from their IP addresses to their very existence is prone to change.
  • Kubernetes doesn’t treat its pods as unique, long-running instances; if a pod encounters an issue and dies, it’s Kubernetes’ job to replace it so that the application doesn’t experience any downtime.
  • As pods are replaced, their internal names and IPs might change.
  • A service exposes a single machine name or IP address mapped to pods whose underlying names and numbers are unreliable.
  • A service ensures that, to the outside network, everything appears to be unchanged.
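
A Service manifest illustrating the stable-name-over-unreliable-pods idea above (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                # stable name, regardless of pod churn
spec:
  selector:
    app: web               # routes to whichever pods carry this label
  ports:
  - port: 80               # port the service exposes
    targetPort: 80         # port on the backing containers
```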

Practice Services Exercises



  • Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
  • Traffic routing is controlled by rules defined on the Ingress resource.
  • An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL/TLS and offer name-based virtual hosting
  • An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
  • An Ingress with no rules sends all traffic to a single default backend.
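
An Ingress sketch with one host rule and a default backend, per the points above (host and service names are illustrative, and an Ingress controller must be installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  defaultBackend:          # receives traffic that matches no rule
    service:
      name: web
      port:
        number: 80
  rules:
  - host: example.com      # name-based virtual hosting
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```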

Practice Ingress Exercises


  • A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.
  • DaemonSet ensures pods are added to the newly created nodes and garbage collected as nodes are removed.
  • Some typical uses of a DaemonSet are:
    • running a cluster storage daemon on every node
    • running a logs collection daemon on every node
    • running a node monitoring daemon on every node
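
A DaemonSet sketch for the logs-collection use case above (name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector      # one pod scheduled on every node
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: agent
        image: fluentd     # image is illustrative
```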

Refer DaemonSet Exercises


StatefulSet Architecture

  • StatefulSet is ideal for stateful applications using ReadWriteOnce volumes.
  • StatefulSets are designed to deploy stateful applications and clustered applications that save data to persistent storage, such as persistent disks.
  • StatefulSets represent a set of Pods with unique, persistent identities and stable hostnames that Kubernetes maintains regardless of where they are scheduled.
  • State information and other resilient data for any given StatefulSet Pod are maintained in persistent disk storage associated with the StatefulSet.
  • StatefulSets use an ordinal index for the identity and ordering of their Pods. By default, StatefulSet Pods are deployed in sequential order and are terminated in reverse ordinal order.
  • StatefulSets are suitable for deploying Kafka, MySQL, Redis, ZooKeeper, and other applications needing unique, persistent identities and stable hostnames.
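
A StatefulSet sketch showing the stable identities and per-pod persistent storage described above (names, image, and sizes are illustrative; a headless Service named db is assumed):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # headless service providing stable hostnames
  replicas: 3              # pods get ordinal names db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: redis
        image: redis:6
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:    # one ReadWriteOnce PVC per pod
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```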


  • ConfigMap helps to store non-confidential data in key-value pairs.
  • Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
  • ConfigMap helps decouple environment-specific configuration from the container images so that the applications are easily portable.
  • ConfigMap does not provide secrecy or encryption. If the data you want to store are confidential, use a Secret rather than a ConfigMap, or use additional (third party) tools to keep your data private.
  • A ConfigMap is not designed to hold large chunks of data and cannot exceed 1 MiB.
  • ConfigMap can be configured on a container inside a Pod as
    • Inside a container command and args
    • Environment variables for a container
    • Add a file in read-only volume, for the application to read
    • Write code to run inside the Pod that uses the Kubernetes API to read a ConfigMap
  • ConfigMap can be configured to be immutable as it helps
    • protect from accidental (or unwanted) updates that could cause applications outages
    • improve performance of the cluster by significantly reducing the load on kube-apiserver, by closing watches for ConfigMaps marked as immutable.
  • Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. The ConfigMap needs to be deleted and recreated.
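
An immutable ConfigMap consumed as environment variables, per the options above (names and keys are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
immutable: true            # cannot be reverted; delete and recreate to change
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo $LOG_LEVEL && sleep 3600"]
    envFrom:
    - configMapRef:
        name: app-config   # keys become environment variables
```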

Practice ConfigMaps Exercises


  • Secret provides a container for sensitive data such as a password without putting the information in a Pod specification or in a container image.
  • Secrets are similar to ConfigMaps but are specifically intended to hold confidential data.
  • Secrets are not really encrypted but only base64 encoded.
  • Secrets are, by default, stored unencrypted in the API server’s underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.
  • To safeguard secrets, take at least the following steps:
    • Enable Encryption at Rest for Secrets.
    • Enable or configure RBAC rules that restrict reading data in Secrets.
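
The base64-only encoding noted above can be seen directly (secret name and values are illustrative):

```shell
# Create a secret imperatively
kubectl create secret generic db-creds \
  --from-literal=username=admin --from-literal=password=s3cret

# Values are only base64 encoded, not encrypted:
kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d
```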

Practice Secrets Exercises

Jobs & Cron Jobs

  • Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate.
  • As pods successfully complete, the Job tracks the successful completions.
  • When a specified number of successful completions is reached, the task (ie, Job) is complete.
  • Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again.
  • A Job can run multiple Pods in parallel using the parallelism field.
  • A CronJob creates Jobs on a repeating schedule.
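
A Job with parallel completions and a CronJob wrapping the same pattern, per the above (names, schedule, and commands are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 4           # job is done after 4 successful pods
  parallelism: 2           # run up to 2 pods at a time
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl
        command: ["perl", "-wle", "print 3.14159"]
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly
spec:
  schedule: "0 2 * * *"    # create the Job every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: task
            image: busybox
            command: ["sh", "-c", "echo nightly run"]
```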

Practice Jobs Exercises


Kubernetes Volumes

  • Container on-disk files are ephemeral and lost if the container crashes.
  • Kubernetes supports Persistent volumes that exist beyond the lifetime of a pod. When a pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes.
  • Persistent Volumes are supported using the API resources
    • PersistentVolume (PV)
      • is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
      • is a cluster-level resource and not bound to a namespace
      • are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV.
    • PersistentVolumeClaim (PVC)
      • is a request for storage by a user.
      • is similar to a Pod.
      • Pods consume node resources and PVCs consume PV resources.
      • Pods can request specific levels of resources (CPU and Memory).
      • Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany, or ReadWriteMany, see AccessModes).
  • Persistent Volumes can be provisioned
    • Statically – where the cluster administrator creates the PVs, which are available for use by cluster users
    • Dynamically – using StorageClasses, where the cluster may try to dynamically provision a volume specifically for the PVC.
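
A statically provisioned PV and a PVC that binds to it, as a sketch of the above (names, sizes, and the hostPath are illustrative; hostPath is for demos only):

```yaml
apiVersion: v1
kind: PersistentVolume     # cluster-level, typically admin-provisioned
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:                # local node path, demo purposes only
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim  # namespaced request that binds to a matching PV
metadata:
  name: pvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```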

Practice Volumes Exercises

Labels & Annotations

  • Labels and Annotations attach metadata to objects in Kubernetes.
  • Labels
    • are key/value pairs that can be attached to Kubernetes objects such as Pods and ReplicaSets.
    • can be arbitrary and are useful for attaching identifying information to Kubernetes objects.
    • provide the foundation for grouping objects and can be used to organize and to select subsets of objects.
    • are used in conjunction with selectors to identify groups of related resources.
  • Annotations
    • provide a storage mechanism that resembles labels
    • are key/value pairs designed to hold non-identifying information that can be leveraged by tools and libraries.
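
Labels, selectors, and annotations in practice (names, labels, and values are illustrative):

```shell
# Attach identifying labels at creation time
kubectl run web --image=nginx --labels="app=web,env=prod"

# Select subsets of objects via label selectors
kubectl get pods -l app=web
kubectl get pods -l 'env in (prod,staging)'

# Annotations hold non-identifying metadata for tools and libraries
kubectl annotate pod web build-commit=abc123
```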

Practice Labels & Annotations Exercises


  • A Kubernetes node manages and runs pods; it’s the machine (whether virtualized or physical) that performs the given work.
  • Just as pods collect individual containers that operate together, a node collects entire pods that function together.
  • When you’re operating at scale, you want to be able to hand work over to a node whose pods are free to take it.

Practice Nodes Exercises

Kubernetes Architecture


  • A Kubernetes cluster consists of at least one main (control) plane, and one or more worker machines, called nodes.
  • Both the control planes and node instances can be physical devices, virtual machines, or instances in the cloud.
  • In managed Kubernetes environments like AWS EKS, GCP GKE, Azure AKS the control plane is managed by the cloud provider.

Kubernetes Architecture

  • The control plane is also known as a master node or head node.
  • The control plane manages the worker nodes and the Pods in the cluster.
  • In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.
  • It is not recommended to run user workloads on the master node.
  • The Control plane’s components make global decisions about the cluster, as well as detect and respond to cluster events.
  • The control plane receives input from a CLI or UI via an API.
  • API server exposes a REST interface to the Kubernetes cluster. It is the front end for the Kubernetes control plane.
  • All operations against pods, services, and so forth, are executed programmatically by communicating with the endpoints provided by it.
  • It tracks the state of all cluster components and manages the interaction between them.
  • It is designed to scale horizontally.
  • It consumes YAML/JSON manifest files.
  • It validates and processes the requests made via API.
  • Etcd is a consistent, distributed, and highly-available key-value store.
  • is stateful, persistent storage that stores all of Kubernetes cluster data (cluster state and config).
  • is the source of truth for the cluster.
  • can be part of the control plane, or, it can be configured externally.
  • ETCD benefits include
    • Fully replicated: Every node in an etcd cluster has access to the full data store.
    • Highly available: etcd is designed to have no single point of failure and gracefully tolerate hardware failures and network partitions.
    • Reliably consistent: Every data ‘read’ returns the latest data ‘write’ across all clusters.
    • Fast: etcd has been benchmarked at 10,000 writes per second.
    • Secure: etcd supports automatic Transport Layer Security (TLS) and optional secure socket layer (SSL) client certificate authentication.
    • Simple: Any application, from simple web apps to highly complex container orchestration engines such as Kubernetes, can read or write data to etcd using standard HTTP/JSON tools.
  • The scheduler is responsible for assigning work to the various nodes. It keeps watch over the resource capacity and ensures that a worker node’s performance is within an appropriate threshold.
  • It schedules pods to worker nodes.
  • It watches api-server for newly created Pods with no assigned node, and selects a healthy node for them to run on.
  • If there are no suitable nodes, the pods are put in a pending state until such a healthy node appears.
  • It watches API Server for new work tasks.
  • Factors taken into account for scheduling decisions include:
    • Individual and collective resource requirements.
    • Hardware/software/policy constraints.
    • Affinity and anti-affinity specifications.
    • Data locality.
    • Inter-workload interference.
    • Deadlines and taints.
  • Controller manager is responsible for making sure that the shared state of the cluster is operating as expected.
  • It watches the desired state of the objects it manages and watches their current state through the API server.
  • It takes corrective steps to make sure that the current state is the same as the desired state.
  • It is a controller of controllers.
  • It runs controller processes. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
  • Some types of controllers are:
    • Node controller: Responsible for noticing and responding when nodes go down.
    • Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
    • Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
    • Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.
  • The cloud controller manager integrates with the underlying cloud technologies in your cluster when the cluster is running in a cloud environment.
  • The cloud-controller-manager only runs controllers that are specific to your cloud provider.
  • Cloud controller lets you link your cluster into cloud provider’s API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
  • The following controllers can have cloud provider dependencies:
    • Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding.
    • Route controller: For setting up routes in the underlying cloud infrastructure.
    • Service controller: For creating, updating, and deleting cloud provider load balancers.
  • The data plane is known as the worker node or compute node.
  • A virtual or physical machine that contains the services necessary to run containerized applications.
  • A Kubernetes cluster needs at least one worker node, but normally has many.
  • The worker node(s) host the Pods that are the components of the application workload.
  • Pods are scheduled and orchestrated to run on nodes.
  • Cluster can be scaled up and down by adding and removing nodes.
  • Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.
  • A Kubelet tracks the state of a pod to ensure that all the containers are running and healthy
  • provides a heartbeat message every few seconds to the control plane.
  • runs as an agent on each node in the cluster.
  • acts as a conduit between the API server and the node.
  • instantiates and executes Pods.
  • watches API Server for work tasks.
  • gets instructions from master and reports back to Masters.
  • Kube proxy is a networking component that routes traffic coming into a node from the service to the correct containers.
  • is a network proxy that runs on each node in a cluster.
  • manages IP translation and routing.
  • maintains network rules on nodes. These network rules allow network communication to Pods from inside or outside of cluster.
  • ensures each Pod gets a unique IP address.
  • makes it possible for all containers in a pod to share a single IP.
  • facilitates Kubernetes networking services and load-balancing across all pods in a service.
  • It deals with individual host subnetting and ensures that the services are available to external parties.
  • Container runtime is responsible for running containers (in Pods).
  • Kubernetes supports any implementation of the Kubernetes Container Runtime Interface CRI specifications
  • To run the containers, each worker node has a container runtime engine.
  • It pulls images from a container image registry and starts and stops containers.
  • Kubernetes supports several container runtimes, such as containerd, CRI-O, and Docker.


Kubernetes Overview


  • Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.
  • Kubernetes originates from Greek, meaning helmsman or pilot.
  • Kubernetes provides an orchestration framework to run distributed systems resiliently. It takes care of scaling and failover for the application, provides deployment patterns, and more.

Container Deployment Model

Deployment evolution

  • Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications.
  • Containers are lightweight and have their own filesystem, share of CPU, memory, process space, and more.
  • Containers are decoupled from the underlying infrastructure, so they are portable across clouds and OS distributions.
  • Containers provide the following benefits
    • Agile application creation and deployment
    • Continuous development, integration, and deployment
    • Dev and Ops separation of concerns
    • Observability
    • Environmental consistency across development, testing, and production
    • Cloud and OS distribution portability
    • Application-centric management
    • Loosely coupled, distributed, elastic, liberated micro-services
    • Resource isolation & utilization

Kubernetes Features

  • Service discovery and load balancing
    • Kubernetes can expose a container using the DNS name or using their own IP address.
    • If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
  • Storage orchestration
    • Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
  • Automated rollouts and rollbacks
    • Kubernetes can change the actual state of the deployed containers to the desired state at a controlled rate ensuring zero downtime.
  • Automatic bin packing
    • Kubernetes can fit containers onto the available nodes to make the best use of the resources as per the specified container specification.
  • Self-healing & High Availability

    • Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to the user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
  • Scalability
    • Kubernetes can help scale the application as per the load.
  • Secret and configuration management
    • Kubernetes helps store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys.
    • Secrets and application configuration can be deployed without rebuilding the container images, and without exposing secrets in the stack configuration.

Kubernetes Architecture

Refer to detailed blog post @ Kubernetes Architecture

Master components

  • Master components provide the cluster’s control plane.
  • Master components make global decisions about the cluster (for example, scheduling), and detect and respond to cluster events (for example, starting a replacement pod when a deployment’s replicas field is unsatisfied).
  • Master components include
    • Kube-API server – Exposes the API.
    • Etcd – key-value store that holds all cluster data. (Can be run on the same server as a master node or on a dedicated cluster.)
    • Kube-scheduler – Schedules new pods on worker nodes.
    • Kube-controller-manager – Runs the controllers.
    • Cloud-controller-manager – Talks to cloud providers.

Node components

  • Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.
    • Kubelet – Agent that ensures containers in a pod are running.
    • Kube-proxy – Keeps network rules and performs forwarding.
    • Container runtime – Runs containers.

Kubernetes Components

Refer to blog post @ Kubernetes Components

Kubernetes Security

Refer to blog post @ Kubernetes Security





Static Password and Token Authentication

  • Kubernetes does not support the creation of users
  • Users can be passed as --basic-auth-file or --token-auth-file to the kube-apiserver using a static user + password or static user + token file; this approach is deprecated.

X509 Client Certificates

  • Kubernetes requires PKI certificates for authentication over TLS.
  • Kubernetes requires PKI for the following operations:
    • Client certificates for the kubelet to authenticate to the API server
    • Server certificate for the API server endpoint
    • Client certificates for administrators of the cluster to authenticate to the API server
    • Client certificates for the API server to talk to the kubelets
    • Client certificate for the API server to talk to etcd
    • Client certificate/kubeconfig for the controller manager to talk to the API server
    • Client certificate/kubeconfig for the scheduler to talk to the API server
    • Client and server certificates for the front-proxy
  • Client certificates can be signed in two ways so that they can be used to authenticate with the Kubernetes API.
    1. Internally signing the certificate using the Kubernetes API.
      1. It involves the creation of a certificate signing request (CSR) by a client.
      2. Administrators can approve or deny the CSR.
      3. Once approved, the administrator can extract and provide a signed certificate to the requesting client or user.
      4. This method does not scale for large organizations, as it requires manual intervention.
    2. Use enterprise PKI, which can sign the client-submitted CSR.
      1. The signing authority can send signed certificates back to clients.
      2. This approach requires the private key to be managed by an external solution.
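A minimal sketch of the internally signed flow (method 1): the client submits a CertificateSigningRequest object carrying a base64-encoded CSR (the name and CSR content below are placeholders):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: alice-csr                     # illustrative name
spec:
  request: <base64-encoded CSR>       # e.g. output of: base64 -w0 alice.csr
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
```

An administrator can then approve it with kubectl certificate approve alice-csr and hand the signed certificate, extracted from the object’s status.certificate field, back to the requesting user.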

Refer Authentication Exercises

Service Accounts

  • Kubernetes service accounts can be used to provide bearer tokens to authenticate with Kubernetes API.
  • Bearer tokens can be verified using a webhook, which involves API configuration with option --authentication-token-webhook-config-file, which includes the details of the remote webhook service.
  • Kubernetes internally uses Bootstrap and Node authentication tokens to initialize the cluster.
  • Each namespace has a default service account created.
  • Each service account creates a secret object which stores the bearer token.
  • The service account of an existing pod cannot be modified; the pod needs to be recreated.
  • A service account can be associated with a pod using the serviceAccountName field in the pod specification, and the service account secret is auto-mounted on the pod.
  • The automountServiceAccountToken flag can be used to prevent the service account token from being auto-mounted.
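For example (assuming a service account named build-bot already exists in the namespace):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo                        # illustrative name
spec:
  serviceAccountName: build-bot        # associates the pod with the service account
  automountServiceAccountToken: false  # opt out of auto-mounting the token
  containers:
  - name: main
    image: nginx
```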

Practice Service Account Exercises



Node Authorization

  • Node authorization is used by Kubernetes internally and enables read, write, and auth-related operations by the kubelet.
  • In order to successfully make a request, kubelet must use a credential that identifies it as being in the system:nodes group.
  • Node authorization can be enabled using the --authorization-mode=Node option in Kubernetes API Server configurations.


ABAC – Attribute-Based Access Control

  • Kubernetes defines attribute-based access control (ABAC) as “an access control paradigm whereby access rights are granted to users through the use of policies which combine attributes together.”
  • ABAC can be enabled by providing a .json file to --authorization-policy-file and --authorization-mode=ABAC options in Kubernetes API Server configurations.
  • The .json file needs to be present before Kubernetes API can be invoked.
  • Any changes in the ABAC policy file require Kube API Server restart and hence the ABAC approach is not preferred.
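The policy file contains one JSON policy object per line; a sketch (user names and namespaces are illustrative):

```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "admin", "namespace": "*", "resource": "*", "apiVersion": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "dev", "resource": "pods", "readonly": true}}
```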


AlwaysAllow/AlwaysDeny

  • AlwaysDeny or AlwaysAllow authorization mode is usually used in development environments where all requests to the Kubernetes API need to be allowed or denied.
  • AlwaysDeny or AlwaysAllow mode can be enabled using option --authorization-mode=AlwaysDeny/AlwaysAllow while configuring Kubernetes API.
  • This mode is considered insecure and hence is not recommended in production environments.


RBAC – Role-Based Access Control

  • Role-based access control is the most secure and recommended authorization mechanism in Kubernetes.
  • It is an approach to restrict system access based on the roles of users within the cluster.
  • It allows organizations to enforce the principle of least privileges.
  • Kubernetes RBAC follows a declarative nature with clear permissions (operations), API objects (resources), and subjects (users, groups, or service accounts) declared in authorization requests.
  • RBAC authorization can be enabled using the --authorization-mode=RBAC option in Kubernetes API Server configurations.
  • RBAC can be configured using
    • Role or ClusterRole – made up of verbs and resources, granting a capability (verb) on a resource
    • RoleBinding or ClusterRoleBinding – helps assign privileges to the user, group, or service account.
  • Role vs ClusterRole AND RoleBinding vs ClusterRoleBinding
    • ClusterRole is a global object whereas Role is a namespace object.
    • Among the RBAC objects, Roles and RoleBindings are the namespaced resources; ClusterRoles and ClusterRoleBindings are cluster-scoped.
    • ClusterRoleBindings (cluster-scoped) cannot reference Roles, which are namespaced resources.
    • RoleBindings (namespaced) can reference ClusterRoles, but the granted permissions then apply only within the RoleBinding’s namespace.
    • Only ClusterRoles can be aggregated.
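A sketch of a namespaced Role and its RoleBinding (names, namespace, and the user are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader              # illustrative name
rules:
- apiGroups: [""]               # "" = core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                    # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role                    # could also reference a ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```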

RBAC Role Binding

Practice RBAC Exercises

Admission Controllers

  • An admission controller intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized.
  • Admission controllers act on requests that create, delete, modify, or connect to (proxy) objects. They do not act on read requests.
  • Admission controllers may be “validating”, “mutating”, or both.
  • Mutating controllers may modify the objects they admit; validating controllers may not.
  • Mutating controllers are executed before the validating controllers.
  • If any of the controllers in either phase reject the request, the entire request is rejected immediately and an error is returned to the end-user.
  • Admission Controllers provide fine-grained control over what can be performed on the cluster, that cannot be handled using Authentication or Authorization.

Kubernetes Admission Controllers

  • Admission controllers can only be enabled and configured by the cluster administrator using the --enable-admission-plugins and --admission-control-config-file flags.
  • A few of the admission controllers are listed below
    • PodSecurityPolicy acts on the creation and modification of the pod and determines if it should be admitted based on the requested security context and the available Pod Security Policies.
    • ImagePolicyWebhook to decide if an image should be admitted.
    • MutatingAdmissionWebhook to modify a request.
    • ValidatingAdmissionWebhook to decide whether the request should be allowed to run at all.
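A sketch of an admission control configuration file passed via --admission-control-config-file (the kubeconfig path and TTL values are illustrative):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/imagepolicy.kubeconfig  # illustrative path
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false       # deny images when the webhook is unreachable
```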

Practice Admission Controller Exercises

Pod Security Policies

  • Pod Security Policies enable fine-grained authorization of pod creation and updates and are implemented as an optional admission controller.
  • A Pod Security Policy is a cluster-level resource that controls security-sensitive aspects of the pod specification.
  • PodSecurityPolicy is disabled, by default. Once enabled using --enable-admission-plugins, it applies itself to all the pod creation requests.
  • PodSecurityPolicies enforced without authorizing any policies will prevent any pods from being created in the cluster. The requesting user or target pod’s service account must be authorized to use the policy, by allowing the use verb on the policy.
  • PodSecurityPolicy acts both as validating and mutating admission controller. PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted into the system, as well as defaults for the related fields.
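A sketch of a restrictive policy (the name and allowed volume types are illustrative):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted              # illustrative name
spec:
  privileged: false             # disallow privileged pods
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot      # reject containers running as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                      # whitelist of allowed volume types
  - configMap
  - secret
  - emptyDir
```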

Practice Pod Security Policies Exercises

Pod Security Context

  • Security Context helps define privileges and access control settings for a Pod or Container that includes
    • Discretionary Access Control: Permission to access an object, like a file, is based on user ID (UID) and group ID (GID)
    • Security-Enhanced Linux (SELinux): Objects are assigned security labels.
    • Running as privileged or unprivileged.
    • Linux Capabilities: Give a process some privileges, but not all the privileges of the root user.
    • AppArmor: Use program profiles to restrict the capabilities of individual programs.
    • Seccomp: Filter a process’s system calls.
    • AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged OR 2) has CAP_SYS_ADMIN.
    • readOnlyRootFilesystem: Mounts the container’s root filesystem as read-only.
  • PodSecurityContext holds pod-level security attributes and common container settings.
  • Fields set in container.securityContext take precedence over the field values of PodSecurityContext.
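A sketch showing pod-level securityContext defaults with a container-level override (all values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod              # illustrative name
spec:
  securityContext:              # pod-level defaults for all containers
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: app
    image: nginx
    securityContext:            # container-level settings take precedence
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
```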

Practice Pod Security Context Exercises

mTLS or Two-Way Authentication

  • Service meshes like Istio and Linkerd can help implement mTLS for intra-cluster pod-to-pod communication.
  • Istio deploys a sidecar container that handles the encryption and decryption transparently.
  • Istio supports both permissive and strict modes.
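For example, Istio’s strict mode can be enforced mesh-wide with a PeerAuthentication resource (assuming istio-system is the mesh root namespace):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system      # applies mesh-wide from the root namespace
spec:
  mtls:
    mode: STRICT               # PERMISSIVE would accept both plaintext and mTLS
```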

Network Policies

  • By default, pods are non-isolated; they accept traffic from any source.
  • NetworkPolicies help specify how a pod is allowed to communicate with various network “entities” over the network.
  • NetworkPolicies can be used to control traffic to/from pods, namespaces, or specific IP addresses.
  • A pod- or namespace-based NetworkPolicy uses a selector to specify what traffic is allowed to and from the pod(s) that match the selector.
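A sketch that isolates backend pods so that only frontend pods can reach them on port 8080 (the labels and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend    # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend             # the policy applies to these pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend        # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080
```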

Practice Network Policies Exercises

Kubernetes Auditing

  • Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a cluster for activities generated by users, by applications that use the Kubernetes API, and by the control plane itself.
  • Audit records begin their lifecycle inside the kube-apiserver component.
  • Each request on each stage of its execution generates an audit event, which is then pre-processed according to a certain policy and written to a backend.
  • Audit policy determines what’s recorded and the backends persist the records.
  • Backend implementations include logs files and webhooks.
  • Each request can be recorded with an associated stage as below
    • RequestReceived – generated as soon as the audit handler receives the request, and before it is delegated down the handler chain.
    • ResponseStarted – generated once the response headers are sent, but before the response body is sent. This stage is only generated for long-running requests (e.g. watch).
    • ResponseComplete – generated once the response body has been completed and no more bytes will be sent.
    • Panic – generated when a panic or a failure occurs.
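A sketch of an audit policy file (the resource choices and omitted stage are illustrative):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- "RequestReceived"            # skip events generated at this stage
rules:
- level: RequestResponse       # log request and response bodies for pods
  resources:
  - group: ""                  # "" = core API group
    resources: ["pods"]
- level: Metadata              # only metadata for everything else
```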

Kubernetes Audit Policy

Kubernetes kube-apiserver.yaml file with audit configuration

Practice Kubernetes Auditing Exercises

Seccomp – Secure Computing

  • Seccomp stands for secure computing mode and has been a feature of the Linux kernel since version 2.6.12.
  • Seccomp can be used to sandbox the privileges of a process, restricting the calls it is able to make from user space into the kernel.
  • Kubernetes lets you automatically apply seccomp profiles loaded onto a Node to the Pods and containers.
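A sketch of a seccomp profile that blocks every syscall except a small allowed set (the allowed list is illustrative and far from complete for a real workload):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "futex", "nanosleep"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

A pod can reference such a profile via spec.securityContext.seccompProfile with type: Localhost and a localhostProfile path relative to the kubelet’s seccomp root directory (by default /var/lib/kubelet/seccomp).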

Seccomp profile

Seccomp profile attached to the pod

Practice Seccomp Exercises


AppArmor

  • AppArmor is a Linux kernel security module that supplements the standard Linux user and group-based permissions to confine programs to a limited set of resources.
  • AppArmor can be configured for any application to reduce its potential attack surface and provide a greater in-depth defense.
  • AppArmor is configured through profiles tuned to allow the access needed by a specific program or container, such as Linux capabilities, network access, file permissions, etc.
  • Each profile can be run in either enforcing mode, which blocks access to disallowed resources or complain mode, which only reports violations.
  • AppArmor helps to run a more secure deployment by restricting what containers are allowed to do, and/or providing better auditing through system logs.
  • Use aa-status to check the AppArmor status and the profiles loaded.
  • Use apparmor_parser -q <<profile file>> to load profiles.
  • AppArmor support is in beta and is enabled through annotations of the form container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>.
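For example, assuming a profile named k8s-deny-write has already been loaded on the node with apparmor_parser:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo           # illustrative name
  annotations:
    # localhost/<profile_name> references a profile loaded on the node
    container.apparmor.security.beta.kubernetes.io/main: localhost/k8s-deny-write
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 1h"]
```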

AppArmor profile

AppArmor usage

Practice App Armor Exercises


Kubesec

  • Kubesec can be used to perform a static security risk analysis of the configuration files.

Sample configuration file

Kubesec Report

Practice Kubesec Exercises

Trivy (or Clair or Anchore)

  • Trivy is a simple and comprehensive scanner for vulnerabilities in container images, file systems, and Git repositories, as well as for configuration issues.
  • Trivy detects vulnerabilities of OS packages (Alpine, RHEL, CentOS, etc.) and language-specific packages (Bundler, Composer, npm, yarn, etc.).
  • Trivy scans Infrastructure as Code (IaC) files such as Terraform, Dockerfile, and Kubernetes, to detect potential configuration issues that expose your deployments to the risk of attack.
  • Use trivy image <<image_name>> to scan images.
  • Use the --severity flag to filter the vulnerabilities by category.

Practice Trivy Exercises


Falco

  • Falco is a runtime security tool that parses Linux system calls to detect unexpected application behavior and alert on potential threats at runtime.

Falco Architecture

  1. Falco can be installed as a package on the nodes OR as a DaemonSet on the Kubernetes cluster
  2. Falco is driven through configuration files (default /etc/falco/falco.yaml), which include
    1. Rules
      1. Name and description
      2. Condition to trigger the rule
      3. Priority – emergency, alert, critical, error, warning, notice, info, debug
      4. Output data for the event
      5. Multiple rule files can be specified, with the last one taking priority in case the same rule is defined in multiple files
    2. Log attributes for Falco, i.e. level and format
    3. Output file and format, i.e. JSON or text
    4. Alerts output destination, which includes stdout, file, HTTP, etc.
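The rule structure above can be sketched as a single entry in a rules file (the condition and output fields are illustrative):

```yaml
- rule: Terminal shell in container          # name of the rule
  desc: A shell was spawned inside a container
  condition: container.id != host and proc.name = bash
  output: "Shell in container (user=%user.name container=%container.name)"
  priority: WARNING
```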

Practice Falco Exercises

Reduce Attack Surface

  • Follow the principle of least privilege and limit access.
  • Limit node access:
    • keep nodes private
    • disable login using the root account (PermitRootLogin No) and use privilege escalation using sudo
    • disable password-based authentication (PasswordAuthentication No) and use SSH keys
  • Remove any unwanted packages.
  • Block or close unwanted ports.
  • Keep the base image light and limited to the bare minimum required.
  • Identify and fix any open ports.
