AWS Certified AI Practitioner AIF-C01 Exam Learning Path

  • Started the AI journey by clearing the AWS Certified AI Practitioner AIF-C01 exam with a perfect score.
  • AWS Certified AI Practitioner AIF-C01 exam is the latest AWS exam released on October 8, 2024, following its beta period.
  • Candidates who earn this certification by February 15, 2025, receive an additional Early Adopter digital badge.
  • AI Practitioner exam validates knowledge of AI/ML, generative AI technologies, and associated AWS services and tools, independent of a specific job role.
  • Exam also validates a candidate’s ability to complete the following tasks:
    • Understand AI, ML, and generative AI concepts, methods, and strategies in general and on AWS.
    • Understand the appropriate use of AI/ML and generative AI technologies to ask relevant questions within the candidate’s organization.
    • Determine the correct types of AI/ML technologies to apply to specific use cases.
    • Use AI, ML, and generative AI technologies responsibly.

Refer to the AWS Certified AI Practitioner AIF-C01 Exam Guide

AWS Certified AI Practitioner AIF-C01 Exam Summary

  • AIF-C01 exam consists of 65 questions (50 scored and 15 unscored) in 90 minutes, and the time is more than sufficient if you are well-prepared.
  • In addition to the usual multiple-choice and multiple-response question types, the AIF-C01 exam introduces the following new types:
    • Ordering: Has a list of 3-5 responses which you need to select and place in the correct order to complete a specified task.
    • Matching: Has a list of responses to match with a list of 3-7 prompts. You must match all the pairs correctly to receive credit for the question.
    • Case study: A case study presents a single scenario with multiple questions. Each question is evaluated independently, and credit is given for each correct answer.
  • AIF-C01 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 700.
  • The AIF-C01 exam is a Foundational-level exam and currently costs $100 + tax.
  • You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for a Foundational exam like this one, but it is helpful for the Professional and Specialty ones.
  • AWS exams can be taken either in person at a test center or online. I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking the AWS online exam for the first time, try to join at least 30 minutes before the scheduled time, as I have had long wait times with both PSI and Pearson.

AWS Certified AI Practitioner AIF-C01 Exam Resources

AWS Certified AI Practitioner AIF-C01 Exam Topics

The AIF-C01 exam covers AI and ML in terms of AI & ML fundamentals, the ML lifecycle, generative AI, AI use cases and applications, and building secure, responsible AI.

Machine Learning Concepts

  • Exploratory Data Analysis
    • Feature selection and Engineering
      • Remove features that are not relevant to the prediction target.
      • Remove features that have constant values, very low correlation, very little variance, or a lot of missing values.
      • Apply techniques like Principal Component Analysis (PCA) for dimensionality reduction, i.e., reducing the number of features.
      • Apply techniques such as one-hot encoding and label encoding to convert strings to numeric values, which are easier to process.
      • Apply normalization, i.e., scaling values to between 0 and 1, to handle features with large variance.
      • Apply feature engineering for feature reduction, e.g., combining height and weight into a single feature instead of keeping both. (A short feature-preparation sketch follows this list.)
    • Handle Missing data
      • Remove the feature or the rows with missing data.
      • Impute using mean/median values – valid only for numeric features (not categorical ones) and does not factor in correlation between features.
      • Impute using k-NN, Multivariate Imputation by Chained Equations (MICE), or deep learning – more accurate, and factors in correlation between features.
    • Handle unbalanced data
      • Source more data
      • Oversample the minority class or undersample the majority class.
      • Data augmentation using techniques like Synthetic Minority Oversampling Technique (SMOTE).
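
A minimal scikit-learn sketch of the preparation steps above; the column names and values are made-up illustrations, not tied to any AWS service, and the SMOTE note assumes the separate imbalanced-learn package:

```python
# Hypothetical feature-preparation example.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

df = pd.DataFrame({
    "color": ["red", "blue", "red", "green"],   # categorical feature
    "height": [1.7, np.nan, 1.6, 1.8],          # numeric, one missing value
    "weight": [70.0, 80.0, np.nan, 85.0],
})

# One-hot encode the categorical column into numeric indicator columns
# (sparse_output requires scikit-learn >= 1.2).
encoded = OneHotEncoder(sparse_output=False).fit_transform(df[["color"]])

# Impute missing numeric values with k-NN, which factors in correlation
# between features, unlike a simple mean/median fill.
numeric = KNNImputer(n_neighbors=2).fit_transform(df[["height", "weight"]])

# Normalize numeric features into the 0-1 range to tame large variance.
normalized = MinMaxScaler().fit_transform(numeric)

features = np.hstack([encoded, normalized])
print(features.shape)  # (4, 5): 3 one-hot columns + 2 scaled numeric columns

# For unbalanced labels, SMOTE from the separate imbalanced-learn package
# can synthesize minority-class examples:
#   from imblearn.over_sampling import SMOTE
#   X_res, y_res = SMOTE().fit_resample(features, y)
```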
  • Modeling
    • Transfer learning (TL) is a machine learning (ML) technique where a model pre-trained on one task is fine-tuned for a new, related task.
    • Know about the algorithm categories – supervised, unsupervised, and reinforcement learning – and which algorithm is best suited based on whether the available data is labelled or unlabelled.
      • Supervised learning trains on labelled data, e.g., Linear Regression, Logistic Regression, Decision Trees, Random Forests.
      • Unsupervised learning trains on unlabelled data, e.g., PCA, SVD, K-Means.
      • Reinforcement learning trains based on actions and rewards, e.g., Q-Learning.
    • Hyperparameters
      • are parameters exposed by machine learning algorithms that control how the underlying algorithm operates; their values affect the quality of the trained models.
      • some of the common hyperparameters are learning rate, batch size, and number of epochs (hint: if the learning rate is too large, the minimum might be overshot and the loss would oscillate; if the learning rate is too small, too many steps are required, making training slower and less efficient – illustrated in the sketch below).
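
A toy gradient-descent loop on f(x) = x² that makes the learning-rate hint concrete; it is purely illustrative and not tied to any AWS API:

```python
# Minimize f(x) = x**2 (minimum at x = 0) with plain gradient descent.
def gradient_descent(learning_rate, steps=50, x=5.0):
    for _ in range(steps):
        grad = 2 * x              # derivative of x**2
        x -= learning_rate * grad
    return x

print(gradient_descent(0.01))  # too small: ~1.8, still far from 0 after 50 steps
print(gradient_descent(0.1))   # reasonable: ~0.00007, converges close to 0
print(gradient_descent(1.1))   # too large: oscillates and diverges to a huge value
```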
  • Evaluation
    • Know the difference in the metrics used for evaluating model accuracy
      • Use Area Under the (Receiver Operating Characteristic) Curve (AUC) for Binary classification
      • Use root mean square error (RMSE) metric for regression
    • Understand Confusion matrix
      • A true positive is an outcome where the model correctly predicts the positive class. Similarly, a true negative is an outcome where the model correctly predicts the negative class.
      • A false positive is an outcome where the model incorrectly predicts the positive class. A false negative is an outcome where the model incorrectly predicts the negative class.
      • Recall or Sensitivity or TPR (True Positive Rate): the number of items correctly identified as positive out of the total true positives – TP/(TP+FN) (hint: use this for cases like fraud detection, where the cost of marking non-fraud as fraud is lower than marking fraud as non-fraud).
      • Specificity or TNR (True Negative Rate): the number of items correctly identified as negative out of the total negatives – TN/(TN+FP) (hint: use this for cases like videos for kids, where the cost of dropping a few valid videos is lower than showing a few bad ones). A sketch computing these metrics appears at the end of this section.
    • Training Problems
      • Overfitting occurs when the machine learning model gives accurate predictions for training data but not for new data.
      • Underfitting occurs when the model cannot determine a meaningful relationship between the input and output data. Models underfit when they have not been trained long enough or on enough data points.
      • Underfit models experience high bias—they give inaccurate results for both the training data and test set. On the other hand, overfit models experience high variance—they give accurate results for the training set but not for the test set. More model training results in less bias but variance can increase. Data scientists aim to find the sweet spot between underfitting and overfitting when fitting a model. A well-fitted model can quickly establish the dominant trend for seen and unseen data sets. 
    • Handle Overfitting problems
      • Simplify the model, e.g., by reducing the number of layers
      • Early Stopping – form of regularization while training a model with an iterative method, such as gradient descent
      • Data Augmentation
      • Regularization – technique to reduce the complexity of the model
      • Dropout is a regularization technique that prevents overfitting
      • Never train on test data
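
A short scikit-learn sketch of the confusion-matrix metrics described under Evaluation above; the labels are made up purely for illustration:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = positive class (e.g. fraud)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # model predictions

# For binary labels, ravel() returns counts in the order TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

recall = tp / (tp + fn)        # TPR / sensitivity: catch as many positives as possible
specificity = tn / (tn + fp)   # TNR: avoid flagging negatives as positive
print(f"recall={recall:.2f} specificity={specificity:.2f}")  # both 0.75 here
```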

Generative AI

  • Foundation Models:
    • Large, pre-trained models built on diverse data that can be fine-tuned for specific tasks like text, image, and speech generation, e.g., GPT, BERT, and DALL·E.
  • Large Language Models (LLMs):
    • A subset of foundation models designed to understand and generate human-like text. Capable of answering questions, summarizing, translating, and more.
    • LLM Components
      • Tokens:
        • Basic units of text (words, subwords, or characters) that LLMs process.
      • Vectors:
        • Numerical representations of tokens in high-dimensional space, enabling the model to perform mathematical operations on text.
        • Each token is converted into a vector for processing in the neural network.
      • Embeddings:
        • Pre-trained numerical vector representations of tokens that capture their semantic meaning.
  • Prompt Engineering:
    • Crafting effective input instructions to guide generative AI toward desired outputs. Key for improving performance without fine-tuning the model.
    • Techniques
      • Zero-Shot Prompting:
        • Instructs the model to perform a task without providing examples.
      • Few-Shot Prompting:
        • Provides a few examples of the task in the prompt to guide the model’s output.
      • Chain-of-Thought Prompting:
        • Encourages the model to explain its reasoning step-by-step before giving the final answer.
      • Instruction Prompting:
        • Provides explicit instructions to guide the model’s behavior.
      • Contextual Prompting:
        • Includes additional context or background information in the prompt for better responses.
      • Iterative Refinement:
        • Refines the prompt in multiple iterations based on model responses to improve accuracy.
      • Role-based Prompting:
        • Assigns a role to the model to influence its tone or expertise.
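
Illustrative prompt strings for three of the techniques above; they are model-agnostic examples, not tied to a specific AWS API:

```python
# Zero-shot: ask for the task with no examples.
zero_shot = "Classify the sentiment of this review: 'The battery died in a day.'"

# Few-shot: show a couple of worked examples before the real input.
few_shot = """Classify the sentiment of each review.
Review: 'Loved it, works perfectly.' -> positive
Review: 'Broke after one use.' -> negative
Review: 'The battery died in a day.' ->"""

# Chain-of-thought: explicitly ask for step-by-step reasoning.
chain_of_thought = (
    "A train travels 120 km in 2 hours. How far does it travel in 5 hours? "
    "Think step by step before giving the final answer."
)
```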
  • Retrieval-Augmented Generation (RAG):
    • Combines LLMs with external knowledge bases to retrieve accurate and up-to-date information during text generation. Useful for chatbots and domain-specific tasks.
  • Fine-Tuning:
    • Adjusting pre-trained models using domain-specific data to optimize performance for specific applications.
  • Responsible AI Features:
    • Incorporates fairness, transparency, and bias mitigation techniques to ensure ethical AI outputs.
  • Multi-Modal Capabilities:
    • Models that process and generate outputs across multiple data types, such as text, images, and audio.
  • Vector database
    • provides the ability to store and retrieve vectors as high-dimensional points.
    • add additional capabilities for efficient and fast lookup of nearest-neighbors in the N-dimensional space.
    • AWS natively supports vector search through OpenSearch Service and Aurora PostgreSQL with pgvector; partner solutions like Pinecone, Weaviate, and Milvus are also available.
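
A back-of-the-envelope nearest-neighbour lookup over embedding vectors with numpy, mimicking what a vector database does at scale; the 3-dimensional vectors are hypothetical stand-ins for real embeddings (e.g. from an embedding model like Titan Embeddings):

```python
import numpy as np

docs = ["cats are pets", "dogs are pets", "stocks fell today"]
doc_vecs = np.array([[0.9, 0.1, 0.0],     # made-up embedding vectors
                     [0.8, 0.2, 0.1],
                     [0.0, 0.1, 0.9]])
query_vec = np.array([0.85, 0.15, 0.05])  # embedding of the query "my kitten"

# Cosine similarity = dot product of L2-normalized vectors.
def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

scores = normalize(doc_vecs) @ normalize(query_vec)
print(docs[int(np.argmax(scores))])  # -> "cats are pets"
```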
  • Controls
    • Temperature:
      • Adjusts randomness in the output; lower values produce focused results, while higher values generate creative outputs. Essential for creative tasks or deterministic responses.
      • Lower values (e.g., 0.2) make the output more focused and deterministic, while higher values (e.g., 1.0 or above) make it more creative and diverse.
    • Top P (Nucleus Sampling):
      • Determines the probability threshold for token selection, e.g., with Top P = 0.9, the model considers only the smallest set of tokens whose cumulative probability is 90%, filtering out less likely options.
    • Top K:
      • Limits the token selection to the top K most probable tokens, e.g., with Top K = 10, the model chooses tokens only from the 10 most likely options, providing more control over diversity.
    • Token Length (Max Tokens):
      • Sets the maximum number of tokens the model can generate in a response.
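
A hedged boto3 sketch showing how these controls can be passed when invoking a model on Bedrock; the parameter names (temperature, top_p, top_k, max_tokens) follow Anthropic's Claude request format, other providers use different names, and the model ID and region are illustrative:

```python
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,     # token length: cap on the response size
    "temperature": 0.2,    # low value -> focused, near-deterministic output
    "top_p": 0.9,          # nucleus sampling: smallest set covering 90% probability
    "top_k": 50,           # sample only from the 50 most likely tokens
    "messages": [{"role": "user", "content": "Summarize what RAG is in one line."}],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```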
  • Model Evaluation Metrics:
    • Techniques like BLEU, ROUGE, perplexity, and embedding-based scores (e.g., BERTScore) measure generative AI performance across different use cases.
    • ROUGE (Recall-Oriented Understudy for Gisting Evaluation):
      • Commonly used for text summarization tasks.
      • Compares overlap between the generated text and reference text, focusing on n-grams, word sequences, and longest common subsequences.
    • BERTScore:
      • Evaluates text generation tasks by comparing contextual embeddings from BERT for candidate and reference texts.
      • Captures semantic similarity beyond simple n-gram overlap.
    • Perplexity:
      • Used for language models to evaluate how well a model predicts a sample.
      • Lower perplexity indicates a better predictive model.
    • BLEU (Bilingual Evaluation Understudy):
      • Evaluates machine translation tasks by comparing the generated text against reference translations.
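
A toy ROUGE-1 recall computation in plain Python to make the n-gram-overlap idea concrete; real evaluations use a library (e.g. the rouge-score package), and this simplified version ignores stemming and repeated words:

```python
def rouge1_recall(candidate: str, reference: str) -> float:
    cand = set(candidate.lower().split())
    ref = set(reference.lower().split())
    return len(cand & ref) / len(ref)  # overlapping unigrams / reference unigrams

reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
print(f"{rouge1_recall(candidate, reference):.2f}")  # 0.80: 4 of 5 unique words overlap
```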
  • Limitations
    • Security: can be exploited to create malicious content, phishing attacks, or deepfakes.
    • Cost: Training and deploying large models require substantial computational resources, making them expensive.
    • Explainability: Decision-making process of generative models is often a “black box,” making them hard to interpret.
    • Hallucination: Models may confidently generate false or nonsensical outputs that appear accurate.
    • Toxicity: Without proper safeguards, AI can produce harmful, biased, or offensive content.
    • Creativity: While impressive, AI-generated content often lacks true originality and may rely on existing patterns.
    • Data Dependency: Quality of generated outputs depends heavily on the quality and diversity of the training data.
    • Regulation: Legal and ethical concerns surrounding misuse and intellectual property are yet to be fully addressed.
    • Latency: Real-time applications may experience delays due to the high computational demands of generative models.

AI Services

Bedrock

  • is a fully managed service that offers a choice of industry-leading foundation models (FMs), along with a broad set of capabilities needed to build generative AI applications, simplifying development with security, privacy, and responsible AI, without the need to manage the underlying infrastructure.
  • supports foundation models from Amazon (Titan), Anthropic (Claude), Stability AI, Cohere, Meta (Llama), Mistral AI, and others.
  • supports custom fine-tuning of FMs using labeled data, or the continued pre-training feature to customize a model using unlabeled data.
  • supports Retrieval Augmented Generation (RAG) to enhance model responses with real-time, context-specific data retrieval from external knowledge bases.
  • Knowledge Bases 
    • Integrate custom datasets to tailor models for specific use cases and improve accuracy.
    • provides access to additional data that helps the model generate more relevant, context-specific, and accurate responses without continually retraining the FM.
  • Agents
    • are fully managed capabilities that can help build and deploy intelligent agents to automate workflows and enhance user interactions.
    • can complete complex tasks for a wide range of use cases and deliver up-to-date answers based on proprietary knowledge sources.
  • Guardrails
    • help implement safeguards for the generative AI applications based on the use cases and responsible AI policies.
    • helps control the interaction between users and FMs by filtering undesirable and harmful content and will soon redact personally identifiable information (PII), enhancing content safety and privacy in generative AI applications. 
    • helps continually monitor and analyze user inputs and FM responses that might violate customer-defined policies.
  • Pricing modes
    • On-Demand Throughput Mode:
      • Automatically scales based on request traffic, allowing flexible usage without the need for pre-configuration. Ideal for variable or unpredictable workloads.
    • Provisioned Throughput Mode:
      • Allows pre-allocating capacity to handle consistent or high-volume workloads, offering predictable performance and cost optimization.
      • Bedrock supports only Provisioned Throughput Mode for customized fine-tuned models to ensure stable and reliable performance during inference.
  • Model Evaluation: Test and evaluate foundation models to ensure they meet performance and accuracy benchmarks for your applications.
  • Responsible AI Support: Tools and guidance to monitor, mitigate, and reduce biases while ensuring fairness and ethical AI use.
  • Security
    • S3 allows storing and managing data securely with fine-grained access controls and encryption.
    • AWS PrivateLink allows accessing Bedrock entirely within the VPC, ensuring secure communication and isolation from public networks without the use of an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
  • Scalability and Cost Efficiency: Automatically scales to meet workload demands with a pay-as-you-go pricing model.
  • Model Invocation Logging
    • helps collect invocation logs, model input data, and model output data for all invocations in the AWS account used in Amazon Bedrock.
    • includes full request data, response data, and metadata associated with all calls.
    • supported destinations include CloudWatch Logs and S3.
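
A hedged boto3 sketch of calling a Bedrock model through the Converse API with a guardrail attached, tying together the inference controls and Guardrails described above; the model ID, guardrail ID, and version are placeholders for resources in your own account:

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",   # example model ID
    messages=[{"role": "user", "content": [{"text": "What is our refund policy?"}]}],
    inferenceConfig={"temperature": 0.2, "maxTokens": 300},
    guardrailConfig={
        "guardrailIdentifier": "gr-example-id",  # hypothetical guardrail ID
        "guardrailVersion": "1",
    },
)
print(response["output"]["message"]["content"][0]["text"])
```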

SageMaker

  • supports Model tracking capability to manage up to thousands of machine learning model experiments
  • supports automatic scaling for production variants. Automatic scaling dynamically adjusts the number of instances provisioned for a production variant in response to changes in your workload
  • provides pre-built Docker images for its built-in algorithms and the supported deep learning frameworks used for training & inference
  • Elastic Inference (now deprecated in favor of options like AWS Inferentia) helped attach low-cost GPU-powered acceleration to EC2 and SageMaker instances or ECS tasks to reduce the cost of running deep learning inference.
  • SageMaker Inference options.
    • Real-time inference is ideal for online inferences that have low latency or high throughput requirements.
    • Serverless Inference is ideal for intermittent or unpredictable traffic patterns as it manages all of the underlying infrastructure with no need to manage instances or scaling policies.
    • Batch Transform is suitable for offline processing when large amounts of data are available upfront and you don’t need a persistent endpoint.
    • Asynchronous Inference is ideal when you want to queue requests and have large payloads with long processing times.
  • SageMaker Model deployment allows deploying multiple variants of a model to the same SageMaker endpoint to test new models without impacting the user experience.
  • SageMaker Managed Spot Training can use Spot instances to save cost, and its checkpointing feature can save the state of ML models during training (see the sketch at the end of this section).
  • SageMaker Feature Store
    • helps to create, share, and manage features for ML development.
    • is a centralized store for features and associated metadata so features can be easily discovered and reused.
  • SageMaker Debugger provides tools to debug training jobs and resolve problems such as overfitting, saturated activation functions, and vanishing gradients to improve the model’s performance.
  • SageMaker Model Monitor monitors the quality of SageMaker machine learning models in production and can help set alerts that notify when there are deviations in the model quality.
  • SageMaker Automatic Model Tuning helps find a set of hyperparameters for an algorithm that can yield an optimal model.
  • SageMaker Data Wrangler
    • reduces the time it takes to aggregate and prepare tabular and image data for ML from weeks to minutes.
    • simplifies the process of data preparation (including data selection, cleansing, exploration, visualization, and processing at scale) and feature engineering.
  • SageMaker Experiments is a capability of SageMaker that lets you create, manage, analyze, and compare machine learning experiments.
  • SageMaker Clarify
    • helps improve the ML models by detecting potential bias and helping to explain the predictions that the models make.
    • generates analysis like SHAP analysis, computer vision explainability analysis, and partial dependence plots (PDPs) analysis that can aid in bias analysis.
  • SageMaker Model Governance is a framework that gives systematic visibility into ML model development, validation, and usage.
  • SageMaker Model Cards
    • helps document critical details about the ML models in a single place for streamlined governance and reporting.
    • helps capture key information about the models throughout their lifecycle and implement responsible AI practices.
  • SageMaker Autopilot is an automated machine learning (AutoML) feature set that automates the end-to-end process of building, training, tuning, and deploying machine learning models.
  • SageMaker Neo enables machine learning models to train once and run anywhere in the cloud and at the edge.
  • SageMaker API and SageMaker Runtime support VPC interface endpoints powered by AWS PrivateLink that helps connect VPC directly to the SageMaker API or SageMaker Runtime using AWS PrivateLink without using an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
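
A hedged SageMaker Python SDK sketch of Managed Spot Training with checkpointing, as referenced above; the image URI, role ARN, and S3 paths are placeholders:

```python
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="<training-image-uri>",   # e.g. a built-in algorithm container
    role="<execution-role-arn>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    use_spot_instances=True,            # train on Spot capacity to cut cost
    max_run=3600,                       # max training seconds
    max_wait=7200,                      # total wait incl. Spot interruptions (>= max_run)
    checkpoint_s3_uri="s3://my-bucket/checkpoints/",  # resume from here if interrupted
    output_path="s3://my-bucket/output/",
)
estimator.fit({"train": "s3://my-bucket/train/"})
```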

SageMaker Ground Truth

  • provides automated data labeling using machine learning
  • helps build highly accurate training datasets for machine learning quickly using Amazon Mechanical Turk
  • provides annotation consolidation to help improve the accuracy of the data objects' labels. It combines the results of multiple workers' annotation tasks into one high-fidelity label.
  • automated data labeling uses machine learning to label portions of the data automatically without having to send them to human workers

AI Managed Services

  • Amazon Q Business
    • is a fully managed, generative-AI powered assistant that can be configured to answer questions, provide summaries, generate content, and complete tasks based on your enterprise data.
  • Comprehend
    • natural language processing (NLP) service to find insights and relationships in text.
    • identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is; analyzes text using tokenization and parts of speech; and automatically organizes a collection of text files by topic.
  • Lex
    • provides conversational interfaces using voice and text, helpful in building voice and text chatbots
  • Polly
    • converts text into lifelike speech (text-to-speech)
    • supports Speech Synthesis Markup Language (SSML) tags like prosody so users can adjust the speech rate, pitch or volume.
    • supports pronunciation lexicons to customize the pronunciation of words
  • Rekognition – analyze images and video
    • helps identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content.
  • Translate – natural and fluent language translation
  • Transcribe – automatic speech recognition (ASR) speech-to-text
  • Kendra – an intelligent search service that uses NLP and advanced ML algorithms to return specific answers to search questions from your data.
  • Panorama brings computer vision to the on-premises camera network.
  • Augmented AI (Amazon A2I) is an ML service that makes it easy to build the workflows required for human review.
  • Forecast – delivers highly accurate time-series forecasts.
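
A hedged boto3 sketch chaining three of the managed services above: sentiment analysis with Comprehend, translation with Translate, and text-to-speech with Polly; the region, text, and voice are illustrative choices:

```python
import boto3

region = "us-east-1"
text = "The new checkout flow is fantastic!"

# Comprehend: detect how positive or negative the text is.
comprehend = boto3.client("comprehend", region_name=region)
print(comprehend.detect_sentiment(Text=text, LanguageCode="en")["Sentiment"])

# Translate: natural and fluent language translation.
translate = boto3.client("translate", region_name=region)
print(translate.translate_text(
    Text=text, SourceLanguageCode="en", TargetLanguageCode="es"
)["TranslatedText"])

# Polly: convert the text into lifelike speech.
polly = boto3.client("polly", region_name=region)
audio = polly.synthesize_speech(Text=text, OutputFormat="mp3", VoiceId="Joanna")
with open("speech.mp3", "wb") as f:
    f.write(audio["AudioStream"].read())
```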

Security, Identity & Compliance

  • AWS Artifact is a self-service audit artifact retrieval portal that provides our customers with on-demand access to AWS’ compliance documentation and AWS agreements.
  • SageMaker can read data from KMS-encrypted S3 buckets. Make sure the KMS key policy grants access to the IAM role attached to SageMaker.
  • AWS Identity and Access Management (IAM) helps an administrator securely control access to AWS resources.
  • Amazon Inspector
    • is a vulnerability management service that continuously scans workloads for software vulnerabilities and unintended network exposure.
    • can assess EC2 instances and ECR repositories to provide detailed findings and recommendations for remediation.

Management & Governance Tools

  • Understand AWS CloudWatch for logs and metrics (hint: SageMaker & Bedrock are integrated with CloudWatch for logs and metrics).
  • Understand CloudTrail to monitor and log API calls in AWS accounts.
  • CloudTrail records contain the API event, the user who made the API call, and the time that the call was made.

Whitepapers and articles

On the Exam Day

  • Make sure you are relaxed and get a good night's sleep. The exam is not tough if you are well-prepared.
  • If you are taking the AWS Online exam
    • Try to join at least 30 minutes before the scheduled time, as I have had long wait times with both PSI and Pearson.
    • The online verification process does take some time, and there are usually glitches.
    • Remember, you would not be allowed to take the exam if you are late by more than 30 minutes.
    • Make sure your desk is clear, with no wristwatches or external monitors, keep your phone away, and ensure nobody can enter the room.

Finally, All the Best 🙂
