AWS Certified Machine Learning – Specialty (MLS-C01) Exam Learning Path
- Finally re-certified the updated AWS Certified Machine Learning – Specialty (MLS-C01) certification exam after 3 months of preparation.
- In terms of difficulty, of all the professional and specialty certifications I find this to be the toughest, partly because I am still diving deep into machine learning and relearned everything from the basics for this certification.
- Machine Learning is a vast specialization in itself, and with AWS services there is a lot to cover and know for the exam. This is the only exam where the majority of the focus is on concepts outside of AWS, i.e. pure machine learning. It also includes AWS Machine Learning and Data Engineering services.
AWS Certified Machine Learning – Specialty (MLS-C01) Exam Content
- AWS Certified Machine Learning – Specialty (MLS-C01) exam validates the ability to
- Select and justify the appropriate ML approach for a given business problem.
- Identify appropriate AWS services to implement ML solutions.
- Design and implement scalable, cost-optimized, reliable, and secure ML solutions.
Refer to the AWS Certified Machine Learning – Specialty Exam Guide for details
AWS Certified Machine Learning – Specialty (MLS-C01) Exam Summary
- Specialty exams are tough, lengthy, and tiresome. Most of the questions and answer options have a lot of prose and require a lot of reading, so be sure you are prepared and manage your time well.
- MLS-C01 exam has 65 questions to be solved in 170 minutes which gives you roughly 2 1/2 minutes to attempt each question.
- MLS-C01 exam includes two types of questions, multiple-choice and multiple-response.
- MLS-C01 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 750.
- Specialty exams currently cost $300 + tax.
- You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
- As always, mark the questions for review, move on, and come back to them after you are done with all.
- As always, having a rough architecture or mental picture of the setup helps you focus on the areas you need to improve. Trust me, you will be able to eliminate two answers for sure and then need to focus only on the other two. Read those two carefully to spot where they differ; that will help you reach the right answer or at least give you a 50% chance of getting it right.
- AWS exams can be taken either at a testing center or online (remotely proctored); I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
- Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
AWS Certified Machine Learning – Specialty (MLS-C01) Exam Resources
- Online Courses
- Practice tests
AWS Certified Machine Learning – Specialty (MLS-C01) Exam Topics
- AWS Certified Machine Learning – Specialty exam covers a lot of machine learning concepts. It digs deep into machine learning concepts, most of which are not related to AWS.
- AWS Certified Machine Learning – Specialty exam covers the end-to-end machine learning lifecycle, right from data collection and transformation, making data usable and efficient for machine learning, pre-processing data for machine learning, training and validation, through to implementation.
Machine Learning Concepts
- Exploratory Data Analysis
- Feature selection and Engineering
- remove features that are not related to training
- remove features that have the same values, very low correlation, very little variance, or a lot of missing values
- Apply techniques like Principal Component Analysis (PCA) for dimensionality reduction i.e. reduce the number of features.
- Apply techniques such as One-hot encoding and label encoding to help convert strings to numeric values, which are easier to process.
- Apply normalization, i.e. scale values to between 0 and 1, to handle data with large variance.
- Apply feature engineering for feature reduction e.g. using a single height/weight feature instead of both features.
- Handle Missing data
- remove the feature or rows with missing data
- impute using mean/median values – valid only for numeric features and not categorical ones; also does not factor in correlation between features
- impute using k-NN, Multivariate Imputation by Chained Equation (MICE), or Deep Learning – more accurate and factors in correlation between features
- Handle unbalanced data
- Source more data
- Oversample minority or Undersample majority
- Data augmentation using techniques like Synthetic Minority Oversampling Technique (SMOTE); a short data-preparation sketch covering these techniques follows this list.
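A short data-preparation sketch of the techniques above (one-hot encoding, imputation, and normalization) using pandas and scikit-learn. The column names and values are made-up examples; for unbalanced data, SMOTE would come from the separate imbalanced-learn package.

```python
# Minimal preprocessing sketch with made-up columns ("city", "age", "income", "label").
import pandas as pd
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "city":   ["SFO", "NYC", None, "SFO"],
    "age":    [25, None, 41, 33],
    "income": [72000, 54000, None, 91000],
    "label":  [0, 1, 0, 1],
})

# One-hot encode the categorical (string) feature into numeric columns.
df = pd.get_dummies(df, columns=["city"], dummy_na=True)

# Impute missing numeric values: mean imputation is simple but ignores correlation
# between features; k-NN imputation uses similar rows and is usually more accurate.
num_cols = ["age", "income"]
df[num_cols] = SimpleImputer(strategy="mean").fit_transform(df[num_cols])
# df[num_cols] = KNNImputer(n_neighbors=2).fit_transform(df[num_cols])  # alternative

# Normalize numeric features to the 0-1 range to handle large variance.
df[num_cols] = MinMaxScaler().fit_transform(df[num_cols])
print(df.head())

# For unbalanced data, SMOTE from the imbalanced-learn package could oversample
# the minority class, e.g. imblearn.over_sampling.SMOTE().fit_resample(X, y).
```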
- Modeling
- Know about algorithms – Supervised, Unsupervised, and Reinforcement – and which algorithm is best suited based on whether the available data is labelled or unlabelled.
- Supervised learning trains on labeled data e.g. Linear Regression, Logistic Regression, Decision Trees, Random Forests
- Unsupervised learning trains on unlabelled data e.g. PCA, SVD, K-means
- Reinforcement learning trains based on actions and rewards e.g. Q-Learning
- Hyperparameters
- are parameters exposed by machine learning algorithms that control how the underlying algorithm operates and their values affect the quality of the trained models
- some of the common hyperparameters are learning rate, batch size, and epochs (hint: if the learning rate is too large, the minimum might be missed and the loss would oscillate; if the learning rate is too small, it requires too many steps, which takes longer and is less efficient – see the gradient-descent sketch below)
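A tiny sketch of the learning-rate hint above: plain gradient descent minimizing f(x) = x², showing that a too-large learning rate overshoots and oscillates while a too-small one barely makes progress. The numbers are only illustrative.

```python
# Gradient descent on f(x) = x**2, whose gradient is 2x, starting at x = 5.
def gradient_descent(lr, steps=20, x=5.0):
    for _ in range(steps):
        x = x - lr * 2 * x  # update: x <- x - lr * f'(x)
    return x

print(gradient_descent(lr=1.1))    # too large: overshoots the minimum and oscillates/diverges
print(gradient_descent(lr=0.001))  # too small: barely moves in 20 steps
print(gradient_descent(lr=0.1))    # reasonable: converges close to the minimum at 0
```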
- Evaluation
- Know the different metrics for evaluating model accuracy
- Use Area Under the (Receiver Operating Characteristic) Curve (AUC) for binary classification
- Use the root mean square error (RMSE) metric for regression
- Understand Confusion matrix
- A true positive is an outcome where the model correctly predicts the positive class. Similarly, a true negative is an outcome where the model correctly predicts the negative class.
- A false positive is an outcome where the model incorrectly predicts the positive class. A false negative is an outcome where the model incorrectly predicts the negative class.
- Recall or Sensitivity or TPR (True Positive Rate): number of items correctly identified as positive out of total true positives – TP / (TP + FN) (hint: use this for cases like fraud detection, where the cost of marking non-fraud as fraud is lower than marking fraud as non-fraud)
- Specificity or TNR (True Negative Rate): number of items correctly identified as negative out of total negatives – TN / (TN + FP) (hint: use this for cases like videos for kids, where the cost of dropping a few valid videos is lower than showing a few bad ones). A short metrics sketch follows these items.
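A short sketch of these metrics with scikit-learn; the labels and scores below are toy values for illustration only.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, recall_score, roc_auc_score, mean_squared_error

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                   # actual binary labels
y_pred  = [1, 0, 0, 1, 0, 1, 1, 0]                   # predicted labels
y_score = [0.9, 0.2, 0.4, 0.8, 0.1, 0.7, 0.6, 0.3]   # predicted probabilities

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
recall      = tp / (tp + fn)    # Recall / Sensitivity / TPR
specificity = tn / (tn + fp)    # Specificity / TNR
print(recall, recall_score(y_true, y_pred))  # same value via the formula and the helper
print(specificity)

# AUC for binary classification uses the scores, not the hard labels.
print(roc_auc_score(y_true, y_score))

# RMSE for a regression example (separate toy values).
reg_true, reg_pred = [3.0, 5.0, 2.5], [2.5, 5.0, 3.0]
print(np.sqrt(mean_squared_error(reg_true, reg_pred)))
```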
- Handle Overfitting problems
- Simplify the model by reducing the number of layers
- Early Stopping – a form of regularization while training a model with an iterative method, such as gradient descent
- Data Augmentation
- Regularization – a technique to reduce the complexity of the model
- Dropout is a regularization technique that prevents overfitting (a short sketch of dropout and early stopping follows this list)
- Never train on test data
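A minimal Keras-style sketch (assuming TensorFlow is installed) of two of the remedies above, dropout and early stopping; the synthetic data and layer sizes are arbitrary.

```python
import numpy as np
import tensorflow as tf

# Synthetic binary-classification data, only to make the example runnable.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),                  # randomly drops units to regularize
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping: stop when validation loss stops improving and keep the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                              restore_best_weights=True)
model.fit(X, y, validation_split=0.2, epochs=50, callbacks=[early_stop], verbose=0)
```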
Machine Learning Services
- SageMaker
- supports File mode, Pipe mode, and Fast File mode
- File mode loads all of the data from S3 to the training instance volumes, whereas Pipe mode streams data directly from S3
- File mode needs disk space to store both the final model artifacts and the full training dataset, whereas Pipe mode helps reduce the required size of EBS volumes
- Fast File mode combines the ease of use of the existing File Mode with the performance of Pipe Mode.
- Using the RecordIO format allows algorithms to take advantage of Pipe mode when training the algorithms that support it (a Pipe-mode estimator sketch follows these items).
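A hedged sketch of requesting Pipe mode with the SageMaker Python SDK (v2-style names); the image URI, role ARN, and S3 paths below are placeholders, not values from the post.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
estimator = Estimator(
    image_uri="<built-in-algorithm-image-uri>",                 # placeholder
    role="arn:aws:iam::111122223333:role/SageMakerRole",        # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    input_mode="Pipe",                    # stream data from S3 instead of copying it to disk
    output_path="s3://my-bucket/output/",
    sagemaker_session=session,
)

# RecordIO-protobuf data lets supporting algorithms take full advantage of Pipe mode.
train_input = TrainingInput("s3://my-bucket/train/",
                            content_type="application/x-recordio-protobuf")
estimator.fit({"train": train_input})
```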
- supports Model tracking capability to manage up to thousands of machine learning model experiments
- supports automatic scaling for production variants. Automatic scaling dynamically adjusts the number of instances provisioned for a production variant in response to changes in your workload
- provides pre-built Docker images for its built-in algorithms and the supported deep learning frameworks used for training & inference
- SageMaker Automatic Model Tuning
- is the process of finding a set of hyperparameters for an algorithm that can yield an optimal model.
- Best practices
- limit the search to a smaller number of hyperparameters, as the difficulty of a hyperparameter tuning job depends primarily on the number of hyperparameters that Amazon SageMaker has to search
- DO NOT specify a very large range to cover every possible value for a hyperparameter as it affects the success of hyperparameter optimization.
- choose logarithmic scaling for hyperparameters whose ranges span several orders of magnitude to improve hyperparameter optimization.
- running one training job at a time achieves the best results with the least amount of compute time.
- Design distributed training jobs so that they report the objective metric that you want (a tuning-job sketch follows this list).
- know how to take advantage of multiple GPUs (hint: increase the learning rate and batch size in proportion to the increase in GPUs)
- Elastic Inference (now replaced by Inferentia) helps attach low-cost GPU-powered acceleration to EC2 and SageMaker instances or ECS tasks to reduce the cost of running deep learning inference.
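A hedged sketch of an automatic model tuning job with the SageMaker Python SDK; it reuses the `estimator` and `train_input` from the Pipe-mode sketch above, and the objective metric name and hyperparameter ranges assume a built-in algorithm such as XGBoost.

```python
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

validation_input = TrainingInput("s3://my-bucket/validation/",        # placeholder path
                                 content_type="application/x-recordio-protobuf")

tuner = HyperparameterTuner(
    estimator=estimator,                        # estimator from the earlier sketch
    objective_metric_name="validation:rmse",    # metric emitted by the training job
    objective_type="Minimize",
    hyperparameter_ranges={
        # keep the search space small and avoid overly wide ranges
        "eta": ContinuousParameter(0.01, 0.3, scaling_type="Logarithmic"),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=10,
    max_parallel_jobs=1,   # running fewer jobs in parallel generally yields better results
)
tuner.fit({"train": train_input, "validation": validation_input})
```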
- SageMaker Inference options.
- Real-time inference is ideal for online inferences that have low latency or high throughput requirements.
- Serverless Inference is ideal for intermittent or unpredictable traffic patterns as it manages all of the underlying infrastructure with no need to manage instances or scaling policies.
- Batch Transform is suitable for offline processing when large amounts of data are available upfront and you don't need a persistent endpoint.
- Asynchronous Inference is ideal when you want to queue requests and have large payloads with long processing times.
- SageMaker Model deployment allows deploying multiple variants of a model to the same SageMaker endpoint to test new models without impacting the user experience
- Production Variants
- supports A/B or Canary testing where you can allocate a portion of the inference requests to each variant.
- helps compare production variants' performance relative to each other (a boto3 endpoint-config sketch follows these items).
- Shadow Variants
- replicates a portion of the inference requests that go to the production variant to the shadow variant.
- logs the responses of the shadow variant for comparison; the responses are not returned to the caller.
- helps test the performance of the shadow variant without exposing the caller to the response produced by the shadow variant.
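A hedged boto3 sketch of A/B testing with production variants and later shifting traffic weights; the model, endpoint, and config names are placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

# Two variants behind one endpoint, splitting inference traffic 90/10.
sm.create_endpoint_config(
    EndpointConfigName="my-ab-config",
    ProductionVariants=[
        {"VariantName": "model-a", "ModelName": "my-model-a",
         "InitialInstanceCount": 1, "InstanceType": "ml.m5.large",
         "InitialVariantWeight": 0.9},
        {"VariantName": "model-b", "ModelName": "my-model-b",
         "InitialInstanceCount": 1, "InstanceType": "ml.m5.large",
         "InitialVariantWeight": 0.1},
    ],
)
sm.create_endpoint(EndpointName="my-endpoint", EndpointConfigName="my-ab-config")

# Traffic can later be shifted between variants without recreating the endpoint.
sm.update_endpoint_weights_and_capacities(
    EndpointName="my-endpoint",
    DesiredWeightsAndCapacities=[
        {"VariantName": "model-a", "DesiredWeight": 0.5},
        {"VariantName": "model-b", "DesiredWeight": 0.5},
    ],
)
```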
- SageMaker Managed Spot Training can help use Spot instances to save cost, and with the checkpointing feature can save the state of ML models during training so that interrupted jobs can resume (a sketch follows below)
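A hedged sketch of Managed Spot Training with checkpointing using the SageMaker Python SDK; the image URI, role, timeouts, and S3 paths are placeholders.

```python
from sagemaker.estimator import Estimator

spot_estimator = Estimator(
    image_uri="<training-image-uri>",                            # placeholder
    role="arn:aws:iam::111122223333:role/SageMakerRole",         # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    use_spot_instances=True,                       # train on Spot capacity to save cost
    max_run=3600,                                  # max training time in seconds
    max_wait=7200,                                 # max total time incl. waiting for Spot (>= max_run)
    checkpoint_s3_uri="s3://my-bucket/checkpoints/",  # state saved here and restored after interruption
    output_path="s3://my-bucket/output/",
)
spot_estimator.fit({"train": "s3://my-bucket/train/"})
```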
- SageMaker Feature Store
- helps to create, share, and manage features for ML development.
- is a centralized store for features and associated metadata so features can be easily discovered and reused.
- SageMaker Debugger provides tools to debug training jobs and resolve problems such as overfitting, saturated activation functions, and vanishing gradients to improve the model's performance.
- SageMaker Model Monitor monitors the quality of SageMaker machine learning models in production and can help set alerts that notify when there are deviations in the model quality.
- SageMaker Automatic Model Tuning helps find a set of hyperparameters for an algorithm that can yield an optimal model.
- SageMaker Data Wrangler
- reduces the time it takes to aggregate and prepare tabular and image data for ML from weeks to minutes.
- simplifies the process of data preparation (including data selection, cleansing, exploration, visualization, and processing at scale) and feature engineering.
- SageMaker Experiments is a capability of SageMaker that lets you create, manage, analyze, and compare machine learning experiments.
- SageMaker Clarify helps improve the ML models by detecting potential bias and helping to explain the predictions that the models make.
- SageMaker Model Governance is a framework that gives systematic visibility into ML model development, validation, and usage.
- SageMaker Autopilot is an automated machine learning (AutoML) feature set that automates the end-to-end process of building, training, tuning, and deploying machine learning models.
- SageMaker Neo enables machine learning models to train once and run anywhere in the cloud and at the edge.
- SageMaker API and SageMaker Runtime support VPC interface endpoints powered by AWS PrivateLink that helps connect VPC directly to the SageMaker API or SageMaker Runtime using AWS PrivateLink without using an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
- Algorithms –
- BlazingText provides Word2vec and text classification algorithms
- DeepAR provides a supervised learning algorithm for forecasting scalar (one-dimensional) time series (hint: can train for new products based on existing products' sales data).
- Factorization Machines is a supervised learning algorithm for classification and regression tasks that helps capture interactions between features within high-dimensional sparse datasets economically.
- Image classification algorithm is a supervised learning algorithm that supports multi-label classification.
- IP Insights is an unsupervised learning algorithm that learns the usage patterns for IPv4 addresses.
- K-means is an unsupervised learning algorithm for clustering as it attempts to find discrete groupings within data, where members of a group are as similar as possible to one another and as different as possible from members of other groups.
- k-nearest neighbors (k-NN) algorithm is an index-based algorithm. It uses a non-parametric method for classification or regression.
- Latent Dirichlet Allocation (LDA) algorithm is an unsupervised learning algorithm that attempts to describe a set of observations as a mixture of distinct categories, used to identify the number of topics shared by documents within a text corpus
- Neural Topic Model (NTM) algorithm is an unsupervised learning algorithm that is used to organize a corpus of documents into topics that contain word groupings based on their statistical distribution
- Linear models are supervised learning algorithms used for solving either classification or regression problems.
- For regression (predictor_type='regressor'), the score is the prediction produced by the model.
- For classification (predictor_type='binary_classifier' or predictor_type='multiclass_classifier'), the model returns a score along with the predicted label.
- Object Detection algorithm detects and classifies objects in images using a single deep neural network
- Principal Component Analysis (PCA) is an unsupervised machine learning algorithm that attempts to reduce the dimensionality (number of features) (hint: dimensionality reduction)
- Random Cut Forest (RCF) is an unsupervised algorithm for detecting anomalous data points (hint: anomalyĀ detection)
- Sequence to Sequence is a supervised learning algorithm where the input is a sequence of tokens (for example, text, audio) and the output generated is another sequence of tokens. (hint: text summarization is the key use case)
- SageMaker Ground Truth
- provides automated data labeling using machine learning
- helps build highly accurate training datasets for machine learning quickly using Amazon Mechanical Turk
- provides annotation consolidation to help improve the accuracy of the data object's labels. It combines the results of multiple workers' annotation tasks into one high-fidelity label.
- automated data labeling uses machine learning to label portions of the data automatically without having to send them to human workers
Machine Learning & AI Managed Services
- Comprehend
- natural language processing (NLP) service to find insights and relationships in text.
- identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is; analyzes text using tokenization and parts of speech; and automatically organizes a collection of text files by topic.
- Lex
- provides conversational interfaces using voice and text, helpful in building voice and text chatbots
- Polly
- converts text into speech
- supports Speech Synthesis Markup Language (SSML) tags like prosody so users can adjust the speech rate, pitch or volume.
- supports pronunciation lexicons to customize the pronunciation of words
- Rekognition – analyzes images and videos
- helps identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content.
- Translate – natural and fluent language translation
- Transcribe – automatic speech recognition (ASR) speech-to-text
- Kendra – an intelligent search service that uses NLP and advanced ML algorithms to return specific answers to search questions from your data.
- Panorama brings computer vision to the on-premises camera network.
- Augmented AI (Amazon A2I) is an ML service that makes it easy to build the workflows required for human review.
- Forecast – highly accurate forecasts.
Analytics
- Make sure you know and understand data engineering concepts mainly in terms of data capture, migration, transformation, and storage.
- Kinesis
- Understand Kinesis Data Streams and Kinesis Data Firehose in depth
- Kinesis Data Analytics can process and analyze streaming data using standard SQL and integrates with Data Streams and Firehose
- Know Kinesis Data Streams vs Kinesis Firehose
- Know that Kinesis Data Streams is open-ended on both the producer and consumer sides. It supports KCL and works with Spark.
- Know that Kinesis Data Firehose is open-ended on the producer side only. Data is delivered to S3, Redshift, and OpenSearch (Elasticsearch).
- Kinesis Data Firehose works in batches with a minimum buffer interval of 60 seconds.
- Kinesis Data Firehose supports data transformation and record format conversion using a Lambda function (hint: can be used for transforming CSV or JSON into Parquet; a Lambda transform sketch follows this list)
- Kinesis Video Streams provides a fully managed service to ingest, index, store, and stream live video. HLS can be used to view a Kinesis video stream, either for live playback or to view archived video.
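A hedged sketch of a Kinesis Data Firehose transformation Lambda: it decodes each base64-encoded record, converts a made-up CSV layout to JSON, and returns the records in the format Firehose expects; Firehose's built-in record format conversion can then write the JSON records out as Parquet.

```python
import base64
import json

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        # Firehose delivers each record's payload base64-encoded.
        payload = base64.b64decode(record["data"]).decode("utf-8").strip()
        user_id, amount = payload.split(",")          # made-up CSV fields
        transformed = json.dumps({"user_id": user_id, "amount": float(amount)}) + "\n"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",                            # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```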
- OpenSearch (ElasticSearch) is a search service that supports indexing, full-text search, faceting, etc.
- Data Pipeline helps define data-driven flows to automate and schedule regular data movement and data processing activities in AWS
- Glue is a fully managed ETL (extract, transform, and load) service that automates the time-consuming steps of data preparation for analytics
- helps setup, orchestrate, and monitor complex data flows.
- Glue Data Catalog is a central repository to store structural and operational metadata for all the data assets.
- Glue crawler connects to a data store, extracts the schema of the data, and then populates the Glue Data Catalog with this metadata (a boto3 crawler sketch follows this list)
- Glue DataBrew is a visual data preparation tool that enables users to clean and normalize data without writing any code.
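A hedged boto3 sketch of a Glue crawler that scans an S3 path and populates the Glue Data Catalog; the crawler name, role, database, and path are placeholders.

```python
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="sales-data-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",       # placeholder role
    DatabaseName="sales_catalog",                                # Data Catalog database to populate
    Targets={"S3Targets": [{"Path": "s3://my-bucket/raw/sales/"}]},
)
glue.start_crawler(Name="sales-data-crawler")   # infers the schema and writes table metadata
```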
- DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between storage systems and services.
Security, Identity & Compliance
- Security is covered very lightly (hint: SageMaker can read data from KMS-encrypted S3; make sure the KMS key policies include the role attached to SageMaker)
Management & Governance Tools
- Understand AWS CloudWatch for Logs and Metrics (hint: SageMaker is integrated with CloudWatch, and logs and metrics are all stored in it)
Storage
- Understand Data Storage Options – know patterns for S3 vs RDS vs DynamoDB vs Redshift (hint: S3 is, by default, the data storage option for Big Data, so look for it in the answers).
Congrats Jay.
For an expert like you it took 4 months, so it must be difficult. Thank you for sharing the details.
It's mostly the time I could spend on it. If you have a machine learning background, it might be much quicker for you.
I cleared the AWS Solutions Architect Associate exam 3 weeks back and I am familiar with the basics of machine and deep learning concepts. I am planning to write this AWS Machine Learning exam and found this blog very helpful. Thank you so much for creating it. Very helpful.
Just wanted to know, will job opportunities increase if we do this certification? Please let me know.
Job opportunities should surely increase, but it would ultimately depend on the job needs and experience needed.
4 months for you… what about me?
Might have been earlier 🙂 just that I got busy with the day-to-day work.
Hello Jay – I have already completed the AWS CSA, CSA Pro, and AWS Big Data certifications successfully. Now planning for AWS ML. However, AWS ML seems to be quite different from the other AWS certifications. I also enrolled for a Udemy course (AWS Certified Machine Learning Specialty 2020 – Hands On!); in terms of ML, I found the concepts vague and a little difficult to understand for an ML beginner. Are there any ML course materials you referred to during the preparation where the ML concepts are made easy to understand? I believe based on the exam pattern, AWS ML = ML Concepts + AWS ML Services + AWS Big Data.
Any help would be greatly appreciated
Hi Balaji, the AWS Machine Learning exam cannot be cleared with only AWS knowledge. Almost 50-60% of the concepts are ML-related, and over 40% of the exam is not related to AWS services.
Frankly, for me this was the toughest exam as, like you, I am an ML beginner.
I took 2 courses, the Linux Academy one and the Stephane Maarek one, and practiced on BrainCert for this exam to make sure I at least understood the concepts.
Hi Jai,
I am also a newbie in ML. Any idea about the course on Udemy from Chandra? Is this course enough for passing the exam? Any thoughts?
The AWS Machine Learning exam is one of the toughest as it does not rely only on AWS concepts. You need to be sure of AWS concepts for ML and Big Data, but also ML concepts. Not sure about Chandra's course, but I used the Linux Academy and Stephane courses and the BrainCert exams. They were a perfect fit for the exam.
Hi Guys,
Last week I cleared the AWS ML Specialty exam with flying colors. I got a score of 892. This is my third AWS certification; I passed the AWS Solutions Architect Associate and Big Data Specialty exams earlier, and those exam preparations helped me a lot. It's one of the toughest exams for ML newbies. I attended a couple of Udemy courses and the Linux Academy course, and these courses helped me. Coursera offers a free ML introduction course; please check it out. This blog helped me to revise my skills. Thanks Jayendra.
Congrats Sridhar and thanks for the feedback.
How relevant were Braincert questions?
They were quite relevant to the topics required for the exam and explained most of the concepts in detail.
Does this exam include Bayesian ML as well? I have not worked on Bayesian ML. Also, does it include AWS ML Studio related questions? Stephane has a great course and so does Linux Academy. Are they enough for theory?
Haven't seen anything on Bayesian ML. The Linux Academy course and BrainCert exams I used were good enough to clear the exam.
Thanks, Great Information!
Thank you Jayendra for suggesting BrainCert. How difficult is the real exam compared to the BrainCert practice tests?
The difficulty level matches the actual exam.
I finished the AWS Solutions Architect (Associate) exam last week. I have a good background in Machine Learning.
In the AWS Solutions Architect exam, questions were lengthy, describing the scenario in the organization.
For Machine Learning, should I expect similar questions? I am asking this because the practice exam for ML has short questions.
Hi Vishwas, it's a mix for Machine Learning actually. There are long questions, but many of them are short single-liners or have few-word answer options as well.
4 months… how many hours a day did you give to this exam?
1-2 hours mainly.
Thanks Jay for the valuable information. Curious to know how much you could score, being a novice in ML?
Thank you to all the resources and ACloudGuru for getting me going.
Online Courses that helped me
1) Udemy – https://www.udemy.com/course/aws-machine-learning/
Practice tests close to the final exam
1) Udemy practice test – https://www.udemy.com/course/aws-machine-learning-practice-exam/
2) Growth Tree Academy – https://awsmlpractice.com/
Thanks to this blog, I passed my certification in ML Specialty.
Congrats Kajal, glad it helped.