

Microsoft Certified: Azure AI Engineer Associate (AI-102) Exam Questions
Question 1 Single Choice
You collect IoT sensor data from 100 industrial machines for a year. Each machine has 50 different sensors that generate data at one-minute intervals. In total, you have 5,000 time series datasets.
You need to identify unusual values in each time series to help predict machinery failures.
Which Azure service should you use?
Explanation

Azure AI Anomaly Detector is CORRECT because it is specifically designed to detect anomalies in time series data. It uses advanced machine learning models to identify unexpected patterns or behaviors in data, which is essential for predictive maintenance. With the ability to analyze and detect irregularities across multiple time series datasets, this service is ideal for monitoring IoT sensor data to predict machinery failures.

Azure AI Computer Vision is INCORRECT because this service is primarily focused on analyzing images and extracting information from visual data. It does not have the capabilities to handle time series data or detect anomalies in sensor readings.
Cognitive Search is INCORRECT because it is a search-as-a-service solution that provides indexing and querying capabilities for various types of content, such as documents and databases. It is not designed for time series data analysis or anomaly detection.
Azure AI Document Intelligence is INCORRECT because it focuses on extracting structured information from unstructured documents, such as forms and receipts. It does not have the functionality to analyze time series data for anomalies.
Read More:
https://learn.microsoft.com/en-us/azure/ai-services/anomaly-detector/overview
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/overview
https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search
https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/faq?view=doc-intel-4.0.0
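The Anomaly Detector REST API accepts each time series as a JSON body of timestamp/value points. Below is a minimal sketch of assembling that body for one sensor, assuming the v1.1 "entire series" request shape; the readings are made-up sample data, not real telemetry:

```python
import json

def build_detect_payload(timestamps, values, granularity="minutely"):
    """Format one sensor's readings as an Anomaly Detector request body."""
    if len(timestamps) != len(values):
        raise ValueError("each timestamp needs exactly one value")
    return {
        "series": [
            {"timestamp": ts, "value": v}
            for ts, v in zip(timestamps, values)
        ],
        "granularity": granularity,
    }

payload = build_detect_payload(
    ["2024-01-01T00:00:00Z", "2024-01-01T00:01:00Z", "2024-01-01T00:02:00Z"],
    [21.5, 21.7, 98.2],  # the last reading is a likely anomaly
)
print(json.dumps(payload)[:40])
```

One such payload would be sent per time series, so the 5,000 series in the scenario map naturally onto 5,000 independent detection calls.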
Question 2 Single Choice
You are building an internet-based training solution. The solution requires that a user's camera and microphone remain enabled.
You need to monitor a video stream of the user and detect when the user asks an instructor a question. The solution must minimize development effort.
What should you include in the solution?
Explanation

speech-to-text in the Azure AI Speech service is CORRECT because it is designed to convert spoken language into text, which is essential for detecting when a user asks a question during a video stream. This service can process the audio captured by the user's microphone, transcribe the spoken words in real-time, and allow the system to recognize when a question is being asked with minimal development effort.

Spatial Analysis in Azure AI Vision is INCORRECT because it is used for understanding spatial relationships and patterns in video streams, which is not directly related to detecting spoken questions.
content moderation in Azure AI Video Indexer is INCORRECT because it is used for detecting inappropriate content in videos, not for transcribing or understanding spoken questions.
object detection in Azure AI Custom Vision is INCORRECT because it is used for detecting and classifying objects in images and videos, not for processing audio or detecting spoken questions.
Read More:
https://learn.microsoft.com/en-us/azure/ai-services/speech-service/speech-to-text
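Speech-to-text only produces the transcript; the app still needs a downstream rule (or model) to decide whether an utterance was a question. This naive keyword/punctuation heuristic is purely illustrative and is not part of the Speech service itself:

```python
# Words that often open a spoken question; an illustrative list, not exhaustive.
QUESTION_STARTERS = (
    "what", "why", "how", "when", "where", "who",
    "can", "could", "is", "are", "do", "does",
)

def looks_like_question(transcript: str) -> bool:
    """Crude check on a transcribed utterance."""
    text = transcript.strip().lower()
    if not text:
        return False
    return text.endswith("?") or text.split()[0] in QUESTION_STARTERS

print(looks_like_question("Could you repeat the last slide?"))  # True
print(looks_like_question("Thanks, that makes sense."))         # False
```

In practice a conversational language understanding model would replace this heuristic, but the division of labor is the same: the Speech service transcribes, and a separate step classifies.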
Question 3 Single Choice
You have an Azure OpenAI resource named AI1 that hosts three deployments of the GPT-3.5 model. Each deployment is optimized for a unique workload.
You plan to deploy three apps. Each app will access AI1 by using the REST API and will use the deployment that was optimized for the app's intended workload.
You need to provide each app with access to AI1 and the appropriate deployment. The solution must ensure that only the apps can access AI1.
What should you use to provide access to AI1?
Explanation

An API key is CORRECT because it is a common method for authenticating applications when accessing APIs, including Azure OpenAI services. API keys are straightforward to use and provide the necessary authentication to ensure that only authorized applications can access the specific deployments optimized for their intended workloads.

A bearer token is INCORRECT because bearer tokens are typically used in OAuth 2.0 flows, which involve a more complex process, including obtaining tokens from an authorization server. This is not as straightforward as using API keys for the given requirement.
A shared access signature (SAS) token is INCORRECT because SAS tokens are primarily used to grant limited access to Azure Storage resources. They are not designed for authenticating access to Azure OpenAI resources via REST API.
Read More:
https://learn.microsoft.com/en-us/azure/ai-services/openai/reference
https://learn.microsoft.com/en-us/azure/ai-services/authentication
https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview
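In practice, each app presents its key in the `api-key` request header of every call to the resource. A minimal sketch of that header set, with a placeholder value rather than a real credential:

```python
def auth_headers(api_key: str) -> dict:
    """Headers an app sends to authenticate to an Azure OpenAI resource."""
    return {
        "api-key": api_key,                  # authenticates the app to AI1
        "Content-Type": "application/json",  # request bodies are JSON
    }

headers = auth_headers("0000-placeholder-key")
print(sorted(headers))
```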
Question 4 Single Choice
You have an Azure OpenAI resource named AI1 that hosts three deployments of the GPT-3.5 model. Each deployment is optimized for a unique workload.
You plan to deploy three apps. Each app will access AI1 by using the REST API and will use the deployment that was optimized for the app's intended workload.
You need to provide each app with access to AI1 and the appropriate deployment. The solution must ensure that only the apps can access AI1.
What should each app use to connect to its appropriate deployment?
Explanation

A deployment name is CORRECT because when using the Azure OpenAI REST API, the application must specify the deployment name in the URL path to indicate which specific model deployment it is interacting with. Each deployment (even of the same base model like GPT-3.5) can be tuned or optimized for a specific workload, and the deployment name is the identifier that routes the API request to the correct model configuration.
This approach allows multiple deployments to coexist under the same Azure OpenAI resource, and each app can target the right one by specifying the proper deployment name in its API call.
An API key is INCORRECT because while it is required for authenticating the request, it does not determine which deployment is accessed. The API key gives general access to the Azure OpenAI resource but not to any specific deployment.
A deployment endpoint is INCORRECT because Azure OpenAI uses a single endpoint per resource, and the specific deployment is selected by including the deployment name in the request path. There's no separate endpoint per deployment.
A deployment type is INCORRECT because this refers to the model type (e.g., gpt-35-turbo, gpt-4) but not to a specific named deployment. The deployment type is set when the deployment is created, but clients use the deployment name to invoke it.
Read More:
https://learn.microsoft.com/en-us/azure/ai-services/openai/reference
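The routing can be sketched as URL construction: the deployment name sits in the request path under the resource's single endpoint. The endpoint, deployment name, and api-version below are illustrative placeholders:

```python
def completion_url(endpoint: str, deployment: str,
                   api_version: str = "2024-02-01") -> str:
    """Build the chat-completions URL; the deployment name selects the model."""
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

url = completion_url("https://ai1.openai.azure.com", "gpt35-summarization")
print(url)
```

Each of the three apps would call this with its own deployment name while sharing the same endpoint and resource.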
Question 5 Single Choice
You are building a solution in Azure that will use Azure AI Language service to process sensitive customer data.
You need to ensure that only specific Azure processes can access the Language service. The solution must minimize administrative effort.
What should you include in the solution?
Explanation

virtual network rules are CORRECT because they allow you to restrict access to the Azure AI Language service to specific Azure resources within a virtual network. By configuring virtual network rules, you can ensure that only designated processes and services within your Azure infrastructure can access the Language service. This approach minimizes administrative effort by leveraging Azure's built-in network security capabilities.
IPsec rules are INCORRECT because they are typically used for securing network traffic over IP networks. While IPsec can provide secure communication, it requires more complex configuration and management than virtual network rules, and it is not specific to Azure services.
Azure Application Gateway is INCORRECT because it is a web traffic load balancer that enables you to manage traffic to your web applications. It is not specifically designed for restricting access to Azure Cognitive Services and would require additional configuration to achieve similar security controls.
a virtual network gateway is INCORRECT because it provides a connection between your on-premises network and your Azure virtual network. While it can be part of a secure network architecture, it is not the most direct or efficient method for restricting access to Azure Cognitive Services for Language.
Read More:
https://learn.microsoft.com/en-us/azure/ai-services/cognitive-services-virtual-networks?tabs=portal
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoints-overview
https://learn.microsoft.com/en-us/azure/application-gateway/overview
Question 6 Single Choice
You have an Azure OpenAI model named AI1.
You are building a web app named App1 by using the Azure OpenAI SDK.
You need to configure App1 to connect to AI1.
What information must you provide?
Explanation

the deployment name, endpoint, and key is CORRECT because these three pieces of information are essential for configuring App1 to connect to AI1 using the Azure OpenAI SDK.
Here's why each is necessary:
Deployment name: This specifies which deployment of the model you want to use. In Azure OpenAI, you might have multiple deployments of the same model optimized for different tasks.
Endpoint: This is the URL where your AI1 model is hosted. It directs the API requests to the correct Azure service instance.
Key: This is the authentication key that allows your web app to access the Azure OpenAI service securely. It ensures that only authorized applications can use the service.
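A small sketch of how App1 might hold and validate these three settings before constructing an SDK client; the values are placeholders and the validation is illustrative, not required by the SDK:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OpenAIConnection:
    endpoint: str         # resource URL, e.g. https://<resource>.openai.azure.com
    api_key: str          # authenticates the app to the resource
    deployment_name: str  # selects the model deployment to call

    def __post_init__(self):
        # Fail fast if any of the three required settings is missing.
        for field_name in ("endpoint", "api_key", "deployment_name"):
            if not getattr(self, field_name):
                raise ValueError(f"missing required setting: {field_name}")

conn = OpenAIConnection(
    "https://ai1.openai.azure.com", "placeholder-key", "gpt35-chat"
)
print(conn.deployment_name)
```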

Question 7 Single Choice
You have an app named App1 that uses a custom Azure AI Document Intelligence model to recognize contract documents.
You need to ensure that the model supports an additional contract format. The solution must minimize development effort.
What should you do?
Explanation

Add the additional contract format to the existing training set. Retrain the model is CORRECT. This approach directly addresses the need to support an additional contract format without starting from scratch. By adding examples of the new contract format to the existing training set and retraining the model, you enhance the model's ability to recognize the new format while leveraging the already trained capabilities for existing formats. This minimizes development effort compared to creating and training a completely new model.
Lower the confidence score threshold of App1 is INCORRECT because lowering the confidence score threshold might allow the model to recognize more documents as contracts, but it won't improve the model's ability to accurately recognize the new contract format. It could also increase the likelihood of false positives.
Create a new training set and add the additional contract format to the new training set. Create and train a new custom model is INCORRECT because creating a new custom model from scratch is more labor-intensive and time-consuming than updating and retraining the existing model with additional data.
Lower the accuracy threshold of App1 is INCORRECT because lowering the accuracy threshold does not enhance the model's ability to correctly identify and process the new contract format. It simply reduces the criteria for what is considered a successful match, which could lead to inaccurate results.
Question 8 Single Choice
You have a collection of press releases stored as PDF files.
You need to extract text from the files and perform sentiment analysis.
Which service should you use to extract text?
Explanation

Azure AI Document Intelligence is CORRECT because it is specifically designed to extract structured and unstructured text from documents, including PDF files. This service uses advanced AI and machine learning models to accurately extract text, tables, and other data from various document formats, making it the ideal solution for extracting text from your collection of press releases.
Azure AI Search is INCORRECT because it is primarily used for indexing and searching large collections of documents or other data. While it can perform some text extraction as part of its indexing process, it is not specifically optimized for the detailed extraction of text from PDF files.
Azure AI Vision is INCORRECT because it is designed for analyzing and extracting information from images and videos. While it includes some capabilities for extracting text from images (e.g., OCR), it is not optimized for extracting text from PDF documents.
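Once Document Intelligence has analyzed a PDF, the extracted text is available on the analyze result. The dictionary below is a hand-built sample mirroring the `content` field the read model returns, not real API output:

```python
def extracted_text(analyze_result: dict) -> str:
    """Pull the full extracted text out of an analyze result."""
    return analyze_result.get("content", "")

# Made-up sample standing in for a real analysis response.
sample_result = {"content": "ACME Corp announces record quarterly growth."}
text = extracted_text(sample_result)
print(text)
```

The text returned here is what would then be handed to a downstream service for sentiment analysis.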
Question 9 Single Choice
You have a collection of press releases stored as PDF files.
You need to extract text from the files and perform sentiment analysis.
Which service should you use to perform sentiment analysis?
Explanation

Azure AI Language is CORRECT because it includes sentiment analysis capabilities specifically designed to analyze the sentiment of text data. By extracting the text from your PDF files and then using the sentiment analysis feature of Azure AI Language, you can accurately assess the sentiment of the content in your press releases.

Azure AI Search is INCORRECT because it is primarily used for indexing and searching text within large collections of documents. It does not provide built-in sentiment analysis capabilities.
Azure AI Computer Vision is INCORRECT because it focuses on analyzing and extracting information from images and videos, not on performing sentiment analysis on text data.
Azure AI Document Intelligence is INCORRECT because it is designed for extracting text and structured information from documents. While it can extract text from PDF files, it does not include built-in sentiment analysis capabilities. You would need to use the text extracted by Azure AI Document Intelligence with another service, like Azure AI Language, to perform sentiment analysis.
Read More:
https://learn.microsoft.com/en-us/azure/ai-services/language-service/overview
https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/overview
https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/faq?view=doc-intel-4.0.0
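A minimal sketch of the request body the Language service's sentiment feature expects, assuming the `SentimentAnalysis` kind of the analyze-text API; the document text is illustrative:

```python
def sentiment_request(texts, language="en"):
    """Wrap extracted press-release text as a sentiment analysis request."""
    return {
        "kind": "SentimentAnalysis",
        "analysisInput": {
            "documents": [
                {"id": str(i + 1), "language": language, "text": t}
                for i, t in enumerate(texts)
            ]
        },
    }

body = sentiment_request(["ACME's launch exceeded expectations."])
print(body["kind"])
```

This is the hand-off point between the two services in the scenario: text extracted by Document Intelligence becomes the `documents` array sent to Azure AI Language.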
Question 10 Single Choice
You have an Azure AI service model named Model1 that identifies the intent of text input.
You develop a Python app named App1.
You need to configure App1 to use Model1.
Which package should you add to App1?
Explanation

azure-ai-language-conversations is CORRECT. This package is specifically designed for working with Azure AI services that handle language understanding and intent recognition. Since Model1 is used to identify the intent of text input, this package will provide the necessary tools and APIs to integrate your Python app, App1, with Model1.
azure-cognitiveservices-language-textanalytics is INCORRECT because this package is focused on text analytics tasks such as sentiment analysis, key phrase extraction, and entity recognition, rather than intent recognition.
azure-mgmt-cognitiveservices is INCORRECT because this package is used for managing Azure Cognitive Services resources, such as creating and configuring them, rather than directly interacting with the models for language understanding and intent recognition.
azure-cognitiveservices-speech is INCORRECT because this package is used for integrating speech recognition and synthesis capabilities, which is not relevant for a model that identifies the intent of text input.
Read More:
https://pypi.org/project/azure-ai-language-conversations
https://pypi.org/project/azure-cognitiveservices-language-textanalytics
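A hypothetical sketch of reading the predicted intent out of a conversation analysis response. The nested shape mirrors what the conversations client returns, but the response here is a hand-built sample, not real service output:

```python
def top_intent(response: dict, min_confidence: float = 0.5):
    """Return the highest-scoring intent, or None below the threshold."""
    prediction = response["result"]["prediction"]
    best = max(prediction["intents"], key=lambda i: i["confidenceScore"])
    return best["category"] if best["confidenceScore"] >= min_confidence else None

# Made-up sample response for illustration.
sample = {
    "result": {
        "prediction": {
            "topIntent": "BookFlight",
            "intents": [
                {"category": "BookFlight", "confidenceScore": 0.92},
                {"category": "Cancel", "confidenceScore": 0.08},
            ],
        }
    }
}
print(top_intent(sample))  # BookFlight
```

App1 would apply a function like this to each response from Model1, falling back to a default behavior when no intent clears the confidence threshold.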



