
Microsoft Certified: Fabric Data Engineer Associate (DP-700) Exam Questions

Total Questions: 133
Last Updated: September 2025
1st Try Guaranteed · Expert Verified

Question 1 Single Choice

You are working on a Microsoft Fabric project where you need to create a Dataflow Gen 2 that ingests data from an external SQL database and applies transformation steps before loading it into a Lakehouse table. During testing, you notice that the ingestion pipeline runs successfully, but the transformed data is not appearing in the destination table.

Upon investigation, you find that the dataflow execution completes without errors, but that the output mode configuration may be the reason the transformed data never reaches the destination table.

Which of the following configurations would best ensure that transformed data is loaded into the Lakehouse table correctly?

Question 2 Single Choice

You are designing a Fabric Data Pipeline that automates data ingestion using Dataflow Gen 2. The pipeline needs to:

  1. Extract customer transaction data from an external SQL database.

  2. Transform the data using Dataflow Gen 2.

  3. Load the transformed data into a Lakehouse table for downstream analytics.

  4. Ensure the process automatically reruns if any failures occur.

After configuring the pipeline, you notice that the Dataflow Gen 2 step normally executes successfully, but that when it does fail, the pipeline does not retry, causing data freshness issues.

Which of the following changes would best ensure that the pipeline retries automatically on failure?

Question 3 Multiple Choice

You have a KQL database with a table TransformedData that records bike station information, including the number of available bikes (No_Bikes), station identifiers (BikepointID), and timestamps (Timestamp). You aim to create a materialized view that always provides the most recent number of bikes at each station. Which of the following KQL commands would achieve this? (Select all that apply)
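
For orientation, a minimal sketch of the kind of KQL command this question is probing, assuming a hypothetical view name LatestBikesPerStation and the column names given in the scenario:

    .create materialized-view LatestBikesPerStation on table TransformedData
    {
        // arg_max keeps, for each BikepointID, the row with the latest Timestamp
        TransformedData
        | summarize arg_max(Timestamp, No_Bikes) by BikepointID
    }

Querying the view then returns at most one row per BikepointID, carrying its most recent Timestamp and No_Bikes values.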

Question 4 Multiple Choice

You are tasked with creating a Real-Time Dashboard in Microsoft Fabric to monitor live data from a KQL database. Which of the following steps are necessary to successfully create and visualize data on this dashboard? (Select all that apply)

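As a sketch (not the missing answer options): a Real-Time Dashboard tile is ultimately backed by a KQL query against the connected database, so one candidate tile query over the TransformedData schema from Question 3 might look like the following; the one-hour window and the chosen visual are arbitrary assumptions for illustration:

    TransformedData
    | where Timestamp > ago(1h)   // limit to recent events; window chosen arbitrarily
    | summarize arg_max(Timestamp, No_Bikes) by BikepointID
    | render columnchart          // the visual can also be set in the tile's settings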

Question 5 Single Choice

You are designing a Dataflow Gen 2 in Microsoft Fabric to ingest and transform sales data from an Azure SQL Database before loading it into a Lakehouse table. The Dataflow needs to:

  1. Connect securely to the Azure SQL database.

  2. Transform sales records by filtering transactions from the last 12 months.

  3. Load the transformed data into the Lakehouse table with incremental refresh enabled.

  4. Optimize data refresh performance by minimizing unnecessary data loads.

After configuring the dataflow, you notice that full data reloads occur during each refresh cycle, increasing processing time.

Which of the following steps would best ensure that only new and updated records are processed during each refresh?

Question 6 Single Choice

You are designing a Fabric Data Pipeline to orchestrate the movement of customer order data from an Azure Data Lake Storage Gen2 (ADLS Gen2) container to a Fabric Lakehouse table. The pipeline consists of the following steps:

  1. Ingest data from a CSV file stored in ADLS Gen2.

  2. Transform the data using Dataflow Gen 2, applying schema validation and removing duplicate records.

  3. Load the transformed data into a Lakehouse table.

  4. Trigger the pipeline execution automatically whenever a new file arrives in the ADLS container.

After deploying the pipeline, you observe that new files are uploaded to ADLS, but the pipeline does not trigger automatically.

Which of the following modifications would best resolve this issue?

Question 7 Single Choice

You are tasked with setting up a Fabric Data Pipeline that automatically ingests sales transaction data from an Azure Data Lake Storage Gen2 (ADLS Gen2) container into a Microsoft Fabric Lakehouse.

The pipeline should:

  1. Trigger automatically when a new sales data file arrives in ADLS Gen2.

  2. Process and clean the data using Dataflow Gen 2 (removing duplicates, schema validation).

  3. Load the transformed data into a Lakehouse table for reporting.

Which of the following is the correct sequence of steps to configure this pipeline?

Question 8 Single Choice

You are using the Copy Data activity in Fabric Data Pipelines to move data from an Azure SQL Database to a Lakehouse table. The pipeline should:

  1. Automatically detect schema changes in the source.
  2. Ensure new columns are included without breaking existing workflows.
  3. Maintain high performance for large datasets.

After deploying the pipeline, you notice that when new columns are added in the source, the pipeline fails instead of dynamically adjusting to the changes.

What is the best way to handle this issue?

Question 9 Single Choice

You are designing a Fabric Data Pipeline to automate the ingestion and transformation of customer order data from an Azure SQL Database into a Fabric Lakehouse.

Your company requires the pipeline to:

  1. Standardize pipeline creation across multiple teams by using pre-built templates.
  2. Ensure consistency in data processing steps such as data ingestion, transformation, and loading.
  3. Reduce development time by leveraging reusable pipeline components.
  4. Enable customization where necessary to adjust for different data sources.

After setting up the pipeline using a pre-built template, you notice that the transformation logic does not match your specific business requirements.

Which of the following actions would best allow you to modify the template while still leveraging its benefits?

Question 10 Single Choice

You are responsible for monitoring and troubleshooting a Fabric Data Pipeline that loads customer transactions from an Azure Blob Storage container into a Fabric Warehouse.

Your team has reported that some pipeline runs complete with warnings, but not all data appears in the Warehouse table.

To diagnose the issue, you must:
  1. Identify failed or partially successful runs using Fabric's monitoring tools.
  2. Pinpoint the exact step where issues occurred and review logs.
  3. Ensure pipeline execution continues even if minor errors occur.

Which of the following steps should you take first to investigate the issue?

Page: 1 / 14