70-776 PDF Dumps

How to use our free Microsoft 70-776 PDF dumps

Our free 70-776 PDF dumps are based on the full 70-776 mock exams available on our website. The Microsoft 70-776 PDF consists of questions and answers with detailed explanations.
You can use the 70-776 PDF practice exam as study material to pass the 70-776 exam, and don't forget to also try our 70-776 testing engine Web Simulator.


Follow us on SlideShare to see the latest available 70-776 PDF tests.

											Q1.Note: This question is part of a series of questions that use the same scenario. For your convenience,
the scenario is repeated in each question. Each question presents a different goal and answer choices,
but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario
You are developing a Microsoft Azure SQL data warehouse to perform analytics on the transit system of a city.
The data warehouse will contain data about customers, trips, and community events.
You have two storage accounts named StorageAccount1 and StorageAccount2. StorageAccount1 is
associated with the data warehouse. StorageAccount2 contains weather data files stored in CSV format. The
files have a naming format of city_state_yyyymmdd.csv.
Microsoft SQL Server is installed on an Azure virtual machine named AzureVM1.
You are migrating from an existing on-premises solution that uses Microsoft SQL Server 2016 Enterprise. The
planned schema is shown in the exhibit. (Click the Exhibit button)

[PIC-1]

The first column of each table will contain unique values. A table named Customer will contain 12 million rows.
A table named Trip will contain 3 billion rows.
You have the following view.

[PIC-2]

You plan to use Azure Data Factory to perform the following four activities:
Activity1: Invoke an R script to generate a prediction column.
Activity2: Import weather data from a set of CSV files in Azure Blob storage.
Activity3: Execute a stored procedure in the Azure SQL data warehouse.
Activity4: Copy data from Amazon Simple Storage Service (S3).
You plan to detect the following two threat patterns:
Pattern1: A user logs in from two physical locations.
Pattern2: A user attempts to gain elevated permissions.
End of repeated scenario
You plan to create the Azure Data Factory pipeline.
Which activity requires that you create a custom activity?
 - A:   Activity2
 - B:   Activity4
 - C:   Activity3
 - D:   Activity1

 solution: D

Explanation:
Incorrect Answers:
A: The Copy Activity natively supports copying data in GZip-compressed text (CSV) format from Azure Blob storage and writing it to Azure SQL Database.
B: Amazon S3 is supported as a source data store for the Copy Activity.
C: You can use the SQL Server Stored Procedure activity in a Data Factory pipeline to invoke a stored procedure in Azure SQL Database, Azure SQL Data Warehouse, or a SQL Server database in your enterprise or on an Azure VM.
Note: There are two types of activities that you can use in an Azure Data Factory pipeline.
Data movement activities to move data between supported source and sink data stores.
Data transformation activities to transform data using compute services such as Azure HDInsight, Azure
Batch, and Azure Machine Learning.
To move data to/from a data store that Data Factory does not support, or to transform/process data in a way that isn't supported by Data Factory, you can create a custom activity with your own data movement or transformation logic and use the activity in a pipeline. The custom activity runs your customized code logic on an Azure Batch pool of virtual machines. Invoking an R script to generate a prediction column is neither a built-in data movement activity nor a built-in transformation activity, which is why Activity1 requires a custom activity.
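As a rough illustration only (this uses the Custom activity JSON shape from the reference below; the linked service names, folder path, and script name are hypothetical), Activity1 might be declared like this:

{
  "name": "Activity1",
  "type": "Custom",
  "linkedServiceName": {
    "referenceName": "AzureBatchLinkedService",
    "type": "LinkedServiceReference"
  },
  "typeProperties": {
    "command": "Rscript predict.R",
    "folderPath": "customactivities/predict",
    "resourceLinkedService": {
      "referenceName": "AzureStorageLinkedService",
      "type": "LinkedServiceReference"
    }
  }
}

The command property can launch any executable available on the Azure Batch pool nodes, which is how an R script runs even though Data Factory has no built-in R activity.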
References: https://docs.microsoft.com/en-us/azure/data-factory/transform-data-using-dotnet-custom-activity


Q2.Note: This question is part of a series of questions that use the same scenario. For your convenience,
the scenario is repeated in each question. Each question presents a different goal and answer choices,
but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario
You are developing a Microsoft Azure SQL data warehouse to perform analytics on the transit system of a city.
The data warehouse will contain data about customers, trips, and community events.
You have two storage accounts named StorageAccount1 and StorageAccount2. StorageAccount1 is
associated with the data warehouse. StorageAccount2 contains weather data files stored in CSV format. The
files have a naming format of city_state_yyyymmdd.csv.
Microsoft SQL Server is installed on an Azure virtual machine named AzureVM1.
You are migrating from an existing on-premises solution that uses Microsoft SQL Server 2016 Enterprise. The planned schema is shown in the exhibit. (Click the Exhibit button)

[PIC-3]

The first column of each table will contain unique values. A table named Customer will contain 12 million rows.
A table named Trip will contain 3 billion rows.
You have the following view.

[PIC-4]


You plan to use Azure Data Factory to perform the following four activities:
Activity1: Invoke an R script to generate a prediction column.
Activity2: Import weather data from a set of CSV files in Azure Blob storage.
Activity3: Execute a stored procedure in the Azure SQL data warehouse.
Activity4: Copy data from Amazon Simple Storage Service (S3).
You plan to detect the following two threat patterns:
Pattern1: A user logs in from two physical locations.
Pattern2: A user attempts to gain elevated permissions.
End of repeated scenario
You need to copy the weather data for June 2016 to StorageAccount1.
Which command should you run on AzureVM1?
 - A:   azcopy.exe
 - B:   robocopy.exe
 - C:   sqlcmd.exe
 - D:   bcp.exe

 solution: A

Explanation:
AzCopy is a command-line utility designed for copying data to/from Microsoft Azure Blob, File, and Table
storage, using simple commands designed for optimal performance. You can copy data between a file system
and a storage account, or between storage accounts.
From the scenario: You have two storage accounts. StorageAccount1 is associated with the data warehouse. StorageAccount2 contains weather data files stored in CSV format.
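With the AzCopy (v8) syntax described in the reference below, the copy could be run from AzureVM1 along these lines; the container names and account keys are illustrative:

AzCopy /Source:https://storageaccount2.blob.core.windows.net/weather /Dest:https://storageaccount1.blob.core.windows.net/weather /SourceKey:<key2> /DestKey:<key1> /Pattern:city_state_201606 /S

For a blob source, /Pattern is matched as a prefix of the blob name, so a prefix ending in 201606 selects a given city and state's June 2016 files; each distinct city_state prefix needs its own invocation (or omit /Pattern to copy all files).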
Incorrect Answers:
B: Robocopy is a robust file-copy utility included in Windows for large file copies, but it copies between file systems rather than to Azure Blob storage.
References: https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy


Q3.You are designing a data loading process for a Microsoft Azure SQL data warehouse. Data will be loaded to
Azure Blob storage, and then the data will be loaded to the data warehouse.
Which tool should you use to load the data to Azure Blob storage?
 - A:   AdlCopy
 - B:   bcp
 - C:   FTP
 - D:   AzCopy

 solution: D

Explanation:
AzCopy is a command-line utility designed for copying data to/from Microsoft Azure Blob, File, and Table
storage, using simple commands designed for optimal performance. You can copy data between a file system
and a storage account, or between storage accounts.
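AzCopy covers the first step; for the second step, from Blob storage into the data warehouse, PolyBase external tables are the usual companion. A minimal T-SQL sketch, assuming a database master key already exists and using illustrative container, credential, schema, and column names:

-- Credential for the storage account that holds the staged files.
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'loaduser', SECRET = '<storage-account-key>';

CREATE EXTERNAL DATA SOURCE StagingBlob
WITH (TYPE = HADOOP,
      LOCATION = 'wasbs://staging@mystorageaccount.blob.core.windows.net',
      CREDENTIAL = BlobCredential);

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

-- External table over the staged CSV files.
CREATE EXTERNAL TABLE ext.StagedSales
(SaleId INT, SaleDate DATE, Amount DECIMAL(18, 2))
WITH (LOCATION = '/sales/', DATA_SOURCE = StagingBlob, FILE_FORMAT = CsvFormat);

-- CTAS loads the external data into a distributed internal table in parallel.
CREATE TABLE dbo.Sales
WITH (DISTRIBUTION = ROUND_ROBIN)
AS SELECT * FROM ext.StagedSales;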
References: https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy#copy-blobs-in-blob-storage


Q4.You have a Microsoft Azure SQL data warehouse to which 1,000 Data Warehouse Units (DWUs) are allocated.
You plan to load 10 million rows of data to the data warehouse.
You need to load the data in the least amount of time possible. The solution must ensure that queries against
the new data execute as quickly as possible.
What should you use to optimize the data load?
 - A:   resource classes
 - B:   resource pools
 - C:   MAXDOP
 - D:   Resource Governor

 solution: A

Explanation:
Resource classes are pre-determined resource limits that govern query execution. SQL Data Warehouse limits the compute resources for each query according to its resource class.
Resource classes help you manage the overall performance of your data warehouse workload. Using resource classes effectively helps you manage your workload by setting limits on the number of queries that run concurrently and on the compute resources assigned to each query.
Smaller resource classes use fewer compute resources but enable greater overall query concurrency. Larger resource classes provide more compute resources but restrict query concurrency.
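Because resource classes are implemented as pre-defined database roles, the assignment is plain role membership. A minimal T-SQL sketch (the user name is illustrative):

-- Give the loading user a larger resource class for more per-query memory.
EXEC sp_addrolemember 'largerc', 'LoadUser';
-- Return the user to the default class afterward to restore concurrency.
EXEC sp_droprolemember 'largerc', 'LoadUser';

The extra memory lets the load build well-compressed columnstore rowgroups, which is what makes queries against the newly loaded rows execute quickly.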
References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/resource-classes-for-workload-management


Q5.You have a Microsoft Azure SQL data warehouse named DW1 that is used only from Monday to Friday.
You need to minimize Data Warehouse Unit (DWU) usage during the weekend.
What should you do?
 - A:   Run the Suspend-AzureRmSqlDatabase Azure PowerShell cmdlet
 - B:   Call the Create or Update Database REST API
 - C:   From the Azure CLI, run the account set command
 - D:   Run the ALTER DATABASE statement

 solution: A

Explanation:
Pause compute
To save costs, you can pause and resume compute resources on-demand. For example, if you are not using
the database during the night and on weekends, you can pause it during those times, and resume it during the
day. There is no charge for compute resources while the database is paused. However, you continue to be
charged for storage.
To pause a database, use the Suspend-AzureRmSqlDatabase cmdlet.
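For example (the resource group and server names are illustrative):

Suspend-AzureRmSqlDatabase -ResourceGroupName "ResourceGroup1" -ServerName "Server01" -DatabaseName "DW1"

Running Resume-AzureRmSqlDatabase with the same parameters restores compute when the work week starts.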
References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/pause-and-resume-compute-powershell


Q6.You have a Microsoft Azure SQL data warehouse that has a fact table named FactOrder. FactOrder contains
three columns named CustomerID, OrderID, and OrderDateKey. FactOrder is hash distributed on CustomerID.
OrderID is the unique identifier for FactOrder. FactOrder contains 3 million rows.
Orders are distributed evenly among different customers from a table named dimCustomers that contains 2
million rows.
You often run queries that join FactOrder and dimCustomers by selecting and grouping by the OrderDateKey
column.
You add 7 million rows to FactOrder. Most of the new records have a more recent OrderDateKey value than the
previous records.
You need to reduce the execution time of queries that group on OrderDateKey and that join dimCustomers and
FactOrder.
What should you do?
 - A:   Change the distribution for the FactOrder table to round robin
 - B:   Change the distribution for the FactOrder table to be based on OrderID
 - C:   Update the statistics for the OrderDateKey column
 - D:   Change the distribution for the dimCustomers table to OrderDateKey

 solution: C

Explanation:
Updating statistics
One best practice is to update statistics on date columns each day as new dates are added. Each time new
rows are loaded into the data warehouse, new load dates or transaction dates are added. These change the
data distribution and make the statistics out of date.
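A minimal T-SQL sketch (the statistics object name is illustrative; SQL Data Warehouse does not update statistics automatically):

-- Create single-column statistics on the date key if none exist yet.
CREATE STATISTICS stat_OrderDateKey ON dbo.FactOrder (OrderDateKey);

-- Refresh after each load that appends newer OrderDateKey values.
UPDATE STATISTICS dbo.FactOrder (stat_OrderDateKey);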
References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-statistics


Q7.You have a Microsoft Azure SQL data warehouse.
You need to configure Data Warehouse Units (DWUs) to ensure that you have six compute nodes. The
solution must minimize costs.
Which value should you set for the DWUs?
 - A:   DW200
 - B:   DW400
 - C:   DW600
 - D:   DW1000

 solution: C

Explanation:
The following table shows how the number of distributions per Compute node changes as the data warehouse
units change. DWU6000 provides 60 Compute nodes and achieves much higher query performance than
DWU100.
[PIC-5]
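One way to set this level from AzureVM1 is a T-SQL statement run against the master database (the data warehouse name is illustrative):

ALTER DATABASE TransitDW MODIFY (SERVICE_OBJECTIVE = 'DW600');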
References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-compute-overview


Q8.You plan to deploy a Microsoft Azure virtual machine that will host a data warehouse. The data warehouse will
contain a 10-TB database.
You need to provide the fastest read and write times for the database.
Which disk configuration should you use?
 - A:   spanned volumes
 - B:   storage pools with striped disks
 - C:   RAID 5 volumes
 - D:   storage pools with mirrored disks
 - E:   striped volumes

 solution: B

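Explanation:
A storage pool of striped (Simple resiliency) disks aggregates the IOPS and throughput of every attached data disk with no redundancy overhead, which yields the fastest reads and writes. Azure storage already keeps redundant copies of each disk, so mirrored pools and RAID 5 volumes only add write overhead, and spanned volumes fill disks sequentially instead of striping I/O. As a sketch of the Storage Spaces setup (the pool and disk names are illustrative):

$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DwPool" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "DwPool" -FriendlyName "DwDisk" -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns $disks.Count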


Q9.You need to connect to a Microsoft Azure SQL data warehouse from an Azure Machine Learning experiment.
Which data source should you use?
 - A:   Azure Table
 - B:   SQL Database
 - C:   Web URL via HTTP
 - D:   Data Feed Provider

 solution: B

Explanation:
Use Azure SQL Database as the data source. An Azure SQL data warehouse is accessed through the same SQL Database endpoint, so the Import Data module's Azure SQL Database option is the one that connects to it.
References: https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/import-from-azure-sql-database


Q10.You have a fact table named PowerUsage that has 10 billion rows. PowerUsage contains data about customer
power usage during the last 12 months. The usage data is collected every minute. PowerUsage contains the
columns configured as shown in the following table.

[PIC-6]

LocationNumber has a default value of 1. The MinuteOfMonth column contains the relative minute within each
month. The value resets at the beginning of each month.
A sample of the fact table data is shown in the following table.

[PIC-7]

There is a related table named Customer that joins to the PowerUsage table on the CustomerId column. Sixty
percent of the rows in PowerUsage are associated with less than 10 percent of the rows in Customer. Most
queries do not require the use of the Customer table. Many queries select on a specific month.
You need to minimize how long it takes to find the records for a specific month.
What should you do?
 - A:   Implement partitioning by using the MonthKey column. Implement hash distribution by using the CustomerId column.
 - B:   Implement partitioning by using the CustomerId column. Implement hash distribution by using the MonthKey column.
 - C:   Implement partitioning by using the MinuteOfMonth column. Implement hash distribution by using the MeasurementId column.
 - D:   Implement partitioning by using the MonthKey column. Implement hash distribution by using the MeasurementId column.

 solution: C
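
Explanation:
Partitioning controls which rows can be skipped when a query filters on the partitioning column, and hash distribution controls how rows are spread across the 60 distributions for parallel joins and aggregations. As a syntax sketch only, the combination in the chosen answer would be declared as follows (the column types and partition boundaries are illustrative; MinuteOfMonth tops out around 44,640 minutes in a 31-day month):

CREATE TABLE dbo.PowerUsage
(
    MeasurementId BIGINT NOT NULL,
    CustomerId INT NOT NULL,
    MonthKey INT NOT NULL,
    MinuteOfMonth INT NOT NULL,
    LocationNumber INT NOT NULL
    -- remaining columns as shown in the scenario table
)
WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH (MeasurementId),
    PARTITION (MinuteOfMonth RANGE RIGHT FOR VALUES (10081, 20161, 30241, 40321))
);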