DP-100 Azure Practice Tests Prep Exam

Prepare for the exam with these DP-100 Azure practice tests, updated to the latest exam references.

Why become a Data Scientist?

Nowadays, every industry is becoming data-driven.

Every company, big or small, is looking for people who can understand and analyze data well.

Demand for data scientists keeps growing, and the role is blooming in every sector, from IT to marketing to retail.

Data science teams are emerging as trusted, strategic partners to management.

Because it is an evolving field that adds clear value to the business, it is flourishing with jobs; the average annual salary of a data scientist is around 120,000 USD.

However, this figure fluctuates with the size of the company and the industry. Not only large companies but also new startups rely on data scientists.

All exam objectives are covered in depth, so you'll be ready for any question on the exam.

Who should take the Microsoft DP-100 Azure Exam?

Candidates for the DP-100 exam should be data scientists or engineers with applied knowledge of data science and machine learning who implement and run machine learning workloads on Azure, in particular with the Azure Machine Learning service.

DP-100 Azure Practice Tests

The DP-100: Designing and Implementing a Data Science Solution on Azure exam covers the following topics:

  • Set up an Azure Machine Learning workspace (see the setup sketch after this list).

  • Run experiments and train models.

  • Optimize and manage models.

  • Deploy and consume models.

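The first objective, setting up a workspace, can be done from the Azure portal or from Python. Below is a minimal sketch using the v1 azureml-core SDK; the workspace name, subscription ID, resource group, and region are placeholders, not values from the exam.

```python
# Minimal workspace-creation sketch with the (v1) azureml-core SDK.
# All names and IDs below are placeholders.
from azureml.core import Workspace

ws = Workspace.create(
    name="dp100-workspace",              # placeholder workspace name
    subscription_id="<subscription-id>", # placeholder subscription
    resource_group="dp100-rg",           # placeholder resource group
    location="eastus",
    create_resource_group=True,
)
ws.write_config()  # writes config.json so later scripts can call Workspace.from_config()
```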

DP-100 Azure Practice Tests

Question 1.

You are developing a hands-on workshop to introduce Docker for Windows to attendees.
You need to ensure that workshop attendees can install Docker on their devices.
Which two prerequisite components should attendees install on the devices? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. Microsoft Hardware-Assisted Virtualization Detection Tool
  • B. Kitematic
  • C. BIOS-enabled virtualization
  • D. VirtualBox
  • E. Windows 10 64-bit Professional

Answer 1. C, E

C: Make sure the Windows system supports Hardware Virtualization Technology and that hardware virtualization is enabled in the BIOS settings.

E: To run Docker Desktop for Windows, the machine needs a 64-bit edition of Windows 10 Professional or Enterprise, because Docker Desktop relies on Hyper-V.

Question 2.

Your team is building a data engineering and data science development environment.
The environment must support the following requirements:
✑ support Python and Scala
✑ compose data storage, movement, and processing services into automated data pipelines
✑ the same tool should be used for the orchestration of both data engineering and data science
✑ support workload isolation and interactive workloads
✑ enable scaling across a cluster of machines
You need to create the environment.
What should you do?

  • A. Build the environment in Apache Hive for HDInsight and use Azure Data Factory for orchestration.
  • B. Build the environment in Azure Databricks and use Azure Data Factory for orchestration.
  • C. Build the environment in Apache Spark for HDInsight and use Azure Container Instances for orchestration.
  • D. Build the environment in Azure Databricks and use Azure Container Instances for orchestration.

Answer 2. B

In Azure Databricks, we can create two different types of clusters:
✑ Standard, the default cluster type, which can be used with Python, R, Scala, and SQL
✑ High Concurrency
Azure Databricks is fully integrated with Azure Data Factory, so the same tool can orchestrate both the data engineering and the data science pipelines.
Incorrect Answers:
D: Azure Container Instances is good for development or testing. Not suitable for production workloads.
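
As a rough illustration of why Databricks fits these requirements, here is a small PySpark sketch that could run in a Databricks Python notebook; the storage paths and column names are made up for the example.

```python
# PySpark sketch for an Azure Databricks Python notebook.
# Paths and column names are placeholders, not from the question.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # Databricks already provides `spark`

# Read a hypothetical dataset from mounted storage and aggregate it across the cluster.
df = spark.read.parquet("/mnt/raw/events")
daily = (
    df.groupBy(F.to_date("event_time").alias("day"))
      .agg(F.count("*").alias("events"))
)
daily.write.mode("overwrite").parquet("/mnt/curated/daily_events")
```

A notebook like this can then be scheduled as a Databricks notebook activity inside an Azure Data Factory pipeline, which is what gives you a single orchestration tool for both data engineering and data science.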

Question 3.

You are implementing a machine learning model to predict stock prices.
The model uses a PostgreSQL database and requires GPU processing.
You need to create a virtual machine that is pre-configured with the required tools.
What should you do?

  • A. Create a Data Science Virtual Machine (DSVM) Windows edition.
  • B. Create a Geo AI Data Science Virtual Machine (Geo-DSVM) Windows edition.
  • C. Create a Deep Learning Virtual Machine (DLVM) Linux edition.
  • D. Create a Deep Learning Virtual Machine (DLVM) Windows edition.

DP-100 Practice Tests

Answer 3. A

In the DSVM, your training models can use deep learning algorithms on hardware that’s based on graphics processing units (GPUs).
PostgreSQL is available for the following operating systems: Linux (all recent distributions), macOS (64-bit installers for OS X 10.6 and newer), and Windows (64-bit installers, tested on the latest versions and back to Windows Server 2012 R2).
Incorrect Answers:
B: The Azure Geo AI Data Science VM (Geo-DSVM) delivers geospatial analytics capabilities from Microsoft’s Data Science VM. Specifically, this VM extends the
AI and data science toolkits in the Data Science VM by adding ESRI’s market-leading ArcGIS Pro Geographic Information System.
C, D: The DLVM is a template on top of the DSVM image; the packages, GPU drivers, and so on are already present in the DSVM image. The DLVM exists mainly for convenience during creation, since it can only be created on GPU VM instances in Azure.
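
Since the question calls for PostgreSQL, a quick way to confirm the database is reachable from the DSVM is a short psycopg2 connection test. This is only a sketch; it assumes the psycopg2 package is installed and uses placeholder host, database, and credential values.

```python
# Hypothetical PostgreSQL connectivity check from the DSVM (placeholder values throughout).
import psycopg2

conn = psycopg2.connect(
    host="localhost",        # assumption: PostgreSQL runs locally on the DSVM
    dbname="stocks",         # placeholder database
    user="dsvm_user",        # placeholder user
    password="<password>",   # placeholder password
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```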

Question 4.

You are developing deep learning models to analyze semi-structured, unstructured, and structured data types.
You have the following data available for model building:
✑ Video recordings of sporting events
✑ Transcripts of radio commentary about events
✑ Logs from related social media feeds captured during sporting events
You need to select an environment for creating the model.
Which environment should you use?

  • A. Azure Cognitive Services
  • B. Azure Data Lake Analytics
  • C. Azure HDInsight with Spark MLib
  • D. Azure Machine Learning Studio

Answer 4. A

Azure Cognitive Services expand on Microsoft’s evolving portfolio of machine learning APIs and enable developers to easily add cognitive features such as emotion and video detection; facial, speech, and vision recognition; and speech and language understanding into their applications.

The goal of Azure Cognitive Services is to help developers create applications that can see, hear, speak, understand, and even begin to reason.

The catalog of services within Azure Cognitive Services can be categorized into five main pillars: Vision, Speech, Language, Search, and Knowledge.
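
To make the idea concrete, here is a hedged sketch of calling one Cognitive Services REST endpoint (Computer Vision image analysis) from Python with the requests library. The endpoint, key, image URL, and API version are placeholders and may differ in your subscription.

```python
# Sketch of a Computer Vision "analyze" call; endpoint, key, and image URL are placeholders.
import requests

endpoint = "https://<your-region>.api.cognitive.microsoft.com"  # placeholder endpoint
key = "<your-subscription-key>"                                 # placeholder key

response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",                          # API version may differ
    params={"visualFeatures": "Description,Tags"},
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/match-highlight.jpg"},    # placeholder image URL
)
print(response.json())
```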

Question 5.

You must store data in Azure Blob Storage to support Azure Machine Learning.
You need to transfer the data into Azure Blob Storage.
What are three possible ways to achieve the goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

  • A. Bulk Insert SQL Query
  • B. AzCopy
  • C. Python script
  • D. Azure Storage Explorer
  • E. Bulk Copy Program (BCP)

Answer 5. B, C, D

You can move data to and from Azure Blob storage using different technologies:
✑ Azure Storage Explorer
✑ AzCopy
✑ Python
✑ SSIS
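
For the Python option, a minimal upload sketch with the azure-storage-blob (v12) SDK looks like the following; the connection string, container, and file names are placeholders.

```python
# Upload a local file to Azure Blob Storage with the azure-storage-blob v12 SDK.
# Connection string, container, and file names are placeholders.
from azure.storage.blob import BlobServiceClient

conn_str = "<your-storage-connection-string>"                # placeholder
service = BlobServiceClient.from_connection_string(conn_str)
container = service.get_container_client("training-data")   # placeholder container

with open("dataset.csv", "rb") as data:                      # placeholder local file
    container.upload_blob(name="dataset.csv", data=data, overwrite=True)
```

The AzCopy route is a single command of the form `azcopy copy <local-path> <container-URL-with-SAS>`, and Azure Storage Explorer gives the same capability through a GUI.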

DP-100 Practice Tests

Question 6.

You are moving a large dataset from Azure Machine Learning Studio to a Weka environment.
You need to format the data for the Weka environment.
Which module should you use?

  • A. Convert to CSV
  • B. Convert to Dataset
  • C. Convert to ARFF
  • D. Convert to SVMLight

Answer 6. C

Use the Convert to ARFF module in Azure Machine Learning Studio to convert datasets and results in Azure Machine Learning to the attribute-relation file format used by the Weka toolset. This format is known as ARFF.
The ARFF data specification for Weka supports multiple machine learning tasks, including data preprocessing, classification, and feature selection.

In this format, data is organized by entities and their attributes, and is contained in a single text file.
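
For a sense of what the Convert to ARFF module produces, here is a tiny hand-rolled ARFF writer in Python; the relation name, attributes, and rows are invented for the example.

```python
# Write a minimal ARFF file by hand; relation, attributes, and rows are made up.
rows = [(5.1, 3.5, "setosa"), (6.2, 2.9, "versicolor")]

with open("flowers.arff", "w") as f:
    f.write("@RELATION flowers\n\n")
    f.write("@ATTRIBUTE sepal_length NUMERIC\n")
    f.write("@ATTRIBUTE sepal_width  NUMERIC\n")
    f.write("@ATTRIBUTE species      {setosa,versicolor}\n\n")
    f.write("@DATA\n")
    for length, width, species in rows:
        f.write(f"{length},{width},{species}\n")
```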

Question 7.

You plan to create a speech recognition deep learning model.
The model must support the latest version of Python.
You need to recommend a deep learning framework for speech recognition to include in the Data Science Virtual Machine (DSVM).
What should you recommend?

  • A. Rattle
  • B. TensorFlow
  • C. Weka
  • D. Scikit-learn

Answer 7. B

TensorFlow is an open-source library for numerical computation and large-scale machine learning.

It uses Python to provide a convenient front-end API for building applications with the framework.
TensorFlow can train and run deep neural networks for handwritten digit classification, image recognition, word embeddings, recurrent neural networks, sequence- to-sequence models for machine translation, natural language processing, and PDE (partial differential equation) based simulations.
Incorrect Answers:
A: Rattle is the R analytical tool that gets you started with data analytics and machine learning.
C: Weka is used for visual data mining and machine learning software in Java.
D: Scikit-learn is one of the most useful libraries for machine learning in Python.

It is built on NumPy, SciPy, and matplotlib, and it contains many efficient tools for machine learning and statistical modeling, including classification, regression, clustering, and dimensionality reduction.
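
As a quick illustration of TensorFlow's Python front end, here is a tiny tf.keras sketch trained on random placeholder data; it is not a speech model, just a demonstration of the API.

```python
# Minimal tf.keras example on random placeholder data (not a real speech-recognition model).
import numpy as np
import tensorflow as tf

x = np.random.rand(128, 20).astype("float32")   # placeholder feature vectors
y = np.random.randint(0, 3, size=(128,))        # placeholder labels for 3 classes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32)
```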

Question 8.

You plan to use a Deep Learning Virtual Machine (DLVM) to train deep learning models using Compute Unified Device Architecture (CUDA) computations.
You need to configure the DLVM to support CUDA.
What should you implement?

  • A. Solid State Drives (SSD)
  • B. Central Processing Unit (CPU) speed increase by using overclocking
  • C. Graphics Processing Unit (GPU)
  • D. High Random Access Memory (RAM) configuration
  • E. Intel Software Guard Extensions (Intel SGX) technology

Answer 8. C

A Deep Learning Virtual Machine (DLVM) is a pre-configured environment for deep learning that uses GPU instances, and CUDA computations require a GPU.
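
A quick way to confirm that CUDA is actually usable on the GPU instance is a short PyTorch check like the sketch below (assuming PyTorch is installed on the DLVM).

```python
# Check CUDA visibility and run a small matrix multiply on the GPU (assumes PyTorch is installed).
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    a = torch.rand(1024, 1024, device="cuda")
    b = torch.rand(1024, 1024, device="cuda")
    print("Result checksum:", (a @ b).sum().item())
```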

Question 9.

You plan to use a Data Science Virtual Machine (DSVM) with the open source deep learning frameworks Caffe2 and PyTorch.
You need to select a pre-configured DSVM to support the frameworks.
What should you create?

  • A. Data Science Virtual Machine for Windows 2012
  • B. Data Science Virtual Machine for Linux (CentOS)
  • C. Geo AI Data Science Virtual Machine with ArcGIS
  • D. Data Science Virtual Machine for Windows 2016
  • E. Data Science Virtual Machine for Linux (Ubuntu)

Answer 9. E

Caffe2 and PyTorch are supported by the Data Science Virtual Machine for Linux.
Microsoft offers Linux editions of the DSVM on Ubuntu 16.04 LTS and CentOS 7.4.
Only the DSVM on Ubuntu is preconfigured for Caffe2 and PyTorch.
Incorrect Answers:
D: Caffe2 and PyTorch are only supported on the Data Science Virtual Machine for Linux.

Question 10.

You are developing a data science workspace that uses an Azure Machine Learning service.
You need to select a compute target to deploy the workspace.
What should you use?

  • A. Azure Data Lake Analytics
  • B. Azure Databricks
  • C. Azure Container Service
  • D. Apache Spark for HDInsight

Answer 10. C

Azure Container Instances can be used as a compute target for testing or development. Use it for low-scale, CPU-based workloads that require less than 48 GB of RAM.
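
For context, deploying a registered model to Azure Container Instances with the v1 azureml-core SDK looks roughly like the sketch below; the workspace config, model name, environment file, and scoring script are placeholders.

```python
# Sketch of deploying a registered model to ACI with the (v1) azureml-core SDK.
# Model name, environment file, and scoring script are placeholders.
from azureml.core import Workspace, Environment, Model
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()                              # assumes a local config.json
model = Model(ws, name="stock-model")                     # placeholder registered model
env = Environment.from_conda_specification("inference-env", "environment.yml")  # placeholder file

inference_config = InferenceConfig(entry_script="score.py", environment=env)    # placeholder script
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)

service = Model.deploy(ws, "stock-price-service", [model], inference_config, aci_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```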

————————————– Please comment if you need more Q & A ————————————–
