100% Real Microsoft Azure AI AI-102 Exam Questions & Answers, Accurate & Verified By IT Experts
Instant Download, Free Fast Updates, 99.6% Pass Rate
AI-102 Premium File: 263 Questions & Answers
Last Update: Oct 25, 2024
AI-102 Training Course: 74 Video Lectures
AI-102 PDF Study Guide: 741 Pages
€79.99
Microsoft Azure AI AI-102 Practice Test Questions in VCE Format
| File | Votes | Size | Date |
|---|---|---|---|
| Microsoft.train4sure.AI-102.v2024-09-25.by.freddie.65q.vce | 1 | 3 MB | Sep 25, 2024 |
| Microsoft.examquestions.AI-102.v2021-12-27.by.ida.57q.vce | 1 | 1.55 MB | Dec 27, 2021 |
| Microsoft.train4sure.AI-102.v2021-10-18.by.jose.53q.vce | 1 | 1.47 MB | Oct 18, 2021 |
| Microsoft.selftesttraining.AI-102.v2021-07-20.by.lola.41q.vce | 1 | 1.14 MB | Jul 20, 2021 |
| Microsoft.train4sure.AI-102.v2021-05-14.by.leja.25q.vce | 1 | 1.65 MB | May 14, 2021 |
| Microsoft.pass4sure.AI-102.v2021-04-30.by.christopher.14q.vce | 1 | 566.78 KB | Apr 30, 2021 |
Microsoft Azure AI AI-102 Practice Test Questions, Exam Dumps
Microsoft AI-102 Designing and Implementing a Microsoft Azure AI Solution exam dumps in VCE format, practice test questions, study guide, and video training course to help you study and pass quickly and easily. You will need the Avanset VCE Exam Simulator to open the Microsoft Azure AI AI-102 certification exam dumps and practice test questions in VCE format.
So the last element we'll talk about in this overview of Cognitive Services is the concept of Cognitive Services in containers. If you're not familiar, a container is basically a deployment model that has you package up your code with all of its dependencies into what's called an image. You build that image in your local environment, and then you simply push the same image from development to staging to production, without having to recompile and without the complexity of deploying multiple components that all depend on each other. It's a much simpler deployment model.

In the context of Cognitive Services, being able to deploy a service inside a container means you can run the Cognitive Services API in your own environment, without relying on the cloud. Microsoft has taken some of its services, such as anomaly detection and language services like text analytics, key phrase extraction, and sentiment analysis, and packaged them as Docker images. With a single docker pull command, you get a container holding everything needed to run the cognitive service in your local environment, without having to go to the cloud.

Now, pricing still matters: Microsoft charges you based on consumption, so you have to configure your container to send metering data to Azure. These containers are basically not licensed to run without being connected to Azure for metering. To be clear, you're not using the cloud to run the cognitive service, and you're not uploading your images, your text, or your speech into the cloud over the open internet for analysis. The analysis happens on your local system; the container connects to Azure only to report the usage counts.

So there's a privacy benefit, and this can also be faster in ways you control: if you have a very demanding application, you can manage the performance yourself. There's no SLA associated with this, though. Microsoft cannot guarantee uptime for containers you run yourself, because that's totally within your control, obviously. But this is another way of deploying Cognitive Services within your own environment, outside of Azure, and it certainly reduces your reliance on external factors such as an Azure region experiencing issues, or the internet experiencing slowness and lost packets, and so on.

Now, not every cognitive service is available in a container. Some of the language services are here, but not all of them. Then, for something like speech, you have to make a request: you can't just get speech-to-text running on your local machine; you need to let Microsoft know, and they will approve you for it. With the vision services, recall the big announcement that Microsoft is not allowing police forces to use its facial recognition services until there are laws and regulations in place. So some of the vision services are available, but here's an example: the Face service is not available to run in a container on your local network. You can see that availability varies from service to service.

So containerized services are good for very specific circumstances. Certainly, if you don't want to manage the servers and the uptime yourself, leave it to Azure and deploy this as a cloud-based service.
But if you are more concerned about privacy, or if the lag time of submitting images, voice, and video over the internet for cloud-based analysis is a concern, then running the service on your own local network is a good option as well.
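As a concrete illustration, here is roughly what pulling and running one of these containers looks like. This is a minimal sketch only: the image shown is the sentiment analysis container, but the exact registry path and tag vary by service and version, and the endpoint and key placeholders are assumptions you must replace with your own resource's values.

```bash
# Pull a Cognitive Services container image (exact path/tag varies by service)
docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment

# Run it locally on port 5000. Billing and ApiKey point at YOUR Azure resource
# so the container can report metering data -- it will not start without them.
docker run --rm -it -p 5000:5000 \
  mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment \
  Eula=accept \
  Billing=https://<your-resource>.cognitiveservices.azure.com/ \
  ApiKey=<your-key>
```

Once the container is up, your application calls the API on localhost instead of the cloud endpoint; only the usage metering travels to Azure.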
In the following few sections of the course, we're dealing with the next topic of this exam: implementing computer vision solutions, currently worth 20% to 25% of the exam score. Each of the exam's topics is covered in its own series of videos. The first topic is Computer Vision, which, as you know, is one of the core services provided by Cognitive Services. It has to do with image descriptions and tags; identifying landmarks, celebrities, and brands; adult content moderation; and thumbnails. In the following sections of the course, we'll deal with the other parts of this objective, including text in images, facial recognition, image classification using Custom Vision, object detection using Custom Vision, and finally the Video Indexer service. So there's a lot of content, and we're going to be covering a lot of topics, but again, this set of sections is only about a quarter of the exam. So stay tuned. In the next set of videos, we're going to get right into Computer Vision, and we're going to be looking at the SDK for Python and how it all gets implemented. So stand by.
So in this section of the course, we're going to be looking at Computer Vision, and we're going to start looking at code. Now, I have an AI-102 files repository on my GitHub; the link has been provided previously. Under the AI-102 files, I want you to go into Computer Vision, and we're going to be talking about the first project under there, which is called Analyze Images Using Computer Vision API. In this video, we're specifically talking about tags and descriptions.

Looking at the code on GitHub, we have to import the Computer Vision client and, obviously, get the credentials. The first thing we do is set a variable holding our Cognitive Services credentials. These are provided by Azure: when you create your Cognitive Services account, you get your endpoint and your key. To access the Computer Vision service, we provide the credentials and the endpoint together to create a Computer Vision client.

In this example, we're going to try a couple of different ways of extracting visual tags from images. The first image is hard-coded as a remote image URL; these images are hosted on my GitHub. Feel free to clone the repository and run it in your own local environment, or to fork the repository and host the images yourself. The image is clearly some type of ancient ruin, and we're going to ask Microsoft Azure to extract the tags from it. The method for that is called tag_image. We simply create the client and call tag_image, passing in the remote URL, and it returns an array of tags relating to the image. We then iterate over the tags with a for loop and print out each tag's name and confidence.

You can also use a local image. If you have images hosted in your own environment, say on a network drive, there's another method for that called tag_image_in_stream. It uploads the image as binary to the Computer Vision service, and we get the same kind of result.

So let's have a look at how that runs. I'm going to load this code into PyCharm, right-click on the tab, and choose Run, and that actually executes the code. Remember, it's calling the remote Cognitive Services API. It identifies the image as containing a building with 99.9% confidence; outdoor, sky, and ruins with around 76% confidence; and an amphitheatre with 56% confidence. So the Azure Cognitive Services Computer Vision API has done a pretty good job of identifying that image: it's almost exactly what I said when I looked at the image myself. You saw how fast that was and how easy it was to set up. It also came up with the same results on the local image as it did on the remote image.
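For reference, here is a minimal sketch of what that tagging code looks like with the Python SDK. The endpoint, key, and image URL below are placeholder assumptions, and the actual script in the repository may differ in its details.

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

# Placeholders -- use the endpoint and key from your own Cognitive Services resource
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"

# Credentials plus endpoint together create the Computer Vision client
client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# Hypothetical URL; any publicly reachable image works
remote_image_url = "https://example.com/ancient-ruin.jpg"

# Extract visual tags from the remote image and print name + confidence
tags_result = client.tag_image(remote_image_url)
for tag in tags_result.tags:
    print(f"{tag.name}: {tag.confidence * 100:.1f}%")

# For a local image, use the *_in_stream variant and pass an open binary stream
with open("ancient-ruin.jpg", "rb") as image_stream:
    local_tags = client.tag_image_in_stream(image_stream)
```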
So now we're going to look at the image description. This is still on GitHub, under the retrieve_image_descriptions.py file. We're using the same Computer Vision client and Cognitive Services credentials, and the image we're going to analyse is the same landmark; really, the only thing that's different is the describe_image method. We call the client's describe_image method with the remote URL, and instead of getting a list of tags, we get a list of captions. We then iterate over the captions that are returned and print the caption text and the confidence score. We can also do this with a local image, which again uses the in-stream method: we pass in the location of our local image, and it is uploaded and analysed in the same way.

So let's switch over to PyCharm and see how that goes. We load the code into PyCharm, make sure the Computer Vision endpoint and key are set, and hit Run, and it goes and gets the description of the image. It says "a large stone structure with many arches and the Colosseum in the background", with a confidence of only 26%. Now, if we look at the image again, we can see that it actually does look like the Colosseum; it certainly is an amphitheatre among ancient ruins. So the description is pretty solid for something generated by a computer. It does have a low confidence score, but you can't really blame it for that: there could very well be many amphitheatres around the world that look very similar. It has a 26% confidence level, but that's the highest confidence it has for any description. So with the tags, and now with the description, we can see that it's pretty easy to call the Cognitive Services Computer Vision API to retrieve these things and have Microsoft Azure analyse the contents of images.
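Continuing with the client and remote_image_url from the sketch above, the description call looks something like this; again, treat it as an illustrative sketch rather than the exact repository code.

```python
# Get machine-generated captions instead of tags
description = client.describe_image(remote_image_url)
for caption in description.captions:
    print(f"'{caption.text}' ({caption.confidence * 100:.0f}% confidence)")

# Local-image variant: upload the image as a binary stream
with open("ancient-ruin.jpg", "rb") as image_stream:
    local_description = client.describe_image_in_stream(image_stream)
```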
Now, what we've been using here with this Computer Vision API are pre-built, general-purpose machine learning models. In the past two examples, with tags and descriptions, we just asked generically for the service to look at the image, identify tags, and extract descriptions. But Microsoft also has what are called domain-specific models. Right now there are two: one for celebrities and one for landmarks. If we know our images contain celebrities, we can call the celebrity-specific model, which has been trained on photos of celebrities. If we know an image contains a landmark, we can call the landmark-specific model. Perhaps the landmark model will provide better results for our Colosseum image than the generic model did.

So let's have a look at how that looks in reality. Switching back to GitHub, we move over to the identify_landmarks.py script. We're using the same client and the same credentials, so everything is the same up to the point where we give the Computer Vision client a remote image URL. In this case, we call the client's analyze_image_by_domain method, saying, in effect: tell me what this image contains, using the landmarks model, which, like I said, has been trained specifically on landmarks. The result comes back in a slightly different shape: the response contains an array of landmarks, with an array of results inside it, and we check whether the service can detect the name of this landmark. We can do the same thing for a local image, again using the in-stream method to upload a local image against the landmark model.

Switching over to PyCharm, we execute identify_landmarks and look at the results. It reports that a landmark exists in the remote image: the Colosseum. So you can see it has pulled out the actual name. These are not tags or descriptions; it's determining what the landmark is.

Similarly, if we switch over to identify_celebrities.py, everything is the same except we call analyze_image_by_domain and pass in the celebrities model. In this case, we're going to use a fairly standard Microsoft Cognitive Services sample image of a family, which presumably Microsoft has the rights to use, and see if the service can identify any celebrities among the faces. Again, we can do this with a local image using the in-stream method. Switching to PyCharm, we execute the code, and it comes back with a result that reads "Burn Cola Sal", and I don't know who that is. I'm probably out of the loop when it comes to celebrities, but let's see if Google agrees. If I do a search for the name, I see a variety of images, including the Microsoft reference image, and the gentleman second from the left actually does look like that actor. Hopefully I'm saying his name correctly, but this seems to be the correct result.

So we can see that Microsoft does have celebrities and landmarks as domain-specific machine learning models. If you know that your image sources fall into those areas, you can get better, more specific results, instead of having just tags or descriptions to rely on.
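Using the same client and remote_image_url as before, a domain-specific call can be sketched roughly as follows. The dictionary keys reflect the shape of the landmarks and celebrities results, and the family-photo URL is a placeholder; treat the details as illustrative.

```python
# Analyse against the landmark-specific model
landmarks = client.analyze_image_by_domain("landmarks", remote_image_url)
for landmark in landmarks.result["landmarks"]:
    print(f"Landmark: {landmark['name']} ({landmark['confidence'] * 100:.0f}%)")

# Analyse against the celebrity-specific model (placeholder URL)
celebs = client.analyze_image_by_domain("celebrities", "https://example.com/family.jpg")
for celeb in celebs.result["celebrities"]:
    print(f"Celebrity: {celeb['name']} ({celeb['confidence'] * 100:.0f}%)")
```

Unlike tag_image and describe_image, which return typed tag and caption objects, analyze_image_by_domain returns its results as a raw dictionary keyed by the domain model's name.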
Go to the testing centre with ease of mind when you use Microsoft Azure AI AI-102 VCE exam dumps, practice test questions and answers. Microsoft AI-102 Designing and Implementing a Microsoft Azure AI Solution certification practice test questions and answers, study guide, exam dumps and video training course in VCE format will help you study with ease. Prepare with confidence using Microsoft Azure AI AI-102 exam dumps and practice test questions and answers in VCE format from ExamCollection.
Purchase Individually
Microsoft AI-102 Video Course