This article originally appeared on Feb. 22, 2018.
When navigating the sea of tech buzzwords and trends, one topic has consistently held the interest of developers and consumers alike over the past couple of years. “Cognitive computing,” while old in concept and theory, is only now coming into widespread use across a wide array of products and applications.
The complexity and cost of aggregating training data, producing complicated deep learning algorithms and training machine learning models pose a problem for smaller companies that want to integrate Artificial Intelligence (AI) and machine learning into their applications.
Fortunately, Microsoft Cognitive Services gives us a way to access cognitive functionality through an easy-to-use Software Development Kit (SDK) and a powerful API. Though the SDK is still in preview, we can already start learning how to interface with these powerful APIs and how to introduce value-adding business logic into our applications.
This article is aimed at helping developers get familiar with working with the Microsoft Cognitive Services Suite and demonstrates just how easy it is to build a valuable application using social media data and Cognitive Services.
In this demo, we’ll be using ASP.NET Core 2.0, the Twitter API and Microsoft Cognitive Services to build a simple MVC application that searches Twitter for Tweets mentioning a given account, analyzes those Tweets for sentiment and key phrases, and curates the results of the Cognitive Services API so that we can derive some helpful output.
This type of functionality can be very useful if you have a large set of text to analyze for patterns. I invite you to clone this project from GitHub to have as a reference while you read.
The Twitter API
For the sake of brevity, I’ve already configured my application to access the Twitter API and implemented some code with the help of the CoreTweet library to search Twitter for Tweets mentioning a user-entered Twitter handle. For more information on registering your application and using the Twitter API, view the Twitter API documentation.
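As a rough illustration of what that search code can look like with CoreTweet (a sketch only: the credential parameters are placeholders you’d supply from your registered Twitter application, and the class and method names here are illustrative, not taken from the project):

```csharp
// Illustrative sketch using the CoreTweet library. Credentials are
// placeholders; the query simply searches for mentions of a handle.
using System.Collections.Generic;
using System.Linq;
using CoreTweet;

public class TwitterSearchService
{
    private readonly Tokens _tokens;

    public TwitterSearchService(string consumerKey, string consumerSecret,
                                string accessToken, string accessTokenSecret)
    {
        // Credentials come from the application you registered with Twitter.
        _tokens = Tokens.Create(consumerKey, consumerSecret, accessToken, accessTokenSecret);
    }

    // Return recent Tweets that mention the given handle.
    public List<Status> SearchMentions(string handle)
    {
        var query = "@" + handle.TrimStart('@');
        // CoreTweet's expression-style overload: the lambda parameter name
        // (q, count) is the Twitter API parameter being set.
        return _tokens.Search.Tweets(q => query, count => 100).ToList();
    }
}
```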
Microsoft Cognitive Services
To access Microsoft Cognitive Services, we’ll first need an Azure subscription. If you don’t already have one, you can get one for free by visiting the Microsoft Azure page. Once we have an Azure subscription, we’ll need to set up a Cognitive Services resource in Azure.
Microsoft Cognitive Services offers an array of different API endpoints that provide different cognitive functionality, ranging from image and facial recognition to intelligent recommendations and semantic search. Learn more about the features offered by Microsoft Cognitive Services.
For our application, we’re going to make use of the Text Analytics API endpoint. This endpoint offers sentiment analysis and key phrase extraction features. To set up our Cognitive Services resource, we’ll want to log in to our Azure Dashboard and click the + New button in the top left corner. In the search box, we can search for Cognitive Services. Select the Cognitive Services option in the drop-down search menu, and this will bring us to the Create blade.
Once in the Create blade, we’ll need to give our service a name and select the subscription it’s associated with. We then want to select Text Analytics API from the drop-down list and select a Resource group. We won’t cover Resource groups in this tutorial, so we’ll just create a new Resource group called Cognitive Resources. For resource group location, I’m going to select East US 2. Now we can click the Create button, and Azure will generate access keys for us to use the Text Analytics API.
In the resource overview for the Cognitive Service resource, under the Keys tab, we can find our access key that we’ll use to access the Cognitive Services API. Microsoft provides a page for us to test our API subscription key. There, we can select our API region, enter our API key and input some test data. This is useful for viewing what the API accepts as input and what it provides for output.
The Microsoft Cognitive Services SDK gives us an easy way to interface with our API endpoint. We’ll need to add the Cognitive Services SDK to our project. We can add the SDK in Visual Studio by going to Tools > NuGet Package Manager > Package Manager Console and entering:
Install-Package Microsoft.Azure.CognitiveServices.Language -Version 1.0.0-preview
You can also do this through the NuGet Package Manager. Just make sure the Include Prerelease checkbox is selected.
For our demo, we’ll pull in the following namespaces:
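With the 1.0.0-preview package, those using directives most likely look like the following (the namespace layout was reorganized in later releases of the SDK, so treat these as specific to that preview version):

```csharp
// Client class (TextAnalyticsAPI) for calling the Text Analytics endpoint.
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics;
// Request/response model classes such as MultiLanguageBatchInput.
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics.Models;
```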
These contain the TextAnalyticsAPI class and associated model classes we need to interact with our Cognitive Services endpoint. This library is still in preview, but it will provide a clean and efficient way for us to interface with the Text Analytics API and will work just fine for this demo. However, if we wanted to use only HTTP requests to interface with the API, we certainly can.
For this demo, we’re going to use the following two features of the Text Analytics API:
- Sentiment Analysis
- Key Phrase Extraction
These two functionalities together are very powerful because we can derive negative or positive sentiment and isolate the subject of that attitude. When evaluating large sets of data, this can be useful for recognizing patterns and common subjects that bring about positive or negative reactions.
For our demo, I’ve added a service class to call the Text Analytics API. I called this class TextAnalyticsService.
I’m using dependency injection to inject all the dependencies our Text Analytics service needs. The textAnalyticsAPI instance is a singleton that contains the following properties:
- AzureRegion — This is an enum that’s included within our Cognitive Services model namespace. The current SDK only supports values for certain regions. For our project, I have this property set to Eastus2.
- SubscriptionKey — This API key is the one we got from our Keys tab in Azure. I have this set in our appsettings.json file, but it’s important to keep this key secure and never check it into source control.
We have a constant called _language that’s set to a value of “en” for English. In a real-life application, we could make use of the language recognition API Cognitive Services offers and map each Tweet to its proper language because Twitter users speak many different languages. For the sake of this demo, we’ll just assume all of the Tweets we’re analyzing are in English.
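Here’s a skeleton of what that service can look like. The type names (ITextAnalyticsAPI, AzureRegions) follow the 1.0.0-preview SDK, and the comments describe the configuration wiring rather than showing it, so treat this as an illustrative sketch:

```csharp
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics;

public class TextAnalyticsService
{
    // Assume English for every Tweet in this demo.
    private const string _language = "en";

    private readonly ITextAnalyticsAPI _textAnalyticsAPI;

    public TextAnalyticsService(ITextAnalyticsAPI textAnalyticsAPI)
    {
        // This instance is registered as a singleton in
        // Startup.ConfigureServices, with AzureRegion set to
        // AzureRegions.Eastus2 and SubscriptionKey read from
        // appsettings.json (never checked into source control).
        _textAnalyticsAPI = textAnalyticsAPI;
    }
}
```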
In our TextAnalyticsService, we’ve exposed a method called AnalyzeTweets. This method iterates through the list of Tweets and formats them for processing by the API.
The API calls to the sentiment and key phrases endpoints take a MultiLanguageBatchInput object as an argument. The SDK also includes a batch input model whose default language is English and an input model for a single piece of text, but because the SDK is still in preview, the methods used to call the APIs currently only accept a MultiLanguageBatchInput. To construct this MultiLanguageBatchInput, we must pass in a list of MultiLanguageInput. This object has three properties:
- Language — the language of the text we’re analyzing
- Id — a unique string Id for the text in the list. For our demo, I’m setting this from a counter. This Id field will be useful when we want to associate an API result with an input.
- Text — the text the API will analyze
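The formatting step inside AnalyzeTweets can be sketched like this (model names follow the preview SDK; the helper class and its tweetTexts parameter are illustrative, not from the project):

```csharp
using System.Collections.Generic;
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics.Models;

public static class BatchInputBuilder
{
    public static MultiLanguageBatchInput Build(IEnumerable<string> tweetTexts)
    {
        var documents = new List<MultiLanguageInput>();
        var counter = 0;

        foreach (var text in tweetTexts)
        {
            // The Id must be unique within the batch so that results can
            // be matched back to the Tweet that produced them.
            documents.Add(new MultiLanguageInput("en", (counter++).ToString(), text));
        }

        return new MultiLanguageBatchInput(documents);
    }
}
```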
Once our Tweets are ready for the API, we’ll invoke the KeyPhrasesAsync method and the SentimentAsync method of the TextAnalyticsAPI class. I’ve separated these into a private async method called GetAnalysisResultsAsync. This method will call our Cognitive Services endpoint, analyze the results for information and pass them back in a format our view can use.
When we invoke our API methods, we’ll load the results into a KeyPhraseBatchResult and a SentimentBatchResult object. We can then iterate through the results, use LINQ to pair each sentiment score with the key phrases extracted from the same text, and analyze our sentiment score.
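A sketch of that private method might look like the following. TweetAnalysis is an illustrative view-model type of our own, not part of the SDK, and the member names follow the 1.0.0-preview package:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics;
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics.Models;

// Hypothetical view model pairing a Tweet's sentiment with its key phrases.
public class TweetAnalysis
{
    public string Id { get; set; }
    public double? SentimentScore { get; set; }
    public IList<string> KeyPhrases { get; set; }
}

public class AnalysisSketch
{
    private readonly ITextAnalyticsAPI _textAnalyticsAPI;

    public AnalysisSketch(ITextAnalyticsAPI textAnalyticsAPI)
    {
        _textAnalyticsAPI = textAnalyticsAPI;
    }

    private async Task<List<TweetAnalysis>> GetAnalysisResultsAsync(MultiLanguageBatchInput input)
    {
        // Run both analyses over the same batch of documents.
        KeyPhraseBatchResult keyPhrases = await _textAnalyticsAPI.KeyPhrasesAsync(input);
        SentimentBatchResult sentiment = await _textAnalyticsAPI.SentimentAsync(input);

        // Join the two result sets on the document Id we assigned earlier.
        return sentiment.Documents
            .Select(s => new TweetAnalysis
            {
                Id = s.Id,
                SentimentScore = s.Score,
                KeyPhrases = keyPhrases.Documents.First(k => k.Id == s.Id).KeyPhrases
            })
            .ToList();
    }
}
```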
Two things we want to know about our sentiment are:
- The significance of the score
- The attitude the score denotes
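One way to turn the raw 0-to-1 sentiment score into something a view can display is a simple threshold check. The cutoffs below are our own choice for this demo, not values prescribed by the API:

```csharp
public static class SentimentInterpreter
{
    // Scores near 0 read as negative, near 1 as positive; scores close to
    // 0.5 carry little signal either way, so we treat them as neutral.
    // The 0.45/0.55 band is an arbitrary choice for this demo.
    public static string Describe(double score)
    {
        if (score >= 0.45 && score <= 0.55) return "Neutral";
        return score > 0.55 ? "Positive" : "Negative";
    }
}
```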
With the help of the Bootstrap UI framework, here are a few examples of our output:
These are just a few examples, but what we can see here is that some simple code and brief setup yield a payload of high-potential information. Although these single outputs may not offer much on their own, storing them, aggregating them into reports and running analytics on those reports could offer a lot of insight for targeted customer engagement.
We could see that maybe Razer should do more giveaways, or HP might need to rethink the way it manages its mailing list, or that avocado toast is a good candidate for a marketing campaign for Amazon and Whole Foods.
A brief disclaimer about this output screen: if you decide to set up the application and run it yourself (which I sincerely hope you will), be aware that we’re dealing with live, uncensored data. Since the application returns raw text, it’s possible (and quite likely) that it will return some profanity.
This demo is just a small sample of all of the capabilities Microsoft Cognitive Services has to offer. To expand on this project, we could take the same functionality and have it monitor our Twitter mentions on a daily or weekly basis and aggregate the results into useful reports. For example, we could implement a front-end library such as Chart.js to make a more robust output screen.
All of the project code is available on GitHub. We’ve just seen how easy it is to integrate Microsoft Cognitive Services into our applications. There are many other powerful features to explore with Microsoft Cognitive Services. I hope this tutorial has inspired and motivated you to dive headfirst into AI and machine learning integration with Microsoft Cognitive Services.