Using the Project Oxford Emotion API in C# and JavaScript

Machine Learning is a hot topic at the moment and Microsoft has some great tools for making Machine Learning accessible to developers, not just Data Scientists.

One of these tools is a set of REST APIs which are collectively called Project Oxford and/or Cortana Analytics. These services take some very clever Machine Learning algorithms which Microsoft has already applied to very broad sample data sets, and expose the resulting models via REST APIs. These map to common Machine Learning scenarios like Face Recognition, Computer Vision and Speech.

One of the more interesting APIs is the Emotion API, which can analyse the emotions shown in a photo. I was very excited about this API when I first heard about it. I know this because I uploaded a selfie to it and it told me I was excited. So I thought I'd set to work on a little sample which shows how to use the Emotion API in my two favourite languages: C# and JavaScript.

How does the Emotion API work?

The Emotion API works by taking an image which contains a face as input and returning a JSON result set which looks something like this:

{ "faceRectangle": { "left": 1235, "top": 525, "width": 677, "height": 677 }, "scores": { "anger": 0.1022928, "contempt": 0.579521, "disgust": 0.2233283, "fear": 0.000002794455, "happiness": 0.00103985844, "neutral": 0.09139749, "sadness": 0.00239139958, "surprise": 0.0000263756556 } } 

The JSON above is the result set for the image below which is my 3-year-old daughter, Madison.

My daughter, Madison

As you can see from the data, the API recognised her face (it can actually support up to 64 faces in a single image) and has analysed her emotions in the picture.

Madison's primary emotion was contempt, for which she scored 57%, then disgust, which was 22%, and a light smattering of anger at 10%. In case you are wondering, this is what a 3-year-old with very little respect for her father thinks when she is told that she cannot have another piece of chocolate!
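To rank the emotions like this yourself, you can sort the keys of the scores object by value. Here is a minimal sketch in plain JavaScript (not part of the original samples), using the scores from Madison's photo:

```javascript
// Sample "scores" object as returned by the Emotion API (values from the photo above)
var scores = {
  anger: 0.1022928, contempt: 0.579521, disgust: 0.2233283,
  fear: 0.000002794455, happiness: 0.00103985844,
  neutral: 0.09139749, sadness: 0.00239139958, surprise: 0.0000263756556
};

// Convert each score to a rounded percentage, then sort highest first
var ranked = Object.keys(scores)
  .map(function (emotion) {
    return { emotion: emotion, percent: Math.round(scores[emotion] * 100) };
  })
  .sort(function (a, b) { return b.percent - a.percent; });

console.log(ranked[0]); // → { emotion: 'contempt', percent: 58 }
```

(Rounding puts contempt at 58%; the 57% above comes from truncating rather than rounding.)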

I'll spend the rest of the article outlining how you work with the API. You can find code samples in both C# and JavaScript in my corresponding GitHub repo. If anyone wants to implement the API in another language or technology (Windows UWP, iOS, Android, Office add-ins, PHP, Node, PowerShell, Whatever-floats-yer-boat.js), I'm very open to pull requests! 🙂

Step 1: Get your key

As with almost any API, your first task is to obtain an API key. This is really easy to do for all Project Oxford APIs.

You simply go to the API's home page and click the 'Try for free' button. It will ask you to sign in with a Microsoft account.

Once you are signed in it will ask you to 'request new keys' and subscribe to the API. Most of the Project Oxford APIs have a free pricing tier which permits a number of requests per month for free. Typically it is 10,000 but can vary.

Once subscribed, you will be able to view your key. In most scenarios you only need your 'primary key', which will look something like this: 1dd1y4e23c5743139329688ba30a7153

Step 2: Make a request

You need to pass your API key as an Ocp-Apim-Subscription-Key header with your request.

You'll also need to pass the image in the body. As you'll see in the API Reference, the image is accepted as an application/json body (for a POST request) containing a URL to the image. The JSON in this case looks like this:

{ "url": "" } 

The API also accepts a binary image file in application/octet-stream format. This covers the more typical scenario where a user has uploaded an image to your application via a form or some kind of file picker.
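Putting the header and body together, the URL-based variant can be sketched like this (this is an illustration, not code from my samples; `buildUrlRequest` is a hypothetical helper, and the endpoint URL and key are placeholders you fill in from your own subscription):

```javascript
// Build the options for a URL-based Emotion API request.
// The header names match the API reference; apiKey is your subscription key.
function buildUrlRequest(imageUrl, apiKey) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Ocp-Apim-Subscription-Key": apiKey
    },
    body: JSON.stringify({ url: imageUrl })
  };
}

// Usage sketch with fetch (or adapt to $.ajax, as in the samples below):
// fetch(apiUrl, buildUrlRequest("https://example.com/photo.jpg", apiKey))
//   .then(function (r) { return r.json(); })
//   .then(function (faces) { console.log(faces); });
```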

Step 3. Do something awesome with the JSON

Once you have a JSON response, you can now use the data in your application as you would any other JSON result set.
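For example, since the result is an array of face objects (up to 64 per image), a small helper can tag each detected face with its highest-scoring emotion. This is a sketch of one way to do it, not part of the original samples:

```javascript
// Tag each face in an Emotion API result with its dominant emotion.
// "results" is the parsed JSON array returned by the API.
function tagFaces(results) {
  return results.map(function (face) {
    // Pick the emotion key with the highest score
    var top = Object.keys(face.scores).reduce(function (best, emotion) {
      return face.scores[emotion] > face.scores[best] ? emotion : best;
    });
    return { rectangle: face.faceRectangle, emotion: top };
  });
}
```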

Code samples

As mentioned at the top, I have implemented the Emotion API with both the URL and the File request options in both C# (via ASP.NET Core 1.0) and JavaScript with jQuery. You can see these implementations in my Project-Oxford-Emotion-API-sample GitHub repo.

For completeness, here are the main sections of code that you'll need.

JavaScript with jQuery File/Octet-Stream Sample

This is a full HTML file. You should be able to copy and paste this into an HTML file and it will work.

 <script src=""></script>
 <p>This example shows how to post an image to the Project Oxford Emotion API using a binary file</p>
 <input type="file" id="filename" name="filename">
 <button id="btn">Click here</button>
 <p id="response"></p>
 <script type="text/javascript">
 //apiKey: Replace this with your own Project Oxford Emotion API key, please do not use my key. I include it here so you can get up and running quickly but you can get your own key for free at 
 var apiKey = "1dd1f4e23a5743139399788aa30a7153";
 //apiUrl: The base URL for the API. Find out what this is for other APIs via the API documentation
 var apiUrl = "";
 $('#btn').click(function () {
 //file: The file that will be sent to the api
 var file = document.getElementById('filename').files[0];
 CallAPI(file, apiUrl, apiKey);
 function CallAPI(file, apiUrl, apiKey)
 url: apiUrl,
 beforeSend: function (xhrObj) {
 xhrObj.setRequestHeader("Content-Type", "application/octet-stream");
 xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key", apiKey);
 type: "POST",
 data: file,
 processData: false
 .done(function (response) {
 .fail(function (error) {
 function ProcessResult(response)
 var data = JSON.stringify(response);

C# File/Octet-Stream sample (ASP.NET Core 1.0)

This is a C# example using ASP.NET Core 1.0 MVC. This is not the full code, but an extract from the main 'Home' controller. See the GitHub repository for the full, working sample.

 // POST: Home/FileExample
 public async Task<IActionResult> FileExample(IFormFile file)
 {
     using (var httpClient = new HttpClient())
     {
         //setup HttpClient
         httpClient.BaseAddress = new Uri(_apiUrl);
         httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", _apiKey);
         httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/octet-stream"));

         //setup data object
         HttpContent content = new StreamContent(file.OpenReadStream());
         content.Headers.ContentType = new MediaTypeWithQualityHeaderValue("application/octet-stream");

         //make request
         var response = await httpClient.PostAsync(_apiUrl, content);

         //read response and write to view
         var responseContent = await response.Content.ReadAsStringAsync();
         ViewData["Result"] = responseContent;

         return View();
     }
 }

In Summary

The Project Oxford APIs are very simple to get started with and use in your applications. Hopefully the sample in this article and the accompanying GitHub Repository will help you get up and running.

I plan to do similar articles on other Project Oxford and Cortana Analytics APIs. Please use the comments if there are particular APIs you'd like to see examples of.


Comments (7)


  1. o365spo says:

    Excellent. Martin, by chance do you have a JavaScript REST sample returning speech from a text string? Thanks JC

    1. Martin Kearn says:

      Thanks – I don’t have an example like this but a few people have asked for JS examples of the Speech API so may consider it for a future article

  2. Alejandro says:

    Thanks very much Martin:
    I have tried this, but using a canvas to send the binary data from a camera screenshot. I always get the error "{"error":{"code":"BadArgument","message":"Invalid Media Type."}}"

    From my canvas, I get the toDataURL: var dataCanvas = canvas.toDataURL("image/jpg"); and send it by ajax as:

    url: "" + $.param(params),
    beforeSend: function(xhrObj){
    // Request headers
    type: "POST",
    processData: false,
    data: {
    imgBase64: dataCanvas


    Have tried lots of options but always get the same error response. Any help?

  3. pesquera says:

    Hi Martin, I really appreciate your help.. I'm new to C# and was looking for a UWP sample without much luck until now.. your sample is excellent!
    Now, I want to do some play with the Face API.. I could start with this one.. but, just to know.. do you already have done this sample with the face api? Thanks again

  4. Zic says:

    I have read your work.
    It’s excellent .
    I Really appreciate about your help.
    Thanks a lot.
    But I am facing some problem with emotion api with video.
    I do a little bit modification on your code , in order to test emotion api with video.

    I use javascript and I only edit the api url to recognizeinvideo and insert my own Ocp-Apim-Subscription-Key.
    var apiUrl = “”;
    => var apiUrl = “”;

    But it detects nothing, and if I upload a file which is not a video, it returns an error code.
    So I think I have successfully connected to the api.

    May I ask you that is there any possible problem with my code?

  5. Alan Oxford says:

    Hi Martin, did you ever get anywhere with JS examples of the Speech API? I could really do with some assistance.

  6. michael says:

    Very impressive.

    Can this API be used with live camera stream?
