TL;DR: Thanks for stopping by. This article is, as usual, geek-oriented. However, if you are not a geek, and/or you are in a hurry, you can jump to the conclusion: Any real application?

During the month of May, I had the chance to attend the Google Next event in London and dotAI in Paris. At both conferences I learned a lot about machine learning.

What those great speakers taught me is that you should not reinvent the wheel in AI. A lot of research has already been done, and there are very good implementations of the latest efficient algorithms.

The tool that every engineer who wants to try AI must know is TensorFlow. TensorFlow is a generic framework developed by Google’s Machine Intelligence research organization. The tool was open-sourced last year and reached v1.0 earlier this year.

So what makes TensorFlow so great?

Bindings

First of all, it has bindings, so it can be used from various programming languages such as:

  • Python
  • C++
  • Java
  • Go

However, to be honest, mainly Python and C++ are covered in the documentation. And to be even more honest, I think that Python is the language you should use to prototype applications.

ML and neural network examples

TensorFlow is easy to use for machine learning, and a lot of deep-learning implementations are available. It is actually very easy to download a pre-trained model and use it to recognize pictures, for example.

Built-in computation at scale

TensorFlow’s model has a built-in way to perform distributed computation. This really matters, as machine learning is usually a very computation-intensive task.

GCP’s ML engine

TensorFlow is the engine used by Google for their service called ML Engine. That means that you can write your functions locally and run them serverless in the cloud. You only pay for what you have actually consumed. That means, for example, that you can train a neural network on GCP (so you don’t need a GPU, TPU, or any other computing power locally) and then transfer the trained model to your machine.

For example, this is how the mobile app Google Translate works: a pre-trained model is downloaded to your phone, and the live translation is done locally.


Note: other ML services from GCP, such as Cloud Vision, Translate, or image search, are “just” APIs that query a neural network with a model trained by Google.

So What?

I want to play with image recognition. I have already done a test with AWS’s Rekognition service (see this post). However, the problems were:

  • I relied on a low-level webcam implementation, so the code was not portable;
  • I had no preview of what my computer was looking at;
  • I could not run it on a mobile device for a demo.

As I have been using a Chromebook for a while, I found a solution: use a JavaScript API and the Chrome browser to access the camera. The pictures can then be transferred to a backend via a websocket. The backend does the ML and replies with whatever information via the websocket. I can then display the result, or even use Chrome’s speech API to say the result out loud.

Chrome as the eye of the computer

The idea is to get a video stream and grab pictures from this stream in order to activate a neural network.

I will present different objects in front of my webcam, and their name will be displayed on the screen.

The architecture is client-server: Chrome is the eye of my bot, and it communicates with the brain (a webservice in Go running a pre-trained TensorFlow neural network) via a websocket.

The rest of this section is geek/JavaScript material; if you’re not interested, you can jump to the next section about the brain implementation, called Cortical.

getUserMedia

I am using the Web API MediaDevices.getUserMedia() to open the webcam and get the stream.

This API is supported by Chrome on desktop and on Android phones (but not on iOS). This means that I will be able to use a mobile phone as the “eye” of my bot.

See the compatibility matrix here

Here is the code to get access to the camera and display the video stream:

HTML

<body>
  <video autoplay id="webcam"></video>
</body>

JavaScript

// use MediaDevices API
// docs: https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia
// the <video> element declared in the HTML above
var video = document.getElementById('webcam');
if (navigator.mediaDevices) {
    // access the web cam
    navigator.mediaDevices.getUserMedia({video: true})
      // permission granted:
      .then(function(stream) {
          video.src = window.URL.createObjectURL(stream);
      })
      // permission denied:
      .catch(function(error) {
          document.body.textContent = 'Could not access the camera. Error: ' + error.name;
      });
}

Websockets

According to Wikipedia’s definition, WebSocket is “a computer communications protocol, providing full-duplex communication channels over a single TCP connection”. The full-duplex mode is important in my architecture.

Let me explain why with a simple use case:

Imagine that your eye captures a scene and sends it to the brain for analysis. In a classic RESTful architecture, the browser (the eye) would perform a POST request. The brain would reply with a process ID, and the eye would poll the endpoint every x seconds to get the processing status.

This can be tedious in case of multiple stimuli.

Thanks to the websocket, the eye can send the query and the server will push an event back once the processing is done. Of course, this is stateless in a way, as the query is lost once the browser is closed.

Another use case would be to get a stimulus from another “sense”. For example, imagine that you want to “warn” the end user that they have been mentioned in a tweet. The brain can be in charge of polling Twitter, and it would send a message through the websocket when an event occurs.
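To make the full-duplex idea concrete on the brain side, here is a minimal sketch of a server-initiated push with the gorilla/websocket package (the same websocket package the brain uses later in this post). The ticker stands in for the Twitter polling, and all names are illustrative:

package main

import (
    "log"
    "net/http"
    "time"

    "github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

func serveWS(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Println(err)
        return
    }
    defer conn.Close()
    // Push a message to the browser periodically, without waiting
    // for any request: this is what full duplex buys us.
    ticker := time.NewTicker(30 * time.Second)
    defer ticker.Stop()
    for range ticker.C {
        // In a real "sense", the result of the Twitter polling would go here.
        err := conn.WriteMessage(websocket.TextMessage, []byte("you have been mentioned in a tweet"))
        if err != nil {
            return
        }
    }
}

func main() {
    http.HandleFunc("/ws", serveWS)
    log.Fatal(http.ListenAndServe(":8080", nil))
}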

Connecting to the websocket

A websocket URI is prefixed by ws, or wss if the communication is encrypted (i.e. the page is served over https). This code opens a connection over ws(s):

var ws;
// Connecting the websocket
var loc = window.location, new_uri;
if (loc.protocol === "https:") {
  new_uri = "wss:";
} else {
  new_uri = "ws:";
}
new_uri += "//" + loc.host + "/ws";
ws = new WebSocket(new_uri);

Messages

Websocket communication is message-oriented. A message can be sent simply by calling ws.send(message). Websockets support both text and binary messages, but for this test only text messages will be used (images will be base64-encoded).

The browser implementation of websockets in JavaScript is event-based. When the server sends a message, an event is fired and the ws.onmessage callback is triggered.

This code will display the message received on the console:

ws.onmessage = function(event) {
  console.log("Received:" + event.data);
};

Sending pictures to the websocket: actually seeing

I didn’t find a way to send the video stream to the brain via the websocket. Therefore, I will do what everybody does: create a canvas and “take” a picture from the video:

The method toDataURL() will take care of encoding the picture in a well-known format (png or jpeg).

function takeSnapshot() {
  var context;
  var width = video.offsetWidth
  , height = video.offsetHeight;

  canvas = canvas || document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;

  context = canvas.getContext('2d');
  context.drawImage(video, 0, 0, width, height);

  var dataURI = canvas.toDataURL('image/jpeg')
  //...
};

To make the processing in the brain easier, I will serialize the picture into a JSON object and send it via the websocket:

var message = {"dataURI":{}};
message.dataURI.content = dataURI.split(',')[1];
message.dataURI.contentType = dataURI.split(',')[0].split(':')[1].split(';')[0];
var json = JSON.stringify(message);
ws.send(json);
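On the Go side of the wire, this message is easy to decode. Here is a minimal sketch of what the matching struct and base64 decoding could look like; the names are mine, but the message type used by the cortex later in this post follows the same idea:

package main

import (
    "encoding/base64"
    "encoding/json"
    "fmt"
    "log"
)

// message mirrors the JSON sent by the browser.
type message struct {
    DataURI struct {
        Content     string `json:"content"`     // base64-encoded image
        ContentType string `json:"contentType"` // e.g. "image/jpeg"
    } `json:"dataURI"`
}

// decodeSnapshot extracts the raw image bytes and their content type.
func decodeSnapshot(raw []byte) ([]byte, string, error) {
    var m message
    if err := json.Unmarshal(raw, &m); err != nil {
        return nil, "", err
    }
    img, err := base64.StdEncoding.DecodeString(m.DataURI.Content)
    if err != nil {
        return nil, "", err
    }
    return img, m.DataURI.ContentType, nil
}

func main() {
    raw := []byte(`{"dataURI":{"content":"aGVsbG8=","contentType":"image/jpeg"}}`)
    img, contentType, err := decodeSnapshot(raw)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(contentType, len(img), "bytes")
}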

Bonus: ear and voice

It is relatively easy to make Chrome speak a received message out loud. This snippet defines a function that reads a message aloud:

function talk(message) {
  var utterance = new SpeechSynthesisUtterance(message);
  window.speechSynthesis.speak(utterance);
}

Therefore, simply adding a call to this function in the “onmessage” event of the websocket will trigger the voice of Chrome.

Listening is a bit trickier. It is done with the webkitSpeechRecognition() API. This blog post explains in detail how it works.

This API is also event-based. What’s important is that, in Chrome, it relies by default on an API call to Google’s speech engine, so the recognition won’t work offline.

When the speech processing is done by Chrome, five candidate sentences are stored in a JSON array. The following snippet takes the most relevant one and sends it to the brain via the websocket:

// "recognition" and "final_transcript" are set up as described in the post linked above
recognition.onresult = function(event) {
  for (var i = event.resultIndex; i < event.results.length; ++i) {
    if (event.results[i].isFinal) {
      final_transcript += event.results[i][0].transcript;
      ws.send(final_transcript);
    }
  }
};

Now that we have set up the senses, let’s make a “brain”

The brain: Cortical


Now, let me explain what is, in my opinion, the most interesting part of this post. So far, all I have done is a bit of JavaScript to grab a picture. This is not a big deal, and there is no machine learning yet (besides the speech recognition built into Chrome). What I need now is to actually process the messages so the computer can tell what it sees.

For this purpose I have developed a message dispatcher. This dispatcher, called Cortical, is available on GitHub.

Here is an extract from the README of the project:


What is Cortical?

Cortical is a piece of Go middleware that acts as a message dispatcher. The messages are transmitted in full duplex over a websocket. Cortical is therefore a very convenient way to distribute messages to “processing units” (other Go functions) and to get the responses back in a concurrent and asynchronous way.

The “processing units” are called Cortexes and do not need to be aware of any web mechanism.


So far, so good: I can simply create a handler in Go to receive the messages sent by the Chrome browser:

brain := &cortical.Cortical{
    Upgrader: websocket.Upgrader{},
    Cortexes: []cortical.Cortex{
        &sampleTensorflowCortex{}, // the tensorflow cortex implemented below
    },
}
http.HandleFunc("/ws", brain.ServeWS)
log.Fatal(http.ListenAndServe(":8080", nil))

Note: concurrency and asynchronicity are really built into Cortical; this is actually what makes this code so helpful.

Cortexes

Cortexes are processing units. They are where messages are analyzed and where the ML magic happens.

From the readme, I quote:


A cortex is any Go code that provides two functions:

  • A “send” function that returns a channel of []byte. The content of the channel is sent to the websocket once available (cf GetInfoFromCortexFunc)
  • A “receive” method that takes a pointer to a []byte. This function is called each time a message is received (cf SendInfoToCortex)

A cortex object must therefore be compatible with the cortical.Cortex interface:
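Reconstructed from the cortex example below (check the project’s godoc for the authoritative definition), the interface looks roughly like this:

// Reconstructed for illustration; the real definitions live in the cortical package.
package cortical

import "context"

// GetInfoFromCortexFunc returns a channel of []byte; anything written to the
// channel is sent to the websocket as soon as it is available.
type GetInfoFromCortexFunc func(ctx context.Context) chan []byte

// SendInfoToCortex is called each time a message is received from the websocket.
type SendInfoToCortex func(ctx context.Context, msg *[]byte)

// Cortex is what the dispatcher expects from a processing unit.
type Cortex interface {
    NewCortex(ctx context.Context) (GetInfoFromCortexFunc, SendInfoToCortex)
}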


Ok, let’s build Cortexes!

A tensorflow cortex running locally

The tensorflow Go package is a binding to libtensorflow.so. It has a very nice example described in the godoc here. This example uses a pre-trained Inception model (http://arxiv.org/abs/1512.00567). The program starts by downloading the pre-trained model, creates a graph, and tries to guess labels for a given image.

I will simply implement the expected interface to turn this example into a cortex compatible with my previous declaration (some error checking and some code have been omitted for clarity):

type sampleTensorflowCortex struct{}

func (t *sampleTensorflowCortex) NewCortex(ctx context.Context) (cortical.GetInfoFromCortexFunc, cortical.SendInfoToCortex) {
    c := make(chan []byte)
    class := &classifier{
        c: c,
    }
    return class.Send, class.Receive
}

type classifier struct {
    c chan []byte
}

func (t *classifier) Receive(ctx context.Context, b *[]byte) {
    var m message
    // unmarshalling of b into m and error handling omitted for brevity
    tensor, err := makeTensorFromImage(m.DataURI.Content)
    output, err := session.Run(
        map[tf.Output]*tf.Tensor{
            graph.Operation("input").Output(0): tensor,
        },
        []tf.Output{
            graph.Operation("output").Output(0),
        },
        nil)
    probabilities := output[0].Value().([][]float32)[0]
    label := printBestLabel(probabilities, labelsfile)
    t.c <- []byte(fmt.Sprintf("%v (%2.0f%%)", label.Label, label.Probability*100.0))
}

func (t *classifier) Send(ctx context.Context) chan []byte {
    return t.c
}

Demo

This demo was made on my Chromebook, which has only 2 GB of RAM, with the tensorflow library compiled without any optimization. It works!

The code is here.

In the cloud with AWS

Now that I have seen that it works on my Chromebook, maybe I can use a cloud API to recognize faces, for example. Let’s try with AWS’s Rekognition service.

I will use the face compare API to check whether the person in front of the webcam is me. I will provide a sample picture of me to the cortex.

I took the sample picture at work, to make the task a little bit trickier for the engine, because the environment will not match what it sees through the webcam.

I won’t dig into the code that can be found here.
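For the curious, the heart of that cortex is a single CompareFaces call. Here is a minimal sketch with the aws-sdk-go Rekognition client; the function name and the similarity threshold are illustrative, and the real code is in the repository linked above:

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/rekognition"
)

// compareWithReference asks Rekognition whether the webcam snapshot
// contains the same face as the reference picture.
func compareWithReference(reference, snapshot []byte) (bool, error) {
    svc := rekognition.New(session.Must(session.NewSession()))
    out, err := svc.CompareFaces(&rekognition.CompareFacesInput{
        SourceImage:         &rekognition.Image{Bytes: reference},
        TargetImage:         &rekognition.Image{Bytes: snapshot},
        SimilarityThreshold: aws.Float64(80),
    })
    if err != nil {
        return false, err
    }
    return len(out.FaceMatches) > 0, nil
}

func main() {
    // In the real cortex, "reference" is loaded from a file and "snapshot"
    // comes from the websocket message decoded earlier.
    var reference, snapshot []byte
    match, err := compareWithReference(reference, snapshot)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("is it me?", match)
}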

And does it work?

Cool!

Any real application?

This is really fun and exciting. Now I will be able to code a memory cortex to fetch a training set. Then I will play with TensorFlow. And do not think that everything has already been done; this area is full of surprises to come (see Moravec’s paradox).

On top of that, we can imagine a lot of applications. This service works out of the box on Android (and it will on iOS as soon as Apple supports the getUserMedia interface). I imagine a simple web app (no need for an APK) that would warn you when it sees someone it knows.

I also imagine a web gallery where the webcam would watch your reaction to different items and then tell you which one was your favorite.

Indeed, there may be a lot of great applications for e-commerce.

You could turn your laptop into a CCTV system that warns you when an unknown person is in the room. We would do some preprocessing to detect humans before actually sending anything to the cloud. That would be cheaper and a lot more efficient than the crappy computer vision implemented in most webcams.

And finally, combined with react.js, this can be used to do magic keynotes… But I will keep that for another story.

To conclude, I will leave you with this XKCD from September 2014. It is only three years old, and yet so many things have already changed:

XKCD