Machine Learning for Creators?

I’ve argued in the past that the biggest growth area in machine learning practice over the next year or two could well be around inferencing, rather than training. It is the existence of pre-trained models that lets us easily and quickly build prototypes and projects on top of machine learning. Which, it turns out, is exactly what most people want; most people aren’t focused on the machine learning itself, they just want to get something done.

We’ve seen this sort of simplification of the field already from the folks at Xnor, with the release of their AI2GO framework. The framework heavily abstracts away the underlying machine learning, delivering pre-trained models as a ‘bundle’ that you can link against in your Python code. Inferencing using the bundle can then be done in just a single line of code.

However, the new RunwayML application, now released in public beta, takes this all a step further. This is machine learning as you haven’t seen it before. No experience needed. No code needed. This is machine learning for creators and makers, not developers.

RunwayML beta with a MobileNet V2 SSD model trained against the COCO dataset. 🍌🍎 (📷: Alasdair Allan)

Announced back in June, the Runway application sits on top of a pre-trained model zoo—although it’s also possible to port existing models to run in the Runway application. But unlike Xnor’s AI2GO framework, which sits on top of their new generation of binary weight models, Runway has all of the familiar models that any machine learning practitioner would expect.

In fact, while the process of using the model is far more abstracted in Runway, the underlying model type you’re using is far more visible here than with the AI2GO bundle maker, which goes out of its way to abstract the underlying model away from the end user. Although mostly, I rather suspect, to hide the interesting new proprietary models from them.

Selecting a ‘Kitchen Object Detector’ model in the AI2GO bundle maker. (📷: Alasdair Allan)

Models in the Runway application can either run locally, if you have Docker Desktop running on your laptop, or remotely on Runway’s GPU-enabled cloud infrastructure. Some models are only available on the remote GPU instances, and can’t be downloaded and run locally. However, running models remotely costs money. During the beta period, running models on Runway’s back end infrastructure costs $0.05 per minute, although you do get $10 in free GPU credits when you sign up for a Runway account.
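For a sense of scale, that free credit buys a little over three hours of remote inference at the beta price:

```python
credit_cents = 10 * 100   # $10 of free GPU credit on sign-up
rate_cents = 5            # $0.05 per minute during the beta period
minutes = credit_cents // rate_cents
print(minutes)            # 200 minutes, i.e. 3 hours 20 minutes
```

Enough to kick the tyres properly, though not enough for anything sustained.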

However, things get really interesting because, while you might not need to be able to code to use models inside the Runway application, you can interact with them from code over the network, “…each Runway Model exposes itself using three network ports on localhost: an HTTP port, a Socket.IO port, and an OSC port. All three servers are available for the duration of the time that the model is running.”
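The HTTP port is the easiest of the three to talk to. Below is a minimal Python sketch, not a definitive client: the port number (8000), the /query route, and the "image" field name are assumptions for illustration, so check the networking panel of whatever model you’re actually running for the real values.

```python
import base64
import json
import urllib.request

# Hypothetical local endpoint; the Runway app shows the actual port
# and route for each running model in its networking panel.
RUNWAY_URL = "http://localhost:8000/query"

def build_query(image_bytes):
    """Package raw image bytes as a JSON payload, with the image
    base64-encoded so it can travel inside a JSON string."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return json.dumps({"image": encoded})

def query_model(image_path):
    """POST an image to the locally running model and return the
    decoded JSON response (requires a model to be running)."""
    with open(image_path, "rb") as f:
        payload = build_query(f.read())
    request = urllib.request.Request(
        RUNWAY_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```

With an object detection model like the MobileNet V2 SSD bundle shown above running locally, calling query_model() on a photo would hand its detections back as an ordinary Python dictionary.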

Introduction to Runway: Machine Learning for Creators (Part 1). (📹: The Coding Train)

Until very recently it’s actually been moderately hard to get people to take pre-trained models seriously. A lot of getting-started guides don’t really talk about inferencing at all; many begin and end with training new machine learning models. Most machine learning experts viewed pre-trained models as nothing more than a way to play around with the tooling, a simple “Hello World” for machine learning, and not that useful.

You might draw an analogy between a trained model and a binary, and between the dataset the model was trained on and source code. But it turns out that the data isn’t as useful to you — or at least to most people — as the trained model.

Because let’s be real for a moment. The secret behind the recent successes of machine learning isn’t the algorithms; this stuff has been lurking in the background for decades, waiting for computing to catch up. Instead, the success of machine learning has relied heavily on the corpus of training data that companies — like Google — have managed to build up.

For the most part these training datasets are the secret sauce, and closely held by the companies, and people, that have them. But those datasets have also grown so large that most people, even if they had them, couldn’t store them, or train a new model based on them.

So unlike software, where we want source code not binaries, I’d actually argue that for machine learning the majority of us want models, not data. Most of us — developers, hardware folks — should be looking at inferencing, not training.

Runway ML beta with a ResNet-32 model detecting facial landmarks. 🙂 (📷: Alasdair Allan)

Although it looks like the Runway application may also soon support training in one form or another, as there is an option to train your own models greyed out in the left-hand menu bar, “…coming soon.”

If you want to get started with RunwayML, the company has put up extensive documentation, along with example code showing you how to connect the platform to a number of different environments, from Python, to JavaScript, and even Processing. Although the GitHub repo for connecting the Runway application to an Arduino is currently empty, and “…still a work in progress.”
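The OSC port is what makes the Processing and Arduino connections plausible, since OSC is something of a lingua franca for creative coding tools. An OSC message is just a small binary packet, usually sent over UDP. As a sketch of what actually goes over the wire (the encoder below handles only string and int32 arguments, and any address you send would need to match what the model expects), a minimal packer in pure Python looks like this:

```python
import struct

def _pad(data: bytes) -> bytes:
    """Null-pad to a multiple of 4 bytes, as OSC requires; an OSC
    string always gets at least one terminating null."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *args) -> bytes:
    """Pack a minimal OSC message: padded address, padded type-tag
    string, then the arguments (strings and int32s only)."""
    tags = ","
    body = b""
    for arg in args:
        if isinstance(arg, str):
            tags += "s"
            body += _pad(arg.encode("ascii"))
        elif isinstance(arg, int):
            tags += "i"
            body += struct.pack(">i", arg)  # big-endian int32
    return _pad(address.encode("ascii")) + _pad(tags.encode("ascii")) + body
```

A packet built this way could then be pushed at a running model’s OSC port with an ordinary UDP socket.sendto() call — though in practice an existing OSC library will do this for you.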

Introduction to Runway: Machine Learning for Creators (Part 2). (📹: The Coding Train)

Processing is a rather intriguing addition, because it was arguably one of the tools that drove the data journalism revolution of the early teens, when suddenly everyone could be part of the narrative.

“Data, on its own, locked up or muddled with errors, does little good,” Alex Howard wrote back in 2012. “Cleaned up, structured, analysed and layered into stories, data can enhance our understanding of the most basic questions about our world, helping journalists to explain who, what, where, how and why changes are happening.”

It’s going to be really rather interesting to see if the Runway application can serve the same purpose for machine learning, as Processing did for big data.

This is almost getting too easy. Essentially we’re at the point where we can plug machine learning systems together like Lego, without really having to understand how the underlying machine learning works. Which, when you think about it, is actually sort of interesting.


