Where Does Your Smart Product Sit?
Designing for the Internet of Things
This is the first article in a series of six on designing connected devices. The next article in the series, “Starting With One,” covers prototyping. Links to all six articles can be found in the series overview.
Forty years ago two men named Steve built a business out of hardware that went on to become the most valuable company the world has ever seen.
Time passed, and technology became more complicated, so much more complicated that it became much harder to do that. But that’s beginning to change again. The dot-com revolution happened because, for a few thousand, or even just a few hundred, dollars, anyone could have an idea and build a software startup.
Today, for the same money, you can build a business selling things, actual goods. The secret is that you don’t have to train a whole generation of people to realise that physical objects are worth money, the way people had to be trained to realise that software was worth money.
But everything begins with how your users will interact with your device, and how it will interact with both them and the network. In other words, where your product will sit in the hierarchy of connected devices.
Local, Edge, or Cloud?
In general, connected devices can be roughly split into three broad categories: local devices, edge devices, and cloud-connected devices.
Local devices don’t have the capability to reach the Internet, but they are still connected in a network. This network usually isn’t TCP/IP based; both Zigbee and Bluetooth LE devices are good examples of network-connected things that operate locally rather than connecting directly to the Internet, and they illustrate the two types of local networking. In the case of Zigbee the device operates in a mesh, or peer-to-peer, mode, with packets hopping between devices until they reach the edge of the local network. In the case of Bluetooth LE the device operates in a broadcast, or paired, mode, with messages being picked up directly by a device on the edge of the network.
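The mesh model is worth pausing on, because it’s what lets a local network grow beyond the radio range of any single device. A toy sketch of the idea, a breadth-first search for the hop-by-hop path a packet takes to the network’s edge; this illustrates the routing concept only, and is in no way a real Zigbee stack:

```python
from collections import deque

def mesh_route(links, source, gateway):
    """Breadth-first search: find the hop-by-hop path a packet takes
    through a mesh network from a sensor to the edge gateway."""
    frontier = deque([[source]])
    seen = {source}
    while frontier:
        path = frontier.popleft()
        if path[-1] == gateway:
            return path
        for neighbour in links.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append(path + [neighbour])
    return None  # gateway unreachable

# Toy topology: sensor A is out of radio range of the gateway, so its
# packets hop via B and C.
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "gw"], "gw": ["C"]}
route = mesh_route(links, "A", "gw")  # ['A', 'B', 'C', 'gw']
```

In the Bluetooth LE broadcast model, by contrast, there is no routing at all: the edge device either hears the advertisement directly, or the message is lost.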
Edge devices are exactly what you’d expect: they typically have multiple radios and operate in both modes, using Zigbee or Bluetooth LE, for instance, to talk to a local non-TCP/IP network, while also fully supporting a TCP/IP connection to the outside world. They act as a bridge, or gateway, between a local network and the outside world, typically forwarding data received from a local network of sensor devices, and commands to a similar network of actuators, to and from the cloud.
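The gateway’s bridging job is mostly translation: a raw local-network reading goes in, a cloud-friendly message comes out. A minimal sketch, where the envelope field names are illustrative rather than any particular platform’s schema:

```python
import json
import time

def bridge(local_packet):
    """Translate a raw (device_id, reading) pair from the local radio
    network into the JSON envelope a gateway would forward upstream."""
    device_id, reading = local_packet
    return json.dumps({
        "device": device_id,     # local network address or name
        "value": reading,        # the sensor reading itself
        "ts": int(time.time()),  # timestamp added at the gateway
    })

# A Zigbee temperature sensor's reading becomes one cloud-bound message:
msg = bridge(("sensor-42", 21.5))
```

In a real gateway this translation sits in a loop, reading from the radio driver on one side and publishing to the cloud (over MQTT or HTTP, say) on the other.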
Cloud devices are things that can connect directly to a TCP/IP network, in most cases using WiFi, although wired devices also fall into this category. They’re distinct from edge devices in that they typically don’t communicate with other network-enabled devices over a local network. If they are part of an extended network of smart devices, all communication is normally funnelled via the cloud.
State of the Current
At the moment most consumer-facing Internet of Things devices are targeted at early adopters, and the predominant model for these devices is the cloud device. Prices, and margins on devices, can be quite large. These large margins allow manufacturers to deploy cloud-facing devices, which, due to the higher power demands of their radios, generally have to be connected to a mains power supply. Such devices, being TCP/IP native, also have reduced development times, since they share similar software architectures with the web and mobile applications that preceded them.
Unfortunately this also means that communication between things only feet apart can end up being proxied via a data centre in upstate New York, even though the things themselves are in San Francisco and within line of sight of one another.
However, the trend towards cloud architectures we’ve seen over the last few years isn’t sustainable. With tens of billions more Internet-connected devices arriving over the next few years, far faster than any predicted increase in bandwidth to the outside world, data is increasingly going to become a local rather than a cloud problem.
A Race to the Bottom
While margins are high, consumer-facing Internet of Things devices capable of native TCP/IP connections are possible. However, the market will increasingly become far more price sensitive.
While there is an argument that Moore’s Law will drive the price of silicon downwards, the obvious corollary is that, as the market becomes price sensitive, the price of a basic networked device will be pushed downwards rather than held at current levels. Cheaper, less capable silicon will be used, and the basic building blocks of the Internet of Things will (for the most part) continue to consist of devices that are incapable of natively connecting to a TCP/IP network, and hence directly to the Internet.
If you want to deploy devices in massive quantities, it is more likely that these devices will have the minimum viable specification required to do the task at hand. This implies that, while many manufacturers of connected products are anticipating that a single protocol and network stack will come to dominate the Internet of Things, in the same way one did the digital Internet, this is in fact unlikely to happen. Instead the Internet of Things will remain diverse in the protocols and architectures used.
It’s possible that convergence will happen at the highest level of the protocol stack, with data exchange formats tending towards a single standard while the underlying transport protocols remain diverse. This is almost the opposite of the existing digital Internet, where a diverse collection of low-level networking protocols has been effectively replaced with TCP/IP, layered on top of which is a large number of higher-level transport protocols and document standards.
Building User Stories
Building user stories that show not just how the end user will use your device, but how the end user interacts with the (potentially smart) world around them, is vital. For instance while smart light bulbs are being held up as a massive Internet of Things success story, looking at how users actually make use of them illustrates exactly what is wrong with the current generation of smart things.
Like most Internet of Things devices being sold directly to consumers right now, smart light bulbs mostly share the same architecture: there is a thing, in this case the bulb; an app that talks to the thing; and a cloud service supporting both the app and the thing. While there is some variation depending on the networking technology in use, whether the lights themselves talk directly to the user’s home WiFi network or make use of a mesh networking stack such as Zigbee HA and require a “hub,” or gateway, the underlying model of “thing, app, and cloud” is almost universal.
Before the arrival of smart light bulbs like the Philips Hue, your lights (at least in the home) were mostly things you turned on and off using a switch on the wall. Now you can control your lights from anywhere in the world using an app on your smartphone. However, while that adds functionality, it comes at a cost.
Think about entering a dark room. Are you going to reach into your pocket, retrieve your smartphone, unlock it, open the smart light bulb app, find the right bulb in the app, and then turn the bulb on? Or are you simply going to use the switch on the wall? Worse, when leaving a room because you have somewhere to be, are you going to go through a similar procedure to turn the bulb off, or are you simply going to use the wall switch associated with that light?
Unfortunately, you can make your smart light system unresponsive by using the wall switch, as can anyone else in the house who might not know you have “smart” light bulbs installed. Once the light bulb has been turned off at the wall, the new functionality provided by the smart bulb, the ability to be controlled from anywhere, is no longer available.
A smart light bulb replaces a thing we use every day, the light switch, but it does so poorly. Building the user story about how lights are actually used, it’s fairly evident that what we really need to replace is the switch, not the bulb. The switch has continuous mains power available, unlike the bulb.
Unfortunately, while replacing the switch makes far more sense, there is a problem. The problem is, of course, that, at least for most people, a light switch needs an electrician to install it, but a light bulb doesn’t. With smart light bulbs you are therefore trading one-time friction at installation for friction every time the bulb is turned on or off.
That added burden, the friction added to each and every interaction with the smart device, will not be acceptable to the average user in the long term. It’s barely acceptable to the early adopters that currently make up most of the market for Internet of Things devices in the home.
Designing the Minimum Viable Product
A great deal has been said about the minimum viable product, and the way to build a software startup around one has now been codified almost to the point where you could chisel it onto a set of stone tablets. There’s a recipe that most software startup founders follow: build a product with just enough features to gather data about how end users work with the product, and let that inform its continued development.
However, the story of hardware development is somewhat different to software development. While the concept of a minimum viable product still exists and is useful, building hardware tends to be design led rather than feature led. That results in two prototypes being built: a “looks like” prototype, which looks like the final product but has little, or sometimes none, of the intended functionality; and a “works like” prototype, which works like the final product but generally bears no outward resemblance to it. This way you can separate the crucially important design of your product, the user stories of how users will touch, feel, and interact with it, from the equally important how of the thing: how it works, and how it itself interacts with the world, and the network, around it.
In the last few years we’ve finally reached a point in the software design world where we’ve figured out that offering the user choice isn’t necessarily the best thing to do. More choice isn’t necessarily a good thing; sometimes it can be a confusing thing. Confusing the “looks like” and “works like” prototypes, or mixing them, leads to what’s commonly referred to as feature-driven design. For hardware products intended for the Internet of Things this generally implies poor user interaction with the product. The more buttons and dials you add, the worse your design becomes. Effectively, every control you add is a design decision you aren’t making: you’re offloading the design of your interface, or rather of how things should work, onto the end user. This is design indecision, passing on to the user design decisions that should have been taken once, by the designer, instead of by the user every time they use the product.
The Standards Problem
There is a brewing, if not all-out, standards war ongoing in the Internet of Things. Part of the confusion around Internet of Things standards is, of course, the different capabilities of the devices lumped into the category. Putting a low-cost remote battery-powered sensor, or something like an iBeacon, in the same category as a high-end media appliance that streams video from the cloud, and claiming that there will be one protocol to rule them all, will inevitably end in failure.
There is also a great deal of confusion around protocols at different levels of the networking stack. For instance, there is a fundamental difference between low-level wireless standards like Zigbee or WiFi and much higher-level concepts such as TCP/IP, or, above that, HTTP and “the web,” which sit on top of the TCP/IP networking stack. Above even that are document-level standards like XML and JSON, and even more conceptually woolly things such as patterns. The concept of RESTful services, for instance, is effectively a design pattern built on top of document standards and high-level networking protocols. It is not in itself fundamental, and is unrelated to the underlying hardware, at least as long as the hardware itself is capable of supporting an implementation of these higher-level protocols.
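That REST is a pattern, not a protocol, is easy to show: it can be expressed without any transport at all. A minimal sketch, with a hypothetical `/bulbs/1` resource standing in for a smart bulb; the paths, verbs, and fields are illustrative only:

```python
def handle(method, path, state, body=None):
    """Route an HTTP-style request to a smart bulb's state.

    This is the RESTful pattern in miniature: a resource identified
    by a path, manipulated through a small set of uniform verbs."""
    if path == "/bulbs/1":
        if method == "GET":
            return 200, dict(state)   # read the resource
        if method == "PUT":
            state.update(body)        # replace/update the resource
            return 200, dict(state)
    return 404, {"error": "no such resource"}

bulb = {"power": "off", "brightness": 0}
status, _ = handle("PUT", "/bulbs/1", bulb, {"power": "on", "brightness": 80})
```

Whether the request arrives over WiFi, Ethernet, or a serial cable is irrelevant to the pattern, which is exactly why REST sits so far above the standards fights at the radio layer.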
However perhaps the greatest standards problem with the Internet of Things is that, due to constraints in power or computing resources, it is a mess of competing and incompatible standards at the lowest level. Factors such as range, data throughput requirements, power demands and battery life dictate the choice from a bewildering array of different wireless standards.
The Big Three
The big three, the most familiar to consumers and to developers, are Bluetooth, WiFi, and GSM (cellular). These three technologies probably need little introduction and have limited use case overlap, although they can be misused.
The most obvious choice, perhaps even the default choice, for networking an Internet of Things device is the WiFi standard. It offers good data rates and reasonable ranges (of the order of 150 ft or more), and means your device can connect directly to the Internet if needed. However, WiFi devices with even moderate duty cycles are power hungry. Unless the device has a mains power supply, WiFi is probably a poor choice for an Internet of Things device, and forcing a mains power supply into a device that would be better without one, just so it can have a WiFi connection, is an even poorer design choice.
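The power argument is easy to put numbers on with a back-of-the-envelope battery-life calculation. The current figures below are illustrative assumptions, not measurements of any particular radio:

```python
def battery_life_days(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Battery life for a duty-cycled radio: average current draw is
    the duty-cycle-weighted mix of active and sleep current."""
    avg_ma = active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)
    return capacity_mah / avg_ma / 24  # hours -> days

# Illustrative assumptions: a 2000 mAh battery, a WiFi radio drawing
# ~200 mA when active versus a BLE radio drawing ~10 mA, both at a 1%
# duty cycle with 0.01 mA sleep current.
wifi_days = battery_life_days(2000, 200.0, 0.01, 0.01)  # weeks
ble_days = battery_life_days(2000, 10.0, 0.01, 0.01)    # years
```

Even with generous assumptions the WiFi device lasts weeks where the BLE device lasts a couple of years, which is why the battery-powered corners of the Internet of Things keep reaching for lower-power radios.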
Bluetooth, especially the low energy configurations intended for low data rates and limited duty cycles, is designed for personal (wearable) devices and accessories with limited ranges. While recent standards revisions include support for direct Internet access via 6LoWPAN (IPv6), support is still limited enough that Bluetooth devices are effectively restricted to small, local networks spanning (despite manufacturers’ claims) around 30 or 50 ft. For the shortest-range use cases (a few inches), NFC (Near Field Communication) technology should also be considered before Bluetooth.
Regrettably, when dealing with Bluetooth you should also consider that, while the Bluetooth 4 standard shares little except the name with preceding standards, the public still associates the name with poor reliability.
Of the three, perhaps the most ubiquitous, with the widest deployment and market penetration, is GSM. If your cell phone can get signal in a location, so can an Internet of Things device with a GSM module onboard. Data rates lie somewhere between WiFi and Bluetooth, with the main advantage being range: GSM devices can be located up to 20 miles from a cell tower and, depending on intervening obstacles, still get reception. However, GSM is both power hungry and expensive to operate. While GSM may be a good fit for a gateway device, it’s unlikely to be a good fit for most Internet of Things devices deployed into the home.
Standards such as Zigbee and Z-Wave are less widely known to consumers but fill a niche in the local networking space. While they need a gateway device to talk to the digital Internet, both standards have mesh networking capability, so although individual devices have ranges of between 30 and 300 ft, the size of the network is limited only by the number of devices deployed. Both Zigbee and Z-Wave are targeted at low-power applications with lower data rates.
While Zigbee and Z-Wave have been around for a while, newer IPv6 protocols such as Thread, which are based around a basket of standards including 6LoWPAN, offer mesh networking and direct access to the digital Internet, so long as IPv6-capable networking gear is in place. Designed to stand alongside WiFi, these IPv6-based protocols are attempting to address the lack of TCP/IP at the lowest levels of the Internet of Things, accepting that the high-powered WiFi standard may be inappropriate for many (if not most) Internet of Things devices.
Wide Area Networks
While GSM is the most popular standard used to provide Wide Area Networks (WANs), there are other, newer standards, potentially better suited to the low-powered nature of smart devices, that attempt to provide this functionality at much lower cost.
LoRaWAN uses the ISM radio bands and has a range of up to 3 miles in an urban environment and perhaps 9 or 10 miles in a suburban environment; data rates range from 0.3 kbps up to as high as 50 kbps, and it makes use of various frequencies depending on deployment. It is also optimised for low-power and large (millions of devices) deployments. The first LoRa network with nationwide coverage was rolled out in June 2016 in the Netherlands by the Dutch telecoms group KPN.
Sigfox has a longer operational range in rural environments, up to 30 miles, but that drops as low as 6 miles or less in urban environments. Like LoRaWAN, Sigfox uses the ISM radio bands, and it is currently being rolled out in major cities across Europe and the United Kingdom. Making use of Ultra Narrow Band signalling, it is intended for very low data rates (just 10 to 1,000 bps) but also very low-power deployments (consuming a factor of 100 less power than cellular).
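Data rates this low change how you think about payloads. An idealised time-on-air calculation, ignoring preamble, coding, and protocol overhead, makes the trade-off concrete:

```python
def tx_seconds(payload_bytes, bitrate_bps):
    """Idealised transmission time: payload size over raw bitrate.
    Real LPWAN stacks add preamble, coding, and protocol overhead."""
    return payload_bytes * 8 / bitrate_bps

# A 12-byte sensor reading at LoRaWAN's slowest (300 bps) and fastest
# (50 kbps) rates, and at a Sigfox-style 100 bps:
slow = tx_seconds(12, 300)       # 0.32 s on air
fast = tx_seconds(12, 50_000)    # under 2 ms
very_slow = tx_seconds(12, 100)  # nearly a second
```

A reading that takes milliseconds over WiFi can occupy the channel for the better part of a second here, which is why these networks are built for tiny, infrequent messages rather than streams.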
There are other, far less well known alternatives, including Neul, which leverages the small slices of white space spectrum between the allocated TV bands and has a range of around 6 miles. But while the battle between these wide area networking technologies is still ongoing, it now seems likely that LoRaWAN has taken an, admittedly still shaky, lead in the standards war.
Using Inappropriate Technology
Choosing between the various wireless standards, let alone higher level protocols layered on top of the standards, is a tough problem, and you need to be careful not to choose inappropriate technology for your use case. A good example of this is the CES scavenger hunt.
Twice now International CES has run a scavenger hunt. Based around iBeacon technology, participants needed to hunt for eight beacons scattered around the vast halls (three venues) of the CES show in Las Vegas. Both times I managed to “hack” the scavenger hunt without leaving my desk, or actually attending CES, by fooling the CES iPhone app into thinking that I’d found all the beacons.
This is actually fairly easily done by decompiling the CES app to find the identities (and locations) of the beacons it is looking for, and then replicating those beacons using a Raspberry Pi, or even another phone, fooling the app into thinking that the phone running it is in close proximity to the actual beacons.
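The attack works because an iBeacon is nothing more than a fixed advertising payload that anyone can reproduce. A sketch of building that payload, with made-up identifiers standing in for the ones a real spoof would recover from the decompiled app:

```python
import struct
import uuid

def ibeacon_payload(beacon_uuid, major, minor, tx_power):
    """Build the 25-byte iBeacon manufacturer-data payload a spoofing
    device would broadcast: Apple's company ID, the iBeacon type and
    length bytes, then the proximity UUID, major, minor, and the
    calibrated TX power used for distance estimation."""
    return (struct.pack("<H", 0x004C)        # Apple company ID (LE)
            + bytes([0x02, 0x15])            # iBeacon type + data length
            + uuid.UUID(beacon_uuid).bytes   # 16-byte proximity UUID
            + struct.pack(">HHb", major, minor, tx_power))

# Hypothetical identifiers; a real spoof would substitute the values
# extracted from the target app.
payload = ibeacon_payload("12345678-1234-1234-1234-1234567890ab", 1, 2, -59)
```

Handing this payload to any BLE advertising stack (a Raspberry Pi’s Bluetooth controller, say) is all it takes; the listening app has no way to tell the clone from the genuine beacon.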
While this is a particular, and probably unanticipated, failure mode for this technology deployment, the lesson is general: you need to be careful that you don’t fall in love with the buzzwords and choose inappropriate technology for your application. If the technology is a poor fit for your use case, your architecture can become overly complicated, compromising both the reliability and the security of your device.
Products, Platforms, and Strategies
Over the course of the last few years many companies have introduced platforms and products “designed for the Internet of Things.” In reality, however, these products are often either just proprietary middleware and services, or simply embedded electronics, with or without the addition of a network connection. Just adding a network connection to an existing device doesn’t make it part of the Internet of Things.
In an environment that is rapidly changing, and will continue to be volatile for several more years before best practices, or even just accepted practices, start to emerge, designing your product will depend heavily on a number of factors. One of the key problems in the current generation of Internet of Things devices is the refresh cycle problem. Companies used to the rapid yearly upgrade cycles of the cell phone and laptop are struggling to cope with typical refresh cycles for consumer and industrial products that might better be thought of in decades rather than months or years.