Security Is Your Job

This is the sixth and final article in a series on designing connected devices. The previous article in the series, “Time to Market vs Common Sense,” talks about manufacturing as a startup. Links to all six articles can be found in the series overview.

Security has to be one of the first things you consider when you design a connected device. Consumers are far more sensitive about data generated by things they can touch and handle than they have ever been about data on the web. Big data is all very well when it is harvested quietly, silently, and stealthily, behind the scenes on the web. Because, to a lot of people, the digital Internet still isn’t as real as the outside world. But it’s going to be a different matter altogether when their things tell tales on them behind their backs.

“The coming privacy crisis on the Internet of Things,” a talk given at the TEDxExeter Salon in 2017.

Ignoring security for a connected device, or even leaving it until later in the development process, is a mistake. It needs to be engineered into your device, and into your thinking, from the start. These seemingly smart devices are attractive to hackers because, for a lot of manufacturers, security is still viewed as an afterthought.

“The Little Things of Horror,” a talk given at ThingMonk in 2016.

It’s well established that most consumers currently treat their home router just like any other piece of electronics, and that for many, the password is still the default password the router shipped with from the factory. There’s no reason to suspect that most consumers will treat the coming wave of connected devices any differently. In fact, as smart devices take over the home, most consumers will treat them in the same way that they treat the dumb devices they’re replacing. Whatever security scheme the device implements should take this tendency into account. If a user’s refrigerator can be recruited into a botnet, it’s not going to generate good publicity for the manufacturer.

A Unique Security Problem

Even for those devices with good security, the Internet of Things presents a unique security problem. In the past a great deal of computer security has relied on attackers not having physical access to the computer, but with the Internet of Things that access is the point, and it opens up a whole new can of security worms. The physical vulnerability of Internet of Things devices means that attackers can leverage their access to a smart device to gain further access to a user’s home network, and potentially compromise much more than just a single device.

This problem becomes especially acute when devices are deployed in industrial or retail spaces rather than the home, spaces in which potential attackers have much greater freedom of movement. It is exacerbated in situations where attackers have not only access, but also privacy.

One hotel in London—that shall remain nameless—has replaced its “dumb” light switches with a series of Android tablets, allowing guests not only to control their lighting, but also their television, and even the room’s blinds. However, inspecting the network traffic between the tablets and the lighting shows that they use the Modbus protocol.

Now Modbus is a serial communications protocol developed by Modicon in 1979 for use with its programmable logic controllers (PLCs). It is still in use today by many SCADA systems, although many Modbus installations now use Modbus TCP/IP and transmit information over TCP networks rather than serial cables. Notably, the Modbus protocol has no authentication.
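The lack of authentication is easy to see at the wire level. A complete Modbus TCP “write single coil” request is just twelve bytes, with no credential field anywhere in the frame. Below is a minimal sketch using only the Python standard library; the unit ID and coil address are illustrative, not taken from the hotel’s installation:

```python
import struct

def modbus_write_coil_frame(transaction_id: int, unit_id: int,
                            coil_address: int, on: bool) -> bytes:
    """Build a Modbus TCP 'Write Single Coil' (function 0x05) request.

    The frame is just an MBAP header (transaction id, protocol id,
    length, unit id) followed by the PDU. There is no authentication
    field: anyone who can reach the device on its Modbus port can
    send this.
    """
    function_code = 0x05
    value = 0xFF00 if on else 0x0000   # protocol-defined on/off constants
    # '>HHHBBHH' is big-endian; the length field counts the unit id
    # plus the 5-byte PDU, i.e. 6 remaining bytes.
    return struct.pack(">HHHBBHH",
                       transaction_id,  # echoed back by the device
                       0x0000,          # protocol id, always 0 for Modbus
                       6,               # remaining byte count
                       unit_id,
                       function_code,
                       coil_address,
                       value)

frame = modbus_write_coil_frame(1, 1, 0x0010, on=True)
```

Delivering the frame is nothing more than a single TCP write to the device, conventionally on port 502; no handshake, login, or session setup is required first.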

In addition, the hotel had implemented an IP addressing scheme that allowed attackers to map an IP address directly back to an individual hotel room with little difficulty. The result was that an attacker with physical access to the hotel’s network (in other words, any guest or visitor to the hotel) could control systems in every room of the hotel. Regrettably, this is far from an isolated incident.

Authentication and Authorisation

When you log on to a device with a username and password you are authenticating: proving that you are who you claim to be. However, this is very different from authorisation.

Authorisation is the process of verifying that you should have access to something. One of the ongoing problems in computer security is that often these two very different concepts are pushed together into a single scheme. This is exacerbated in the case of smart devices as many of the schemes we’re used to — the ubiquitous username and password of the digital Internet — no longer work for devices without a screen. The visual feedback to the user of the lock icon in their web browser’s location bar, reassuring them of a secure connection between them and the cloud, is also absent.
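The distinction is easier to see in code. Here is a minimal sketch with illustrative names and deliberately toy credential storage (a real system would store salted password hashes, never plaintext passwords):

```python
# Illustrative data only; a real system would store salted password hashes.
USERS = {"alice": "correct-horse-battery"}     # authentication: who you are
PERMISSIONS = {"alice": {"read", "unlock"}}    # authorisation: what you may do

def authenticate(username: str, password: str) -> bool:
    """Verify identity: does this user hold the right credential?"""
    return USERS.get(username) == password

def authorize(username: str, action: str) -> bool:
    """Verify permission: independently of identity, may they do this?"""
    return action in PERMISSIONS.get(username, set())
```

Keeping the two tables, and the two checks, separate is the point: a device can delegate `authenticate` to another system entirely while still making its own `authorize` decisions.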

However, the features that make smart devices powerful also make them a new vector for verifying our identity, authenticating us both to the device itself and to the network of devices around it.

Increasingly, as two-factor authentication becomes common on the web, our smart phones are used to generate the Time-based One-time Password (TOTP) codes that give us access to our digital identities. The thing that makes the smart phone such a powerful authentication factor is that, for most of us, it’s always with us, and this is even more true of the emerging class of wearable devices. The possibility of adding biometric authentication to these “always with us” devices is enticing.
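TOTP itself is simple enough to sketch with the standard library alone. The code below implements HOTP (RFC 4226) and layers the time-based counter of RFC 6238 on top of it, and reproduces the published RFC 6238 SHA-1 test vectors:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    t = int(time.time() if for_time is None else for_time)
    return hotp(secret, t // step, digits)
```

Both ends compute the same code from a shared secret and the current time, so the code never has to travel over the network before it is used.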

Passive Authentication

However, beyond the “always with us” nature of wearable devices, smart devices offer the opportunity of passive authentication. The use case of location-verified transactions is particularly interesting. If a user can be confirmed to be in one location while a transaction — for example a cardholder-present credit card transaction — is happening elsewhere, then immediate action can be taken, in this case declining the card. Alternatively, if the user can be confirmed to be elsewhere, a smart door lock on their home can refuse entry to a potential intruder, even if the intruder presents the correct credentials for entry.

Beyond location there is routine. With machine learning systems you can envisage connected devices having enough knowledge of not just where a user is, but how they move from place to place, and when. It’s possible to imagine our routine acting as a second factor that automatically authenticates us to our devices. While it may seem far-fetched, there are already systems that make use of our habits, for instance signature verification systems that compare not just the final signature, but the speed with which the user moved the pen, the pauses between letters, and the pressure of the pen on the pad, to help authenticate the signature.

Two-Layer Authentication

While authentication to a single device is helpful, the end user will inevitably have many devices. Today, accidentally closing your browser, which for some people may have dozens of open tabs, is a nightmare scenario. Not because the browser is unable to reopen the same tabs it closed when you accidentally quit, but because the user must now perform two-factor authentication with half a dozen different services.

Scenarios where a user might have to re-authenticate themselves with a number of connected devices are entirely possible: a long absence, a large geographical displacement, or even just a departure from their normal routine.

Here we need to consider how systems of devices, rather than a single device, should be authenticated. Although at the moment these systems often consist of just a single smart device and the smart phone the owner uses to control it, in the future that will not necessarily be the case.

However, even today it makes sense to split authentication into two layers: authentication between devices — in this case the smart phone and the connected device it controls — and authentication between the device and the user.

If the smart phone can be authenticated automatically, perhaps using a variation of TOTP and shared secret keys, then the user need only worry about authenticating themselves to their phone — something they’d commonly have to do anyway. As one device becomes two, and two become many, this scheme starts to make even more sense. The user can authenticate themselves to one device, and if all of their smart devices are authenticated to each other, the device in question can act as a proxy, confirming their identity to the other devices. This has the added benefit of once again separating the issues of authentication and authorisation.
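One way to sketch the device-to-device layer is a challenge-response exchange over a pre-shared secret, in the same family as the TOTP-style approach just described. Everything here is illustrative: the function names, the pairing step, and the 16-byte nonce size are all assumptions for the sake of the example:

```python
import hashlib
import hmac
import os

# The shared secret would be established once, when the phone and the
# device are first paired (e.g. during setup on the local network).

def issue_challenge() -> bytes:
    """Device side: generate a fresh random challenge (a nonce)."""
    return os.urandom(16)

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """Phone side: prove knowledge of the secret without revealing it."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Device side: constant-time comparison against the expected answer."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because each challenge is random and used once, a captured response is useless for replay, and the secret itself never crosses the network.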

Using a Cloud Proxy

Some manufacturers have gotten around the issue of remote control of connected devices by making both the smart phone app the owner uses and the smart device itself connect to a remote cloud application, with all communication flowing through the cloud, even when the owner and the connected device are located just a few feet apart. This is not optimal. If the user and the device they are trying to control are on the same local network, a working connection to the Internet should not be necessary to control a connected device.

However, without such a central service, a cloud proxy is still necessary to allow the owner to contact their smart device from an external network. Here the smart device would open a network connection, an SSH tunnel for instance, through the user’s firewall to a remote proxy service. When the user attempted to control the device from an external network, the user’s smart phone would instead connect to the proxy. The proxy could then pass the commands back to the smart device through the tunnel. To the smart device, which would authenticate the request in the normal fashion, the phone and the user would appear to be ‘local.’
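The device side of such a scheme can be sketched in a few lines. Rather than an SSH tunnel specifically, this minimal version just keeps a single outbound TCP connection open to the proxy; the hostname, the identification handshake, and the line-based framing are all assumptions made for the sake of illustration:

```python
import socket

def device_loop(proxy_host: str, proxy_port: int, handle_command) -> None:
    """Dial out to the proxy and serve commands relayed through it.

    Because the connection is outbound, no hole needs to be opened in
    the user's firewall, and the proxy never needs to look inside the
    messages; it only needs to know which device the connection
    belongs to.
    """
    with socket.create_connection((proxy_host, proxy_port)) as conn:
        stream = conn.makefile("rwb")
        stream.write(b"HELLO device-1234\n")   # illustrative device identity
        stream.flush()
        for line in stream:                    # one relayed command per line
            reply = handle_command(line.strip())
            stream.write(reply + b"\n")
            stream.flush()
```

The proxy’s job then reduces to pairing this long-lived device connection with incoming connections from the owner’s phone and shuttling bytes between the two.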

This scenario has several advantages over a centrally controlled service, the main one being privacy: the proxy service has no need to know the contents of the messages it passes back to the user’s smart device, just the device’s identity, to tell it apart from all the other devices. The user’s message traffic remains encrypted end-to-end.

The Internet of Things and the Industrial Internet

While the Industrial Internet has its roots in the SCADA systems of the early sixties, the Internet of Things has its roots in the web architectures of the dotcom boom. The clash of those cultures, and architectures, may well contribute to dangerous security problems.

The Industrial Internet isn’t necessarily about connecting big machines to the public Internet; rather, it refers to machines becoming nodes on pervasive networks that use open protocols, from which Internet-like behaviours follow. These behaviours emerge because a lot of things become possible if the network, if connectivity, can simply be assumed.

SCADA systems that tie together decentralised facilities were designed to be robust, easily operated, and easily repaired, but not necessarily secure. To be fair, most SCADA systems were never intended to be connected to a public network. Unfortunately this hasn’t stopped people taking legacy SCADA systems and connecting them to the Internet. There’s a big temptation to do so: it makes things a lot easier, and it looks powerful. But by virtualising access to serial ports, and exposing them as Internet of Things edge devices, many large systems driving large-scale machinery are directly exposed to attack.


Which, almost inevitably, brings us to Stuxnet, the first of a new breed of malicious code. It attacked in three phases. First it targeted Microsoft Windows machines and networks, replicating itself. Then it sought out Siemens Step 7, a piece of Windows-based software used for industrial control systems. Finally it compromised the programmable logic controllers (PLCs) attached to those machines, but crucially only if they were operating a very narrow range of variable-frequency drives: centrifuges, in other words. The worm’s authors could thus spy on the industrial systems and then cause these fast-spinning centrifuges to tear themselves apart.

Most speculation identifies Stuxnet’s target as Iranian nuclear plants carrying out uranium enrichment; as many as 60 percent of the identified infected machines were in Iran, and the complexity of the worm implies that a nation state was behind it.

However Stuxnet was not a one-off, an aberration. It was a high-profile flag for what’s coming as more and more sensors and actuators are put on public-facing networks. Most of these are going to be much softer targets than an IR-1 centrifuge operating with uranium hexafluoride. The next big attack will almost inevitably trade sophistication for scale.

The Great Internet Census of 2012

World Map showing location of hosts discovered by the Carna botnet in 2012. (📷: Internet Census)

Built by an anonymous researcher to measure the extent of the Internet, the Carna botnet was designed to attack small embedded systems — the precursors to today’s Internet of Things devices — rather than desktop computers. The botnet made use of almost trivially exploitable security vulnerabilities, such as routers using default passwords, to build a large-scale distributed port scanner.

While a solid security strategy is necessary when building a connected device, the success of the Carna botnet is telling: four simple ‘default’ passwords gave its author access to hundreds of thousands of consumer devices, as well as tens of thousands of industrial devices. As the author concludes, “A lot of devices and services we have seen during our research should never be connected to the public Internet at all. As a rule of thumb, if you believe that ‘nobody would connect that to the Internet, really nobody,’ there are at least 1000 people who did.”

Broken Firmware

One potentially serious problem with many of today’s smart devices is that the high level “smarts” often sit on top of the same silicon as other devices.

For instance, both the Fitbit Aria WiFi bathroom scales and the Ring smart doorbell made use of a WiFi module produced by GainSpan. Back in 2015 Pen Test Partners discovered a vulnerability in the firmware of the GainSpan module used by the Aria scales that allowed attackers to retrieve the SSID and WPA PSK of the owner’s home network. An attacker simply had to place the scales into setup mode, done by pressing the reset button on the bottom of the scales, connect to the Access Point (AP) the scales create when in this mode, and retrieve the information from an endpoint provided by the standard GainSpan firmware.

The same firm discovered a similar vulnerability in the Ring doorbell. However, since the Ring doorbell is mounted outside the house, rather than living in the owner’s bathroom, the vulnerability in this case was much more serious. Ring patched the vulnerability immediately when notified; however, the device still exposed a number of pages left over from the GainSpan SDK.

These cases show the potential security problems when dealing with off-the-shelf modular hardware. Most connected devices will be built from standard modules; developing your own silicon is far beyond the capabilities of almost any company considering building a device. However, these modules come with their own vulnerabilities. In the case of the GainSpan WiFi module, the original manufacturer regarded this as normal operation of their SDK, and advised manufacturers to remove these endpoints before production.

Fixing these sorts of problems once a significant number of units are in the wild can be problematic; most users can’t or won’t perform firmware updates — or even be aware such updates may be necessary. You will find bugs in your connected device after shipping begins, and sometimes these bugs will open up large attack surfaces that need to be fixed. While there are companies like Balena working to simplify automated firmware update deployment to distributed devices, right now the burden of doing so lies squarely with the manufacturer of the device.

Fixing the Firmware?

The time needed to fix firmware in a massively distributed product can be protracted, even if the manufacturer is proactive about fixing the problems. Depending on how the device interacts with the cloud, or how thoroughly the security scheme is integrated into the product’s SDK, making the required changes can take a great deal of time.

At the start of 2014, investigation of the Estimote Bluetooth LE beacon SDK proved that the beacons were easily reconfigurable in the field by unauthorised third parties. The implications of that were fairly far reaching. If someone maliciously changes the iBeacon Major or Minor characteristic of a beacon, any consumer application configured to use that particular beacon will stop working: the beacons must be configured with a pre-defined identity to trigger the correct behaviour inside the consumer’s own application when their smart phone comes into proximity of the beacon.

Beyond that, you could potentially configure a “fake” beacon to act as an impostor of another beacon belonging to a retail chain, potentially gaining access to promotions, gift cards, and other location-dependent goodies tied to the beacon you’re impersonating. In fact I did so, twice, with the CES Scavenger Hunt.

A year and a half later, in the wake of the announcement of Eddystone, Google’s new beacon standard, the Estimote beacons were updated. However, despite changes to the SDK, the vulnerabilities discovered earlier were still present. The added capabilities of Eddystone support made the presence of this vulnerability much more critical: with a URL broadcast by the beacon, it is much easier to trick a user into visiting a malicious web page, which could then automatically download and install a rootkit onto their device.

Once publicised it took Estimote another month to fix the vulnerability in their firmware.

Reverse Engineering the Hardware

Dropping below the exposed software, and even below the firmware level, physical access to the hardware means that it too is vulnerable. Many connected devices go into production with a serial port still on board. The pads on the PCB may no longer be connected to a socket exposed on the outside of the case, but the traces still exist on the board itself.

While not trivial, it’s perfectly possible to use these vestigial serial ports — left by the engineers who designed the board for debugging and (possibly) technical support purposes — to reverse engineer the device. Bypassing any high-level security, these ports often give attackers direct access to the heart of the device: its firmware, and even the data flows (SPI traffic, for instance) between pieces of hardware.

Beware What You Put in Production

You should be very aware of what hardware you’re putting into production. While it’s tempting to leave debugging ports on your board when it goes into production, and there may be good reasons why it should go into production that way, you should be careful about how they can be used and what they can access.

You should be very clear about the difference between a “throw away” prototype and a “works like” prototype intended for iterative development towards a final product. Prototyping with off-the-shelf hardware, like the Raspberry Pi, can often lead to small-scale production runs using the same hardware as your prototype. Unfortunately, people rarely remember to update the software on board these off-the-shelf devices; without updates the device accumulates well-known vulnerabilities over time, and becomes a ticking time bomb.



Alasdair Allan

Scientist, Author, Hacker, Maker, and Journalist.