The UP AI Core X, Just Another Neural Accelerator Board?

Alasdair Allan
3 min read · Oct 12, 2018

It’s been more than a year since the Movidius Neural Compute Stick arrived on my desk. Since then we’ve seen a number of other chips, with big players like Google starting to think about their own silicon. The arrival of this custom silicon over such a short period is a leading indicator. It is a recognition not just of the growing ubiquity of machine learning, but that Moore’s Law, which has sustained the industry since its inception, is dead. Any further performance increases will rely on custom silicon. Once a controversial claim, it is now accepted wisdom that deep learning is eating software.

Earlier in the year, Aaeon announced their UP AI Core, a mini-PCIe board built around the Intel Movidius chip. This week they followed up with the announcement of the UP AI Core X range, “…a complete product line of neural network accelerators for edge devices.”

The UP AI Core X. (📷: Aaeon)

The new boards are based around the Myriad X chip, the successor to the Myriad 2 used in their previous board, giving an estimated ×10 performance increase over the previous generation of VPU. They are available with one or two Myriad X chips in a variety of form factors—MiniCard/mPCIe, M.2 2230, M.2 2242, or M.2 2280. Alongside these boards is a credit-card sized board, intended for developers, called the AI Vision Plus X.

The mPCIe AI Core X board (far left), and the three AI Core XM boards (M.2 2230, M.2 2242, M.2 2280) (right). (📷: Aaeon)

Machine learning development is done in two stages. An algorithm is initially trained on a large set of sample data on a fast, powerful machine or cluster, and the trained network is then deployed into an application that needs to interpret real data. This deployment, or “inference,” stage is where the AI Core boards are useful, accelerating networks built with standard tools such as Caffe, TensorFlow, or the OpenVINO Toolkit.
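The two-stage split can be sketched in a few lines of plain Python. This is a deliberately toy illustration of the idea rather than the Movidius toolchain itself: the “training” stage fits a trivial linear model by gradient descent (the expensive step you would run on a powerful machine), and the “inference” stage just applies the frozen parameters to new inputs (the cheap step you would push out to an edge device).

```python
# Toy sketch of the train-then-deploy workflow (not the Movidius toolchain).

def train(samples, epochs=5000, lr=0.01):
    """Expensive stage: fit y = w*x + b by gradient descent on sample data."""
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in samples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in samples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b  # the "trained network", frozen for deployment


def infer(params, x):
    """Cheap stage: apply the frozen parameters to a new input."""
    w, b = params
    return w * x + b


# Train once on sample data (points drawn from y = 2x + 1)...
params = train([(0, 1), (1, 3), (2, 5), (3, 7)])
# ...then run inference repeatedly on real data.
print(round(infer(params, 10), 1))  # → 21.0
```

In a real deployment the training stage would produce a network in a framework like TensorFlow or Caffe, which is then converted and loaded onto the accelerator; only the inference half of the workload ever runs on the edge device.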

The ability to run these trained networks “at the edge,” nearer the data—without the cloud support that seems necessary for almost every task these days, or, in some cases, without a network connection at all—could help reduce barriers to developing, tuning, and deploying machine learning applications. It could potentially help make “smart objects” actually smart, rather than just network-connected clients for machine learning algorithms running in remote data centres. It could, in fact, be the start of a sea change in how we think about machine learning and how the Internet of Things might be built. Because there is now at least the potential to put the smarts on the smart device, rather than in the cloud.

The UP AI Core X is currently raising on Crowd Supply, priced at $94 for the single-core boards—the mPCIe AI Core X, and the M.2 2230 E-key and M.2 2242 B+M-key AI Core XM—and $144 for the dual-core M.2 2280 M+B-key AI Core XM board. The three-core AI Vision Plus X board is more expensive, with a $259 price point. However, unlike the Core X and Core XM boards, which are available individually, the Vision Plus X seems to be available only as part of a $548 “developer kit.”