Malignant Intelligence
Prompt engineering and software archeology
--
Transcript and video of the opening keynote I gave at NDC Oslo in May 2023.
We’ve reached a tipping point when it comes to generative AI. Looking back, I think the first sign of the coming avalanche was the release of the Stable Diffusion model around the middle of last year. It was similar to several things that came before, but with one crucial difference: they released the whole thing. You could download and use the model on your own computer.
At the beginning of last year, if I’d seen someone on a TV show use something like Stable Diffusion, which can take a rough image along with a text prompt and generate finished artwork, I’d have grumbled that it was a step too far: a classic case of “Enhance!”, the annoying demand of every TV detective confronted with non-existent details in CCTV footage.
But instead of being fiction, it’s real now. Machine learning generative models are here, and the rate with which they are improving is unreal. So if you’ve been ignoring them so far, it really is worth paying attention to what they can do, and how they are changing.
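And “real” here means a few lines of code. As a rough illustration of that rough-image-plus-text-prompt workflow, here is a minimal sketch using Hugging Face’s diffusers library; the model id, file names, prompt, and parameter values are illustrative assumptions, not anything specific from the talk.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load publicly released Stable Diffusion weights from the Hugging Face Hub.
# The model id is a placeholder; any compatible checkpoint works.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA-capable GPU

# The rough starting image: a sketch, a photo, anything.
init_image = Image.open("rough_sketch.png").convert("RGB").resize((768, 512))

# "Enhance!" -- guide the model with both the image and a text prompt.
result = pipe(
    prompt="a detailed oil painting of a mountain village at sunset",
    image=init_image,
    strength=0.75,       # how far the model may stray from the input image
    guidance_scale=7.5,  # how strongly the text prompt steers generation
).images[0]

result.save("artwork.png")
```

The strength parameter is the interesting knob: it controls how much of your original rough image survives into the final artwork.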
Because if you haven’t been paying attention, it might come as a surprise that large language models like GPT-3, GPT-4, or LLaMA are a lot larger and far more expensive to build than the image generation models that have now become almost ubiquitous.
Until very recently these models remained closely guarded by the companies that built them, like OpenAI, accessible only via web interfaces or, if you were very lucky, via an API. But even if you could have gotten hold of them, they would have been far too large and far too computationally expensive to run on your own hardware.
But just as last year’s release of Stable Diffusion was a tipping point for image generation models, the release the month before last of a version of the LLaMA model that can run on your own computer is game-changing.
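To give a flavour of what “run on your own computer” means in practice, here is a minimal sketch using the llama-cpp-python bindings to Georgi Gerganov’s llama.cpp project, one popular route for running LLaMA locally; the model path, quantization format, and prompt are placeholder assumptions.

```python
from llama_cpp import Llama

# Load a quantized LLaMA model from local disk. The file path is a
# placeholder; you would supply your own quantized copy of the weights.
llm = Llama(model_path="./models/llama-7b.q4_0.gguf", n_ctx=512)

# Run a completion entirely on your own machine: no API, no cloud.
output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:"],
    echo=False,
)
print(output["choices"][0]["text"])
```

Quantization is what makes this feasible: squeezing the weights down to around four bits per parameter lets a seven-billion-parameter model fit in a few gigabytes of ordinary RAM.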