High-Code: The Unexpected Path to AI Democratisation

Christopher Laing

Years ago, I read a blog post called Always Bet on Text that left a lasting impression on me. Not only did it accord with my own experience, but it gave me a compact frame of reference for thinking about technology. No recent technology fits that frame better than Generative AI, for the simple reason that, at its core, modern GenAI is all about text completion.

Why Simple Text Completion Matters

When ChatGPT first appeared, it struggled with basic maths: it was completing text that looked plausible, not performing calculation. That limitation revealed a key insight into how Large Language Models (LLMs) work. Rather than asking ChatGPT to solve logic problems directly, I found that asking it to write code to solve those problems - and then running that code - was remarkably effective. Sure enough, some time later, ChatGPT and other AI models began using this exact approach for logic problems.

This underscores a crucial point about LLMs: they’re brilliant with text but less adept at other forms of thinking. The trick to unlocking their potential is to reframe your problem as a text completion task.
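Here is a minimal sketch of that write-code-then-run pattern in Python. The ask_llm function is a stand-in for whichever LLM client you happen to use (it is not a real API); the point is simply that the model produces code as text, and running that code produces the answer.

```python
import subprocess
import sys
import textwrap

def ask_llm(prompt: str) -> str:
    """Placeholder for whatever LLM client you use; it should return
    the model's text completion for the given prompt."""
    raise NotImplementedError

def solve_via_code(problem: str) -> str:
    # Reframe the problem as a text completion task: ask the model to
    # *write a program* that solves it, rather than to solve it directly.
    prompt = textwrap.dedent(f"""
        Write a standalone Python script that solves the following problem
        and prints only the final answer:

        {problem}
    """)
    generated_code = ask_llm(prompt)

    # Run the generated code and capture its output. In practice you would
    # want to sandbox this step rather than executing untrusted code directly.
    result = subprocess.run(
        [sys.executable, "-c", generated_code],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout.strip()

# Example: arithmetic that early chat models often fumbled as pure text.
# print(solve_via_code("What is 123456789 * 987654321?"))
```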

A Real-World Example: Diagramming

Take the task of creating a diagram. You could use a traditional tool like Lucidchart, perhaps one with an AI-assisted diagramming interface; I’m sure Microsoft will bundle one into Microsoft 365 if it hasn’t already. But if you want to iterate on that diagram, use it as input for further steps, or do any computation on it, you need a different approach. The answer is to turn diagramming into a text completion problem.

User-friendly, text-based diagramming systems like Mermaid have sprung up in recent years, but the concept has been around in various forms for a very long time. By representing diagrams as code, we open up new possibilities for AI-assisted creation, modification, and integration into larger workflows.
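As a toy illustration of what diagrams-as-code buys you, the Python sketch below (purely an example, with made-up node names) generates Mermaid flowchart text from a simple edge list. Because the diagram is just text, it can be regenerated, diffed, version-controlled, and handed to an LLM for the next revision instead of being redrawn by hand.

```python
# A diagram held as plain text (here, Mermaid's flowchart syntax) is
# something you can compute on: build it from data, edit it, diff it.

edges = [
    ("User", "API"),
    ("API", "Database"),
]

def to_mermaid(edges):
    lines = ["flowchart LR"]
    for src, dst in edges:
        lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)

print(to_mermaid(edges))
# flowchart LR
#     User --> API
#     API --> Database

# Iterating is just more text manipulation, or another completion request:
# "add a cache between the API and the database" becomes an edit to the
# edge list (or to the Mermaid source itself), not a round of clicking
# and dragging in a GUI.
edges[1:] = [("API", "Cache"), ("Cache", "Database")]
print(to_mermaid(edges))
```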

Programmers and tech-savvy folks have an advantage here. For years, we’ve been turning various domains into “-as-code” paradigms: infrastructure-as-code, diagrams-as-code, and so on. We’ve long recognised the precision and utility of text, building our workflows around it.

This text-centric approach puts us in a prime position to leverage current-generation GenAI. Programmers who work this way - and programming-adjacent executives like myself who’ve held onto these methods - are already operating in a way that maximises the problem-solving potential of LLMs. Anyone not using GenAI this way is operating (at best) through a layer of indirection, hemmed in by the limited scope of whatever tool they are using for a task.

High-Code Solutions Will Democratise AI

To really tap into GenAI’s power, we need to shift our thinking. Instead of asking, “How can AI solve this problem?”, we should ask, “How can I turn this into a text completion task?” and then hand that task to the LLM. This approach not only plays to LLMs’ strengths but also helps us create more flexible, iterative, and integrated solutions by turning our workflows into something that can be computed.

In this sense, so-called “Low-Code” solutions are a backwards step. The idea behind Low-Code is that we can get more people creating automated systems by turning programming into a graphical, point-and-click exercise. The insight we can draw from modern GenAI is that we will instead get productivity and democratisation of computing by turning point-and-click interfaces into text completion tasks.

Perhaps, instead of replacing programmers, GenAI will create even more of them.
