On my shelf I have many books spanning everything from networking to AI to security to the basics. While most of my college textbooks are long gone, two remain: the dragon book and Introduction to Algorithms.
The former I keep for sentimentality. I’ve written exactly two compilers in my life, and I hope never to write another one. The latter I keep because it is timeless. Algorithms, you see, aren’t tied to any operating system or language. They’re logical rules—patterns—that are followed to solve common problems.
This is why I sometimes say I “Dijkstra” my errands when I’m out driving. Dijkstra’s algorithm is a set of rules for finding the shortest path, and it’s as applicable to running multiple errands as it is to routing packets through a network.
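For anyone who hasn’t cracked open Introduction to Algorithms lately, here’s a minimal sketch of Dijkstra’s algorithm in Python, with the errand graph and its driving times invented purely for illustration:

```python
import heapq

def dijkstra(graph, start):
    """Shortest distance from start to every reachable node.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    """
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical errand run as a weighted graph; weights are driving minutes.
errands = {
    "home": [("bank", 5), ("grocery", 12)],
    "bank": [("grocery", 4), ("pharmacy", 9)],
    "grocery": [("pharmacy", 3)],
    "pharmacy": [],
}
print(dijkstra(errands, "home"))
# → {'home': 0, 'bank': 5, 'grocery': 9, 'pharmacy': 12}
```

Note that going through the bank shaves three minutes off the direct route to the grocery store, which is exactly the kind of shortcut the algorithm is built to find.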
With that in mind, let’s consider the evolving space of prompt engineering. A simple definition is “the practice of designing inputs for generative AI tools that will produce optimal outputs.” (McKinsey)
Over the past few months we’ve seen numerous prompt engineering “techniques” surface, each devised to solve a specific version of the same problem: how to produce optimal outputs from generative AI.
Forbes has been doing an excellent job bringing these techniques to the fore.
There are many more out there, but they all share the same characteristics. Each describes a set of rules or patterns for interacting with generative AI to produce desired results. From an engineering perspective, this is not all that different from algorithms describing how to traverse a binary tree, reverse a linked list, or find the shortest path through a graph to a destination.
They are, in design and purpose, natural language algorithms.
Now, I’m not going to encourage engineers to become prompt engineers. But as many engineers are discovering today, using natural language algorithms to design more effective generative AI solutions works. If you read through this blog on mitigating AI hallucinations, you’ll see that, within the context of the solution, multiple natural language algorithms, including chain of thought and reflective AI, are used to guide GPT’s responses so that an optimal answer is generated.
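To make that concrete, here is a sketch of what a chain-of-thought prompt looks like in code. The wording of the template is illustrative, not the exact phrasing from the referenced blog:

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought instruction.

    Illustrative template only; the exact wording is an assumption,
    not the prompt used in the hallucination-mitigation blog.
    """
    return (
        "Answer the question below. Think through the problem step by step, "
        "showing your reasoning, before stating the final answer.\n\n"
        f"Question: {question}"
    )

print(chain_of_thought_prompt("How many weekdays are there in February 2024?"))
```

The technique is just that: a reusable pattern, expressed in natural language, that reliably steers the model toward showing its work.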
The reason this is important to recognize is that as prompt engineering techniques emerge and, ultimately, receive recognizable names, they become the building blocks for solutions that leverage generative AI. Today’s prompt engineering techniques are tomorrow’s natural language algorithms.
And we would do well not to dismiss them as less valuable than traditional algorithms, nor to treat them as applicable only to the chat interfaces used by family and friends.
We may rely on an API to integrate generative AI into solutions, but the data we’re exchanging is natural language, and that means we can leverage those prompt engineering techniques—those natural language algorithms—within those solutions we’re building to produce better, clearer, and more correct answers from generative AI.
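A minimal sketch of what that looks like in practice: the function below composes a system prompt from two natural language algorithms (chain of thought plus a reflection step) and returns the message list a chat-style API expects. The prompt wording and the trailing `client.chat(...)` call are assumptions for illustration, not any particular vendor’s API:

```python
def build_messages(question: str) -> list:
    """Build a chat-API message list that embeds two natural language
    algorithms: chain of thought and reflection. Template wording is
    illustrative, not taken from any specific product."""
    system = (
        "Reason through the user's question step by step. "
        "Then review your reasoning for errors and state a final, "
        "corrected answer."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_messages("Why does TLS need a handshake?")
# These messages would then go to whichever chat-completions client your
# solution integrates, e.g. (hypothetical): response = client.chat(messages=messages)
```

The point is that the “algorithm” lives entirely in the natural language payload; the surrounding code is ordinary integration plumbing.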
This also means that technology leaders should not just allow but encourage engineers to spend time engaging with generative AI to uncover the patterns and algorithms that lead to more effective solutions.
You never know, one of your engineers might just wind up having an algorithm named after them in the future.