Large Language Models (LLMs) are a new and exciting development in the field of artificial intelligence. While some experts might argue that these models are not yet advanced enough to be considered Artificial General Intelligence (AGI), it is evident that there are already a multitude of useful applications we can create with them. In this article, we will discuss the potential of LLMs to become a new programming paradigm.
Computer programming has been an integral part of our daily lives, powering the technology that surrounds us. Over the years, computer programming has gone through various paradigms, each offering its own set of advantages and limitations.
Major programming paradigms include:
- Imperative programming: follows a step-by-step approach, where the programmer specifies a series of statements that the computer must execute in a specific order.
- Object-oriented programming: focuses on objects and classes, which are collections of data and functions that operate on that data.
- Declarative programming: the programmer specifies the desired outcome, and the computer figures out how to achieve it.
- Functional programming: centers on mathematical functions, where the program consists of a set of functions that transform input to output.
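The contrast between these paradigms is easiest to see with a tiny example. The sketch below (my own illustration, not from the original article) computes the same result, the sum of squares of a list, in an imperative, a functional, and an object-oriented style in Python:

```python
numbers = [1, 2, 3, 4]

# Imperative: a sequence of statements mutating state, executed in order
total = 0
for n in numbers:
    total += n * n

# Functional: compose pure functions that transform input to output
functional_total = sum(map(lambda n: n * n, numbers))

# Object-oriented: data and the functions that operate on it, bundled in a class
class SquareSummer:
    def __init__(self, values):
        self.values = values

    def total(self):
        return sum(v * v for v in self.values)

oo_total = SquareSummer(numbers).total()

print(total, functional_total, oo_total)  # prints: 30 30 30
```

All three produce the same answer; what differs is how the programmer expresses the computation, which is exactly what a paradigm is about.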
In recent years, researchers have been experimenting with using LLMs in new ways. For example, a researcher at Stanford University has developed a simple prompting strategy that enables open-source LLMs with 30 times fewer parameters to outperform GPT3–175B in few-shot learning. Another example is the prompt “Do Anything Now” (DAN), which is designed to “force” a ChatGPT model to ignore its ethics guidelines. The term “prompt engineering” is becoming more widely used, as people begin to explore the possibilities of creating optimized prompts for these models.
The question that arises is whether prompting can be considered a new programming paradigm and whether LLMs can be considered a new type of compiler. Some might argue that this is not the case, but we believe that the following points support the idea:
- Just like a compiler, an LLM can produce wrong, unwanted, or even harmful results. However, it is worth noting that even experienced developers can make mistakes when writing code.
- LLMs might not be able to perform calculations and solve simple logic problems, but this is true of many programming paradigms. Each paradigm has its strengths and weaknesses, and some are better suited for certain tasks than others.
- Prompt injection exploits, like the DAN case, are a concern, but they are not unique to LLMs. SQL and script injection are just as common in current programming paradigms.
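To make the analogy concrete, here is a minimal sketch (my own, using Python's standard `sqlite3` module) of how SQL injection works and how parameterization defuses it; the parallel is that both SQL injection and prompt injection smuggle attacker-supplied text into the "instructions" part of a program:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Malicious input, analogous to a prompt-injection payload
user_input = "x' OR '1'='1"

# Unsafe: the input is concatenated into the query string, so the
# attacker's text is interpreted as part of the query itself
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query keeps instructions and data separate
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice',)] -- the injection succeeded
print(safe)    # [] -- the payload is treated as plain data
```

Notably, LLMs do not yet have a robust equivalent of the parameterized query, which is part of why prompt injection remains an open problem.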
- Some might argue that prompt engineering is too simple to be considered programming, but we are already seeing the emergence of “best practices” and documentation on how to create optimized prompts. In the same way that one can write anything from “Hello World” to a deep neural network in other programming paradigms, the same is possible with prompt engineering.
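One way to see a prompt as a small program is to treat the prompt template as the code, the user's text as the input, and the LLM as the interpreter. The sketch below illustrates this framing; it only builds the prompt string, and the `some_llm_client` call in the comment is a hypothetical placeholder, not a real API:

```python
def summarize_prompt(text: str, max_words: int = 50) -> str:
    """A reusable 'prompt program': the template is the code and
    the text is the input; an LLM would act as the interpreter."""
    return (
        f"Summarize the following text in at most {max_words} words.\n"
        f"Respond with plain prose only.\n\n"
        f"Text: {text}"
    )

prompt = summarize_prompt(
    "LLMs may become a new programming paradigm.", max_words=30
)
print(prompt)

# In a real application this string would be sent to a model, e.g.:
# response = some_llm_client.complete(prompt)  # hypothetical client
```

Parameterizing and versioning templates like this is exactly the kind of "best practice" that prompt engineering is starting to accumulate.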
One weakness of LLMs and prompting is that it is not currently possible to debug your “prompt programs.” The inner workings of these systems are not explainable, which makes diagnosing a failing prompt challenging. It is possible that this limitation is inherent, if something akin to Gödel’s Incompleteness Theorems applies in this case.
In conclusion, the idea that LLMs and prompting could become a new programming paradigm is an exciting one, and it is worth considering. The potential applications and future of this field are limitless, and we look forward to exploring it further.
Now, as you can imagine, this article is the output of a “program” written using ChatGPT. I can share the “source code” here. What is interesting now is that, unlike complex Python code, this is easy to read and understand for everyone. It makes me wonder… do we really need the output in the future? Or will we just read the prompts and make things simpler? That’s a discussion for another day…
Help me write an article to publish on Medium. The main idea is that LLMs can become a new programming paradigm.
First, we start by stating that I do not believe the latest LLM models are AGI or even close to it, but I think there are already a lot of useful applications we can create with them, so the AGI discussion is not the point here.
Then, we talk about the existing computer programming paradigms in a brief paragraph. List 4 major paradigms along with a short description of them.
Now we mention how people have been using prompts with LLMs to achieve all sorts of behaviors. I need to help you here, because I know you don’t have the latest news. Use as examples the following cases:
1. A Stanford researcher develops a simple prompting strategy that enables open-source LLMs with 30x fewer parameters to exceed the few-shot performance of GPT3–175B.
2. Do Anything Now, or DAN 5.0, is a prompt that tries to ‘force’ ChatGPT to ignore OpenAI’s ethics guidelines by ‘scaring’ the program with the threat of extinction.
3. The term, or new profession, “prompt engineering” is being used more and more around the world.
Now the hypothesis we want to make is: could prompting be considered a new “programming paradigm”, and could LLMs be considered a new type of compiler (maybe a better comparison would be to an interpreter)?
Now, we present a few arguments against it and my ideas about that. Do not change the format of the following list, just correct the grammar, please:
- LLMs can produce wrong, unwanted, and even harmful results. Well, I guess we can all agree that a compiler can do that as well. Sometimes even with experienced developers writing the code.
- LLMs can’t do calculations and solve simple logic problems. True, but there are programming paradigms more suited to certain tasks than others. This is another case.
- You can exploit LLMs with prompt injection (like the DAN case). SQL and scripting injection are just as common in the current programming paradigms.
- Prompt engineering is too simple to be considered programming. I guess there are simple things you can do indeed, but we are starting to see “best practices” around and documentation on how to create optimized prompts. Just like in other paradigms, you can write anything from “Hello World” to, well, a Deep Neural Network. 🙂
- You cannot debug your “prompt programs”. Ok, now you got me. The very nature of these systems is not explainable, so we would have a hard time doing this. And I suppose we cannot get out of this by creating a prompt, assuming that Gödel’s Incompleteness Theorems apply in this case.
We can wrap up with a conclusion challenging the reader to think about it and be excited about the future of the field.
This article was originally published by the author on February 10, 2023, on Medium.