With the arrival of large language models (LLMs), something new is emerging in human-computer interaction. These technologies are often praised for their ability to transform numerous industries, but the biggest change may not lie in their use as automation tools so much as in their potential to completely reshape how we interact with computers.
Language models have demonstrated impressive capabilities. They can draft text, answer complex questions, translate languages, generate code, and hold sophisticated conversations. Yet these models are not entirely reliable: for tasks that demand 100% accuracy, significant gaps remain. Replacing entire backend systems with an LLM is unrealistic. Despite their high success rates, their lack of precision and reliability in certain critical situations (such as security or financial transactions) makes the idea too risky. Moreover, using LLMs systematically for every type of application would introduce bloat (unnecessary software overhead) that is poorly optimized for many real-world use cases. I've seen graphical engines fully generated by AI (https://gamengen.github.io/); without having examined the details, I can't help but wince at the thought of how many GPUs it would take to properly run DOOM that way. The goal, then, should not be to use these models for everything, but to deploy them where they are most relevant and effective.
Historically, interaction with computers has relied on formal languages: complex command lines, or graphical interfaces designed to translate human actions into instructions the machine can understand. With LLMs, it is now possible to drastically reduce the need for formal languages and visual interfaces, and that's a good thing (at least for visual interfaces). Science fiction imagined this long ago: in Alien, the ship's computer "Mother" communicates with the crew through a text interface in natural language. There is no dashboard requiring endless clicks just to understand why a certain engine is malfunctioning; everything is mediated by a language model. That future is now within reach. One of the most promising yet underutilized breakthroughs is precisely this interface revolution: replacing sophisticated visual interfaces with natural language interaction, via text or voice.
Currently, visual interfaces dominate the interaction between users and machines. Whether through smartphone apps, desktop software, or sophisticated dashboards, most tasks rely on visual actions: clicking here, scrolling there, swiping this way. With LLMs, it becomes conceivable to eliminate much of this graphical layer in favor of a simpler interaction where the user just says: "Call this person," "Generate a report on this," or "Find that information." This recalls the human-robot interactions of older science fiction movies: a fluid, direct, and efficient exchange with the machine. Siri and Alexa were early steps in this direction, but the generalized use of language models will take it much further. This simplification will have positive consequences. Many of today's interfaces are designed to be catchy and addictive, optimized to maximize time spent on the device rather than to accomplish the task at hand. A natural language interface, by contrast, lets us focus on the essential, reducing screen time and freeing us from unnecessary distractions.
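The command routing described above can be sketched in a few lines. This is a minimal, hypothetical sketch: `classify_intent` stands in for what would really be an LLM call, and the intent names and actions are illustrative assumptions, not a real API.

```python
# Sketch of a natural-language front end replacing a graphical one.
# classify_intent is a keyword stub standing in for an LLM classifier.

def classify_intent(utterance: str) -> str:
    """Stand-in for an LLM intent classifier (assumption, not a real API)."""
    text = utterance.lower()
    if "call" in text:
        return "call_contact"
    if "weather" in text:
        return "get_weather"
    if "report" in text:
        return "generate_report"
    return "unknown"

# Each intent maps to ordinary, deterministic application code:
# the LLM handles only the fuzzy step (understanding the request).
ACTIONS = {
    "call_contact": lambda: "dialing...",
    "get_weather": lambda: "fetching forecast...",
    "generate_report": lambda: "building report...",
}

def handle(utterance: str) -> str:
    intent = classify_intent(utterance)
    action = ACTIONS.get(intent)
    return action() if action else "Sorry, I didn't understand."
```

The design choice matters: the model only translates language into an intent, while the actual work stays in conventional, testable code, which sidesteps the reliability concerns raised earlier.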
I think we might see the disappearance of many single-purpose applications. Instead of installing an app to check the weather, look up sports statistics, or read the latest news, one could simply ask a voice or text assistant. This shift could lead to more minimalist devices, where the interface between humans and machines is reduced to its simplest expression: natural language. Recent technological history has already followed this pattern: one consumer invention after another (the telephone, the computer, music players, countless services) was gradually absorbed into a single universal interface, which until now has been the smartphone. Continuing in this direction would make the LLM the next step in that process of universalization.
One of the major challenges will be ensuring that existing systems remain compatible with these simplified interfaces. Protocols like REST and GraphQL already enable communication between software layers, but new standards may be needed to meet the requirements of this new era. Moreover, for security reasons, the future may place greater emphasis on local models, hosted directly on devices, to avoid handing complete control to remote servers, a risky prospect in terms of privacy and cybersecurity.
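One common way to bridge language models and existing REST APIs is to describe an endpoint declaratively so the model can fill in the arguments as structured JSON. The sketch below assumes hypothetical names (`get_engine_status`, `api.example.com`) and only builds the request URL; it is an illustration of the pattern, not a real service.

```python
import json
from urllib.parse import urlencode

# Hypothetical tool schema exposing an existing REST endpoint to a model.
# The endpoint and field names are illustrative assumptions.
TOOL_SCHEMA = {
    "name": "get_engine_status",
    "description": "Return diagnostics for one engine.",
    "parameters": {
        "type": "object",
        "properties": {"engine_id": {"type": "string"}},
        "required": ["engine_id"],
    },
}

def build_request(tool_call: str) -> str:
    """Turn a model's JSON tool call into a plain REST request URL."""
    call = json.loads(tool_call)
    if call["name"] != TOOL_SCHEMA["name"]:
        raise ValueError("unknown tool: " + call["name"])
    return "https://api.example.com/engines?" + urlencode(call["arguments"])

# Asked "what's wrong with engine 3?", a model might emit:
model_output = '{"name": "get_engine_status", "arguments": {"engine_id": "3"}}'
print(build_request(model_output))
```

Because the model emits structured JSON rather than calling the API directly, the existing REST layer stays unchanged and every call can be validated before execution, which is where new interface standards would plug in.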
It took decades for graphical interfaces and software to evolve into their current forms. With LLMs, we may be witnessing the beginning of a new shift in interface design, one that bold companies will harness to reinterpret our relationship with machines. By reducing interfaces to their simplest expression, natural language, these technologies have the potential to free users from the cognitive overload imposed by current interfaces, though at the risk of further delegating control to machines.