I spoke at the Front End of Innovation (FEI) Conference in Boston back in June 2024 about AI, innovation and the future, and I have been thinking a lot about this topic ever since.
The future direction of Artificial Intelligence is not just about AI in and of itself; it is about different kinds of AI combined with other new technologies (such as Agentic AI and the Internet of Things) that are going to change everything. As I mentioned in my previous articles, AI Changes Everything and AI and autonomous everything, AI will have far-reaching impacts on every facet of life over the long term. A key way that AI will change things, especially Generative AI (Gen AI) and Conversational AI, will be by rearchitecting the entire user interface as we have known it.
At present, the way we build front-end applications – the way that users interact with our systems – is for developers to manually construct application interfaces for humans. We have done it this way since the earliest days of computing. But this is all about to change.
About AI Now
However, before I explore what this all means, we need to dig in a bit and understand what AI really is. AI is not just one thing; it is a collection of different technologies that sit together under the banner of AI. The key types of AI in use today include:
- Machine Learning: a subset of AI that enables machines to learn from existing data and improve their decisions or predictions as more data becomes available
- Natural Language Processing (NLP): the ability of a computer program to understand human language as it is spoken and written – referred to as natural language
- Deep Learning: a machine learning technique in which layers of neural networks are used to process data and make decisions
- Generative AI & Large Language Models: systems that create new written, visual, video, and auditory content given prompts or existing data
- Agentic AI: autonomous AI agents that can reason, plan, and take actions to achieve goals with minimal human supervision
The Front End
The notion of standard input and standard output dates back to the earliest days of computing. It is the most fundamental way in which a human communicates with a machine, and until now the main channel has been the keyboard. With our mobile phones (Siri on my iPhone) and other devices like Amazon Alexa and Google Assistant, we began providing standard input via voice. But one thing these devices and their personas have in common is that they are very stupid. They routinely fail to understand what we say. They are not able to chain commands together. In essence, they are irritatingly incompetent.
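To make the standard input/output idea concrete, here is a minimal sketch in Python: a loop that reads lines from an input stream and writes a reply to an output stream. This is a toy echo program of my own, not any particular product; in a real terminal session the streams would be the console's stdin and stdout.

```python
import io
import sys

def echo_upper(stream_in, stream_out):
    """A minimal standard-input/standard-output loop:
    read each line the user types, transform it, write a reply."""
    for line in stream_in:
        stream_out.write(line.strip().upper() + "\n")

# Wired to a real console this would be: echo_upper(sys.stdin, sys.stdout)
# Here we simulate a user typing one line:
out = io.StringIO()
echo_upper(io.StringIO("hello computer\n"), out)
print(out.getvalue(), end="")  # -> HELLO COMPUTER
```

Every command-line tool ever written is, at bottom, a variation on this loop – which is why changing the input channel from keyboard to voice is such a fundamental shift.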
But with the arrival of Gen AI this is all about to change in major ways. This is the beginning of what I call the Star Trek era of computing.
Talking to Machines: How Our Relationship with Apps Is Changing
Lately, there has been a subtle but profound shift in how people interact with technology: less typing and more talking to AI systems using natural speech. For many everyday tasks, speaking to an AI agent now feels more intuitive than navigating traditional menus and forms.
From Commands to Conversation
For decades, software demanded that users learn its grammar through clicks, fields, and rigid commands. Today, modern AI agents can interpret intent, context, and nuance, which allows people to speak in half-finished thoughts and still receive useful outcomes.
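The contrast between rigid commands and intent interpretation can be sketched as a toy comparison. This is purely illustrative – the function names and the example command syntax are my own inventions, and simple keyword matching stands in for what a real AI agent would do with a language model.

```python
def rigid_parse(command):
    """Old-style interface: the user must match the tool's grammar exactly."""
    if command == "calendar add 'dentist' --date 2025-03-01":
        return "event added"
    return "error: unrecognized command"

def intent_parse(utterance):
    """Toy stand-in for an AI agent: look for the user's intent and
    tolerate half-finished, conversational phrasing.
    (A real agent would use an LLM, not keyword matching.)"""
    text = utterance.lower()
    if "dentist" in text and any(w in text for w in ("book", "add", "put")):
        return "event added"
    return "could you tell me more?"

print(rigid_parse("put the dentist thing on my calendar"))  # -> error: unrecognized command
print(intent_parse("uh, put the dentist thing on my calendar"))  # -> event added
```

The rigid parser fails unless the user has memorized the grammar; the intent-based one succeeds on a half-finished thought. That gap is the whole story of the shift from commands to conversation.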
Living with an AI Assistant
Using voice to work with digital systems changes how knowledge work and daily tasks feel, because interacting with an AI can resemble collaborating with a colleague rather than operating a tool. People can request summaries, research support, or scheduling help in conversational language while the agent handles the underlying complexity across multiple applications.
A More Human Way to Compute
Speech is faster and more expressive than typing for many people, which makes voice interaction feel closer to thinking out loud than filling out a form. At the same time, the rise of voice-led agents raises governance questions about data, accountability, and appropriate boundaries for automation that acts on a user’s behalf.
Looking Ahead
Designers and technologists are increasingly talking about a “voice-first” or “post-app” world, where a conversational layer sits on top of many systems and becomes the primary interface. Typing is unlikely to vanish, but for a growing set of interactions it will sit behind more natural, multimodal conversations that treat speech as the default input.
I am going to be discussing this and more over on my Data Revolution Podcast – please check it out.