Beyond the Chatbox: Rethinking Interfaces for AI-First Products


People like to stick to habits. Habits feel safe and familiar, and they give us confidence in what we already know. That comfort is often useful, but it can also hold us back from building something new and better.

Nowhere is this tension clearer than in today’s AI interfaces. Because Large Language Models (LLMs) have the word “Language” in their name, the industry seems to have decided that the only way to use them is through a chat window, typing prompts as if texting a colleague.

But is this really the best we can do?

Humans don’t only communicate with words. We gesture, point, move, and show. Body language, context, and intent all shape expression. Limiting AI to chatboxes is like reducing human communication to telegrams: functional, but primitive.


Back to First Principles

Take one concrete example: functional 3D modeling.

The purpose of engineering design is clear:

  • to visualize an object and identify flaws,
  • to simulate it under real-world conditions, and
  • to manufacture it through technical drawings or digital workflows.

In short, the goal is to create a precise digital representation of an object, whether or not it yet exists. The closer that representation is to reality in shape, structure, and material properties, the more valuable it becomes.

So the real question is: how should we interact with a tool designed to generate manufacturable 3D objects?


A Brief History of Interfaces

The answer becomes clearer if we look at how people have interacted with computers over time:

  • The earliest machines were operated through switches, patch cables, and punch cards; programming meant rewiring hardware or feeding in stacks of cards.
  • Teletypes and command lines followed, allowing typed instructions but remaining abstract and text-heavy.
  • The 1980s brought the graphical user interface (GUI), with windows, icons, and the mouse, which opened computing to non-specialists.
  • In the 2000s, the shift to web and mobile centered design around touchscreens and responsiveness.
  • The 2010s introduced multimodal input with gestures and voice assistants.
  • Today, the dominant experiment is the AI chatbox.

Each stage shows the same pattern: interfaces move steadily away from machine-centered constraints toward more human-centered, intuitive, and adaptive experiences.


From 2D to 3D and Back Again

Technology doesn’t evolve in straight lines. We’ve moved from simple 2D interfaces to immersive 3D environments, only to collapse back to flat screens, and now to the chatbox.

Current AI systems often accept multimodal input (text, sketches, images), but the outputs remain flat: mostly text or voice. That’s a mismatch: we’re feeding rich inputs into the system, yet receiving low-bandwidth outputs.

The future lies in real-time, generated interfaces that mirror the richness of the task itself.


The Flight Booking Analogy

Consider booking a flight with an AI agent. Instead of navigating endless forms, you state your preferences: destination, dates, budget. The AI pulls data across platforms, compares options, and presents a dynamic, generated dashboard.

It might show multiple trade-offs: a direct but costly flight paired with a cheaper hotel, or a slower journey with stronger loyalty perks. Restaurants, transport, and recommendations could be layered in.

The result is not a wall of text; it’s customized, contextual, and visual.
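
One way to picture the difference is structured output: the agent returns typed data that a generated dashboard can render as comparison cards, rather than prose. A minimal TypeScript sketch, where every type, field, and function name is hypothetical rather than a real booking API:

```typescript
// Hypothetical sketch: the agent returns structured trade-offs instead of prose,
// and the client renders them as a generated dashboard.

interface TripOption {
  label: string;                                          // e.g. "Fastest" or "Best value"
  flight: { carrier: string; stops: number; price: number };
  hotel: { name: string; pricePerNight: number };
  perks: string[];                                        // loyalty points, lounge access, ...
}

interface TripDashboard {
  query: { destination: string; dates: [string, string]; budget: number };
  options: TripOption[];
}

// Instead of printing a wall of text, the UI maps each option to a card.
function renderDashboard(dashboard: TripDashboard): string[] {
  return dashboard.options.map(
    (o) =>
      `${o.label}: ${o.flight.carrier} (${o.flight.stops} stops, $${o.flight.price}) ` +
      `+ ${o.hotel.name} ($${o.hotel.pricePerNight}/night) | perks: ${o.perks.join(", ")}`
  );
}
```

The point is not the rendering code itself, but that the agent’s answer is data the interface can lay out spatially, compare, and let the user manipulate.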


Applied to 3D Modeling

Now bring this back to design.

An intelligent agent could:

  • suggest solutions based on prior work and design libraries,
  • integrate supplier catalogs, material databases, and online repositories, and
  • present a directly manipulable 3D model as the main output.

The engineer isn’t instructing the system to “draw a line from A to B.” Instead, they specify design intent, and the system adapts its output accordingly.
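
To make the contrast concrete, design intent can be thought of as structured constraints and goals rather than a sequence of drawing commands. A minimal TypeScript sketch, with every field name, unit, and value chosen for illustration only (not an existing CAD API):

```typescript
// Hypothetical sketch of "design intent" as structured input: the engineer
// states constraints and goals, not drawing commands.

interface DesignIntent {
  part: string;                          // what is being designed
  constraints: {
    maxMass_kg?: number;
    material?: string;                   // e.g. "6061-T6 aluminum"
    loadCase?: { force_N: number; direction: [number, number, number] };
  };
  interfaces: string[];                  // mating parts, bolt patterns, ...
  manufacturing?: "CNC" | "casting" | "3D-printing";
}

// The agent's job is to turn intent like this into a manipulable 3D model proposal.
const bracketIntent: DesignIntent = {
  part: "motor mounting bracket",
  constraints: {
    maxMass_kg: 0.25,
    material: "6061-T6 aluminum",
    loadCase: { force_N: 500, direction: [0, 0, -1] },
  },
  interfaces: ["NEMA 23 bolt pattern", "20x20 extrusion slot"],
  manufacturing: "CNC",
};
```

The geometry that satisfies this intent is the system’s output, presented as a model the engineer can inspect and adjust directly.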

This isn’t about automating button clicks. It’s about rethinking the interface so the AI collaborates on outcomes, not tasks.


Lessons from ChatGPT

ChatGPT shocked the world with its simplicity: a chatbox that suddenly replaced the search bar. That worked, but it also created a trap: the belief that chat is the solution to every problem.

History shows the danger of overextending a breakthrough. When the internal combustion engine appeared, people tried to put it into everything, even home appliances. Just because a technology excels in one domain doesn’t mean it belongs everywhere.

Chat may replace the search box. But it shouldn’t be the default interface for all applications, just as search never became the universal UI for the web.


What This Means at Kyrall

At Kyrall, we believe design tools should move beyond chat. Our platform still supports multimodal inputs (text, sketches, documents, images), but it outputs directly into an interactive 3D workspace.

Instead of cluttered menus or streams of generated text, the agent suggests alterations directly in 3D space. Engineers don’t primarily think in sentences; they think in geometry, functions, and relationships. A visual discipline deserves a visual interface.

Just as self-driving cars don’t need steering wheels, intelligent design agents don’t need toolbars or endless buttons. The intelligence runs under the hood; the experience is streamlined, and the focus stays where it belongs: on creating manufacturable, functional designs.

The closer the interface is to the end goal, the more efficient and empowering it becomes.


The Path Forward

The future of AI interfaces isn’t about embedding a chatbox in every workflow. It’s about inventing entirely new ways to collaborate with intelligent agents.

Chat got us started. But it should not be the destination.

