

At the end of 2025, Google made A2UI public as an open-source “agent-to-UI” format: an AI agent outputs a declarative JSON description of UI components, and the client app renders those components with its own native library across frameworks such as Flutter, Angular, and Lit. At the same time, Google Research described Generative UI as a capability where the model generates not only content but an interactive interface tailored to a prompt, created at runtime, with rollout beginning in the Gemini app and Google Search (AI Mode). Together, these moves point to a UI shift from fixed, predesigned screens toward task-first flows, where the system generates the controls needed for the user’s specific request (forms, pickers, review/confirm steps) in that moment.
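To make the shape of that idea concrete, here is a minimal sketch of the kind of declarative payload an agent might emit in an agent-to-UI flow, and how a client could map it to its own components. The type names, fields, and render logic below are illustrative assumptions, not the actual A2UI schema.

```typescript
// Illustrative sketch only: these types and fields are assumptions, not the
// real A2UI format. The principle is the same: the agent emits a declarative
// description, and the client renders it with its own native library.

type AgentComponent =
  | { type: "text"; value: string }
  | { type: "datePicker"; id: string; label: string }
  | { type: "choice"; id: string; label: string; options: string[] }
  | { type: "confirm"; id: string; label: string };

interface AgentSurface {
  title: string;
  components: AgentComponent[];
}

// Example payload an agent might return for a "book a table" request:
// the controls for this specific task, generated in the moment.
const surface: AgentSurface = {
  title: "Review your reservation",
  components: [
    { type: "text", value: "Table for 2 on Saturday evening" },
    { type: "datePicker", id: "date", label: "Date" },
    { type: "choice", id: "time", label: "Time", options: ["19:00", "19:30", "20:00"] },
    { type: "confirm", id: "book", label: "Confirm booking" },
  ],
};

// The client owns rendering: each declarative node maps to a native widget
// from its own framework (Flutter, Angular, Lit, and so on).
function render(node: AgentComponent): string {
  switch (node.type) {
    case "text":
      return `<p>${node.value}</p>`;
    case "datePicker":
      return `<date-picker id="${node.id}" label="${node.label}"></date-picker>`;
    case "choice":
      return `<select id="${node.id}">${node.options.map(o => `<option>${o}</option>`).join("")}</select>`;
    case "confirm":
      return `<button id="${node.id}">${node.label}</button>`;
  }
}

console.log(surface.components.map(render).join("\n"));
```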
To unpack how agent-driven interfaces replace fixed screens in real consumer workflows, we spoke with Aleksandr Loginov, Chief Design Officer at Prequel, a high-growth technology company whose AI-powered photo and video editing apps are widely used in the influencer and creator economy. He oversees product design and end-to-end delivery for visual AI technologies across photo and video, working across design, R&D/Data Science, and art, and serving as a key stakeholder for both mobile and backend engineering. He has helped shape the company’s identity with effects built to convey a user’s specific mood (what users feel, not only what the image looks like). He worked on interface patterns that became widely adopted across AI apps: each effect treated as a mini product with its own name, cover, and branding, plus a discover-style browsing page that later became a default UI model for the category. He also explores assistant-led product concepts where interaction with a smart assistant becomes the core interface layer.
The clearest sign is where people start tasks now. A few years ago, most journeys began with a search: you would open Google, skim Wikipedia or a forum, and only then jump into an app. Today, many users ask a model first. They go to GPT even for things that used to be simple search-and-click flows.
You see the same pattern at work. People use agentic tools such as Cursor not only for coding, but also for routine actions with files and systems. In imaging, the mindset is shifting in the same direction. Instead of thinking which app to open for a specific edit, users increasingly go to one AI tool and describe the outcome they want. That change is the real signal. People are starting from intent rather than navigating a menu of features.
Agents will know a lot about us before we even start, because context builds up over time. The way someone communicates already carries signals. Tone, pacing, how they tweak results, and what they consistently choose all reveal patterns. You can infer preferences, constraints, and even the kind of vibe that usually works for them.
The goal is not to take choice away. It is to curate choice so it feels effortless. Done well, the user still feels creative and in control, but they are not forced to wade through endless settings. Think of a great restaurant menu. It is short, but every option is a strong fit. The interface should work the same way, offering a small set of relevant directions instead of making the user configure everything from scratch.
I do not think templates disappear overnight. For a while, we will see a hybrid phase. Agents will still lean on proven creative techniques such as templates, repeatable patterns, and familiar editing moves. They will simply use them as building blocks inside the workflow. The difference is how the user accesses them. People will not need to search through libraries or tune settings by hand. The agent will pick the right technique for the situation and apply it in context, based on the goal the user describes.
Not entirely. That kind of workflow is a good example of what changes when you can ship complex pipelines dynamically. The complexity does not disappear; it just moves out of the user’s way. If an agent can orchestrate a multi-step backend workflow autonomously, the product does not need to expose the pipeline structure, menus, or configuration panels. But it still needs a way for a person to set intent, apply constraints, and approve outcomes.
In professional production environments, the loop can get close to self-contained. The agent generates, evaluates against analytics, and iterates. Humans mostly govern direction, tone of voice, and risk. In that setup, the interface becomes more like an operating framework: rules, criteria, permissions, review gates, and logs. It is less “buttons and screens” and more “how do we control delegation safely?”
For everyday users, a pure “single input field” is usually not enough either, because trust is built through legibility. People need to see what the agent is about to do, what it used as inputs, and what options are still open. So the next generation of apps is not UI-free. It is UI that is lighter, more contextual, and often generated at runtime. Conversation can be the entry point, but you still need small, clear interaction moments for preview, confirm, and steer.
I do not think it kills the prompt. I think it removes prompting from the user’s work. In an agentic interface, the prompt becomes an internal layer that translates intent into actions. The user experience becomes simpler. Set the goal. Add constraints. Review what the system is about to do. Approve the outcome.
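As one way to picture prompting as an internal layer, here is a small hypothetical sketch: the user supplies a goal and constraints, the system composes the prompt and a reviewable plan internally, and the only user-facing steps are review and approval. The names and structure are assumptions for illustration, not any specific product’s API.

```typescript
// Hypothetical sketch: the user never writes a prompt. They state intent and
// constraints; the prompt becomes an internal translation layer.

interface Intent {
  goal: string;          // what the user wants
  constraints: string[]; // boundaries the agent must respect
}

interface ProposedAction {
  description: string;   // human-readable summary shown for review
  prompt: string;        // internal prompt, never exposed as UI
}

// Translate intent into an internal prompt plus a reviewable plan.
function plan(intent: Intent): ProposedAction {
  const prompt =
    `Goal: ${intent.goal}\n` +
    `Constraints:\n${intent.constraints.map(c => `- ${c}`).join("\n")}`;
  return {
    description: `Will edit toward "${intent.goal}" while keeping: ${intent.constraints.join(", ")}`,
    prompt,
  };
}

// The user-facing loop is only: set goal, add constraints, review, approve.
const proposal = plan({
  goal: "make this clip feel like a 90s camcorder memory",
  constraints: ["keep faces unretouched", "no text overlays"],
});

console.log(proposal.description); // shown to the user for review
const approved = true;             // explicit approval gate before execution
if (approved) {
  // The pipeline would run here with proposal.prompt as its internal input.
}
```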
That case showed why packaging matters. At the time, many people were searching for the right way to ask for the outcome they wanted. The product shipped that same outcome as a one-tap action. That shift turned a trend into a product flow, and it performed very strongly without needing to expose sensitive metrics.
One-tap products win in trend moments because they encode cultural context into defaults. They remove uncertainty. They remove setup. They shorten time to result. The user does not have to think like an operator to get something that feels current.
Agents will get better at cultural context and vibe. They can learn from what users consume, what performs, and what each audience responds to. But cultural fit is also a control problem. As the agent gets more autonomy, you need clear success criteria, boundaries, and review steps to keep outputs consistent and safe.
The likely end state is a hybrid. Agents make prompting optional and mostly invisible. Products still surface one-tap actions when speed and predictability matter more than exploration.
When an app can model the user accurately, the interface stops being only a set of screens for manual control. It becomes a control plane for delegation. In professional setups, you often end up with an agent that can generate and publish at scale. The human role shifts from operating tools to governing decisions. You define intent, boundaries, and accountability. That means the interface is less about sliders and more about permissions, approvals, and auditability. What can the agent do without asking? What must be reviewed? What data can it use? What evidence should it log so decisions are traceable? The most important UI elements become policy, review queues, and exception handling, because trust comes from predictable governance, not from more buttons.
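One way to picture this control plane is as a declarative policy the agent is checked against before acting, plus a review queue for anything it may not do alone. The fields below (autonomous actions, review triggers, data scopes, audit events) are hypothetical, sketched from the questions in the paragraph above rather than from any real product.

```typescript
// Hypothetical delegation policy: what the agent may do without asking, what
// must be reviewed, what data it may use, and what it must log for traceability.

interface DelegationPolicy {
  autonomous: string[];        // actions allowed without asking
  requiresReview: string[];    // actions routed to a human review queue
  allowedDataScopes: string[]; // data the agent may use
  auditEvents: string[];       // evidence logged so decisions are traceable
}

const policy: DelegationPolicy = {
  autonomous: ["generate-draft", "schedule-post"],
  requiresReview: ["publish", "spend-budget"],
  allowedDataScopes: ["brand-assets", "past-campaign-metrics"],
  auditEvents: ["inputs-used", "model-version", "approver"],
};

type Decision = "allow" | "queue-for-review" | "deny";

// The "UI" here is policy plus a review queue, not more buttons.
function authorize(action: string, p: DelegationPolicy): Decision {
  if (p.autonomous.includes(action)) return "allow";
  if (p.requiresReview.includes(action)) return "queue-for-review";
  return "deny";
}

console.log(authorize("schedule-post", policy));   // "allow"
console.log(authorize("publish", policy));         // "queue-for-review"
console.log(authorize("delete-account", policy));  // "deny"
```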
In personal creation, the opposite constraint applies. People want the system to understand them, but they do not want to be replaced by a proxy. Here, the interface is still for the human, just smarter. It should feel like a dialogue with memory. It should carry preferences forward, keep context, and offer a small set of meaningful choices at the moment they matter, so the user stays the author and the agent stays the assistant.
Avoiding the generic AI look is less about adding creativity and more about building a reliable taste system. Most models drift toward the average because they are trained to be broadly acceptable. If you want a user’s signature to persist, you need mechanisms that preserve consistency over time. That starts with a stable representation of preferences, not just a one-off prompt. The system should remember what the user repeatedly approves and rejects, then translate that into constraints the agent respects. For example, palette tendencies, contrast comfort zone, grain and texture tolerance, preferred skin tone rendering, and typical framing choices.
You also need an evaluation loop that rewards the right outcomes. Instead of optimizing only for novelty, optimize for “this looks like me.” That can be done through lightweight comparisons to the user’s own history and through explicit checkpoints where the user makes a small number of high-impact decisions. The agent can propose options, but the user’s approvals become the training signal that sharpens the signature.
The result is an interface that behaves like a skilled assistant with taste memory. It does not generate endlessly. It converges. It gives a few strong directions, shows what changed, and makes it easy to steer back to the user’s aesthetic when the model starts to wander.
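One way to ground this idea of taste memory is a persistent preference profile plus a scoring pass that ranks new candidates by closeness to what the user has historically approved. The profile fields, weights, and update rule below are assumptions for illustration, not a description of Prequel’s system.

```typescript
// Hypothetical taste-memory sketch: a preference profile built from approvals,
// used to rank candidates by "this looks like me" rather than by novelty alone.

interface TasteProfile {
  palette: "warm" | "cool" | "neutral";
  contrast: number;        // 0..1, preferred contrast level
  grainTolerance: number;  // 0..1, how much texture the user accepts
}

interface Candidate {
  id: string;
  palette: "warm" | "cool" | "neutral";
  contrast: number;
  grain: number;
}

// Score closeness to the user's signature; higher means "more like me".
function signatureScore(c: Candidate, p: TasteProfile): number {
  const paletteMatch = c.palette === p.palette ? 1 : 0;
  const contrastFit = 1 - Math.abs(c.contrast - p.contrast);
  const grainFit = c.grain <= p.grainTolerance ? 1 : 1 - (c.grain - p.grainTolerance);
  return 0.4 * paletteMatch + 0.3 * contrastFit + 0.3 * grainFit;
}

// Each explicit approval nudges the profile toward the approved result,
// so the user's choices become the training signal.
function updateProfile(p: TasteProfile, approved: Candidate): TasteProfile {
  return {
    palette: approved.palette,
    contrast: p.contrast * 0.8 + approved.contrast * 0.2,
    grainTolerance: p.grainTolerance * 0.8 + approved.grain * 0.2,
  };
}

const profile: TasteProfile = { palette: "warm", contrast: 0.6, grainTolerance: 0.3 };
const candidates: Candidate[] = [
  { id: "a", palette: "warm", contrast: 0.55, grain: 0.2 },
  { id: "b", palette: "cool", contrast: 0.9, grain: 0.7 },
];

// Converge: surface only the strongest directions instead of generating endlessly.
const ranked = [...candidates].sort((x, y) => signatureScore(y, profile) - signatureScore(x, profile));
console.log(ranked.map(c => c.id)); // ["a", "b"]

// After the user approves a direction, fold it back into the profile.
const updated = updateProfile(profile, ranked[0]);
console.log(updated.contrast.toFixed(2));
```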
As long as marketing exists, the current distribution model will probably keep working. But the nature of apps will change. And you can already see it in corporate software: AI agents are starting to sit on top of the SaaS stack and do the work across tools. Instead of opening five different systems, teams ask one agent to pull data, update records, draft a document, open a ticket, route approvals, and log what happened. When that becomes normal, many SaaS products stop feeling like destinations and start looking like interchangeable back-end services. That is the SaaS apocalypse people are talking about: the agent becomes the front door, and the tools behind it compete on reliability, access, and outcomes rather than UI. Consumer behavior tends to follow enterprise patterns with a delay, so it is hard to imagine this stopping at corporate workflows. Once people trust an agent to complete real tasks end to end, they will start there first, and the winners inside the App Store will be the products built to plug into that agent-native world.