Aleksandr Loginov: “The Complexity Does Not Disappear, It Just Moves Out of the User’s Way”

A visual AI product leader and Chief Design Officer at Prequel, he argues that agent-driven products won’t remove complexity but will relocate it behind the scenes
Written By:
Arundhati Kumar

At the end of 2025, Google made A2UI public as an open-source “agent-to-UI” format: an AI agent outputs a declarative JSON description of UI components, and the client app renders those components with its own native library across frameworks such as Flutter, Angular, and Lit. At the same time, Google Research described Generative UI as a capability where the model generates not only content but an interactive interface tailored to a prompt, created at runtime, with rollout beginning in the Gemini app and Google Search (AI Mode). Together, these moves point to a UI shift from fixed, predesigned screens toward task-first flows, where the system generates the controls needed for the user’s specific request (forms, pickers, review/confirm steps) in that moment. 
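The agent-to-UI pattern described above can be sketched in a few lines. This is an illustrative toy, not A2UI's actual schema: the component names (`column`, `text`, `date_picker`, `button`) and the `render` function are invented to show the general shape, where the agent emits a declarative JSON tree and the client maps each node to its own native widget.

```python
import json

# Hypothetical agent output: a declarative JSON description of the UI
# the user's request calls for (a form with a picker and a confirm step).
agent_output = json.dumps({
    "type": "column",
    "children": [
        {"type": "text", "value": "Pick a date for your booking"},
        {"type": "date_picker", "id": "checkin"},
        {"type": "button", "id": "confirm", "label": "Review & confirm"},
    ],
})

def render(node: dict) -> str:
    """Toy 'native library': walks the declarative tree and emits widget calls."""
    kind = node["type"]
    if kind == "column":
        return "\n".join(render(child) for child in node["children"])
    if kind == "text":
        return f"Label({node['value']!r})"
    if kind == "date_picker":
        return f"DatePicker(id={node['id']!r})"
    if kind == "button":
        return f"Button(id={node['id']!r}, label={node['label']!r})"
    raise ValueError(f"unknown component: {kind}")

ui = render(json.loads(agent_output))
print(ui)
```

The key property is that the agent never touches rendering: the same JSON tree could be handed to a Flutter, Angular, or Lit renderer, each producing its own native controls.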

To unpack how agent-driven interfaces replace fixed screens in real consumer workflows, we spoke with Aleksandr Loginov, Chief Design Officer at Prequel, a high-growth technology company whose AI-powered photo and video editing apps are widely used in the influencer and creator economy. He oversees product design and end-to-end delivery for visual AI technologies across photo and video, working across design, R&D/Data Science, and art, and serving as a key stakeholder for both mobile and backend engineering. He has helped shape the company’s identity with effects built to convey a user’s specific mood (what users feel, not only what the image looks like), and he worked on interface patterns that later became widely adopted across AI apps: each effect treated as a mini product with its own name, cover, and branding, plus a discover-style browsing page that many AI apps now use as a default model. He also explores assistant-led product concepts in which interaction with a smart assistant becomes the core interface layer.

Aleksandr, software is shifting from fixed screens and buttons to systems that interpret intent and assemble a UI on the fly. From your vantage point building AI products, what is the first real sign that this shift is already happening in how people actually use apps?

The clearest sign is where people start tasks now. A few years ago, most journeys began with a search. You would open Google, skim Wikipedia or a forum, and only then jump into an app. Today, many users start by asking a model first. They go to GPT even for things that used to be simple search-and-click flows.

You see the same pattern at work. People use agentic tools such as Cursor not only for coding, but also for routine actions with files and systems. In imaging, the mindset is shifting in the same direction. Instead of thinking which app to open for a specific edit, users increasingly go to one AI tool and describe the outcome they want. That change is the real signal. People are starting from intent rather than navigating a menu of features.

You scaled Prequel by shifting from generic tools to emotional stylization, allowing users to express specific moods instantly. Can the interface stop being a set of tools we choose from and instead become a proactive partner that senses our intent and prepares the emotional look before we even ask?

Agents will know a lot about us before we even start, because context builds up over time. The way someone communicates already carries signals. Tone, pacing, how they tweak results, and what they consistently choose all reveal patterns. You can infer preferences, constraints, and even the kind of vibe that usually works for them.

The goal is not to take choice away. It is to curate choice so it feels effortless. Done well, the user still feels creative and in control, but they are not forced to wade through endless settings. Think of a great restaurant menu. It is short, but every option is a strong fit. The interface should work the same way, offering a small set of relevant directions instead of making the user configure everything from scratch.

Your team worked on simplifying complex video editing into one-click templates for short-form video platforms, reducing the need for professional software skills. As AI agents move from editing to generating, will the very concept of a template disappear in favor of agents that build unique, bespoke interfaces and layouts on the fly based on the raw footage provided?

I do not think templates disappear overnight. For a while, we will see a hybrid phase. Agents will still lean on proven creative techniques such as templates, repeatable patterns, and familiar editing moves. They will simply use them as building blocks inside the workflow. The difference is how the user accesses them. People will not need to search through libraries or tune settings by hand. The agent will pick the right technique for the situation and apply it in context, based on the goal the user describes.

At Prequel, your team worked on ways to ship multi-step AI workflows faster than traditional development cycles. If agents can now orchestrate backend logic autonomously, is the goal of the next generation of apps to eliminate the UI entirely, leaving only a single input field or a voice command?

Not entirely. That kind of workflow is a good example of what changes when you can ship complex pipelines dynamically. The complexity does not disappear; it just moves out of the user’s way. If an agent can orchestrate a multi-step backend workflow autonomously, the product does not need to expose the pipeline structure, menus, or configuration panels. But it still needs a way for a person to set intent, apply constraints, and approve outcomes.

In professional production environments, the loop can get close to self-contained. The agent generates, evaluates against analytics, and iterates. Humans mostly govern direction, tone of voice, and risk. In that setup, the interface becomes more like an operating framework: rules, criteria, permissions, review gates, and logs. It is less “buttons and screens” and more “how do we control delegation safely?”

For everyday users, a pure “single input field” is usually not enough either, because trust is built through legibility. People need to see what the agent is about to do, what it used as inputs, and what options are still open. So the next generation of apps is not UI-free. It is UI that is lighter, more contextual, and often generated at runtime. Conversation can be the entry point, but you still need small, clear interaction moments for preview, confirm, and steer.
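The preview/confirm/steer loop he describes can be sketched as a minimal human-in-the-loop gate. All names here (`ProposedAction`, `ReviewGate`, the sample actions) are illustrative assumptions, not any product's real API; the point is that reversible actions pass silently while consequential ones surface a small confirm moment and everything lands in a log.

```python
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str       # what the agent is about to do
    inputs: list[str]      # what it used as inputs (for legibility)
    reversible: bool = True

@dataclass
class ReviewGate:
    """Risky actions need approval; everything is logged either way."""
    log: list[str] = field(default_factory=list)

    def review(self, action: ProposedAction, approve) -> bool:
        # Low-risk, reversible actions pass automatically; everything
        # else surfaces a clear confirm moment to the user.
        if action.reversible:
            self.log.append(f"auto: {action.description}")
            return True
        ok = approve(action)
        self.log.append(f"{'approved' if ok else 'rejected'}: {action.description}")
        return ok

gate = ReviewGate()
safe = ProposedAction("apply warm color grade", ["clip_01.mp4"])
risky = ProposedAction("publish to channel", ["final_cut.mp4"], reversible=False)

gate.review(safe, approve=lambda a: True)
gate.review(risky, approve=lambda a: False)  # user declines at the confirm step
print(gate.log)
```

The log is what makes the delegation legible: the user can always see what ran on its own and what was stopped.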

Prompting is a hurdle for most users. You shipped a one-tap experience that turned a cultural trend into a simple flow and delivered strong adoption and business results. Will the Agentic Interface finally kill the prompt by having the agent understand the cultural context and vibe as deeply as your one-button solutions do?

I do not think it kills the prompt. I think it removes prompting from the user’s work. In an agentic interface, the prompt becomes an internal layer that translates intent into actions. The user experience becomes simpler. Set the goal. Add constraints. Review what the system is about to do. Approve the outcome.

That case showed why packaging matters. At the time, many people were searching for the right way to ask for the outcome they wanted. The product shipped that same outcome as a one-tap action. That shift turned a trend into a product flow, and it performed very strongly, even if the specific metrics stay confidential.

One tap products win in trend moments because they encode cultural context into defaults. They remove uncertainty. They remove setup. They shorten time to result. The user does not have to think like an operator to get something that feels current.

Agents will get better at cultural context and vibe. They can learn from what users consume, what performs, and what each audience responds to. But cultural fit is also a control problem. As the agent gets more autonomy, you need clear success criteria, boundaries, and review steps to keep outputs consistent and safe.

The likely end state is a hybrid. Agents make prompting optional and mostly invisible. Products still surface one tap actions when speed and predictability matter more than exploration.

In your work on GIO, the product where users run personalized AI photo sessions and receive a set of generated portraits, the app builds an AI twin of the user from their photos. When an app can model the user this precisely, who is the interface for: the human making choices, or an AI twin acting on the human’s behalf?

When an app can model the user accurately, the interface stops being only a set of screens for manual control. It becomes a control plane for delegation. In professional setups, you often end up with an agent that can generate and publish at scale. The human role shifts from operating tools to governing decisions. You define intent, boundaries, and accountability. That means the interface is less about sliders and more about permissions, approvals, and auditability. What can the agent do without asking? What must be reviewed? What data can it use? What evidence should it log so decisions are traceable? The most important UI elements become policy, review queues, and exception handling, because trust comes from predictable governance, not from more buttons.
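The "control plane for delegation" he describes — what the agent may do without asking, what must be reviewed, what data it may use, and what gets logged — can be sketched as a policy table. The action names, data scopes, and `authorize` function are all hypothetical illustrations, not a real system.

```python
# Hypothetical delegation policy: autonomy, review, data scope, audit.
POLICY = {
    "autonomous":   {"generate_draft", "retouch_photo"},
    "needs_review": {"publish_post", "spend_budget"},
    "data_scopes":  {"user_photos", "style_history"},
}

audit_log: list[dict] = []

def authorize(action: str, data: set[str]) -> str:
    """Return 'allow', 'review', or 'deny', and log the decision."""
    if not data <= POLICY["data_scopes"]:
        decision = "deny"        # touches data outside the granted scope
    elif action in POLICY["autonomous"]:
        decision = "allow"
    elif action in POLICY["needs_review"]:
        decision = "review"      # lands in the human review queue
    else:
        decision = "deny"        # unknown actions fail closed
    audit_log.append({"action": action, "data": sorted(data), "decision": decision})
    return decision

print(authorize("retouch_photo", {"user_photos"}))   # runs without asking
print(authorize("publish_post", {"user_photos"}))    # queued for approval
print(authorize("retouch_photo", {"contacts"}))      # out-of-scope data
```

Every decision is appended to the audit log, which is exactly the traceability he argues trust depends on.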

In personal creation, the opposite constraint applies. People want the system to understand them, but they do not want to be replaced by a proxy. Here, the interface is still for the human, just smarter. It should feel like a dialogue with memory. It should carry preferences forward, keep context, and offer a small set of meaningful choices at the moment they matter, so the user stays the author and the agent stays the assistant.

While AI is powerful, tasks like fine color correction and aesthetic adjustment still require human taste, and human-tuned results often beat purely synthetic ones in the long run. How do we build agent-driven interfaces that don't just automate everything into a generic AI look, but instead amplify a user’s unique aesthetic signature?

Avoiding the generic AI look is less about more creativity and more about building a reliable taste system. Most models drift toward the average because they are trained to be broadly acceptable. If you want a user’s signature to persist, you need mechanisms that preserve consistency over time. That starts with a stable representation of preferences, not just a one-off prompt. The system should remember what the user repeatedly approves and rejects, then translate that into constraints the agent respects. For example, palette tendencies, contrast comfort zone, grain and texture tolerance, preferred skin tone rendering, and typical framing choices.

You also need an evaluation loop that rewards the right outcomes. Instead of optimizing only for novelty, optimize for “this looks like me.” That can be done through lightweight comparisons to the user’s own history and through explicit checkpoints where the user makes a small number of high-impact decisions. The agent can propose options, but the user’s approvals become the training signal that sharpens the signature.
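The evaluation loop he describes — optimize for "this looks like me" by comparing candidates to the user's own approval history — can be sketched numerically. The feature names (`warmth`, `contrast`, `grain`) and the distance metric are invented for illustration; a real system would use a learned style representation.

```python
def distance(a: dict, b: dict) -> float:
    """L1 distance over shared style features (0 means identical)."""
    return sum(abs(a[k] - b[k]) for k in a)

# What the user has repeatedly approved: the stable taste signal.
approved_history = [
    {"warmth": 0.7, "contrast": 0.4, "grain": 0.2},
    {"warmth": 0.6, "contrast": 0.5, "grain": 0.3},
]

def looks_like_me(candidate: dict) -> float:
    """Higher is better: inverse distance to the closest approved result."""
    return 1.0 / (1.0 + min(distance(candidate, h) for h in approved_history))

candidates = [
    {"warmth": 0.65, "contrast": 0.45, "grain": 0.25},  # close to the signature
    {"warmth": 0.10, "contrast": 0.90, "grain": 0.80},  # novel but off-style
]
best = max(candidates, key=looks_like_me)
print(best)
```

Each new approval extends `approved_history`, so the user's choices become the training signal that keeps the agent converging toward their signature instead of toward the average.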

The result is an interface that behaves like a skilled assistant with taste memory. It does not generate endlessly. It converges. It gives a few strong directions, shows what changed, and makes it easy to steer back to the user’s aesthetic when the model starts to wander.

You have a proven track record of scaling AI products from concept stage to meaningful recurring revenue by catching technological shifts early. If people increasingly start with a single agent rather than opening apps, does the App Store model weaken?

As long as marketing exists, the current distribution model will probably keep working. But the nature of apps will change, and you can already see it in corporate software: AI agents are starting to sit on top of the SaaS stack and do the work across tools. Instead of opening five different systems, teams ask one agent to pull data, update records, draft a document, open a ticket, route approvals, and log what happened. When that becomes normal, many SaaS products stop feeling like destinations and start looking like interchangeable back-end services. That is the SaaS apocalypse people are talking about: the agent becomes the front door, and the tools behind it compete on reliability, access, and outcomes rather than UI. Consumer behavior tends to follow enterprise patterns with a delay, so it is hard to imagine this stopping at corporate workflows. Once people trust an agent to complete real tasks end to end, they will start there first, and the winners inside the App Store will be the products built to plug into that agent-native world.

Analytics Insight: Latest AI, Crypto, Tech News & Analysis
www.analyticsinsight.net