Building trust in the era of AI has become one of the core priorities for brands that want to keep a steady flow of leads from organic search. With millions of people shifting from traditional search to ChatGPT and other LLMs to quickly look up brand information or compare brands, you have to get your company featured in AI Overviews and popular AI tools. Without that, you will lose this battle: competitors with better exposure will win the client.
AI is no longer something you only see in movies like The Matrix. It became a significant part of our lives right after OpenAI introduced ChatGPT. Millions of users turn to LLMs for everything from finding the perfect baking recipe to adjusting their day trading strategies. Many Gen Z users tend to replace search engines with LLMs, treating them as advisors in business and relationships. But the very things that make AI and LLMs so unique and helpful also create potential threats when it comes to trust, compliance, and accountability.
No matter what question you ask, AI will give you an answer. And that is the main issue, because any AI is trained on open data. If there is too little of it, the model can give wrong answers or be factually incorrect. Studies show that AI can hallucinate in up to 65% of queries, producing wrong answers or inaccurate conclusions simply because it doesn't have enough facts, or because those facts were manipulated at the source. In one widely cited case, an AI assistant advised adding glue to pizza sauce because that is what food stylists do in TV commercials to make the food look more attractive. But what works in a TV commercial shouldn't end up in a pizza you make at home, right?
That's why trust is the key aspect when it comes to AI and the advice it gives. It is no secret that many traders and investors use LLMs in their daily work, which means their trading and investment strategies can be shaped by whatever ChatGPT says. One wrong statement taken as truth can lead to huge losses.
If you are one of those who use LLMs on a daily basis, try to understand how they reach their conclusions and which sources they use along the way. You would hardly follow medical advice from ChatGPT if you knew it was based on a couple of Quora and Reddit threads. And although most people assume that AI models are bulletproof against cyber threats, this is not exactly true: with enough money invested, bad actors can feed LLMs incorrect or biased data, undermining their clarity and reliability.
Compliance is another angle AI is struggling with, as many countries have their own policies for using it. Governmental bodies require developers to stay within frameworks that align with local norms and legal obligations. There are only a few international frameworks for AI so far, but each year we will see more attempts to formalize the way LLMs work. This will lead to situations where LLMs act and advise differently depending on where a user is located, and that becomes tricky when two or more different points of view are orchestrated by local governments or regulatory bodies. Can this lead to censorship? Certainly; we are already at this stage, since DeepSeek has already been tweaked to avoid sensitive questions about the CCP and the situation around Taiwan. This situation is not unique, so expect to see more geo-related specifics in the future.
The EU is rolling out the Artificial Intelligence Act, which classifies AI systems by the level of risk and harm they can potentially cause. High-risk areas like the defence sector, public health, and biometric data will require human oversight at every step of AI use.
The US has a very fragmented landscape that primarily touches autonomous vehicles, finance, and healthcare, the industries most sensitive to AI implementation. But the more AI is used in other industries, the faster the US will have to introduce a comprehensive federal regulatory framework.
Asia is the least developed region in terms of compliance and regulation for AI. Only China has its own policies; the rest of the countries either have no compliance requirements or are at an early stage of developing them.
In LATAM the AI landscape is much the same as in Asia: most companies use AI the way they want, with no restrictions. Big brands like 777fun use AI in customer support to cut costs, while many local airlines use AI to help with scheduling and launching new routes.
In 2025 it is easy to find out who is accountable for almost anything, from a typo in an article to a plane crash. But how will this work when no human is involved in the process? Whom will we blame when a self-driving car causes an accident that involves injuries or death?
Is it the fault of the company that owns the fleet, or should the AI developers be sued? And if it is the latter, will insurance costs for AI companies skyrocket the way they have in healthcare?
The more industries come to rely on AI, the more questions it provokes. That is why we will see growing pressure on engineers and data scientists over the models they develop. Better training and adherence to ethical guidelines are essential, but while following guidelines is relatively easy, how do we train a machine to give the right answer to literally every question you might ask? This is especially challenging when the number of websites that can serve as core sources has been shrinking dramatically over the past decade. LLMs now rely more heavily on Reddit and Quora than on vetted professionals, and that is an issue OpenAI has yet to solve.