WHO Issues a Warning About the Use of AI in Healthcare

AI in healthcare could be biased and generate misleading or inaccurate information

The World Health Organization called for caution on May 16, warning healthcare organizations that the data artificial intelligence systems use to reach decisions could be biased and could generate misleading or inaccurate information. The WHO said it was enthusiastic about the potential of artificial intelligence (AI) in healthcare but had concerns over how it will be used to improve access to health information, specifically how it will be used as a decision-support tool.

"Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world," the organization wrote. 

It was "imperative" to assess the risks of using large language model (LLM) tools, such as ChatGPT, to protect and promote human well-being and safeguard public health, the U.N. health body said. Although the WHO said it is excited about emerging AI technologies such as ChatGPT, it reiterated that these tools need clinical oversight to ensure they are safe, effective, and ethical.

Importance of AI in healthcare

Artificial intelligence (AI) aims to mimic human cognitive functions. AI-powered advanced analytics and a variety of AI applications are widely used in cancer care, cardiology, and neurology (including stroke care) for early detection, diagnosis, and treatment. These digital tools can also predict treatment outcomes and support prognosis evaluation. In this way, AI in healthcare is proving to be a major technological advance, helping healthcare organizations scale while delivering quality care to patients and potentially extending their lives.

Analytics Insight
www.analyticsinsight.net