Guidelines for Using Artificial Intelligence Are Released by The ICMR

The ICMR has issued guidelines for the use of AI in the health sector, which are discussed below.

Artificial intelligence (AI) has affected every field, including healthcare. In recognition of this, the Indian Council of Medical Research (ICMR) has issued the Ethical Guidelines for AI in Healthcare and Biomedical Research to "guide effective yet safe development, deployment, and adoption of AI-based technologies".

According to the ICMR, some of the recognized applications of AI in healthcare include diagnosis and screening, therapeutics, preventive treatments, clinical decision-making, public health surveillance, complex data analysis, predicting disease outcomes, behavioral and mental healthcare, and health management systems.

"An ethically sound policy framework is essential to guide the AI technologies development and its application in healthcare," given that AI cannot be held accountable for its decisions. Additionally, the ICMR guiding document stated, "It is important to have processes that discuss accountability in case of errors for safeguarding and protection as AI technologies get further developed and applied in clinical decision making."

For the benefit of all stakeholders, the document outlines ten essential patient-centric ethical principles for the application of AI in the health sector. These include accessibility and equity, data quality optimization, non-discrimination and fairness, validity and trustworthiness, autonomy, data privacy, collaboration, risk minimization and safety, and accountability and liability.

The autonomy principle guarantees human oversight of the AI system's operation and performance. The patient's consent must also be obtained before any procedure can begin, and they must be made aware of the potential social, psychological, and physical risks.

The safety and risk minimization principle aims, among other things, to prevent "unintended or deliberate misuse," to keep anonymized data delinked from global technologies so as to guard against cyberattacks, and to require a favourable benefit-risk assessment by an ethics committee.
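The delinking of anonymized data that this principle refers to can be illustrated with a minimal pseudonymization sketch. This example is purely illustrative and not part of the ICMR guidelines; the field names, the SHA-256 pseudonym scheme, and the pseudonymize helper are assumptions made for the demonstration.

```python
# Illustrative sketch only: one simple way to keep identifying fields
# separated (delinked) from clinical data used by an AI pipeline.
# All field names and the hashing scheme are hypothetical.
import hashlib
import secrets

def pseudonymize(record: dict, salt: str) -> tuple[dict, dict]:
    """Split a patient record into an anonymized part (safe to analyze)
    and an identity map (to be stored separately, offline)."""
    # Derive a stable pseudonym from the patient ID plus a secret salt.
    pseudonym = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:12]

    anonymized = {
        "pseudonym": pseudonym,
        "age_band": f"{(record['age'] // 10) * 10}s",  # coarsen exact age
        "diagnosis": record["diagnosis"],
    }
    identity_map = {pseudonym: {"patient_id": record["patient_id"], "name": record["name"]}}
    return anonymized, identity_map

if __name__ == "__main__":
    salt = secrets.token_hex(16)  # kept with the identity map, never with the shared data
    record = {"patient_id": "P-1001", "name": "A. Sharma", "age": 47, "diagnosis": "T2DM"}
    anon, ids = pseudonymize(record, salt)
    print(anon)  # shareable with the AI pipeline
    # `ids` would be stored on a separate, access-controlled system
```

In a sketch like this, a breach of the analysis environment exposes only pseudonymized, coarsened records; re-identification would additionally require the separately stored identity map and salt.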

For AI systems that must be made available to the public, regular internal and external audits are crucial, as the accountability and liability principle emphasizes. The aim of the accessibility, equity, and inclusiveness principle is to close the digital divide by recognizing that the use of AI technology is contingent on the widespread availability of suitable infrastructure.

The guidelines also frame a brief for relevant stakeholders, including researchers, clinicians/hospitals/the public health system, patients, ethics committees, government regulators, and the industry. Noting that the development of AI tools for the health sector is a multi-step process involving all of these stakeholders, the document argues:

To make AI-based solutions technically sound, ethically justifiable, and applicable to a large number of people equitably and fairly, each of these steps must adhere to standard procedures. To make the technology more useful and palatable to its users and beneficiaries, all stakeholders should adhere to these guiding principles.

According to the guidelines, the ethics committee is responsible for conducting the ethical review process for AI in health. This committee examines, among other things, the source of the data, its quality, safety, anonymization, data privacy, data selection biases, participant protection, payment of compensation, and the possibility of stigmatization.

The body is "liable for surveying both the logical meticulousness and moral parts of all wellbeing research and ought to guarantee that the proposition is experimentally solid and gauge every single expected hazard and advantages for the populace where the exploration is being completed," the archive notes.

The guidelines also emphasize informed consent and the governance of AI tools in the health sector, an area that is still in its infancy even in developed nations. India already has several frameworks that address the intersection of healthcare and digital technology.

Examples include the Digital Health Authority envisaged under the National Health Policy for the use of digital health technologies, the Digital Information Security in Healthcare Act (DISHA) 2018, and the Medical Device Rules 2017.
