Will Powerful AI Disrupt Industries Once Thought to be Safe in 2023?

Will 2023 see powerful AI spell the demise of industries once thought safe?

The night is dark and full of terrors, the day bright and beautiful and full of hope, as one fairly commercially successful author once put it. It's a fitting metaphor for AI, which, like all technology, has its advantages and disadvantages. What will AI be capable of in 2023? Will regulation rein in the worst of its effects, or are the floodgates already open? Will powerful, transformative new AI along the lines of ChatGPT upend industries once thought safe from automation? Mike Cook, a member of the Knives and Paintbrushes open research group, agrees with Mozilla's Maximilian Gahntz (quoted below) that generative AI will continue to be a significant, and problematic, force for change. But, he believes, 2023 has to be the year that generative AI "finally puts its money where its mouth is."

Given the success of Lensa, the viral AI-powered selfie app from Prisma Labs, we should anticipate a tonne of similar apps. Expect them, too, to be susceptible to being tricked into producing NSFW images, and to disproportionately sexualize and alter the appearance of women. Maximilian Gahntz, a senior policy researcher at the Mozilla Foundation, predicted that the integration of generative AI into consumer technology will amplify the effects of such systems, both positive and negative. Stable Diffusion, for instance, was trained on billions of images scraped from the internet before it "learned" to associate particular words and concepts with particular imagery. And text-generating models can often be tricked into espousing offensive views or producing misleading content.

As the U.K. considers legislation that would remove the requirement that systems trained on public data be used strictly non-commercially, expect that opposition to mount in the coming year. A small number of AI companies, most notably OpenAI and Stability AI, took centre stage in 2022. But as the capacity to build new systems extends beyond "resource-rich and powerful AI labs," as Gahntz put it, the pendulum may begin to swing back toward open source in 2023.

According to him, a community-based approach could lead to greater scrutiny of systems as they are built and deployed: "If models are open and if data sets are open, that'll enable much more of the critical research that's pointed out many of the flaws and harms associated with generative AI and that's often far too difficult to conduct." Examples of such community-focused projects include the large language models from EleutherAI and from BigScience, an effort backed by AI firm Hugging Face. Stability AI itself directly funds several such groups, including the music-generation-focused Harmonai and OpenBioML, a loose collection of biotech experiments. Money and expertise are still needed to build and run sophisticated AI models, but decentralized computing may come to threaten traditional data centres as open-source efforts mature.

However, Chandra notes that the big labs will retain their competitive advantage so long as their methods and data remain under lock and key. OpenAI, for example, has unveiled Point-E, a model that can generate 3D objects from a text prompt. While the model was open sourced, OpenAI did not disclose the sources of Point-E's training data or make that data available. To the benefit of more scholars, practitioners, and users, Chandra added, "I do think the open source efforts and decentralization efforts are very worthwhile." Looking ahead, regulations such as the EU's AI Act may change how companies develop and deploy AI systems, as may more local efforts such as New York City's AI hiring statute, which requires that AI and algorithm-based tools be audited for bias before being used in hiring, promotion, or recruiting decisions. Chandra sees these regulations as crucial, particularly given generative AI's increasingly evident technical flaws, such as its tendency to produce factually inaccurate information.

That makes generative AI difficult to apply in many fields where mistakes can carry exorbitant costs, such as healthcare. The ease with which such systems produce false information also raises concerns around misinformation and disinformation, she added. Despite this, AI systems are already making decisions with moral and ethical ramifications. Regulation, however, will amount to little more than a threat in the coming year; expect far more wrangling over rules and court cases before anyone is fined or prosecuted. In the meantime, companies may still jockey for position in the most advantageous categories of forthcoming rules, such as the AI Act's risk categories. Going into the new year, it is unclear whether businesses will be persuaded by that argument, especially given how eager investors appear to be to pour money into generative AI.
