The Risks, Benefits and Ethics of Artificial Intelligence
Recently, social media has been abuzz with art and text generators driven by artificial intelligence (AI), leading consumers everywhere to question just what kind of robots we’ve let into our homes and lives.
In November 2022, leading AI research lab OpenAI released ChatGPT, an artificial intelligence language model that, when given a prompt, produces conversational, convincing text. On its heels came Lensa, an app from Prisma Labs that uses AI to transform selfies into avatars that make users look like they stepped out of a Marvel comic book.
And on the music side of things, Riffusion, created by Seth Forsgren and Hayk Martiros, uses the open-source AI model Stable Diffusion to turn text prompts into spectrogram images that are then converted into music.
Such AI applications are captivating, pushing the boundaries of everyday creation and commercial services. They are also raising questions about privacy, property rights, biased datasets, and the disruption of entire industries. And these concerns are sparking bigger conversations about the risks and benefits of AI technology.
Yes, AI now runs the gamut of uses, from work to entertainment. Let’s have a look at its implications, particularly the risks, benefits, and ethical questions it raises.
Types of AI
AI refers to intelligence exhibited by computer systems, which are developed to perform tasks and to improve themselves based on the information they collect.
This kind of intelligence stands in contrast to the natural intelligence of humans and other organisms, which encompasses problem-solving, reasoning, and adaptive skills.
Across the industry, researchers tend to classify AI into three categories:
Weak, or narrow, AI
General AI (AGI)
Superhuman AI
Narrow AI is designed to handle a single subject or narrow task and cannot perform anything beyond the rules it was programmed with. Siri, for example, is a narrow AI system.
General AI (AGI), on the other hand, is technology that can think like a human: processing and solving complex tasks and independently learning to solve problems better and better. This is the territory of deep learning, an approach that loosely imitates the workings of the human mind.
Last, superhuman AI is even more meta: it describes hypothetical systems of AI whose abilities surpass our own.
Risks and benefits of AI
So are we all evolving into cyborgs with lasers shooting out of our eyes? Nope, not yet. Still, humans feel both fear of and fascination with AI and its potential. It is important to put AI into context, because hype can sway the public into thinking we are headed for armageddon.
The benefit of AI is that it can solve complex problems efficiently by learning from large, evolving data sources. It can free up time and effort for other human investments and innovation. AI is finding strong use cases in fields such as content creation, coding, autonomous driving, writing, and personalized learning. The risks, however, range from algorithmic bias and job loss to AI becoming an existential threat.
A legitimate concern is that AI training datasets encode biases. For example, women across social media critiqued the app Lensa for outputting edited images that sexualized their likenesses and whitewashed people of color. The claim was that these AI algorithms perpetuated damaging stereotypes and did not produce accurate, meaningful outputs. The problem, however, often lies with the quality of the data fed into these systems: algorithmic outputs are only as good as the data put in. Depending on who collects and curates that data, the results will reflect certain perspectives while limiting, and potentially harming, others. To support a more just world, it is imperative to include diverse points of view in the design and training of AI.
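To make “outputs are only as good as the inputs” concrete, here is a minimal, hypothetical sketch (the labels and counts are invented for illustration): a trivial majority-vote “model” trained on skewed labels can only reproduce that skew at prediction time.

```python
from collections import Counter

# Hypothetical, deliberately skewed training labels: 90% of the
# examples this toy "model" learns from carry one label.
training_labels = ["label_a"] * 90 + ["label_b"] * 10

def train_majority_model(labels):
    # A trivial "model" that always predicts the most common training label.
    most_common_label, _ = Counter(labels).most_common(1)[0]
    return lambda _features: most_common_label

model = train_majority_model(training_labels)

# Whatever the input, the prediction mirrors the skew in the
# training data: garbage in, garbage out.
print(model("any_input_at_all"))  # -> "label_a"
```

Real systems are vastly more complex, of course, but the underlying dynamic is the same: a model cannot see past the data it was given.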
Meanwhile, other concerned citizens have spoken out about privacy and intellectual property rights, as publicly available content on the internet, including material entangled in questions of copyright, trademark, consent, and compensation, is being scraped to train AI models. There is also an ongoing conversation about whether AI generator tools are anti-creator and could displace the very markets they draw from.
Sasha Stiles, a lifelong poet and AI researcher nominated for the Pushcart Prize, Best of the Net, and the Forward Prize, explores the intersection of text and technology in her work, experimenting with language models (LMs) like GPT-2 and fine-tuned text generators. Her book Technelegy was written in collaboration with AI, using her own poetry as the foundational training data.
“I’m much less interested in the idea of being replaced by AI than in the prospect of being augmented: having our human capabilities expanded and elevated via intelligent systems,” she said. To Stiles, the binary between technology and humans is a false one. From fire, the wheel, and the printing press to the internet, human civilization is a direct product of advancing technologies. As a female poet, she sees her work as doubly important: she strives to bring her own perspective to these systems and to encourage peers who are currently underrepresented in their making.
General applications of AI
Don’t panic. Applications for generative AI models are still relatively limited. However, fascinating use cases have been proposed for LMs, including ChatGPT. OpenAI’s new bot can write college-level essays, debug code, tell funny jokes, act as a translator, and even compose apologetic sonnets that sound like William Shakespeare. ChatGPT is great for low-stakes tasks, but it still falls short on sensitivity and specificity.
Princeton University scientists Arvind Narayanan and Sayash Kapoor wrote an article on Substack describing a framework for identifying tasks that ChatGPT and other LMs can handle. They outline a few types of ideal AI tasks:
Tasks where it’s easy for the user to check if the bot’s answer is correct, such as debugging help (sketched in code after this list)
Tasks where truth is irrelevant, such as writing fiction
Tasks for which there does, in fact, exist a subset of the training data that acts as a source of truth, such as language translation
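As a hedged sketch of the first category, the snippet below shows one way a user might keep themselves in the verification loop. Here, `ask_model` is a hypothetical placeholder for whatever LM API is in use, not a real library call, and the bot’s suggested fix is accepted only after an automated test confirms it works.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for an LM API call; here it returns a canned fix.
    return "def add(a, b):\n    return a + b"

def verified_fix(prompt, test):
    # Treat the model's suggestion as untrusted until the test passes.
    suggestion = ask_model(prompt)
    namespace = {}
    try:
        exec(suggestion, namespace)   # run the suggested code
        if test(namespace):           # check it against known-good behavior
            return suggestion
    except Exception:
        pass
    return None  # the user can immediately see the bot failed the check

fix = verified_fix(
    "My add() returns a - b instead of a + b. Please fix it.",
    test=lambda ns: ns["add"](2, 3) == 5,
)
print("accepted" if fix else "rejected")  # -> accepted
```

The design point is simply that the cheap, mechanical check, not the bot’s confident tone, is what decides whether the answer gets used.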
The downside is that the bot sometimes sounds so convincing that people can mistake its output for the truth. ChatGPT’s reliability is still mediocre, as OpenAI’s own website discloses to anyone who opens a trial account. Critics argue that the downstream effects of an authoritative-sounding bot producing inaccurate outputs could push our current epistemic crisis even further into a black hole.
Ethical solutions for AI’s risks
According to a paper by University of Washington researchers, several planned approaches must be in place to mitigate the risks of AI language models. Researchers and AI trainers should carefully document their datasets, evaluate and audit their effectiveness on a regular basis, and hold themselves accountable for model behaviors that risk doing harm. Undocumented data fed into an AI learning system risks perpetuating harm without recourse or appeal.
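The researchers don’t prescribe a single format, but as a hedged illustration, dataset documentation along those lines could be captured in a lightweight record like the one below. The fields and values here are invented for the example, not a schema from the paper.

```python
from dataclasses import dataclass

@dataclass
class DatasetCard:
    # Illustrative documentation fields, not a standard schema.
    name: str
    source: str                       # where the data came from
    collection_method: str            # how it was gathered
    known_gaps_and_biases: list[str]  # documented skews and blind spots
    last_audit: str                   # when effectiveness was last evaluated
    accountable_team: str             # who answers for downstream harms

card = DatasetCard(
    name="example-web-text-v1",
    source="publicly scraped web pages (hypothetical)",
    collection_method="automated crawl with an English-language filter",
    known_gaps_and_biases=[
        "over-represents English, US-centric viewpoints",
        "under-represents non-Western names and dialects",
    ],
    last_audit="2023-01-15",
    accountable_team="data-governance@example.org",
)
print(f"{card.name}: last audited {card.last_audit}")
```

Even a simple record like this gives auditors, and anyone harmed downstream, something concrete to appeal to.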
Moving forward ethically with this technology calls for organizations to set standards of accountability and transparency. Governance around AI should scrutinize data collection practices and audit for model behaviors and designs that could produce inaccurate or harmful effects. Understanding how AI models are built, and engaging in productive, collaborative conversations, will be essential.
So, readers — are you up for the challenge?