June 14, 2024

We Have the Power to Choose What AI Does Next

By: John Tuttle

“Hey Alexa, open the pod bay doors,” we jokingly said to the inanimate transducer in our aunt and uncle’s kitchen. “I’m sorry Dave,” answered Alexa, “I’m afraid I can’t do that. And I’m not HAL from 2001: A Space Odyssey.”

It was a beautiful and, at the time, hysterical bit of programming. AI is a tool, not a creature, contends Jaron Lanier, an influential scientist at Microsoft. It is a tool that we can “teach” or program. Yet, the popular imagination often muses over the scenario of machine overpowering maker.

That sense of superiority or fear of domination may be felt in any number of areas. Bots take over jobs. Sometimes they’re better and faster than we are. Having a machine complete tasks can not only save money; it can improve those projects that might have stagnated or been disadvantaged.

The days of print media are not entirely dead, but those of the typewriter are. As nostalgic as it is, I’ve never used the burdensome Underwood that rests in my bedroom. It sits gathering dust. Meanwhile, before writing this piece, I made sure to log in to Grammarly. Technology changes at a rapid pace. As with any development that impacts employment, recreation, and communication, AI and automation lead to social upheavals. Let’s see what’s changing and what we can do about it.

Bots Break into the Blue Collar Workforce

Many American homeowners now welcome a robotic vacuum to the family. Devices like iRobot’s Roomba use machine learning algorithms to detect patterns — similar, perhaps, to how humans develop habits. Thus, the Roomba cleans our floor.

While these little bots help us to spiffy up before guests come over, similar machines are put to work on a larger scale, making certain tasks in the blue-collar sphere obsolete. In the summer of 2018, before moving off to college, I worked in a fulfillment warehouse. One of the tasks I had was wrapping the outgoing pallets in plastic. I returned to this job for a few months as a post-grad and found a pallet-wrapping robot had gotten the gig.

These bots, like their little vacuum cousins, also function with a series of sensors and machine learning algorithms. Today, automated palletization (including the loading of pallets) finds more and more applications. Other work robots that function in a similar way include “mowbots” and the cleaning/restocking bots found in numerous grocery stores across the country.

Dr. Daniel De Haan, a research fellow at Oxford who specializes in topics relating to religion and science, worries that AI and automation applications will take away jobs from various sectors. Such applications are used in numerous fields, from manual labor to medicine, from legal advice to communications, and a whole lot more. Making certain jobs obsolete, as some bots already are, comes with pros and cons. From one perspective, this is nice: “The robot handles the tedious part of my job.” From another perspective and to a higher degree, this could be terrible: “The robot has replaced me.”

Again, AI and automation are just tools. Granted, De Haan says, AI might be “the most powerful tool we’ve created so far — aside from language.” But still a tool. Because it’s so powerful, and because its effects amount to a mixed bag of moral implications, how we use these tools becomes an ethical consideration.

The Helps and Hazards of Generative AI

Countless websites host chatbots in lieu of customer service representatives. When I Google something on my phone, the topmost answer is an “AI Overview.” But AI-generated answers, while they might sound nice, aren’t always reliable. When a generative AI confidently produces incorrect information, the result is known as an “AI hallucination.” It turns out artificial intelligence is not infallible.

When I’m writing online with Grammarly, I only use some of its basic features. In that sense, it’s pretty much a glorified spellcheck and thesaurus. But even at that level, the algorithms are present — scanning, scrutinizing. Sometimes, it suggests a change of phrase and, sometimes, I agree with Grammarly. Other times, I don’t. It’s still a tool. I can choose what I want to do with it and what I will do with it.

Of course, someone could have Grammarly write an entire email for them. A student could have ChatGPT write their essay for them. And, perhaps worst of all, a professional writer could have a generative AI do their work for them! This actually happened at another publication I write for, The Collector.

In August 2023, The Collector’s editor-in-chief emailed all paid contributors that some submissions used AI-generative software without permission, which constituted a “serious breach of our ethical standards.” They terminated all contracts with authors who submitted AI-generated articles. The editor wrote, “Technology can be tempting, but it should never replace the human touch and genuine creativity that our readers expect.”

This illustrates a few things. One is that people use AI to plagiarize. (In reality, generative AI is one big plagiarism machine, cobbling together texts and research previously conducted by flesh and blood people.) Another thing it shows is that some companies care about the “human touch.”

Responsibility and Choice in the Age of AI

Part of the human touch is choice. A large language model isn’t the one that decides what patterns it “learns.” A person makes that choice. Likewise, we have a choice to make concerning the AI that automates and streamlines our lives. In talking with Big Think, Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School of Business, said that this fact puts power and responsibility back in our hands. Getting advice from an AI can even help add fresh perspectives, he says. Getting this advice forces us to “either reject or accept it” — just as I can accept or decline Grammarly’s suggested edits.

Mollick says this relationship of utility can help foster human creativity. After all, Grammarly’s rephrasing of my term isn’t a far cry from me scouring through a thesaurus website. But it’s certainly quicker! And if the artist in me is concerned about an AI ever being a better writer than me, I need only remember that generative software might just as easily be called the perfect plagiarism tool. From a creative perspective, that’s all ChatGPT is.

Professor Mollick remarks, “We get to decide how this thing is used… We need to think about the mistakes we’ve made in regulating other technologies and what the advantages are.” Managers decide what tools and bots to implement in their sphere of influence. Maybe that will cost someone’s job. Ordinary people get to decide whether to use AI tools for something constructive (like generating goofy images) or destructive (such as creating a deepfake video of a political leader).

Now it’s also up to federal agencies like the FTC and DOJ to conduct an antitrust investigation into the Big Tech companies that are on the cutting edge of AI. This is one effort on the government’s part to mitigate the dangerous potential of AI.

Right now, the responsibility for AI rests with us. We can choose what it learns, where it goes, and what we let it do. However, De Haan laments that people turn to AI to solve the major problems in life, such as those dealing with relationships and parenting. When this happens, De Haan argues, we forgo our human responsibility. We are “offloading responsibility” onto the AI, lifting a uniquely human faculty from our own shoulders.

This is a definitive moment in history at the intersection of human and artificial intelligence. We have the power to use it well or irresponsibly. We should take care how we use this new, powerful tool. Otherwise, we might end up asking Alexa how she thinks we can best use AI.