by Kathy Huang
Earlier this year, during an icy spell in Boston, I attended my first American Association for the Advancement of Science meeting. On the last day, I ran into a fellow attendee on my way back to the venue after getting lunch. We were both mildly lost, but once we found our bearings and could walk at a more relaxed pace, she remarked that there had been a surge in the number of Artificial Intelligence (AI)-related panel sessions at this year’s meeting compared to last year’s.
While an annual meeting that draws some four to five thousand attendees is only a small sample of the population, this surge reflects a broader rise in the presence of AI in our work and personal lives. It is a change that has sparked mixed reactions: according to a recent Pew Research Center study, 51% of adults in the US are more concerned than excited about increased AI usage in daily life.
So, how wary should we be of AI technologies? And to what extent does our wariness come from media hype, as opposed to concrete problems that AI has already presented us with?
Sensationalism in the Media
News outlets frequently dramatize their content to appeal to our emotions and curiosity, thereby boosting their readership. When it comes to AI, headlines like “Will AI take over the world and all our jobs?” immediately grab our attention and can make us fearful of and antagonistic towards AI.
In the pursuit of sensationalist, science fiction-inspired news stories about AI, journalists can become biased in the kinds of information they want from experts. Brian K. Smith, a professor of computer science and education at Boston College, recalls a reporter asking him about the possibility of an AI apocalypse in which robots rise up against humans, à la Terminator or RoboCop. Smith challenged the reporter: “You’ve given me science fiction examples. Is there any instance so far in the history of humanity where machines have tried to rise up and destroy the human race?” The reporter ended the interview right there.
“People are preoccupied with this one sort of model where AI is imagined as a person that will replace people,” said Ted Underwood, a professor of English and information sciences at the University of Illinois Urbana-Champaign. “Stories like that get a lot of attention.”
AI experts themselves can also be guilty of over-hyping their work. In 2016, “godfather of AI” and 2024 Physics Nobel laureate Geoffrey Hinton suggested that we should no longer train radiologists because AI would outperform humans in that field within the next five years, or “it might be ten.” Since that prediction, the radiology staff at the Mayo Clinic has increased by 55% to include more than 400 radiologists. Across the US, the American Board of Medical Specialties reported that there were 59,525 board-certified radiologists in June of 2019. By June of 2024, that number had risen to 66,962.
MIT labor economist David Autor has argued that predictions of AI stealing jobs from humans underestimate how complex those jobs actually are. Indeed, AI did not replace radiologists. Rather, it has helped radiologists work better, such as by flagging images that likely show an abnormality so they know where to look first, or by taking accurate, reproducible measurements that are variable when done by hand. Human radiologists are still needed to interpret the images and numbers, draw conclusions, and communicate their findings to patients and other physicians. Admirably, Hinton has since acknowledged that he overstated the case for AI making human radiologists obsolete, updating his view to say that AI will make them more efficient instead.
“I would like to see [media] coverage that spends more time thinking about how people can expand or augment what they’re doing, rather than the people versus robots script,” said Underwood. He highlighted that we need greater technology literacy for using AI tools in the workforce, as currently, “Most people don’t have the understanding to take full advantage of them, even if they technically have access.”
In fact, many may only understand a small slice of what constitutes AI. “At this moment in time, when people say ‘AI,’ they almost always mean ChatGPT,” said Smith.
ChatGPT, Generative AI, and the Humanities
Since entering the chat—pun very intended—in late 2022, ChatGPT has essentially become the face of AI. Yet AI research began back in the 1950s, and its results have long been part of our everyday lives, from Siri to email filters to Roombas.
Smith credits the equivalence of ChatGPT with all of AI to the branding genius of OpenAI, the company behind ChatGPT. “OpenAI did such a phenomenal job by putting a simple interface on top of their language model and calling it ‘ChatGPT,’” he said. “It became so public-facing [while] the rest of what we talk about as AI is always buried.”
AI is an umbrella term for computer-based technologies that perform tasks by trying to mimic human thought. ChatGPT is an application of generative AI, a newer type of AI that learns patterns from data and imitates them to produce text, images, video, code, and other content based on inputs or prompts. In contrast, traditional AI involves systems that mainly analyze and predict, rather than produce content.
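To make that distinction concrete, here is a minimal Python sketch contrasting the two. It assumes the open-source Hugging Face transformers library and small demo models, none of which are mentioned in this article; it is an illustration, not a description of any particular product.

```python
# Illustrative contrast between "traditional" and generative AI.
# Assumes the Hugging Face `transformers` library is installed;
# the models used here are small, arbitrary demos.
from transformers import pipeline

# Traditional AI: analyze an input and predict something about it.
classifier = pipeline("sentiment-analysis")
print(classifier("The keynote on AI policy was surprisingly hopeful."))
# e.g., [{'label': 'POSITIVE', 'score': 0.99}]

# Generative AI: learn patterns from data, then produce new content from a prompt.
generator = pipeline("text-generation", model="gpt2")
print(generator("The future of AI in the humanities", max_new_tokens=30))
```

The first model only assigns a label and score to text that already exists; the second produces new text of its own, which is the difference drawn above.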
If we restrict our impressions of AI to ChatGPT and other generative AI tools, it is easy to see why AI has received such backlash in recent years. Students have used ChatGPT to write essays that were assigned to teach critical thinking and communication skills, defeating the purpose of those assignments. People can rapidly spread mis- and disinformation with AI-generated content, and while some instances might be light mischief, others can seriously harm reputations and hinder emergency responses. Additionally, scammers have used AI to write text for phishing emails and advertise fake products, cheating people out of time and money.
While most would readily denounce such misuses of generative AI, a squishier issue involves using generative AI to produce art and writing for private enjoyment. Certainly, training AI systems on copyrighted material without their creators’ permission raises ethical concerns. So does taking business away from human artists and writers when the creative industry already tends not to pay well. But what if someone who cannot afford to commission a human artist just wants to Studio Ghibli-fy a family photo with ChatGPT to put on their desk?
If the technology is available, it is hard to persuade people not to use AI-generated work because they “should” help human creatives make a living, or because the output does not have enough human labor behind it to be “real” art. The focus then turns to what companies might do to restrict access to these technologies, or to pay creatives for using their intellectual property to train generative AI.
Underwood, whose research involves text-mining digital libraries, has thought about fair use for a long time. He believes training AI models with copyrighted material should be fair use as long as the material is not too new (say, older than five years) and as long as the models are only extracting patterns from it rather than reproducing it, which is legally prohibited. He offers this perspective on the flip side:
“Suppose we rule that training is not fair use. You have to pay publishers for their archives in order to train on them. The big tech companies are perfectly capable of doing that; they absolutely could buy up publishers’ back catalogs. If we determine that the right to train is something that can be bought and sold, we’d now have big intellectual monopolies where every professional in the country has to pay $500 a month to [a company] to have their [AI] model, which is the exclusive model. That’s much more terrifying to me than the idea that these models are going to compete with creatives. If we have [AI technology] open, creatives can figure out how to use it themselves and still have a job. If it’s a market that can be cornered, there is potential for concentration of power and increase of economic inequality. To me, the intellectual property argument against AI has a high chance of backfiring on those important pieces.”
Though the concept of AI versus humans looms large, generative AI competition against human creatives is not the only possible relationship between AI and the humanities. Memo Akten, Sougwen Chung, and other acclaimed artists have already integrated AI into their work. Underwood sees promise in using large language models (LLMs) to computationally analyze vast amounts of text and draw conclusions about literary history, a field of research known as distant reading. In the past, he has done this with statistical models to study trends like gender stereotypes across two centuries of written fiction. More recently, he has shown that an LLM-based model can not only be more accurate than a statistical one at estimating the passage of time in fiction—another trend he has studied before—but can also be more interpretable, documenting its “thought process” verbally.
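To give a flavor of what such an LLM-based estimate might look like in practice, here is a hedged sketch using the OpenAI Python client. It is not Underwood’s actual pipeline; the model name, prompt, and example passage are all illustrative assumptions.

```python
# Hypothetical sketch: ask an LLM to estimate narrative time in a passage
# and to document its reasoning. Not Underwood's actual method; the prompt,
# model name, and passage are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

passage = (
    "By the time the snow melted, Anna had already forgotten the letter; "
    "it was June before she thought of it again."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Estimate how much fictional time passes in this passage, "
            "then briefly explain the textual clues you relied on:\n\n" + passage
        ),
    }],
)
print(response.choices[0].message.content)
```

Scaled across thousands of passages, verbal explanations like this are what can make an LLM’s estimates more interpretable than a bare statistical score.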
“I don’t see us just not doing the technology,” Underwood said. During a presentation at the Wolf Humanities Center at Penn, he recommended that humanities scholars work with AI developers to help their models represent a more diverse array of cultural perspectives. In computer science terms, this is called pluralistic alignment. Fortunately, Underwood notes, computer scientists recognize this need for humanities input and better cultural representation.
Panic, Policy, and Regulation
Patrick Grady and Daniel Castro from the Center for Data Innovation describe the phenomenon of the “tech panic cycle” as a bell curve of public concern. In this curve, Trusting Beginnings in a new technology are followed by a Rising Panic that eventually reaches a Height of Hysteria. This hysteria then subsides in a Deflating Fears stage, and finally we reach a Point of Practicality in Moving On from the panic.
In April of 2023, Grady and Castro placed the hysteria surrounding generative AI, due to the introduction of ChatGPT, in the Rising Panic segment of the curve (Figure 1). Now, two years later, Smith believes we are still in the same vicinity. He noted that in practice, the curve is not smooth, but rather contains many oscillations, and different sectors of people—educators, creatives, etc.—can be at different points along the curve. Overall, though, “I don’t think we’ve come as far as Practicality [yet],” Smith said. “It’s a moving target.” Printed books, recorded sound, and motion pictures all went through tech panic cycles, suggesting that at some point, we will reach an overall Point of Practicality with generative AI—and AI broadly—as well.
Figure 1. The tech panic cycle. Grady and Castro place generative AI in its Rising Panic phase as of April 2023. Taken from https://datainnovation.org/2023/05/tech-panics-generative-ai-and-regulatory-caution/.
This is not to say that we should just sit back and let the curve happen; we can still make more careful choices about how generative AI tools are developed and implemented. The extent to which this should involve regulation at the policy level is a thornier issue. Some, such as Grady and Castro, think that regulating generative AI will only stifle innovation, even with the flexibility that frameworks like the EU AI Act offer.
“I think there is room to say, ‘we want to be responsible,’” Smith believes, citing policy development surrounding nuclear energy and biotechnology. “I think you could do that without saying, ‘and then there will be no more innovation.’ There have to be some kind of rules or constraints that govern the decision-making around those things, and also conversations with developers where they’re in communities aligned with people who could help them see, ‘there’s a potential problem here.’”
He understands Grady and Castro’s point, though: “A law is built on precedent, so it’s all going to be [hypothetical] until some specific cases come up.”
Our Future With AI
Many grievances against AI are not intrinsically technology issues, but social issues that AI forces us to reckon with. The panic people feel at the idea of losing their jobs to AI’s cheaper labor is a capitalism problem. Cheating in school did not start because of ChatGPT. Academic integrity, which Smith likes to say is the “AI” we are actually concerned about, is more about how teachers are teaching and assigning work, and what kinds of values students are learning from the people around them. Data centers that house AI machinery guzzle enormous amounts of water and electricity. This is an environmental cost that younger, more eco-conscious generations may want to weigh against whatever is gained by cheating in school with ChatGPT and other LLM-based tools. Misinformation is not a new phenomenon, either. AI has just made it easier and faster to produce, which means the general public needs more digital media literacy alongside improvements in watermarking AI-generated content.
There is no promise, as Underwood noted in his talk at Penn, that the good AI can do will outweigh its harms. AI’s many associated problems, beyond what I have covered here, are very real and should not be dismissed in the name of technological progress. But some aspects of AI are perhaps not as awful as their reputation to date suggests, and we can still be optimistic about our ability to build and harness AI systems for good.
When I asked Underwood if there was anything he particularly looked forward to AI being able to do, he said, “I would love to see [AI models] be able to tell a suspenseful story. Right now they don’t understand suspense.”
Will generative AI ever get there?
I guess we will find out.