By Tom Robotham

Lately I’ve been thinking a lot about artificial intelligence and its potential impact on society. On one end of the spectrum is the grim scenario of robots taking over and either enslaving the human race or wiping us out altogether. This nightmare has long been the stuff of science fiction, notably in the 1968 film 2001: A Space Odyssey, in which the computer HAL 9000 tries to kill the astronauts aboard their spaceship after learning that they plan to shut “him” down.

On the other end of the spectrum is the possibility that AI might free us from all kinds of mundane tasks and accelerate human learning. 

I’ve been attempting to make sense of all of this, first by reflecting on the history and impact of technological change over the last 150 years or so. At the moment, I’m thinking in particular about the changes witnessed by my paternal grandparents, who were born 15 years before the Wright brothers proved that humans could fly and lived long enough to see men land on the moon.

Aviation, in particular, developed at astonishing speed in its early years, from the first flight in 1903 to routine military use just 11 years later.

When they were children, meanwhile, my grandparents lived without electric lights, radio or telephones. By the time they died, television had long since become commonplace, and the age of personal computers was dawning.

The changes I’ve witnessed in my lifetime are not quite as dramatic but are striking nonetheless. When I was a kid, I loved television, but my viewing options were limited to half a dozen channels on our family’s 19-inch black-and-white set. Today, I take it for granted that I can watch on demand, on my large high-definition screen, virtually any film or TV show ever made.

My work life has also changed by leaps and bounds. When I was in college, I wrote all of my papers on a manual typewriter and did all of my research using the library’s card catalogue, books and microfilm. After college, when I got my first newspaper job, I graduated to an IBM Selectric and was fascinated that I could erase typos at the press of a key, thanks to built-in correction tape. When desktop computers arrived a few years later, I was awed by the fact that I could move entire paragraphs at the click of a button. Nevertheless, all during my early newspaper career, I still used phone books, paper maps and a beeper, which alerted me that I needed to find a pay phone, fish a quarter from my pocket, and call my office.

Even as late as 1993, when I started my master’s thesis, the internet still had no practical applications for the average person, so I had to do all of my research the old-fashioned way. 

I don’t remember precisely what year it was that I got my first cell phone, but I do know that well into the early 2000s, I was still using a landline most of the time. It’s worth remembering that the first iPhone wasn’t introduced until 2007. And yet, within a few years, smartphones were ubiquitous.

Around that same time, I discovered MySpace. Facebook was still a marginal platform, limited mostly to college students.

Fast forward to the present—less than two decades later. Here’s an outline of my typical day: After getting up and making coffee, I open my laptop and do Wordle and Spelling Bee, then read the online version of The New York Times. (I can’t remember the last time I held a newspaper, other than VEER.) After that I check my various email accounts, glance at Facebook, then log onto the ODU website and do some grading on an app called Canvas. If I have the time, I also watch Stephen Colbert’s monologue from the night before, on YouTube. (I haven’t watched live TV in more than a year.)  

Two days a week, I still go to campus and teach students face-to-face, but all classrooms there are equipped with desktop computers, which I use regularly to show videos and project literary texts on screen to facilitate discussion. Logging on to those machines, by the way, now requires my smartphone for two-factor authentication.

For about 50 of my 69 years on this planet, none of this existed. And yet, I now can’t imagine life without these technologies. 

In light of this, I can only imagine how thoroughly integrated technology is into the lives of my students, who’ve never known a world without the internet, smartphones, social media, email and Zoom meetings.

It’s no wonder that they take ChatGPT for granted and are tempted to have it write their papers for them. 

Which leads me back to the potential ramifications of what we’re calling artificial intelligence. 

First of all, let’s be clear that artificial intelligence, in some form, has been with us since the dawn of the computer age. It’s there in the hand-held calculator, never mind the technology that allows us to ask Google how to get a wine stain out of a carpet, or give us driving directions.

So why has AI become alarming all of a sudden?

For me, it’s disturbing on a personal level because I can easily see a future in which I will no longer be needed. Universities will place growing emphasis on courses designed and graded by AI. That may or may not happen in my lifetime, but it’s not farfetched.

In the meantime, I’m concerned that AI will erode students’ ability to think and write because the temptation to have that process done for them will be too great. And not only students: Already, some professors are using it to generate lesson plans and lectures. This holds no appeal for me because I enjoy creating from scratch, but I’m trying to keep an open mind. I suppose ChatGPT could be a useful tool for generating ideas that might be selectively incorporated into an original piece of work. Alas, many people lack that discipline. Already, I’ve had students call on AI to write essays, which they then copy, paste and submit without even reading them first.

Beyond education, there are two other areas in which AI poses danger: politics and mental health. Over the last decade or so, we’ve seen how corrosive fake stories on the internet can be. Now, with AI, we’re seeing more and more images and videos that are entirely fabricated yet realistic enough to fool many folks.

More dangerous still is a growing trend among teens to turn to AI chatbots for mental-health advice. There are documented cases of chatbots encouraging depressed teens to take their own lives.

The bots don’t always lead people astray. Recently, I heard a story about a woman who found the courage to leave her abusive husband after consulting AI. That turned out to be good and necessary advice. But the more general risks of relying on it to make grave decisions should be obvious to everyone.

Still, I wonder if we’re looking at these new developments too pessimistically. Perhaps our fears of AI are a reflection of the darkness of our times. We used to have more faith in the future. I’m thinking in particular of the 1964-65 New York World’s Fair, where the technologies of the future were presented with bright and shining optimism. And in retrospect, much of that optimism seems to have been justified. Today, by contrast, there’s an oppressive sense of gloom in the air, and a fervent desire to go backwards.

For better or worse, though, there’s no going back. AI is here to stay. The best we can do is to try to use it responsibly and to stay aware of both its benefits and pitfalls.