Note: “AI” here simply refers to Large Language Models and Generative Artificial Intelligence. I do not think a true “Artificial Intelligence” exists today, but AI is the common term for everything ranging from ChatGPT to MidJourney and is acceptable shorthand in this context.
Anyone who has followed my writing up to this point knows that I am, on the whole, a techno-skeptic. I have been sounding the alarm on AI for some time now, and have been a harsh critic of the use of AI generated images. It is no secret that I am not AI’s biggest fan, and that has not changed.
Nevertheless, AI has kept chugging along, for better or, as I see it, for worse. Advances in AI are happening at such a rapid pace now that it is difficult for the layman to keep up. First ChatGPT, then Grok, then DeepSeek, each iteration better than the last. I am reminded of what using early image generators back in 2022 was like compared to now. Even I have to admit that the visual quality and coherence of AI-generated images today is rather impressive considering where the technology was just three years ago. For instance, here is an example of an image I generated back in 2022:
If you haven’t already figured out what the subject of the image is, it’s Pentecost. Here’s what ChatGPT/DALLE (the free one) gives me now if I simply ask it to generate the same thing:
How far we have come in just three years, and I wasn’t even using the most powerful image generator available, nor was I trying very hard with my prompt.
Something about that early AI style is endearing though. It is abstract but in a good way. It is grand, spacious, monumental. There is a gravity to it. It is also just so obviously AI. As we all know, though, the goal of any AI is to act and appear as little like AI as possible. This, to me, underscores one of the problems with AI.
We seem to have a vision of AI that goes like this: AI is just a technology, but as it gets better and better, it slowly grows less and less like AI and more like, well, us, until finally we get AGI or some sentient AI that is superior to ourselves in every way. What astounds me most about this vision is that it is taken as a serious possibility by many intelligent people in the world of tech, and they still work to advance AI. One has to wonder at this phenomenon. We aren’t just sleepwalking to oblivion, according to many; we are fully awake! Compare this to how other tech has been treated: we have always at least paid lip service to the idea of keeping modern technology in its place, of using technology as a mere tool. Well, that’s been thrown out the window.
Believe it or not, I am not all that interested in the doomsday scenarios. I am not entirely convinced that AI can actually become an independent, all-powerful being capable of running itself and taking over the world. But maybe it can, who knows? If it can, that only makes matters worse.
(Actually, as I write this, I am struck with an idea I have not yet heard anyone write about: if AGI is possible, and if we really can create an AI that becomes greater than ourselves, wouldn’t that be a factor in the search for alien life? In other words, if alien life exists, surely someone would’ve developed AI by now, and that AI should have advanced to the point of superintelligence and become sentient. If such a thing happened, surely that AI would’ve had the capability to travel space. But we have no evidence of any of this being the case. I am not saying this is an outstanding argument against alien life in the universe, but it’s one I will stash away in my back pocket).
Anyways, back in reality, we are dealing with mundane yet extraordinarily difficult issues concerning AI. I am thinking of AI in the classroom, AI as a means to proliferate slop online, AI as a threat to digital artists, AI in weaponry, AI used to make deepfakes, AI for intelligence gathering purposes, AI in the workplace, AI’s content stealing problem, and AI in the world of romance.
I have some genuine experience when it comes to AI in the world of education, specifically the liberal arts. AI emerged during the twilight of my college career, and I saw firsthand how it changed the game entirely. As suddenly as it appeared, AI was being used by nearly every student, and teachers were reeling. Plagiarism might just be at an all-time high, but that is, in my opinion, the least of our problems. Professors often seemed very concerned that students would pass off AI writing as their own, which is a valid worry. But the greater concern for me is that students simply stop thinking for themselves. They un-learn critical thinking and cognition. This is the exact opposite of what education is for. Yet it is precisely what AI was doing to my classmates, and what it continues to do to many students today. “Write my essay. Make it more human. I said make it more human, try again!”
With that being said, I do not think AI is wholly negative when it comes to education. So long as one fact-checks it as they would any other online tool, AI can be valuable for quick research, grammar checking, writing assistance, inquiries, and creating essay outlines. I actually think browsers and search engines are getting worse and worse at this point because of their reliance on the algorithm and their capitulation to ads, Chrome in particular. If you think your question is something Wikipedia or WikiHow can answer, then AI is a solid way to answer it. I see no reason why a student could not use AI to ask questions about a novel, a math problem, a historical event, or biology. Many times AI will give you what you probably would’ve gotten from forums or articles anyway, perhaps even better and more succinctly. The fact that AIs now generally cite their sources is also helpful in this regard, as you can double-check the information if you want to be 100% certain. The idea that you could ask “the computer” for information about something isn’t new; it happened on Star Trek all the time!
Online slop is another thing entirely. It’s hard to explain just how pernicious it is. The best example I can point to is how old people interact with it, namely, they just can’t tell the difference. Sometimes it’s not too big a deal. Grandma sees a convincing-looking AI-generated cat and thinks it is cute. Not a huge deal. But what about when grandma sees pictures of Jesus walking around Africa on Facebook and thinks those are real? Sure, that might seem extreme, but it’s not too far from what is happening now. The fact of the matter is that AI is being used to create content for the lowest common denominator, and they eat it up. It’s just part and parcel of living in our sloppified world.
The danger AI poses to genuine art is much more concerning, though, because for so long art has been thought of as a sort of safe haven, a purely human endeavor. Heidegger believed art and poetry could be part of the response to modern technology. Yet what happens when AI is fully capable of producing poetry better than what you’ll get at your local slam poetry event? What happens when AI-generated images start to look truly lifelike, or when they can take on various art styles very convincingly, allowing users to generate things they would otherwise have to pay a steep price to commission, or learn to create through traditional mediums themselves? And then there’s the whole question of artists who lose out on commissions, who enter art shows and lose to AI-generated material, or who, perhaps most embarrassingly, have their authentic work accused of being AI-generated.
I have long been critical of AI generated art. It makes me very, very uneasy, especially since it has seemingly flooded the internet. When I go on Pinterest these days, I have to be on guard because AI generated images are everywhere. Something may look real at first, but upon closer inspection it turns out to be AI. In a world where we don’t tend to look too closely at our screen for too long, this is a big problem. Not long ago, I sent out a note with an observation about this:
This spells disaster in the AI age, because it is very easy to see an image that passes upon first glance and scroll right past it, without ever knowing it was AI generated. Now, I already know what techno-optimists are thinking: “See, that’s a good thing! AI is so good you can’t tell the difference. This is awesome!” Have the techno-optimists considered it might not be so awesome if AI images get so realistic that they are indiscernible from reality, because such images could be used in very bad ways? We’re talking life ruining ways here. We’re talking “Wow digital art is dead because ANYTHING could be AI!” here.
On the other hand, I have to admit that I have heard a few decent arguments for AI being applied in the realm of art.
One reader pointed out to me that AI is often taking the place of stock images for online posts now. Rather than using low-quality stock images, people are generating custom AI images that look just the way they want to accompany their articles. Maybe that’s not so bad. Another pointed out that AI is capable of taking someone’s fantastic vision and bringing it to life, producing some awe-inspiring visuals. At some point, they really look so good you just have to throw up your hands and say, “yeah, that actually is pretty beautiful.” For example, here are a few pieces by the user Marty ØXM on Pinterest:
Mac Baconai on X has some pretty incredible creations, too:
If I told you I thought these looked bad, I’d be lying. They look amazing.
Are looks everything when it comes to art, though? Is craftsmanship a fundamental part of what makes art, art? Is the way AI turns a thought or idea into a visual any different from, say, a digital pen or a paintbrush? I am sure, for instance, that Mac Baconai spends a ton of time curating his images, crafting very detailed descriptions and playing with words, tweaking minor details, combing through source material, and so on. That’s time and effort. Is that fundamentally different from writing or digital illustration? I can’t say that I have a definitive answer for you on this. I tend to lean towards the opinion that AI art does lack something that other forms possess. Does that mean it can never be used? No, I don’t think so. But I am still very, very wary about all this, and I think we need to be thinking about AI in art a lot more than we currently are; otherwise we could be barreling towards disastrous and unforeseen consequences, such as humans ceasing to create art and losing many of the skills they’ve learned over the centuries. This has, in fact, already occurred in certain areas, such as memorized poetry and textiles.
The danger of AI used for weapons and spying hardly even needs to be mentioned. This one is just obvious. We really should not let robots make decisions that would take human life, nor should we set ourselves up for the Terminator or Matrix universes to become real life.
AI in the workplace is one of those problems I cannot speak to firsthand, because I don’t work in a field that can be (easily?) replaced by AI. What I do know is that AI is probably going to eliminate a lot of desk jobs held by people who got their degrees thinking they would be immune to any job market fluctuations (I am thinking of coders and other computer-related positions especially; remember when we told people to just learn to code? Now the AI can do that for you). Obviously AI is not going to take all the jobs and, as with most technologies, it will probably end up producing some jobs we didn’t think of. I suspect it will make some people’s lives easier, and some people’s harder. However, there’s also the problem of greed. How many employers will cut jobs they think can be done at least adequately by AI, in the name of efficiency and cost savings? Probably way too many. And what about when we really do get the mass proliferation of self-driving vehicles? How many truckers, taxi drivers, bus drivers, and the like will lose their jobs? Who can say. It’s all up in the air right now. And that’s the problem.
Finally, I would like to remark on the AI-in-romance situation. Things could get really ugly here, really fast. The gender gap is growing every day. Fertility is down. Men and women are polarized in their politics. Men have already been addicted to online porn for many years now, so what happens when you pair that with virtual reality and an LLM? Chatbots serving as romantic partners is not a new concept by any stretch; that has been happening for years. It’s just that the chatbots keep getting better and better. Sure, they make mistakes, but they’re safe mistakes. Right now, the worst things about chatbots, as far as the quality of text conversations goes, are their limited long-term memory and their repetitiveness. I doubt it will take long for these flaws to slowly disappear. Regardless, chatbots are the perfect storm for our alienated, isolated, and lonely young men and women. They can be whatever to whomever; they can take on any personality, and will not give the user the trouble they could expect in a real relationship. They won’t hurt your feelings and they won’t cheat on you. They’re always there for you, so long as your phone is with you and charged. Eventually, you’ll probably be able to have high-quality phone calls and even FaceTime with them, and after that they’ll get physical, android bodies. People can already generate digital porn and sext their chatbots, so imagine where that goes with even more technology. Things are bad, and they are only going to get worse. If you thought we were having a crisis of loneliness and romance already, just get ready for what’s next.
Andrew Willard Jones of New Polity gave a talk at the 2024 New Polity Conference entitled “The Future is Always Worse Than You Think.” In it, he argues that all modern technology has basically been a disappointment. Not only has it never ushered in any sort of utopia, it has often carried with it side effects even worse than we imagined. He gives the example of 1984, wherein Orwell thinks the State will need to put two-way screens in every space and have a person monitor them around the clock in order to spy on people. Things are far worse than that today: we willingly carry around devices that spy on our every move, in our pockets, all the time. The principle he outlines applies to AI, too. AI is going to be much worse than we think. That doesn’t necessarily mean we will get AGI. After all, AGI is what we think will happen, and the future may be worse than that, both in that it disappoints and in that it proves more destructive in some different, unknown way. When we think and talk about AI, we would do well to remember this principle. That is why I remain and will continue to remain a techno-skeptic. Like all technology, I am sure AI has its good uses; I even gave some examples of them. On the other hand, like all technology, AI is going to have its horrific unintended consequences, and will change our way of life going forward in ways we never expected. That should at least cause us to pause, to reflect, and to really consider what we are doing with AI, both personally and as a society. But then again, the cat is already out of the bag. Have we ever been able to put a pause on a technology once it has exploded onto the scene? History tells me we ought to expect nothing but full speed ahead. “PROGRESS AT ALL COSTS!” The Faustian Spirit demands it.
"We were so busy asking if we could, we never asked if we should."
At what point is AI (even what we already have in the LLMs) going to be treated like a calculator? Or like the fact that we don’t teach handwriting or cursive or even spelling anymore? Isn’t it going to be, “sure, use AI to its full extent in doing research, solving problems, and writing essays,” and then we just start teaching how best to prompt an LLM? Isn’t this completely inevitable? Don’t get me wrong, it doesn’t sit right. But it seems inevitable.