AI Does Not Need to be Sensational to be Dangerous
Not every negative consequence of AI is sensational. That doesn't mean it won't cause immense damage.
When one looks at the conversation surrounding the use of Generative Artificial Intelligence engines (AI hereafter), one will typically find a great deal of sensationalism and apocalyptic rhetoric. Media does not help in this regard, with movies like The Terminator and The Matrix portraying AI machines as eventually dominating humans and taking over the world in a very visceral way. Similarly, there are fears today—not unsubstantiated—that AI will cause massive unemployment across a wide variety of fields. Artists are especially concerned with the rise of generative AI.
Though these conversations can sometimes be useful, I fear that they miss the forest for the trees. You see, we are very concerned about the big things: Will AI take all the jobs? Will it gain sentience? Will it rebel against us? However, there is a multitude of less sensational but more immediate dangers of AI. That these dangers are mundane or subtle does not mean they are insignificant. As I will attempt to demonstrate, the less sensational threats of AI may prove to be the most damaging.
The first negative consequence of AI which I have perceived, and which I have not heard discussed thoroughly (not to say it hasn’t been done), is what I will simply term “proliferation.” As an amateur aesthete, I often browse Pinterest. Of late, I have noticed a huge increase in AI content on my feed. At first it was mainly fantasy art: poor mimicry of vintage pulp fantasy illustrations. But then, because I have an interest in interior design, I began to be assaulted with AI-generated kitchens and living rooms. Yes, AI-generated interior design. Some of these images were not obvious when scrolling; only on a closer look did I notice the tree branch phasing through the window, or the inconsistent lines on the floor. This problem has also manifested itself when I search for religious art. For example, I was looking for a “Christ the King” piece. When I searched “Christ the King,” probably half of the first dozen results were obviously AI generated. The same has occurred at various other times in my browsing of images.
This is all anecdotal, so let us now delve into the speculative. I fear that what I have experienced is only the beginning. One source of this problem of AI content proliferation is that online users have easy access to many free engines and can create several images at a time. They then post these on their accounts, or the website itself collects and publishes people’s generated content without the user even requesting it. If you browse Google Images right now and type in a generic art prompt, you will probably be met with AI-generated content that was not even published by a person, but by the AI website itself. AI content can be produced at a rapid rate, and the threat of AI-generated content flooding the internet and even drowning out human creations is not at all out of the question. Nor has this been limited to images. People are now reporting that books they purchase turn out to be obviously written by AI. As if the “dead internet theory”1 were not bad enough, AI content proliferation may spell the end of the internet as a place for information, art, and human contact.
Another area that AI has swept through like wildfire is academia. Shortly after the popularization and widespread awareness of engines like ChatGPT, students in university and high school quickly began using ChatGPT in all sorts of ways: essay writing, mathematical calculations, and coding, just to name a few. ChatGPT can also be used like a search engine, but instead of presenting the user with a list of online results, it simply replies with its own answer, drawn from the internet or whatever data it has been fed. I remember vividly how in the Spring 2023 semester AI began to be used by even the least tech-savvy students. It did not take long for professors to catch on, as they observed a noticeable shift in writing quality and style during this time. For professors who had already seen their students’ work, the introduction of AI-generated content would have been jarring but easily detectable, especially since AI was not very polished at this time. However, the technology developed quite rapidly, and by the beginning of Fall 2024, every professor had something about AI in their syllabus. Some took a hardline stance against it while others embraced it. One thing was certain, though: in just a few months, AI had changed education completely, with student AI use being a constant consideration.
Now, it is easy to react to this particular development in a few ways. The most common responses I’ve observed are to treat AI use essentially like plagiarism and to take issue with the way students simply copy and paste AI output without citation. Another response is to treat AI as just another internet tool at students’ disposal, and to merely request that students cite their sources. Both of these have some truth to them. AI is a tool, and plagiarism is certainly made easier by it. It is a problem that students copy and paste without citation; no one disputes this. But the overall response to AI use among students has been lacking. People are so concerned with the technical side of the problem that they overlook its essence. Let us ask ourselves: what is the essence of technology? Heidegger says it is its enframing of all things, including humans (to put it as succinctly as possible; see my other post for more on this). Here, if we are to focus on generative AI in the context of learning, I would suggest that the essence and danger of AI lies in its gathering-together of data for the student, and in its doing the “thinking” that the student would otherwise be required to do. Allow me to illustrate:
Prior to the use of AI, a student writing an essay would be required to go onto the internet and find a source. They may do this well or poorly, but the end product will reflect how well they performed this task of data collection. Furthermore, the student would be required to actually sit down and write their own essay. Sure, they might plagiarize here and there without being noticed, or in another scenario they might pay another student to do the writing. But overall, students were writing in their own words and by their own ability. Again, this produced a variety of results, some good and some bad. Now, what happens when you introduce a hundred free AI engines that allow a student to type in a prompt, specify the number of characters, words, or paragraphs required by the professor, press “enter,” and get an immediate product meeting these parameters? The answer is that you get exactly what you’re currently getting in classrooms today. Nearly every student is using it. Many times, they’re simply copying and pasting exactly what the AI gives them, perhaps with a few minor changes. Every student is getting roughly the same response, in the same HR-speak AI style, regardless of their genuine abilities. For some, the AI will produce better writing than the student could produce (maybe this should cause us to ask why so many unintelligent and incapable people are going to college). For others, the AI will be of roughly the same quality. Still others, worse. This last group is perhaps the most concerning of all. You see, AI actually causes an otherwise intelligent or capable student to be made stupid. They’ve been given a crutch they never needed. Some will figure that out quickly, while others succumb to it.
My point here is that the essential problem with AI in the field of education is that it does more than just introduce plagiarism into the system: it actually damages the very process of learning. Going to school is not just about producing an answer. It is about the process of producing answers. In an ideal world (certainly one where the classical idea of education prevailed), students would be formed as persons, and would be challenged not merely to give answers, but to ask questions and learn how to find answers. They would be taught how to think. AI is a full-frontal assault on this idea of education. It takes away the process of thinking. And what’s worse is that unlike normal plagiarism, whereby one or more human beings copy the work of one or more other human beings, AI use is a human being copying an algorithm. Yes, many technical problems arise out of AI use in the field of education, but the biggest problem is simply that it does not contribute to learning and to intellectual formation on the whole; rather, it detracts from them. Our education system was already a heaping pile of garbage before the introduction of generative AI. This just makes matters worse.
Finally, I want to draw attention to how AI may contribute to a further worsening of the world of romance today. Consider, for instance, how AI proliferation will affect dating services. It is easy to see how bots could completely flood dating sites. Then consider how rapidly AI girlfriends and boyfriends are advancing; this is not an entirely new problem, since such ideas have existed for some time, but the technology is improving quickly. Then consider the advances being made in lifelike dolls and mechanical robots. The sensational threat of AI here is something out of Cyberpunk or Blade Runner. But let us consider the immediate consequences.
This generation is already so online, and has suffered so much from anti-social COVID measures, that the prospect of their romantic lives being further relegated to a screen should be cause for great concern. We know that Gen Z and Gen Alpha are already meeting their partners online more frequently than ever, and one only needs to observe plain reality to know that young people’s opposite-sex interaction happens primarily over text and video. Introducing AI partners into this equation is like punching someone who is already down. Men and women are already so far apart; this will only widen the gap and increase resentment on both sides. If you think the incel problem is bad now, just wait for when AI girlfriends become more appealing than real ones. If you know about modern women’s love of smutty fiction, just wait for when their fantasies can come alive via AI. We do not need to imagine radical ideas like cyborgs or walking, talking robots taking the place of people in the world of sex and romance. The reality is far less interesting but perhaps just as destructive: AI “romance” can be facilitated entirely by a screen, something this generation is already intimately familiar with. AI threatens to worsen existing problems of screen addiction, anti-social behavior, animosity between the sexes, and romantic frustration, and it can do all this without the dystopian machines we see in media.
What I hope to have demonstrated by raising these issues is that we do not need to look into the far future or imagine a bloody AI takeover of the world; rather, AI’s threats are often more subtle and mundane. The danger of technology today is not necessarily in its technical nature, but in the way it will impact human development. We must look into its real essence and avoid treating it as if it were just a tool to be used or abused.
1. A theory which states that most online users are, in fact, non-human bots and that bots create the bulk of content on the internet. If you want to see why this theory holds some weight, just go to the replies of a big (also probably bot-run) account like “Historic Vids” on X/Twitter.
Nowadays, it seems like most students study just for the credentials, not out of curiosity about the content. Since every exam or term paper is just an obstacle on the way to graduation, everything is done with the minimal amount of effort. It was nearly the same when I started at university around 20 years ago; we just used other methods, as AI was not available.
Great article; it's spot on. I came for the AI art portion but enjoyed your commentary on the other aspects too. I agree that the sensational aspects are getting all the attention, but the mundane aspects are where the real, rather than hypothetical, danger lies. The internet is already diluted and clogged with hack work and clickbait. Making content generation easier isn't necessarily good, because it means that people can now say many things that aren't worth taking the time to say.
To the extent that the medium is the message, how an image is created makes a statement about how worthwhile the image was to say. I figure that if someone really has something important to say, they can take the time to compose it, paint it, or go out, arrange or wait for the right lighting, and photograph it. If it wasn't worth the image maker's time to say it themselves by actually creating or capturing the image, it's probably not worth our time to look at it. The same goes for AI writing: if it's not worth someone's time to write it, so that they turn to AI, it's likely not worth anyone else's time to read it.