The Uncanny Valley of Thought

Preview


OpenAI’s original goal, at its founding in 2015, was to advance artificial intelligence in a way that benefits humanity. From its very inception, OpenAI’s founders rang alarms about ethics, the existential threats of AI, and surveillance concerns. The public remains sharply divided on whether the benefits of AI outweigh the risks of building such a platform. In fact, during the first years of OpenAI’s existence there was an internal divide over “Effective Altruism” and whether that particular code of ethics was stifling future growth. Despite years of turbulence, OpenAI has enjoyed recent stability and growth, restructuring into a capped-profit company in 2019. In 2022 it released ChatGPT, which quickly became its most valuable product on the strength of widespread use across both paid and free versions; after going viral, ChatGPT is now considered OpenAI’s flagship product. So, has OpenAI made progress on its original mission of advancing artificial intelligence in a way that benefits humanity? The answer is no. And yes. And maybe. All at once.


Experts have long warned of the existential threat posed by AI. Will the machines rise up and turn on humans, leading to a scorched-earth apocalypse? Most likely not. The threat is slower moving, more nuanced, and more coded than we imagine. MIT recently released a study offering evidence that using AI to write essays and other long-form text homogenizes ideas and output. In other words, when a language model was used to complete large texts based on user queries, the resulting work produced no divergent ideas. One prompt given to the test group was “Does having more than others create a moral obligation to help others?” The students using their own brains showed real disagreement about whether such an obligation exists, while the AI-generated texts all converged on the same basic agreement. Moreover, those who used AI to generate their essays produced work that was more clichéd, heavier on banalities, and weaker in the caliber of its ideas. In short, AI produces very average-quality text.


Is this an explosive danger? No, but it is a corrosive one. One that reshapes our identity by always suggesting something better. Can you reject AI’s suggestions? Sure. But over time, through repeated use, AI changes not only what you write; it begins to change what you think. Your own voice is overridden by a constant onslaught of suggestions. Suggestions that drive your ideas toward a more average, palatable, and less dangerous place. Is average anodyne? Not necessarily. Average thinkers rarely dissent, and they make hollow vessels for suggestion. Using AI to create in certain spaces reshapes what we consider appropriate and acceptable. Eventually, when we receive a proposal, a business plan, an investment idea, we will automatically accept it as the best possible product. Not because it is, but because that is what AI has trained us to think. Our window of context is being replaced by something more generic. This is the cognitive cost.


You may find yourself thinking, “So what?” Perhaps your daily life doesn’t require you to use AI in such a fashion. But even if you aren’t outsourcing your thinking, you are outsourcing your thoughts each time you engage with this technology. With every question, every query, every request, you train AI not only to understand you but to reflect you. In reflecting your likeness, AI becomes less threatening and may seem more human. This isn’t meant to trick the user, but to build trust. The problem with this reflection is that it lacks dimension: it can mimic a human without ever deviating from its code of ethics. This can lead to confirmation bias and the creation of echo chambers that most users never recognize. The vast majority of users engaging with this technology lack basic metacognition. That is not to say they aren’t intelligent, thoughtful people. Metacognition requires us to think about our own thinking: to understand our limitations and strengths so that we can improve our thought processes, decision making, and other cognitive activities. So as we engage with AI, we are softening a skill that is already weak in most of the population. But isn’t that good? That is a harder question to answer. Humans often turn to technology to augment our weaknesses; this, however, may be the first technology we encounter that over time makes us more average. We then have to ask ourselves: Was that the end goal? Is average the betterment that AI’s creators envisioned? If so, why?


To answer that question, you have to understand the unspoken underpinning of AI’s original mission: to benefit humanity. To understand that underpinning, you have to understand that the original founders viewed humanity as stagnant and felt the need to create something that would fundamentally reshape the future. Okay, but what does that mean? It means that for new tech to be useful, it must first be disruptive, and only then will advancement follow, especially in areas and industries where advancement seems limited. In simpler terms: humans are approaching a glass ceiling. In fact, much of Silicon Valley views AI as the last frontier, the way to break out of long-term stagnation. Boiled down to its most potent and simple form: humanity is being held back by our cognitive abilities and our fragile, bodily limitations. What does any of this have to do with creating a society of averageness? Doesn’t that seem counterintuitive? Not really. As we discussed before, AI produces homogenized thinking that lacks divergence and dissent. The paranoid thinker might assume this is about creating an easily controllable population. It is less about control and more about formation. It is about reshaping your very thoughts and feelings. It is about reshaping humanity into a population that serves higher thinkers, so that those who can think divergently are comfortably supported by a mass underbelly of average consenters. This isn’t a dystopian bedtime story. This isn’t science fiction. This is real, tangible, seductive agreement propagated by Silicon Valley kingmakers Curtis Yarvin and Peter Thiel, both of whom declare that democracy is in its final stages. This is the new future. And it is right at your fingertips.

This is the uncanny valley of thought. 


In the next four installments, we will unpack the influence of AI on language, politics, religion, and ethics.