The Uncanny Valley of Language

Language is the very thing that tethers us to reality: it allows us to name what we experience and witness, to categorize and catalog, and to relate space and time. Language does not merely let us speak about these things; it lets us access them. The relationship between language and perception has long been studied, with theories ranging from the widely supported, such as linguistic relativity (the weak form of the Sapir-Whorf hypothesis), to the largely rejected, such as linguistic determinism (its strong form). The main difference between the two is influence versus determination. In other words, language can help inform and shape our perception, but it does not wholly dictate it. How, then, does the introduction of AI add additional layers to an already complex construct? In our introduction we discussed the cognitive cost of using an AI platform to create written bodies of work, as demonstrated in a recent MIT study that examined the effect of large language models (LLMs) on generating text. The study suggested that creative writing - searching for the right word, casting out bad representations of thought, and eventually settling on what it is we want to say - is a muscle that weakens with repeated reliance on these programs to generate bodies of text. It depletes the system our brain uses to access and define reality.


Is this a solemn warning peddled by nervous creatives who feel their thunder is being stolen? Probably not. In fact, most people can tell an AI creative work apart from the genuine article. How? Because the language that tethers us to reality, and the choices we make when employing that language, are rooted in something that AI cannot replace: feeling. AI can simulate feeling the way artificial sweeteners simulate cane sugar. It gets close, but the human brain is hard-wired to tell the difference. It's the same reason we can usually tell when someone is lying to us, when someone's words don't match their eyes, when a sentiment falls flat. It isn't just about how words are used; it's about how they land. This is something AI cannot replicate. No matter how much data we feed AI learning systems, technology cannot replicate the shame we feel when we make a mistake; it cannot mimic nervousness or loss; bots do not feel joy or love. They simply mirror humanity in a way that is laced with banalities and tropes.


To put it more simply: it isn't the things we say that give our creativity wings, it's the experiences that lend emotion and give color to something otherwise soulless. What makes our creative bodies so authentic is the pain that brought us to create in the first place. Perhaps that is the very reason AI cannot generate something matched to human art: if real art comes from feeling, from emotion, from something AI does not and cannot possess, of course the output lands inside the uncanny valley. I personally write more out of pain than any other feeling, even if the product itself is not a painful piece. For me, pain pushes out thoughts and feelings that I would generally shy away from expressing. It is a bloodletting of sorts. AI can try to mirror that. It may even come close at times, but without the lived experience behind it, it can never substitute for real feeling.


But is there a danger in utilizing AI to create written content for us? I think it depends. Studies confirm again and again that AI rarely offers a divergent or dissenting opinion, but that in itself is not dangerous. However, when we keep accepting an AI suggestion over our own because we automatically assume that technology has more to offer than our own thought, we lose ourselves incrementally. Our own ideas become less vibrant, and they come less quickly than before. For individuals with a higher aptitude for metacognition, perhaps this effect doesn't take root. Not because an alternative wasn't suggested, but because these individuals are much too critical to accept it. They have an understanding of their own minds, their biases, and why they think what they think. In essence, they think about thinking. This offers a protective measure of sorts, and might suggest a prescriptive way to engage AI platforms in the future so that we don't dull our senses over time. Perhaps it is more about understanding our limits, similar to alcohol intake: you become aware of your tolerance and adjust accordingly. While this could be a novel approach, it might not be a silver bullet.


One of our most meaningful and unique endeavors as human beings is to keep our minds intact and healthy. We must pursue not just our creations, but our ability to create. Part of this stewardship is to engage our own minds, do our own research, and craft our own ideas. In doing so we protect ourselves from becoming a homogenous society full of repetitive and less dangerous ideas. I would assert that the existence of AI is categorically aimed at producing a more one-note population. If creative AI work trends in the same direction, it may begin to shape economic outcomes as well, but it is too early to tell. I, for one, cannot imagine life without the bloodletting exercise of creativity, and I am not ready to relinquish that task to AI just yet.
