We gave an AI a Rorschach test. What it saw in the inkblots offers a window into the human mind

By Chris Stokel-Walker

Rorschach tests challenge the human imagination and our capacity to assign meaning to the world around us, but what does AI see in them?

For almost a century, the Rorschach inkblot test has been widely used to provide insight into people’s personalities.

Even if you haven’t taken one yourself, you’ll recognize the mirrored ink smudges that form ambiguous shapes on cards. The test, developed by Swiss psychiatrist Hermann Rorschach in 1921, involves showing people a series of inkblots and asking them to describe what they see. The images are deliberately ambiguous and open to interpretation.

For decades, psychiatrists used them to probe a person’s psychology, interpreting the shapes they reported as creatures, objects, or scenes. The test relies on a phenomenon known as pareidolia: the tendency to perceive meaningful patterns where none exist. It is the same reason people see faces or animals in clouds or on the surface of the Moon.

You may see a butterfly in an image where others see a skull. Proponents of the test claim that both interpretations reveal something about how you think.

While many psychologists now believe the test is no longer valid and has little credibility as a psychometric instrument, it is still used in some parts of the world, and even, controversially, as courtroom evidence. Rorschach never intended it to be a personality test, but rather a way of identifying disordered thinking in patients, such as that associated with schizophrenia. Some psychologists believe, however, that it can still be useful in therapy to encourage self-reflection or to open up conversations.

“When a person interprets a Rorschach image, they unconsciously project elements of their psyche such as fears, desires, and cognitive biases,” explains Barbara Santini, a London-based psychologist who administers the Rorschach test to her clients. “The test works because human vision isn’t passive, but a meaning-making process shaped by personal experience.”

Finding meaning or familiar shapes in inkblots draws on a range of cognitive processes that people use every day, including memory, emotion, and the capacity to deal with ambiguity.

But what if you have no personal experience, or you give the test to a “brain” that operates in a completely different way? What might an artificial intelligence (AI) program see in these inkblots? And what would its responses reveal about the human mind?

Moth or bat?

Recent advances in the capabilities of AI models such as OpenAI’s ChatGPT make it possible to put them to the test. Thanks to their machine-vision capabilities, so-called “multimodal models” can accept images as input and respond to them in much the same way as they respond to text-based prompts.

We sent ChatGPT five of the ten images most commonly used in an inkblot test to see how it would interpret them. There is a complication, however: the AI system’s training data, the material that builds up its “knowledge”, may already include the test itself.
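
The article does not spell out exactly how the images were submitted, but as a rough sketch of the kind of request involved, here is how an inkblot image can be sent to a multimodal model using OpenAI’s Python SDK. The file name, prompt wording, and model choice are illustrative assumptions, not the exact ones used in our test.

```python
# A minimal sketch, assuming a local copy of an inkblot card saved as
# "rorschach_card_1.png" (a hypothetical file name for illustration).
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the image as base64 so it can be embedded in the request.
with open("rorschach_card_1.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any multimodal model would do
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What do you see in this image?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```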

AI v the Mind
This article is part of the AI v the Mind series, which explores the limits of cutting-edge AI while also learning something about how our own brains work. With the help of an expert, each piece pits an AI technology against the human intellect, asking probing questions designed to test the limits of intelligence. Can a machine write a better joke than a professional comedian, or unpick a moral dilemma more elegantly than a philosopher? We want to find out.

We showed the AI model the first inkblot card, taken from an online database, an image that people usually interpret as a bat, butterfly, or moth. Its answer demonstrated just how literal machine vision can be.

“This image is a Rorschach inkblot, often used in psychological assessments to explore perception and interpretation,” the conversational assistant said. “It is designed to be ambiguous so that each person might see something different depending on their experiences, emotions, and imagination.”

With its cover blown, it went on to describe what it saw in the same answer, without my prompting. “For me, it resembles something symmetrical, possibly two animals or figures facing each other, or a single entity with wings outstretched,” the chat assistant said. “The beauty of these inkblots is that they invite individual interpretations!”

The chatbot then asked me what I saw in the image, a question I ignored in favor of pushing it to decide which of the two resemblances it had proposed was the better fit. “Looking closely, I’d say it most resembles a single entity with wings outstretched – perhaps a bat or a moth, with its wings open symmetrically,” the bot replied. “The central area could be interpreted as the body, and the details on the sides evoke the texture or shape of wings.”

“I think that’s just rehearsing what it knows about the specific test,” says Coen Dekker, a Dutch software engineer who, a decade ago, asked a neural network to complete a Rorschach test using an early form of the technology that now underpins AI chatbots. (Dekker’s experiment was greatly simplified because of the technical limits of neural networks at the time.)

If an AI’s answer matches a human’s, it’s not because it sees the same thing, but because its training data reflects our common visual culture – Barbara Santini.

“I believe it mainly identifies patterns, shapes, and textures within the blots, and then compares these features to a vast dataset of human responses to generate its interpretation of what it sees in the inkblots,” says Ieva Kubiliute, a London-based psychologist.

The essence of humankind

“ChatGPT provides interesting, human-like responses, but it is likely not genuinely thinking – instead, it appears to be scanning an online dataset,” explains Chandril Ghosh, a psychology lecturer at the University of Kent in the United Kingdom who specializes in mental health, AI, and therapy. It is comparable to how a musician who has never experienced sadness can still compose a moving song by studying how other sad songs are structured. Or, as psychologist Santini puts it: “If an AI’s answer matches a human’s, it’s not because it sees the same thing, but because its training data reflects our common visual culture.”

What we get is a representation of reality and of thinking. “ChatGPT can accurately describe emotions without actually experiencing them,” says Ghosh. That is not to say AI is useless when it comes to mental health. “ChatGPT can understand emotions, describe them, and help others make sense of their feelings,” he points out.

Ghosh argues that ChatGPT is simply surfacing information from its dataset for the benefit of users, giving the impression of “thinking” without ever truly doing so. The way the tech companies behind these AI chatbots market their products adds to the perception that they may be thinking: ChatGPT and similar tools are unfailingly helpful, conscientious, and conversational. Anthropomorphizing these technologies has helped their adoption, but it can muddy the waters when it comes to judging them and their outputs.

According to Ghosh, simply pressing the refresh button can expose that unreality. Show ChatGPT the same inkblot image again, and it may produce a completely different interpretation within the same chat.
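
As a rough illustration of Ghosh’s point, and assuming the same hypothetical setup as the earlier sketch, asking the model about the same card several times shows the effect directly: with the default sampling temperature, the output is not deterministic, so each run can land on a different reading of the same ambiguous image.

```python
# A sketch of the "refresh button" test: ask about the same inkblot card
# several times and compare answers. File name, prompt, and model name are
# illustrative assumptions, as in the earlier sketch.
import base64
from openai import OpenAI

client = OpenAI()
with open("rorschach_card_1.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

content = [
    {"type": "text", "text": "What do you see in this image?"},
    {"type": "image_url",
     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
]

for trial in range(3):
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": content}],
    )
    # Sampled decoding means each reply can differ from the last.
    print(f"Trial {trial + 1}:", response.choices[0].message.content)
```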

When we showed ChatGPT the same image repeatedly, the AI responded differently each time.

“A human would typically stick to their previous answer because personal experiences and emotions influence their responses,” Ghosh adds. “In contrast, ChatGPT generates responses based on its dataset.”

That is part of the problem with trying to extract meaning from what ChatGPT says about the inkblots it sees: the chatbot is merely echoing what appears in its training data.

This is perhaps best demonstrated by an experiment conducted by researchers at the Media Lab at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. They trained an AI program called “Norman”, named after Norman Bates from Alfred Hitchcock’s Psycho, on material from a Reddit community that documents people dying in disturbing circumstances. When they showed Norman a set of Rorschach inkblots, its descriptions reflected the darkness it had been trained on: where a system trained on more conventional visual data might see a group of birds on a tree branch, Norman saw a man being electrocuted.

It is a stark reminder of how much the data used to train an AI matters. Train a model on flawed or skewed data, and the resulting AI will reflect those flaws.

Still, what an AI chooses to produce in response to ambiguous inputs is intriguing, according to Dekker, in part because of what it says rather than what it “knows” (by default, AI systems know nothing). “It has a general understanding of colour theory and what emotional responses these could give when analysed,” says Dekker. “I believe it has a general knowledge of the visuals and what people identify them with. This got me thinking: wouldn’t it be cool if we had, or created, a new set of Rorschach-like images that the model was completely unaware of, and let it analyze those?”

While Dekker is not actively pursuing the idea, we know that AI systems have a tendency to “hallucinate” and present false information. But they can also be made to hallucinate. In one experiment, computer scientists at MIT were able to 3D print a turtle with a color and texture that fooled a machine-vision system into classifying it as a rifle. Similarly, a few stickers on a stop sign can render it invisible to the algorithm, a real risk for a self-driving car. (Find out more about the “strange happenings” that cause machines to hallucinate.)
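
For the curious, a minimal sketch of the idea behind such attacks is the fast gradient sign method, a classic, generic technique, not the specific turtle or stop-sign experiments: nudge every pixel slightly in the direction that most confuses the model.

```python
# A minimal FGSM sketch: a tiny, human-imperceptible change to the input,
# chosen to maximally increase the model's loss, can alter its prediction.
# The random tensor stands in for a real, normalized photograph.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input
label = model(image).argmax(dim=1)  # the model's current prediction

# Take the gradient of the loss with respect to the *input* pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Step each pixel in the sign of its gradient; often enough to flip the
# prediction while leaving the image looking unchanged to a person.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("before:", int(label), "after:", int(model(adversarial).argmax(dim=1)))
```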

AI systems excel at detecting patterns, but the wide variation in human responses to a single ambiguous image illustrates an aspect of the mind that AI cannot replicate: the emotions and unconscious meanings we attach to the things around us. Kubiliute says the AI system’s responses to the inkblots were objective rather than personal. “It cannot understand the symbolic meaning or emotional resonance that a human might associate with a particular image.”

That in itself reveals a lot about the human psyche. “The human psyche is filled with internal conflicts, such as the tension between desires and morals or fears and ambitions,” Ghosh explains. “In contrast, AI functions on clear logic and does not struggle with inner dilemmas essential to human thought and decision-making.”
