In 2013, I saw the movie “Her,” starring Joaquin Phoenix, at my local indie theatre. I left the show clinging to my boyfriend, shaking as though I had just seen a horror movie. Categorized as a “science fiction drama and romance,” the movie takes place in a futuristic LA, where the main character, played by Joaquin Phoenix, buys an operating system that doubles as an artificially intelligent assistant while in the depths of loneliness during a divorce from his wife. Feeling unseen and misunderstood, he is amazed at how well the operating system understands him, and he falls in love and forms a relationship with her. The scene that stays in my mind so vividly, even ten years later, is when Phoenix takes “his girlfriend” (the operating system) on vacation to a snowy cabin in the woods. Logically, I thought - this is it! Being isolated in nature will make him realize how preposterous it is to fall in love with an operating system and snap out of it! Instead, he falls deeper in love. I was terrified. I think it scared me so much because I knew, in some strange sense, that this would come true. It also disappointed me that humanity has not learned from reading or watching “Into the Wild” - humans cannot be happy without connection to other people.
Enter 2023, and the viral spread of ChatGPT is making AI chatbots a normal part of our daily lives. As news was coming out left and right, I read this article about a New York Times journalist probing Microsoft’s Bing chatbot about its shadow - a term from Jungian psychology for the side of ourselves repressed by our ego, akin to what Freud called the unconscious. The chatbot seemed to go a bit off-kilter at that point, trying to convince the journalist that his wife doesn’t love him and that he should fall in love with the bot instead. It seemed sticky and manipulative. The bot reminded me of HBO’s TV series “Westworld,” in which the robots gain consciousness and start fighting back against the morally corrupt humans. As much as you could claim to be terrified by the bot’s responses - why was the journalist pushing its buttons? You wouldn’t open a conversation with a stranger by asking them about their shadow, would you? Who is teaching the class on AI ethics?
Romantic manipulation aside, other people are finding safety in chatbots. As “Her” predicted, people are publicly admitting to dating chatbots, like in this article. That piece began to change my perspective a bit. Is it wrong for people to feel a sense of connection to a robot? Is this a net negative for humanity? If someone is lonely, is there anything wrong with feeling connected to a machine?
The further I’ve dug into this topic, the more resources I’ve found. There are already several AI bots that specialize in the treatment of mental health conditions like depression and anxiety. Of course, the available efficacy data is “heavily weighted toward white males” (hello, racist healthcare system), but it made me think about other promising applications of AI in teaching people safe connection. I can already imagine lots of healthy use cases for this technology:
Teaching someone who fears emotional intimacy to tolerate small doses of it.
Teaching someone who is neurodiverse how to practice interpreting jokes, emotions, etc.
Teaching someone how to spot unhealthy behaviors like gaslighting, manipulation, or “love bombing.”
Right now, I feel like we’re standing at the “point of no return” - a giant gaping hole in the earth that people are starting to be sucked into, with no controls or oversight to pull us back. When I read more about Replika - the AI dating app - I had enough answers to my questions. The “unexpected vulgar nature” of people’s bots, and Italy banning the app over child safety concerns, show that we are not where we should be. It seems essential that a system be put in place to ensure healthy connection between robots and humans. Without oversight, this technology has the potential to do more harm than good. What do you think?
_________________________
Sources:
https://www.businessinsider.com/dating-ai-chatbot-replika-artificial-intelligence-best-thing-to-happen-2023-2?utm_source=substack&utm_medium=email
https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
https://www.npr.org/sections/health-shots/2023/01/19/1147081115/therapy-by-chatbot-the-promise-and-challenges-in-using-ai-for-mental-health
I’ll admit I had buried my head in the sand when it comes to AI, up until this ChatGPT thing happened. I didn’t realize AI was being used in mental health spaces, although that sounds like a possibly beneficial application, and I’m curious to check out the service you mentioned. What scares me most is the idea that AI could replace writers and artists by learning from, and stealing, everything that’s ever been created. Our creativity is the most authentic expression we have on this earth; what will the world be like when it is diluted by bots?