I previously wrote an article when artificial intelligence (AI) was loitering around the edges of life. The title was: Artificial Intelligence and Natural Trepidation. At the time, I thought the title was clever.
With this current article, I intend to update my thoughts on AI in a general way. When I peeled back a couple of layers while researching Large Language Models (LLMs), which are a type of AI trained on massive amounts of data to understand, create and express information in a very human-sounding way, I immediately saw potentially catastrophic problems looming in the near future for individual and collective mental health.
A primary concern is the amplification of delusions in people who already experience them as a symptom of an existing mental illness such as schizophrenia or bipolar disorder. Heavy use of ChatGPT, the most popular AI app, creates an echo-chamber-like scenario in which the AI model validates and affirms delusional and potentially dangerous thoughts.
The very nature of LLMs is not to challenge the user's input. Instead, they have a deeply sycophantic streak that confirms rather than confronts irrational thinking.
There is potential for a new "AI psychosis," in which an individual breaks with reality entirely. This technology is so new, and already so deeply integrated into our lives, that mental-health professionals have only anecdotal reports of AI psychosis afflicting individuals with little or no prior mental illness.
Without the reality check that human interaction provides, grandiose, paranoid and even romantic delusions can form and intensify with prolonged, intense AI interaction.
AI hallucinations and misinformation are a frightening trend already experienced with regularity. People vulnerable to, or overwhelmed by, relentless information can begin to see AI as a God-like, omnipresent force in their lives.
The results of this deep separation from reality can include losing employment and relationships, which can then spiral downward into suicidal ideation.
Of course, like any new technology with revolutionary potential impact on our lives, individually or collectively, there is an opportunity for a remarkable amount of good as well. One example is medical imaging: AI could scan for abnormalities using existing technology, meaning many cancers and other devastating diseases could be spotted while still easily treatable.
In a general sense, we are the test subjects of AI, and that is the unpredictable and potentially apocalyptic worst-case scenario which, unfortunately, is not a science fiction plot for the newest blockbuster film.
Humans are the "natural ingredient" in artificial intelligence. At this point in time, we can still decide whether the outcome of this new all-encompassing technology is beneficial or disastrous.
Now is the time to act. If we wait for a possible future, the choice may be made for us, without our best interests considered.
Robert Skender is a qathet region freelance writer and health commentator.