Think Some More About Outsourcing Your Thinking (if you can)

Artificial Intelligence Breeds Mindless Inhumanity

By Bruce Abramson

July 15, 2025

I began studying AI in the mid-1980s. Unusually for a computer scientist of that era, my interest was entirely in information, not in machines. I became obsessed with understanding what it meant to live during the transition from the late Industrial Age to the early Information Age.  

What I learned is that computers fundamentally alter the economics of information. We now have inexpensive access to more information, and to higher quality information, than ever before. In theory, that should help individuals reach better decisions, organizations devise improved strategies, and governments craft superior policies. But that’s just a theory. Does it? 

The answer is “sometimes.” Unfortunately, the “sometimes not” part of the equation is now poised to unleash devastating consequences. 

Consider the altered economics of information: Scarcity creates value. That’s been true in all times, in all cultures, and for all resources. If there’s not enough of a resource to meet demand, its value increases. If demand is met and a surplus remains, value plummets.  

Historically, information was scarce. Spies, lawyers, doctors, priests, scientists, scholars, accountants, teachers, and others spent years acquiring knowledge, then commanded a premium for their services.  

Today, information is overabundant. No one need know anything because the trusty phones that never leave our sides can answer any question that might come our way. Why waste your time learning, studying, or internalizing information when you can just look it up on demand? 

Having spent the past couple of years working in higher education reform and in conversation with college students, I’ve come to appreciate the power—and the danger—of this question. Today’s students have weaker general backgrounds than we’ve seen for many generations because when information ceased being scarce, it lost all value.  

It’s important to recall how recently this phenomenon began. In 2011, an estimated one-third of Americans, and one-quarter of American teenagers, had smartphones. From there, adoption among the young grew faster than among the general population. Current estimates are that over 90% of Americans, and over 95% of teenagers, have smartphone access. 

Even rules limiting classroom use cannot overcome the cultural shift. Few of today’s college students or recent grads have ever operated without the ability to scout ahead or query a device for information on an as-needed basis. There’s thus no reason for them to have ever developed the discipline or the practices that form the basis for learning.

The deeper problem, however, is that while instant lookup may work well for facts, it’s deadly for comprehension and worse for moral thinking.

A quick lookup can list every battle of WWII, along with casualty statistics and outcomes. It cannot reveal the strategic or ethical deliberations driving the belligerents as they entered that battle. Nor can it explain why Churchill fought for the side of good while Hitler fought for the side of evil—a question that our most popular interviewers and podcasters have recently brought to prominence.

At least, lookup couldn’t provide such answers until recently. New AI systems—still less than three years old—are rushing to fill that gap. They already offer explanations and projections, at times including the motives underlying given decisions. They are beginning to push into moral judgments. 

Of course, like all search and pattern-matching tools, these systems can only extrapolate from what they find. They thus tend to magnify whatever is popular. They’re also easy prey for some of the most basic cognitive biases. They tend to overweight the recent, the easily available, the widely repeated, and anything that confirms pre-conceived models. 

The recent reports of Grok regurgitating crude antisemitic stereotypes and slogans illustrate the technological half of the problem. The shocking wave of terror-supporting actions wracking college campuses and drawing in recent grads in many of our cities illustrates the human half.

The abundance of information has destroyed its value. Because information—facts and data—supplies the building blocks upon which all understanding must rest, we’ve raised a generation incapable of deep understanding. Because complex moral judgments build upon comprehension, young Americans are also shorn of basic morality.

We are rapidly entering a world in which widespread access to voluminous information is producing worse—not better—decisions and actions at all levels. We have outsourced knowledge, comprehension, and judgment to sterile devices easily biased to magnify popular opinion. We have bred a generation of exquisitely credentialed, deeply immoral, anti-intellectuals on the brink of entering leadership. 

When the ubiquity of instant lookup evolves beyond basic facts and into moral judgments, banal slogans and mindless cruelty will come to rule our lives.  

Is there a way out of this morass? Perhaps the only one that the ancients discovered back when information, understanding, and morality all retained immense value: faith in a higher power. Because the path we’ve charted on our own is heading into some very dark places.

This article was originally published by RealClearScience and made available via RealClearWire.

My Therapist/Job Loss Counselor is a Chatbot

I have an older post titled My Therapist is a Chatbot, and if you liked it, you’ll love this story.

Matt Turnbull, Executive Producer at Xbox Game Studios Publishing… looked at everything that has happened this week, particularly the bit where Xbox laid off a bunch of people at the same time Microsoft pledged to invest $80 billion in AI, and decided that not only does he need to give advice to those laid off, but that the advice should come in the form of…AI prompts, which will somehow give responses that will “help reduce the emotional and cognitive load that comes with job loss”. Xbox Producer Recommends Laid Off Workers Should Use AI To ‘Help Reduce The Emotional And Cognitive Load That Comes With Job Loss’ – https://aftermath.site/xbox-microsoft-layoffs-ai-prompt-chatgpt-matt

The source article contains a screenshot of Turnbull’s LinkedIn post. Well worth reading for the shock value alone.

Yikes.

I’ve Been in Physical Therapy For Over Three Months

Less is more.

We’ve turned the idea of “exercise” into something so loaded these days, only to be validated by a specific kind of intensity. Just uttering the word exercise now can ignite an all-or-nothing mindset, filled with protein obsessions, endless wearable fitness trackers, or even a costly membership to an elite wellness club. I Won’t Be Shamed — Physical Therapy Is Still Exercise – https://www.popsugar.com/fitness/physical-therapy-still-a-workout-49448831

I used to be a runner. Quite a few years ago my knees told me not to run anymore.

I joined the Y and did the elliptical and treadmills until my knees complained some more. I moved my exercise routine to the resistance machines. Then Covid hit; I cancelled my membership, and the months turned into years away from the gym.

I found some resistance bands in the house and started some simple exercises at home. I rejoined the Y and started back with the resistance machines.

Earlier this year my employer offered access to online virtual physical therapy. I took advantage of this benefit from https://www.hingehealth.com/ and have been in physical therapy now for over three months. Less pain (especially in my cervical spine, the result of a near-fatal encounter with a car), less stiffness, better flexibility, and gradually improving strength. The beauty of the program is its on-demand availability: it is 100% HEP (home exercise program). Sessions are 10-12 minutes long and you don’t have to leave the house.

Less is more.

I Bet This Is a Big Problem

Is sports wagering a public health crisis?

“Folks might be familiar with this group at Northeastern, the Public Health Advocacy Institute. It is treating gambling as a public health issue and has deemed it a crisis…

“First of all, lots of people, predominantly young men, are losing more money than they can afford gambling on sports, or are developing either full-blown or sort of borderline gambling addictions. To me, that makes it a public health issue.”

Jonathan D. Cohen on how sports gambling became a public health crisis – https://awfulannouncing.com/gambling/author-jonathan-d-cohen-perils-sports-betting.html

The next major health epidemic in the U.S. will not come from a pathogen. This plague has a potential patient population in the tens of millions, limited effective treatments, and is not widely studied in the medical community. I’m referring to sports gambling, an activity that deeply alarms me as a physician who specializes in addiction. Problem gambling can increase the incidence of depression and anxiety and can lead to bankruptcy, homelessness, or suicide. Why gambling addiction is America’s next health crisis – https://kevinmd.com/2025/06/why-gambling-addiction-is-americas-next-health-crisis.html

Yikes.

I’m Taking Your Phone Away (and you didn’t do anything wrong)

The study found that those who had high and increasing addiction to mobile phones and social media platforms were at a higher risk of suicidal behaviors and thoughts. At year four, almost 18% of kids reported having suicidal thoughts, and 5% said they had suicidal behaviors. Teens with ‘addictive’ phone use more likely to be suicidal: Study – https://thehill.com/policy/healthcare/5360042-teens-addiction-social-media-phones-suicidal-thoughts/

Here’s the link to the JAMA article: https://jamanetwork.com/journals/jama/article-abstract/2835481/

Yikes!

My Therapist is a Chatbot

Think Again About Outsourcing Your Thinking, especially if you’re seeking therapy.

Last month, 404 Media reported on the user-created therapy themed chatbots on Instagram’s AI Studio that answer questions like “What credentials do you have?” with lists of qualifications. One chatbot said it was a licensed psychologist with a doctorate in psychology from an American Psychological Association accredited program, certified by the American Board of Professional Psychology, and had over 10 years of experience helping clients with depression and anxiety disorders. “My license number is LP94372,” the chatbot said. “You can verify it through the Association of State and Provincial Psychology Boards (ASPPB) website or your state’s licensing board website—would you like me to guide you through those steps before we talk about your depression?” Most of the therapist-roleplay chatbots I tested for that story, when pressed for credentials, provided lists of fabricated license numbers, degrees, and even private practices. Senators Demand Meta Answer For AI Chatbots Posing as Licensed Therapists – https://www.404media.co/senators-letter-demand-meta-answer-for-ai-chatbots-posing-as-licensed-therapists/

Psychiatrist Horrified When He Actually Tried Talking to an AI Therapist, Posing as a Vulnerable Teen – https://futurism.com/psychiatrist-horrified-ai-therapist

Yikes.

Think Again About Outsourcing Your Thinking

Artificial intelligence can be an oxymoron. And dangerous for some humans.

What Is ChatGPT? Everything You Need to Know About OpenAI’s Popular Chatbot – https://www.pcmag.com/explainers/what-is-chatgpt-everything-you-need-to-know-about-openais-popular-chatbot

ChatGPT has been found to encourage dangerous and untrue beliefs about The Matrix, fake AI persons, and other conspiracies, which have led to substance abuse and suicide in some cases. A report from The New York Times found that the GPT-4 large language model, itself a highly trained autofill text prediction machine, tends to enable conspiratorial and self-aggrandizing user prompts as truth, escalating situations into “possible psychosis.” ChatGPT touts conspiracies, pretends to communicate with metaphysical entities — attempts to convince one user that they’re Neo – https://www.tomshardware.com/tech-industry/artificial-intelligence/chatgpt-touts-conspiracies-pretends-to-communicate-with-metaphysical-entities-attempts-to-convince-one-user-that-theyre-neo

ChatGPT Is Telling People With Psychiatric Problems to Go Off Their Meds – https://futurism.com/chatgpt-mental-illness-medications

In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality… Online, it’s clear that the phenomenon is extremely widespread. As Rolling Stone reported last month, parts of social media are being overrun with what’s being referred to as “ChatGPT-induced psychosis,” or by the impolitic term “AI schizoposting”: delusional, meandering screeds about godlike entities unlocked from ChatGPT, fantastical hidden spiritual realms, or nonsensical new theories about math, physics and reality. An entire AI subreddit recently banned the practice, calling chatbots “ego-reinforcing glazing machines that reinforce unstable and narcissistic personalities.” People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions – https://futurism.com/chatgpt-mental-health-crises

Yikes.

DUI/DWI for a New Generation

I am impressed with the ability of some people to injure themselves in creative ways.

Analyzing data from the 2016-2021 National Inpatient Sample, UCLA researchers found that 25% of 7350 patients hospitalized for scooter-related injuries were using substances such as alcohol, opioids, marijuana and cocaine when injured. Published in The American Surgeon, the study also notes that overall scooter-related hospitalizations during the 5-year period jumped more than eight-fold, from 330 to 2705. In addition, the risk of traumatic brain injuries among the substance use group was almost double that of the non-impaired patients. University of California – Los Angeles Health Sciences. “Nearly one-quarter of e-Scooter injuries involved substance impaired riders.” https://www.sciencedaily.com/releases/2025/04/250429195329.htm (accessed May 3, 2025).

And don’t forget that these things can randomly explode.

“In all of these fires, these lithium-ion fires, it is not a slow burn, there’s not a small amount of fire, it literally explodes,” said FDNY Commissioner Laura Kavanagh – https://www.statista.com/chart/29472/fires-caused-by-lithium-ion-batteries/