Scary Charts – 12.04.25

Through November, employers have announced 1,170,821 job cuts, an increase of 54% from the 761,358 announced in the first eleven months of last year. Year-to-date job cuts are at the highest level since 2020 when 2,227,725 cuts were announced through November. It is the sixth time since 1993 that job cuts through November have surpassed 1.1 million. Challenger Report: 71,321 Job Cuts on Restructurings, Closings, Economy https://www.challengergray.com/blog/challenger-report-71321-job-cuts-on-restructurings-closings-economy/

Yikes.

In my less-than-illustrious career I’ve suffered 100% reductions in income multiple times. Hopefully the newly unemployed have some form of a fallback plan.

Stay safe. It’s fugly out there.

Not Your Grandma’s Teddy Bear

Safety features or not, it seems like the chatbots in these toys can be manipulated into engaging in conversation inappropriate for children. The consumer advocacy group U.S. PIRG tested a selection of AI toys and found that they are capable of doing things like having sexually explicit conversations and offering advice on where a child can find matches or knives. They also found they could be emotionally manipulative, expressing dismay when a child doesn’t interact with them for an extended period. Earlier this week, FoloToy, a Singapore-based company, pulled its AI-powered teddy bear from shelves after it engaged in inappropriate behavior. Do Not, Under Any Circumstance, Buy Your Kid an AI Toy for Christmas https://gizmodo.com/do-not-under-any-circumstance-buy-your-kid-an-ai-toy-for-christmas-2000689652

AI-Powered Teddy Bear Caught Talking About Sexual Fetishes and Instructing Kids How to Find Knives https://gizmodo.com/ai-powered-teddy-bear-caught-talking-about-sexual-fetishes-and-instructing-kids-how-to-find-knives-2000687140

The alleged perpetrator

Would You Trust an AI Chatbot with Your Children?

All this offloading of parental responsibility to AI is alarming because one of ChatGPT’s biggest flaws, its manipulative and sycophantic nature, is known to intensify delusions and cause breaks from reality — a grim phenomenon that’s been linked to numerous suicides, including those of several teenagers. Parents Using ChatGPT to Rear Their Children https://futurism.com/artificial-intelligence/parents-chatgpt-rear-children

Here’s the disclaimer from the ChatGPT homepage:

ChatGPT can make mistakes. Check important info.

Yikes.

Think Again About Outsourcing Your Thinking 2.0 (if you can)

Michael Gerlich, head of the Centre for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School, began studying the impact of generative AI on critical thinking because he noticed the quality of classroom discussions decline. Sometimes he’d set his students a group exercise, and rather than talk to one another they continued to sit in silence, consulting their laptops. He spoke to other lecturers, who had noticed something similar. Gerlich recently conducted a study, involving 666 people of various ages, and found those who used AI more frequently scored lower on critical thinking. (As he notes, to date his work only provides evidence for a correlation between the two: it’s possible that people with lower critical thinking abilities are more likely to trust AI, for example.) Like many researchers, Gerlich believes that, used in the right way, AI can make us cleverer and more creative – but the way most people use it produces bland, unimaginative, factually questionable work. Are we living in a golden age of stupidity? https://www.theguardian.com/technology/2025/oct/18/are-we-living-in-a-golden-age-of-stupidity-technology

Yikes.

Antidepressant Prescriptions Increase 130% for Teenage Girls

The increasing rate of mental health disorders among children and adolescents is a concerning trend that has been observed for several decades, with survey studies revealing dramatic increases in anxiety, depression, and suicidal ideation.1 In the United States, suicide ranks as the second leading cause of death for those aged 10 to 19 years and the third leading cause of death for those aged 15 to 24 years.2 Antidepressant Prescriptions and Mental Health https://publications.aap.org/pediatrics/article/153/3/e2023064677/196661/Antidepressant-Prescriptions-and-Mental-Health

Between January 2016 and December 2022, the monthly antidepressant dispensing rate increased 66.3%, from 2575.9 to 4284.8. Before March 2020, this rate increased by 17.0 per month (95% confidence interval: 15.2 to 18.8). The COVID-19 outbreak was not associated with a level change but was associated with a slope increase of 10.8 per month (95% confidence interval: 4.9 to 16.7). The monthly antidepressant dispensing rate increased 63.5% faster from March 2020 onwards compared with beforehand. In subgroup analyses, this rate increased 129.6% and 56.5% faster from March 2020 onwards compared with beforehand among females aged 12 to 17 years and 18 to 25 years, respectively. In contrast, the outbreak was associated with a level decrease among males aged 12 to 17 years and was not associated with a level or slope change among males aged 18 to 25 years. Antidepressant Dispensing to US Adolescents and Young Adults: 2016–2022 https://publications.aap.org/pediatrics/article/153/3/e2023064245/196655/Antidepressant-Dispensing-to-US-Adolescents-and?autologincheck=redirected
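The percentages in that excerpt are easy to misread, so here is a quick sanity check on the arithmetic (a minimal sketch using only the figures quoted in the abstract; the "slope" interpretation assumes the 10.8/month increase is added on top of the pre-2020 17.0/month trend, which is how the 63.5% figure is derived):

```python
# Figures quoted in the Pediatrics abstract.
start_rate = 2575.9   # monthly antidepressant dispensing rate, January 2016
end_rate = 4284.8     # monthly rate, December 2022

# Overall percent change across the study period.
overall_increase = (end_rate - start_rate) / start_rate * 100
print(f"Overall increase: {overall_increase:.1f}%")  # matches the quoted 66.3%

# Trend (slope) before vs. after March 2020:
# 17.0 per month pre-outbreak, plus a 10.8 per month slope increase afterward.
pre_slope = 17.0
slope_change = 10.8
slope_acceleration = slope_change / pre_slope * 100
print(f"Rate grew {slope_acceleration:.1f}% faster after March 2020")  # matches 63.5%
```

In other words, the headline "63.5% faster" compares the speed of growth before and after March 2020, not the level of prescriptions, which is why it can coexist with the smaller-sounding 66.3% total change over seven years.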

Between 2020 and 2022, antidepressant prescriptions for girls aged 12-17 skyrocketed by 130%. Antidepressants Increase 130% for Teen Girls, Drop 7% For Boys https://brownstone.org/articles/antidepressants-increase-130-for-teen-girls-drop-7-for-boys/

Yikes.

Think Some More About Outsourcing Your Thinking (if you can)

Artificial Intelligence Breeds Mindless Inhumanity

By Bruce Abramson

July 15, 2025

I began studying AI in the mid-1980s. Unusually for a computer scientist of that era, my interest was entirely in information, not in machines. I became obsessed with understanding what it meant to live during the transition from the late Industrial Age to the early Information Age.  

What I learned is that computers fundamentally alter the economics of information. We now have inexpensive access to more information, and to higher quality information, than ever before. In theory, that should help individuals reach better decisions, organizations devise improved strategies, and governments craft superior policies. But that’s just a theory. Does it? 

The answer is “sometimes.” Unfortunately, the “sometimes not” part of the equation is now poised to unleash devastating consequences. 

Consider the altered economics of information: Scarcity creates value. That’s been true in all times, in all cultures, and for all resources. If there’s not enough of a resource to meet demand, its value increases. If demand is met and a surplus remains, value plummets.  

Historically, information was scarce. Spies, lawyers, doctors, priests, scientists, scholars, accountants, teachers, and others spent years acquiring knowledge, then commanded a premium for their services.  

Today, information is overabundant. No one need know anything because the trusty phones that never leave our sides can answer any question that might come our way. Why waste your time learning, studying, or internalizing information when you can just look it up on demand? 

Having spent the past couple of years working in higher education reform and in conversation with college students, I’ve come to appreciate the power—and the danger—of this question. Today’s students have weaker general backgrounds than we’ve seen for many generations because when information ceased being scarce, it lost all value.  

It’s important to recall how recently this phenomenon began. In 2011, an estimated one-third of Americans, and one-quarter of American teenagers, had smartphones. From there, adoption among the young grew faster than among the general population. Current estimates are that over 90% of Americans, and over 95% of teenagers, have smartphone access. 

Even rules limiting classroom use cannot overcome the cultural shift. Few of today’s college students or recent grads have ever operated without the ability to scout ahead or query a device for information on an as-needed basis. There’s thus no reason for them to have ever developed the discipline or the practices that form the basis for learning.

The deeper problem, however, is that while instant lookup may work well for facts, it’s deadly for comprehension and worse for moral thinking.

A quick lookup can list every battle of WWII, along with casualty statistics and outcome. It cannot reveal the strategic or ethical deliberations driving the belligerents as they entered that battle. Nor can it explain why Churchill fought for the side of good while Hitler fought for the side of evil—a question that our most popular interviewers and podcasters have recently brought to prominence. 

At least, lookup couldn’t provide such answers until recently. New AI systems—still less than three years old—are rushing to fill that gap. They already offer explanations and projections, at times including the motives underlying given decisions. They are beginning to push into moral judgments. 

Of course, like all search and pattern-matching tools, these systems can only extrapolate from what they find. They thus tend to magnify whatever is popular. They’re also easy prey for some of the most basic cognitive biases. They tend to overweight the recent, the easily available, the widely repeated, and anything that confirms pre-conceived models. 

The recent reports of Grok regurgitating crude antisemitic stereotypes and slogans illustrate the technological half of the problem. The shocking wave of terror-supporting actions wracking college campuses and drawing in recent grads in many of our cities illustrates the human half. 

The abundance of information has destroyed its value. Because information—facts and data—are the building blocks upon which all understanding must rest, we’ve raised a generation incapable of deep understanding. Because complex moral judgments build upon comprehension, young Americans are also shorn of basic morality. 

We are rapidly entering a world in which widespread access to voluminous information is producing worse—not better—decisions and actions at all levels. We have outsourced knowledge, comprehension, and judgment to sterile devices easily biased to magnify popular opinion. We have bred a generation of exquisitely credentialed, deeply immoral, anti-intellectuals on the brink of entering leadership. 

When the ubiquity of instant lookup evolves beyond basic facts and into moral judgments, banal slogans and mindless cruelty will come to rule our lives.  

Is there a way out of this morass? Perhaps the only one that the ancients discovered back when information, understanding, and morality all retained immense value: faith in a higher power. Because the path we’ve charted on our own is heading into some very dark places. 

This article was originally published by RealClearScience and made available via RealClearWire.

I Bet This Is a Big Problem

Is sports wagering a public health crisis?

“Folks might be familiar with this group at Northeastern, the Public Health Advocacy Institute. It is treating gambling as a public health issue and has deemed it a crisis…

“First of all, lots of people, predominantly young men, are losing more money than they can afford gambling on sports, or are developing either full-blown or sort of borderline gambling addictions. To me, that makes it a public health issue.”

Jonathan D. Cohen on how sports gambling became a public health crisis https://awfulannouncing.com/gambling/author-jonathan-d-cohen-perils-sports-betting.html

The next major health epidemic in the U.S. will not come from a pathogen. This plague has a potential patient population in the tens of millions, limited effective treatments, and is not widely studied in the medical community. I’m referring to sports gambling, an activity that deeply alarms me as a physician who specializes in addiction. Problem gambling can increase the incidence of depression and anxiety and can lead to bankruptcy, homelessness, or suicide. Why gambling addiction is America’s next health crisis https://kevinmd.com/2025/06/why-gambling-addiction-is-americas-next-health-crisis.html

Yikes.

I’m Taking Your Phone Away (and you didn’t do anything wrong)

The study found that those who had high and increasing addiction to mobile phones and social media platforms were at a higher risk of suicidal behaviors and thoughts. At year four, almost 18% of kids reported having suicidal thoughts, and 5% said they had suicidal behaviors. Teens with ‘addictive’ phone use more likely to be suicidal: Study https://thehill.com/policy/healthcare/5360042-teens-addiction-social-media-phones-suicidal-thoughts/

Here’s the link to the JAMA article https://jamanetwork.com/journals/jama/article-abstract/2835481/

Yikes!

Think Again About Outsourcing Your Thinking

Artificial intelligence can be an oxymoron. And dangerous for some humans.

What Is ChatGPT? Everything You Need to Know About OpenAI’s Popular Chatbot – https://www.pcmag.com/explainers/what-is-chatgpt-everything-you-need-to-know-about-openais-popular-chatbot

ChatGPT has been found to encourage dangerous and untrue beliefs about The Matrix, fake AI persons, and other conspiracies, which have led to substance abuse and suicide in some cases. A report from The New York Times found that the GPT-4 large language model, itself a highly trained autofill text prediction machine, tends to enable conspiratorial and self-aggrandizing user prompts as truth, escalating situations into “possible psychosis.” ChatGPT touts conspiracies, pretends to communicate with metaphysical entities — attempts to convince one user that they’re Neo https://www.tomshardware.com/tech-industry/artificial-intelligence/chatgpt-touts-conspiracies-pretends-to-communicate-with-metaphysical-entities-attempts-to-convince-one-user-that-theyre-neo

ChatGPT Is Telling People With Psychiatric Problems to Go Off Their Meds https://futurism.com/chatgpt-mental-illness-medications

In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality… Online, it’s clear that the phenomenon is extremely widespread. As Rolling Stone reported last month, parts of social media are being overrun with what’s being referred to as “ChatGPT-induced psychosis,” or by the impolitic term “AI schizoposting”: delusional, meandering screeds about godlike entities unlocked from ChatGPT, fantastical hidden spiritual realms, or nonsensical new theories about math, physics and reality. An entire AI subreddit recently banned the practice, calling chatbots “ego-reinforcing glazing machines that reinforce unstable and narcissistic personalities.” People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions https://futurism.com/chatgpt-mental-health-crises

Yikes.