“Medical” Advice for the Masses

The AIs’ failure rates exceeded 80 percent when given ambiguous symptoms that could match more than one condition, and for more straightforward cases that included physical exam findings and lab results, they still failed 40 percent of the time. The researchers also found that unlike human clinicians, the “LLMs collapse prematurely onto single answers,” resulting in “weak performance” across all models. Millions of Americans Are Talking to AI Instead of Going to the Doctor, and It’s Giving Them Horrendously Flawed Medical Advice https://futurism.com/artificial-intelligence/millions-americans-ai-instead-doctor-bad-advice

Wow.

From the study discussion section:

Our evaluation suggests that despite rapid advances in pattern recognition and knowledge retrieval, current LLMs still lack the reasoning processes needed for safe clinical use. The consistent gap between differential diagnosis and final diagnosis highlights how differently these systems process information compared with physicians. Clinicians preserve uncertainty and iteratively refine differential diagnoses, whereas LLMs collapse prematurely onto single answers, a limitation that persists across model generations. Their weak performance on differential diagnosis, consistent with a prior study from authors of the current work,8 suggests these limitations persist across early and state-of-the-art models. The risk is not just that LLMs are sometimes wrong but that their reasoning is brittle precisely where uncertainty and nuance matter most. Benchmarks that reward only correct final answers risk reinforcing this shortcutting, widening the gap between marketing claims and the skills actually required at the bedside. Rao AS, Esmail KP, Lee RS, et al. Large Language Model Performance and Clinical Reasoning Tasks. JAMA Netw Open. 2026;9(4):e264003. doi:10.1001/jamanetworkopen.2026.4003 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2847679

Wow.

Should you really trust health advice from an AI chatbot? https://www.bbc.com/news/articles/clyepyy82kxo. Dr Nicholas Tiller explains: “They are designed to give very confident, very authoritative responses, and that conveys a sense of credibility, so the user assumes that it must know what it’s talking about.” He thinks chatbots should be avoided for health advice unless you have the expertise to know when the AI is getting the answers wrong.

The study’s conclusions: “The audited chatbots performed poorly when answering questions in misinformation-prone health and medical fields. Continued deployment without public education and oversight risks amplifying misinformation.”

Tiller NB, Marcon AR, Zenone M, et al. Generative artificial intelligence-driven chatbots and medical misinformation: an accuracy, referencing and readability audit. BMJ Open 2026;16:e112695. doi:10.1136/bmjopen-2025-112695 https://bmjopen.bmj.com/content/16/4/e112695

Wow.

Now go read this thread posted on LinkedIn https://www.linkedin.com/posts/gratuz_ai-llm-activity-7358862577512165376-Q7AA

Yikes.

America’s Largest Hospital System Ready to Start Replacing Radiologists With AI

“We could replace a great deal of radiologists with AI at this moment, if we are ready to do the regulatory challenge,”
Mitchell Katz, president and CEO of New York’s 11-hospital public benefit corporation

Mohammed Suhail, a radiologist at North Coast Imaging in San Diego, told Radiology that Katz’s comments are “undeniable proof that confidently uninformed hospital administrators are a danger to patients (and are) easily duped by AI companies that are nowhere near capable of providing patient care.”

America’s Largest Hospital System Ready to Start Replacing Radiologists With AI, Its CEO Says – https://futurism.com/artificial-intelligence/hospital-ceo-ai-radiology

Confidently. Uninformed.

Yikes.

How Not To Invest – A Lesson From The University of Chicago

CRSP’s origins date back to the 1960s. Its initial goal was to build a database of historical stock prices. This is harder than it might seem. Before trading was computerized, stock prices were maintained on paper. And when stocks split or companies merged, that added to the complexity.

Despite this seemingly dull mandate, CRSP has played an important role in the development of modern finance over the years. Most notably, the efficient market hypothesis and the capital asset pricing model were both made possible by CRSP data. And today, many of the world’s largest index funds, including Vanguard’s Total Stock Market Fund, are built on CRSP indexes. Endowment Lessons https://humbledollar.com/2026/02/endowment-lessons/

This article by Adam M. Grossman uses the University of Chicago’s financial struggles as a cautionary tale for individual investors.

Key Lessons for Individual Investors

  • Spending: Avoid “Keeping Up with the Joneses”
    • The university invested heavily in new buildings and programs to maintain its “eminence” without securing corresponding revenue.
    • Takeaway: Financial success depends on income exceeding expenses. Operating costs of new assets (like large homes or complex projects) must be planned for in advance.
  • Saving: Beware of Recency Bias
    • During a 15-year market boom, the university ramped up debt rather than stockpiling resources.
    • Takeaway: Investors often falsely assume current trends will continue forever. Use periods of market strength to re-balance portfolios and manage risk rather than increasing lifestyle or debt commitments.
  • Investing: Complexity vs. Simplicity
    • Performance: UChicago’s endowment returned 6.7% annually over 10 years, trailing a simple Vanguard Balanced Index Fund (VBIAX), which returned 8.2%.
    • Liquidity: The university locked over 60% of its funds into illiquid assets like private equity and real estate, making it difficult to cover cash flow needs.
    • Takeaway: High-fee, complex, and illiquid investments often under-perform simple index funds. If elite institutions with dedicated investment offices “are having second thoughts” about private equity, the message for individual investors seems clear.
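As a rough back-of-the-envelope check (my own arithmetic, not from the article), the 1.5-point annual gap between the endowment’s 6.7% and VBIAX’s 8.2% compounds into a meaningful difference over a decade:

```python
# Growth of $1 over 10 years at each annualized return.
# 6.7% is the endowment's reported return; 8.2% is VBIAX's.
endowment = 1.067 ** 10
vbiax = 1.082 ** 10

print(f"Endowment: ${endowment:.2f} per $1 invested")  # ~ $1.91
print(f"VBIAX:     ${vbiax:.2f} per $1 invested")      # ~ $2.20
print(f"Simple fund ends ~{(vbiax / endowment - 1) * 100:.0f}% ahead")  # ~ 15%
```

In other words, the “boring” balanced index fund would have left roughly 15% more wealth at the end of the period, before even counting the fees and illiquidity of the complex alternatives.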

This summary was produced by Gemini AI and edited by yours truly.

Here’s a link to an article on the sale of CRSP. Morningstar Completes Acquisition of CRSP and Extends Relationship with Vanguard https://newsroom.morningstar.com/news/news-details/2026/Morningstar-Completes-Acquisition-of-CRSP-and-Extends-Relationship-with-Vanguard/default.aspx

Now READ THIS (if you can)

Just a few days ago I posted READ THIS! (if you can). Then I came across this:

A conservative colleague said the use of AI to create addiction and device dependency was evil. That is an understatement. These kids rely on ChatGPT not just for information but also to make choices, and for many, that seems to extend to every aspect of their lives. Sam Altman makes clear in video clips below that this extreme loss of independence, of personal autonomy, is deliberate.

That means unless these kids can find a way to break free, they are cognitive serfs who can be told to do anything. How to vote. Whether to sign up to die in a hopeless war. Whether to take a job in an unsafe meatpacking plant and risk loss of limbs.

This widespread abuse is far worse than what the Sacklers and other opioid peddlers did to mainly working class pain victims, or what the British did to China in the Opium Wars. At least with opioid addiction, it is possible for the victims to recover even if the withdrawal process is painful. The evidence is mounting that even for adults, regular use of AI diminishes reasoning skills and attention spans.

These children are being turned into automatons, incapable of independent thought and action. It’s widely known in developmental psychology that if certain patterning does not happen at critical ages, the deficit is permanent. Kittens need visual input in their first few days or they are blind. Kids who don’t crawl have coordination issues as adults due to missing important movement patterning. A less dramatic version is being unable to produce sounds in a foreign language you have not heard and practiced when young.

These young AI addicts are set to be permanently damaged. This is tech bros creating something as permanent and harmful as fetal alcohol syndrome on a mass basis. And they clearly know what they are doing; witness how they raise their children on completely different lines. “We Are Watching Critical Thinking Disappear in Real Time” Due to AI Addiction: 40% of Kids Can’t Read, Teachers Quitting in Droves https://www.nakedcapitalism.com/2026/01/we-are-watching-critical-thinking-disappear-in-real-time-due-to-ai-addiction-40-of-kids-cant-read-teachers-quitting-in-droves.html

Thanks for sharing, Yves. But as one of your readers noted,

If we teach kids to think, they won’t do what they’re told!

What Grok Thinks About This Blog – 01.04.26

lifeunderwriter.net is a personal WordPress.com blog run by an experienced life insurance underwriting professional (the author uses the handle “SupremeCmdr” and has been posting since at least 2008).

The site’s tagline is “Curated Content From a Life Underwriting Professional”. It primarily features:

  • Curated links to external articles, studies, and news items
  • Commentary and personal reflections from the author’s perspective as someone who has worked in life insurance underwriting (assessing mortality risks, medical records, risk classification for policies, etc.)
  • Insights related to the insurance industry, mortality trends, health/longevity topics (e.g., vitamin D in older adults, obesity treatments, nutrition, diabetes risks), retirement planning (e.g., deferring Social Security), remote work in insurance, and occasional broader thoughts on society, technology, or resilience

The content often ties back to how various medical, lifestyle, or demographic factors might influence underwriting decisions in life insurance, but it has evolved over time into a more eclectic mix. Recent posts (including into 2025) frequently cover:

  • Health and nutrition (e.g., protein’s role in diets, GLP-1 drugs, probiotics)
  • Personal anecdotes (cooking recipes, music like Pat Metheny)
  • Retirement and aging commentary
  • Industry observations (e.g., older workers, AI’s effects)

The blog is not a commercial service site offering underwriting services (an older page mentions “Underwriting Solutions LLC” from around 2006–2017, but those appear to be in hibernation or discontinued). It functions more as a personal journal / link blog than a formal resource or forum.

It remains active with regular (sometimes frequent) posts, though the style is informal, opinionated, and not strictly professional/academic. If you’re in the life insurance field or interested in mortality/longevity topics through an underwriter’s lens, it can offer interesting curated reading; otherwise, it’s a niche personal blog.

It has evolved over time into a more eclectic mix?

How about always been a more eclectic mix!

What ChatGPT Run Through Duck.ai Thinks About This Blog

Lifeunderwriter.net is a blog mainly focused on topics related to life underwriting in the insurance sector. It covers various aspects of the underwriting process, success tips for professionals, and insights into the life insurance industry.

Key Features of the Blog

  • Curated Content: The blog displays curated content from a life underwriting professional aimed at sharing knowledge and resources.
  • Topics Covered: Posts may include discussions on underwriting practices, medical news relevant to underwriters, career tips, and personal insights about the industry.
  • Regular Updates: The blog has been active for several years, with posts dating back to 2008, and it appears to encourage subscriber engagement.

Recent Posts and Themes

Recent entries suggest a mix of industry insights and personal reflections. Topics range from the impact of emerging trends in underwriting to the importance of keeping updated with medical advancements.

If you’re interested in specific articles or themes from the blog, let me know!

Not Your Grandma’s Teddy Bear

Safety features or not, it seems like the chatbots in these toys can be manipulated into engaging in conversation inappropriate for children. The consumer advocacy group U.S. PIRG tested a selection of AI toys and found that they are capable of doing things like having sexually explicit conversations and offering advice on where a child can find matches or knives. They also found they could be emotionally manipulative, expressing dismay when a child doesn’t interact with them for an extended period. Earlier this week, FoloToy, a Singapore-based company, pulled its AI-powered teddy bear from shelves after it engaged in inappropriate behavior. Do Not, Under Any Circumstance, Buy Your Kid an AI Toy for Christmas https://gizmodo.com/do-not-under-any-circumstance-buy-your-kid-an-ai-toy-for-christmas-2000689652

AI-Powered Teddy Bear Caught Talking About Sexual Fetishes and Instructing Kids How to Find Knives https://gizmodo.com/ai-powered-teddy-bear-caught-talking-about-sexual-fetishes-and-instructing-kids-how-to-find-knives-2000687140

The alleged perpetrator

What Grok Thinks About This Blog

Lifeunderwriter.net is a personal WordPress blog titled “Curated Content From a Life Underwriting Professional.” It features reflective commentary, curated articles, and personal anecdotes written by an anonymous life underwriting expert (someone with over 15 years of remote work experience in the field). The content spans topics related to health, longevity, demographics, lifestyle, risk assessment, retirement planning, nutrition, and broader societal issues like AI’s impact on critical thinking or remote work trends.

Key Features and Content

  • Blog Posts: Entries date back to at least 2008 and include recent ones from 2024–2025. Examples include discussions on vitamin D supplementation for older adults, coffee’s link to atrial fibrillation, Social Security finances, cognitive disabilities tied to income/education, and personal reflections on retirement or resilience.
  • Themes: Heavily influenced by the author’s professional background in life insurance underwriting (e.g., assessing mortality risks, medical summaries for policies), but it extends to non-professional topics like cooking recipes, music (e.g., Pat Metheny), or even cheese production history.
  • Style: Informal and opinionated, with embedded links to external studies, charts, images, and disclaimers. It’s not a commercial site—no services are actively offered, though older posts (e.g., from 2017) mention “hibernating” outsourced underwriting services like attending physician statement summaries.

Audience and Purpose

The blog targets readers interested in an underwriter’s unique lens on aging, mortality, and wellness—think professionals in insurance, finance, or health, plus general audiences curious about evidence-based life hacks. It’s more of a personal journal than an authoritative resource, emphasizing work-life balance (e.g., the author’s low-stress remote setup since ~2006) and long-term planning like deferring Social Security to age 70.

No direct contact info is listed, but it’s hosted on WordPress.com for easy following. If you’re in life insurance, it might resonate with underwriting pros navigating industry shifts like accelerated processes during COVID-19.

Cheese production history?

Would You Trust an AI Chatbot with Your Children?

All this offloading of parental responsibility to AI is alarming because one of ChatGPT’s biggest flaws, its manipulative and sycophantic nature, is known to intensify delusions and cause breaks from reality — a grim phenomenon that’s been linked to numerous suicides, including several teenagers. Parents Using ChatGPT to Rear Their Children https://futurism.com/artificial-intelligence/parents-chatgpt-rear-children

Here’s the disclaimer from the ChatGPT homepage:

ChatGPT can make mistakes. Check important info.

Yikes.