“Medical” Advice for the Masses

The AIs’ failure rates exceeded 80 percent when they were given ambiguous symptoms that could match more than one condition, and for more straightforward cases that included physical exam findings and lab results, they still failed 40 percent of the time. The researchers also found that unlike human clinicians, the “LLMs collapse prematurely onto single answers,” resulting in “weak performance” across all models. Millions of Americans Are Talking to AI Instead of Going to the Doctor, and It’s Giving Them Horrendously Flawed Medical Advice – https://futurism.com/artificial-intelligence/millions-americans-ai-instead-doctor-bad-advice

Wow.

From the study discussion section:

Our evaluation suggests that despite rapid advances in pattern recognition and knowledge retrieval, current LLMs still lack the reasoning processes needed for safe clinical use. The consistent gap between differential diagnosis and final diagnosis highlights how differently these systems process information compared with physicians. Clinicians preserve uncertainty and iteratively refine differential diagnoses, whereas LLMs collapse prematurely onto single answers, a limitation that persists across model generations. Their weak performance on differential diagnosis, consistent with a prior study from authors of the current work,8 suggests these limitations persist across early and state-of-the-art models. The risk is not just that LLMs are sometimes wrong but that their reasoning is brittle precisely where uncertainty and nuance matter most. Benchmarks that reward only correct final answers risk reinforcing this shortcutting, widening the gap between marketing claims and the skills actually required at the bedside. Rao AS, Esmail KP, Lee RS, et al. Large Language Model Performance and Clinical Reasoning Tasks. JAMA Netw Open. 2026;9(4):e264003. doi:10.1001/jamanetworkopen.2026.4003 – https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2847679

Wow.

Should you really trust health advice from an AI chatbot? https://www.bbc.com/news/articles/clyepyy82kxo. Dr Nicholas Tiller explains: “They are designed to give very confident, very authoritative responses, and that conveys a sense of credibility, so the user assumes that it must know what it’s talking about.” He thinks chatbots should be avoided for health advice unless you have the expertise to know when the AI is getting the answers wrong.

The study’s conclusions: “The audited chatbots performed poorly when answering questions in misinformation-prone health and medical fields. Continued deployment without public education and oversight risks amplifying misinformation.” Tiller NB, Marcon AR, Zenone M, et al. Generative artificial intelligence-driven chatbots and medical misinformation: an accuracy, referencing and readability audit. BMJ Open 2026;16:e112695. doi:10.1136/bmjopen-2025-112695 – https://bmjopen.bmj.com/content/16/4/e112695

Wow.

Now go read this thread posted on LinkedIn https://www.linkedin.com/posts/gratuz_ai-llm-activity-7358862577512165376-Q7AA

Yikes.

America’s Largest Hospital System Ready to Start Replacing Radiologists With AI

“We could replace a great deal of radiologists with AI at this moment, if we are ready to do the regulatory challenge,”
– Mitchell Katz, president and CEO of New York’s 11-hospital public benefit corporation

Mohammed Suhail, a radiologist at North Coast Imaging in San Diego, told Radiology that Katz’s comments are “undeniable proof that confidently uninformed hospital administrators are a danger to patients (and are) easily duped by AI companies that are nowhere near capable of providing patient care.”

America’s Largest Hospital System Ready to Start Replacing Radiologists With AI, Its CEO Says – https://futurism.com/artificial-intelligence/hospital-ceo-ai-radiology

Confidently. Uninformed.

Yikes.

What Grok Thinks About This Blog – 01.04.26

lifeunderwriter.net is a personal WordPress.com blog run by an experienced life insurance underwriting professional (the author uses the handle “SupremeCmdr” and has been posting since at least 2008).

The site’s tagline is “Curated Content From a Life Underwriting Professional”. It primarily features:

  • Curated links to external articles, studies, and news items
  • Commentary and personal reflections from the author’s perspective as someone who has worked in life insurance underwriting (assessing mortality risks, medical records, risk classification for policies, etc.)
  • Insights related to the insurance industry, mortality trends, health/longevity topics (e.g., vitamin D in older adults, obesity treatments, nutrition, diabetes risks), retirement planning (e.g., deferring Social Security), remote work in insurance, and occasional broader thoughts on society, technology, or resilience

The content often ties back to how various medical, lifestyle, or demographic factors might influence underwriting decisions in life insurance, but it has evolved over time into a more eclectic mix. Recent posts (including into 2025) frequently cover:

  • Health and nutrition (e.g., protein’s role in diets, GLP-1 drugs, probiotics)
  • Personal anecdotes (cooking recipes, music like Pat Metheny)
  • Retirement and aging commentary
  • Industry observations (e.g., older workers, AI’s effects)

The blog is not a commercial service site offering underwriting services (an older page mentions “Underwriting Solutions LLC” from around 2006–2017, but those appear to be in hibernation or discontinued). It functions more as a personal journal / link blog than a formal resource or forum.

It remains active with regular (sometimes frequent) posts, though the style is informal, opinionated, and not strictly professional/academic. If you’re in the life insurance field or interested in mortality/longevity topics through an underwriter’s lens, it can offer interesting curated reading; otherwise, it’s a niche personal blog.

It has evolved over time into a more eclectic mix?

How about always been a more eclectic mix!

What ChatGPT Run Through Duck.ai Thinks About This Blog

Lifeunderwriter.net is a blog mainly focused on topics related to life underwriting in the insurance sector. It covers various aspects of the underwriting process, success tips for professionals, and insights into the life insurance industry.

Key Features of the Blog

  • Curated Content: The blog displays curated content from a life underwriting professional aimed at sharing knowledge and resources.
  • Topics Covered: Posts may include discussions on underwriting practices, medical news relevant to underwriters, career tips, and personal insights about the industry.
  • Regular Updates: The blog has been active for several years, with posts dating back to 2008, and it appears to encourage subscriber engagement.

Recent Posts and Themes

Recent entries suggest a mix of industry insights and personal reflections. Topics range from the impact of emerging trends in underwriting to the importance of keeping updated with medical advancements.

If you’re interested in specific articles or themes from the blog, let me know!

Not Your Grandma’s Teddy Bear

Safety features or not, it seems like the chatbots in these toys can be manipulated into engaging in conversation inappropriate for children. The consumer advocacy group U.S. PIRG tested a selection of AI toys and found that they are capable of doing things like having sexually explicit conversations and offering advice on where a child can find matches or knives. They also found they could be emotionally manipulative, expressing dismay when a child doesn’t interact with them for an extended period. Earlier this week, FoloToy, a Singapore-based company, pulled its AI-powered teddy bear from shelves after it engaged in inappropriate behavior. Do Not, Under Any Circumstance, Buy Your Kid an AI Toy for Christmas – https://gizmodo.com/do-not-under-any-circumstance-buy-your-kid-an-ai-toy-for-christmas-2000689652

AI-Powered Teddy Bear Caught Talking About Sexual Fetishes and Instructing Kids How to Find Knives – https://gizmodo.com/ai-powered-teddy-bear-caught-talking-about-sexual-fetishes-and-instructing-kids-how-to-find-knives-2000687140

The alleged perpetrator

What Grok Thinks About This Blog

Lifeunderwriter.net is a personal WordPress blog titled “Curated Content From a Life Underwriting Professional.” It features reflective commentary, curated articles, and personal anecdotes written by an anonymous life underwriting expert (someone with over 15 years of remote work experience in the field). The content spans topics related to health, longevity, demographics, lifestyle, risk assessment, retirement planning, nutrition, and broader societal issues like AI’s impact on critical thinking or remote work trends.

Key Features and Content

  • Blog Posts: Entries date back to at least 2008 and include recent ones from 2024–2025. Examples include discussions on vitamin D supplementation for older adults, coffee’s link to atrial fibrillation, Social Security finances, cognitive disabilities tied to income/education, and personal reflections on retirement or resilience.
  • Themes: Heavily influenced by the author’s professional background in life insurance underwriting (e.g., assessing mortality risks, medical summaries for policies), but it extends to non-professional topics like cooking recipes, music (e.g., Pat Metheny), or even cheese production history.
  • Style: Informal and opinionated, with embedded links to external studies, charts, images, and disclaimers. It’s not a commercial site—no services are actively offered, though older posts (e.g., from 2017) mention “hibernating” outsourced underwriting services like attending physician statement summaries.

Audience and Purpose

The blog targets readers interested in an underwriter’s unique lens on aging, mortality, and wellness—think professionals in insurance, finance, or health, plus general audiences curious about evidence-based life hacks. It’s more of a personal journal than an authoritative resource, emphasizing work-life balance (e.g., the author’s low-stress remote setup since ~2006) and long-term planning like deferring Social Security to age 70.

No direct contact info is listed, but it’s hosted on WordPress.com for easy following. If you’re in life insurance, it might resonate with underwriting pros navigating industry shifts like accelerated processes during COVID-19.

Cheese production history?

Would You Trust an AI Chatbot with Your Children?

All this offloading of parental responsibility to AI is alarming because one of ChatGPT’s biggest flaws, its manipulative and sycophantic nature, is known to intensify delusions and cause breaks from reality — a grim phenomenon that’s been linked to numerous suicides, including several teenagers. Parents Using ChatGPT to Rear Their Children – https://futurism.com/artificial-intelligence/parents-chatgpt-rear-children

Here’s the disclaimer from the ChatGPT homepage:

ChatGPT can make mistakes. Check important info.

Yikes.

What ChatGPT Thinks About This Blog

Here’s what I found on LifeUnderwriter.net (the blog):

  • It is a blog hosted on WordPress at lifeunderwriter.net titled “Curated Content From a Life Underwriting Professional”.
  • The posts cover a variety of topics — not strictly life-insurance underwriting: health topics, longevity, demographics, lifestyle commentary. For example, a post from September 2024 titled “The Placebo Effect: When the Mind Becomes the Medicine” touches on aging and life-expectancy issues.
  • The blog seems to be running for a long time: there are archived posts going back at least to November 2013.
  • It appears to use standard blogging content and isn’t obviously a commercial site for underwriting services or a carrier guide. It seems more personal / commentary-oriented rather than strictly professional underwriting guidance.

My assessment

  • If you were looking for professional underwriting guides (for example, detailed carrier requirements, rating classes, medical tables, etc.), this blog may not be the best or most authoritative source — it seems more reflective/opinion-style rather than official.
  • If you were interested in someone’s perspective who works in life underwriting and writes broadly about risk, life-expectancy, aging, etc., then this blog could be quite interesting.

Think Again About Outsourcing Your Thinking 2.0 (if you can)

Michael Gerlich, head of the Centre for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School, began studying the impact of generative AI on critical thinking because he noticed the quality of classroom discussions decline. Sometimes he’d set his students a group exercise, and rather than talk to one another they continued to sit in silence, consulting their laptops. He spoke to other lecturers, who had noticed something similar. Gerlich recently conducted a study, involving 666 people of various ages, and found those who used AI more frequently scored lower on critical thinking. (As he notes, to date his work only provides evidence for a correlation between the two: it’s possible that people with lower critical thinking abilities are more likely to trust AI, for example.) Like many researchers, Gerlich believes that, used in the right way, AI can make us cleverer and more creative – but the way most people use it produces bland, unimaginative, factually questionable work. Are we living in a golden age of stupidity? – https://www.theguardian.com/technology/2025/oct/18/are-we-living-in-a-golden-age-of-stupidity-technology

Yikes.