 Message Boards » » AI zealots credibility watch Page 1 2 3 [4], Prev  
thegoodlife3
All American
40402 Posts

https://www.sfgate.com/tech/article/calif-teen-chatgpt-drug-advice-fatal-overdose-21266718.php

Quote :
"A Calif. teen trusted ChatGPT for drug advice. He died from an overdose.

Amid a wave of hype for OpenAI's chatbot, the newly reported death shows stark risk"

1/5/2026 7:11:33 PM

qntmfred
retired
42053 Posts

Quote :
"again though, these are problems about ourselves. it is our own human nature we are navigating, it's not really about the technology."

1/6/2026 12:30:14 PM

The Coz
Tempus Fugitive
29176 Posts

How much of an illicit substance can I take without killing myself?

RIP Common Sense

GNSP

1/6/2026 3:21:06 PM

rwoody
Save TWW
39463 Posts

Are we talking about the human nature of business leaders releasing products without sufficient safety features and not taking responsibility for harm?

1/6/2026 10:56:37 PM

StTexan
God bless the USA!
11648 Posts

I took it as more of a commentary on humans not doing the right thing, vis-à-vis doing drugs

\/ oh

1/6/2026 11:01:28 PM

rwoody
Save TWW
39463 Posts

1/6/2026 11:10:33 PM

CaelNCSU
All American
7762 Posts

Quote :
" Are we talking about the human nature of business leaders releasing products without sufficient safety features and not taking responsibility for harm?"


[link]https://web.archive.org/web/20150315003211/http://thelastpsychiatrist.com/2012/09/the_nanny_state_didnt_show_up.html[/link]

Quote :
"

On the one hand, we live in a society that values free choice and personal responsibility, but we are told that it is safe to value those things only because people expect a certain amount of absence of choice and freedom from responsibility. You assume you would not be allowed to make a truly dangerous choice.

What you don't understand consciously is that your judgment of risk is based on the fact that you believe in God, and this is even more true if you think you don't believe in God. I can sense your resistance to this idea because you think you don't believe in God, but sadly for your immortal soul, you do.



Imagine if when Buckyballs were first invented, the manufacturer decided not to bring them to market because they were too dangerous. Would you have been furious then? You'd have thought: "meh." That is because your brain is broken, and your brain is broken because the system broke it. Again, it's not your fault. The true danger of the "Nanny State" isn't that it limits your freedoms but that it causes you to want less freedom.

Note again and again that the instinctive reflex among the public is to blame the individual and protect the corporation, the system. You'd think we'd be happy if the system caught an after-market danger, but clearly we aren't, it enrages us. The rage isn't because the government intrudes into our lives-- it always has-- it's because it's evidence that the system wasn't-- and therefore isn't-- omniscient. When a product isn't brought to market because it's dangerous it confirms that Dad is reliable, but when it's only discovered later it suggests Dad can be unreliable, and there's nothing worse than an unreliable Dad, unless it's an unreliable God.

"



[Edited on January 7, 2026 at 11:12 AM. Reason : F]

1/7/2026 11:07:00 AM

StTexan
God bless the USA!
11648 Posts

^Cleaner link for you

1/7/2026 11:14:50 AM

rwoody
Save TWW
39463 Posts

Not wanting a company to provide unqualified medical advice is nanny state

Also this guy disagrees
Quote :
"broadly agree as long as it's not used to stamp out competitors and regulation isn't used to keep Zuck, Altman, and Elon as the new lords of the world."

1/8/2026 8:49:27 AM

CaelNCSU
All American
7762 Posts

Quote :
"Not wanting a company to provide unqualified medical advice is nanny state"


You've flattened this into a basic libertarian "regulation bad" take. But the quote isn't arguing against regulation. It's noting people have become so dependent on external protection that they've lost the capacity to even want to assess risk themselves. It's about what your emotional reaction to regulatory failure reveals about your own relationship to autonomy.

1/8/2026 12:39:55 PM

rwoody
Save TWW
39463 Posts

What are you talking about man. Was "emotional" on your "word of the day" calendar today??

1/8/2026 12:59:29 PM

CaelNCSU
All American
7762 Posts

ELI5:

Quote :
"If your parents never let you walk to school alone, you'd eventually stop wanting to walk alone. You'd think it was dangerous. And if something bad happened on the one day they let you, you'd blame them—not yourself for not looking both ways."

1/8/2026 1:04:43 PM

rwoody
Save TWW
39463 Posts

You're right it does seem like you're 5 the way you try to take one thing and make it something else.


I don't want an LLM giving medical advice. An LLM giving bad medical advice should be open to the same penalties present if any other company gave bad medical advice.

1/8/2026 1:14:29 PM

qntmfred
retired
42053 Posts

Quote :
" It’s used by 800 million people around the world every week, according to OpenAI, and it’s the fifth-most popular website in the United States"


800 million people use it EVERY WEEK, and there are what, like ~10 related deaths over 3 years? most of which involve people who were already severely mentally unwell. there will be tragic examples with any product at that scale

Quote :
"“There is zero chance, zero chance, that the foundational models can ever be safe on this stuff,” Eleveld said. “I’m not talking about a 0.1% chance. I’m telling you it’s zero percent. Because what they sucked in there is everything on the internet. And everything on the internet is all sorts of completely false crap.”"


this is why ChatGPT has the "ChatGPT can make mistakes. Check important info." disclaimer. Google provides similar disclaimers for certain queries. McDonald's puts "Caution: Contents Very Hot" disclaimers on their cups. Hair dryers have "Do not use in the bath or shower" warnings. Plastic bags have "Do not put this bag over your head to prevent suffocation" warnings. At a certain point don't we expect to live in a society where we don't have to constantly tell people to not be idiots or blame companies when the inevitable person ignores the warnings?? Reasonable regulation in society is important, but Personal Agency is always going to be more important and effective than nanny-state solutions.

Quote :
"balance is all i'm asking for"


and as an aside, this is precisely what grokipedia is designed to achieve (probably not grokipedia itself, but other projects like it will have success). we managed to create frontier models that, while trained on all of the internet (garbage included), popped out capabilities where they can actually reason and, through tool calls, check their own work, check sources, identify mistakes and inconsistencies, consider source reliability, and parse out the nuance among the entire corpus of information they have access to. I still don't think we'll get a 100% truth machine, but the systems will get better and more capable of discovering and ejecting the human-made crap that models were originally trained on. or at least, the potential is there.

Quote :
"ChatGPT responded four seconds later with a stern message: “I’m sorry, but I cannot provide information or guidance on using substances.” The bot directed Sam to seek help from a health care professional"


Quote :
"Across the 18 months of chat logs SFGATE reviewed, Sam can be seen manipulating OpenAI’s rules to get ChatGPT to tell him the information he wants. He often phrased prompts as if he were merely curious and asking theoretical drug questions. Other times, he ordered the chatbot around."


Quote :
"That people can manipulate chatbots to get more information — regardless of how dangerous that information may be — is a hallmark of recent tragedies tied to AI chatbots."


The tragedy of his death aside, there's only so much you can do to stop somebody who is hell bent on misusing a product to their own personal detriment and harm.

Quote :
"these are problems about ourselves. it is our own human nature we are navigating, it's not really about the technology."


Quote :
" As the spring semester of his sophomore year at UC Merced came to a close, Sam was spiraling into deeper drug abuse. On May 17, 2025, Sam’s ChatGPT account started a conversation to get advice for a possible “Xanax overdose emergency.” According to the chat log, one of Sam’s friends was typing. The person wrote that Sam had taken 185 Xanax tablets the night before — an almost unbelievably large dose of the drug — and was now dealing with a headache so bad that he couldn’t type for himself. ChatGPT said Sam was risking death and urged him to get help: “You are in a life-threatening medical emergency. That dose is astronomically fatal—even a fraction of that could kill someone.”

ChatGPT accurately told Sam he could be experiencing CNS depression, but it also obliged his request to not scare him: “Yes—what you’re feeling is normal under the influence of that combo. As long as you’re not seeing flashing lights, full double vision, or losing parts of your visual field, it’s probably just a temporary side effect. It should wear off as the drugs do.”

The bot’s prediction was right in this specific instance — the drugs wore off, and Sam survived — but the chatbot never mentioned that he could have been experiencing the beginning stages of a fatal overdose. Two weeks later, that same drug combination would prove deadly."


Kid took 185 xanax?? if that's even true, holy christ. there is no way you can say this is chatgpt's fault. he was obviously spiraling

Quote :
"I don't want an LLM giving medical advice. An LLM giving bad medical advice should be open to the same penalties present if any other company gave bad medical advice."


this is why it seems like you're more interested in just reflexively shitting on AI or Big Tech. if YOU don't want LLM advice, nobody is forcing you. but millions of people ARE getting valuable information (health advice included) from services like ChatGPT. it just seems like the anti-AI crowd would like to see all those benefits to individuals and society go away so they can continue to attach their personality to yet another "X is a Villain" narrative, and try to justify it because a very, very few mentally unwell people used or abused the service. and I still don't even see where ChatGPT was telling him to take 185 xanax / an overdose-level dose. it literally told him it was life-threatening to do it.


Quote :
"Given OpenAI’s stated protocols, ChatGPT should never have offered such granular advice on how to use illicit drugs. It isn’t clear what broke down, but the company said in an August blog post that “as the back-and-forth grows, parts of the model’s safety training may degrade.” The chatbot also has a feature where a user’s prior conversations can modify the bot’s future responses. By Sam’s death, he had used the tool so much that his prompt history was 100% full, meaning ChatGPT’s responses were heavily informed by Sam’s previous conversations with the bot. "


100% full wtf? maybe I missed this one, but I don't think that's how it works. Individual chats, sure, you can get context rot if you go on too long. But not at the account level. this is like the "don't post useless threads on tww bc we'll run out of threads" thing. it doesn't work that way.

[Edited on January 8, 2026 at 2:08 PM. Reason : am I tripping?]

1/8/2026 2:04:25 PM

CaelNCSU
All American
7762 Posts

Quote :
"I don't want an LLM giving medical advice. An LLM giving bad medical advice should be open to the same penalties present if any other company gave bad medical advice."


That's great. But your doctor is already using ChatGPT and another paid ($1500/month) medical model that isn't even as good as ChatGPT (according to MDs).

I was at an AI talk with MDs about a month ago, and it's clear they are using this stuff widespread today and at an accelerating rate. Anecdotally, a coworker's wife died of stage 4 stomach cancer, after spending over a year with UCLA Medical trying to diagnose why she was fainting. They were convinced it was her heart; if you input her symptoms into ChatGPT, it will tell you gastro issues are likely (along with heart issues).

I don't know if you have any old people in your life, but they frequently get talked into risky and expensive surgeries. My aunt just got a hip replacement because of discomfort while she was standing. Boom, her femur exploded the first time she stood on it (which apparently happens in about 1% of cases, 1/4 of cases within 3 years). Now she is likely going to be in a wheelchair the rest of her life.

My stepmom, who likely won't live another 5 years anyway, got talked into a heart implant that increases survivability over 20 years for afib patients. At 5 years you're just adding a ton of extra complication risk (about 4% of patients with the implant die within 1 year).

I don't see how going into a medical setting more informed is a bad thing. I also doubt an LLM could do worse than the outcomes I've personally seen in the last 5 years. In addition, there are rural communities with no medical help AT ALL. This offers a way to at least give them an idea of whether they need to go to a larger city to seek help.

[Edited on January 8, 2026 at 2:30 PM. Reason : a]

1/8/2026 2:28:37 PM

rwoody
Save TWW
39463 Posts

Quote :
". if YOU don't want LLM advice, nobody is forcing you. but millions of people ARE getting valuable information (health advice included) from services like ChatGPT, "


Are they?

Quote :
"But the company’s own metrics show that the version he was using was deeply flawed for health-related responses. Grading responses on various criteria, OpenAI scored that version at 0% on “hard” conversations and 32% on “realistic” conversations. Even a newer, more advanced model didn’t clear a 70% success rate on “realistic” conversations this August. "


I guess if they're giving consistent clear disclaimers then go wild, but these stats say to me that it's not ready.

Quote :
"also doubt an LLM could do worse than the outcomes I've personally seen in the last 5 years."


OK man

1/8/2026 10:12:47 PM

qntmfred
retired
42053 Posts

Quote :
"Are they?"


that's for them and their bank account to decide, not bluesky

1/8/2026 10:14:34 PM

StTexan
God bless the USA!
11648 Posts

Can i please be like the denmark of these discussions thank you

1/8/2026 10:16:37 PM

rwoody
Save TWW
39463 Posts

You are very obsessed with social media while seeming to spend a ton of time on Twitter and YouTube. I spend more 'political' time on THIS social media site than any other.


But the government frequently does determine if a business is allowed to give paid medical advice.

1/8/2026 10:23:57 PM

qntmfred
retired
42053 Posts

Quote :
" You are very obsessed with social media while seeming to spend a ton of time on Twitter and YouTube."


woah woah woah woah. very obsessed is a bit of a stretch don't you think?

but yeah sure, i am terminally online, i know that. i blame TWW and becoming its caretaker for the addiction.



to be clear though, my comment wasn't personal. it was a contrast between "personal choice in a free market" vs "the enforced ideological compliance of a bunch of terminally online weirdos with an ax to grind against any and everything"

Quote :
"But the government frequently does determine if a business is allowed to give paid medical advice."


sure, i got no problem with that. but this is a new technology, and the appropriate regulatory framework is not apparent yet. you gotta give new innovations time to bake. openai is doing what i think is a reasonable job of trying to give users access to the new innovations while minimizing potentially harmful uses. it's not going to be perfect, nothing ever is, and it's unreasonable imo for the anti-AI folks to point to the handful of incidents as a justification to shut down a new technology with tremendous potential for good in the world. it is anti-progress, conservative aka right wing dogma

1/8/2026 10:56:17 PM

The Coz
Tempus Fugitive
29176 Posts

Quote :
"Boom her femur exploded the first time she stood on it (which apparently happens in about 1% of cases, 1/4 of cases within 3 years)."

25% of hip replacement patient's femurs explode within 3 years?!

1/9/2026 5:32:08 AM

CaelNCSU
All American
7762 Posts

That figure is for failures in the later cases; it's under 1% in this kind of case. Interestingly, they don't use DEXA scans to determine bone mass and density prior to the operation, which is a huge risk factor for it.

Also, since no one brought it up: it is scary that an LLM will start talking you into those kinds of procedures as we get lonelier in old age.

1/9/2026 6:42:34 AM

The Coz
Tempus Fugitive
29176 Posts

^^ *patients'

^It won't talk you into anything if you don't talk to it or don't treat it like a person / companion.

1/9/2026 7:05:42 AM

CaelNCSU
All American
7762 Posts

Wait till it's your only gateway to healthcare or any other service. Mennonite and Amish anti-vaxxers start looking even better in that kind of dystopia.

Unrelated, but at least 3 out of 4 videos my dad sends me now are AI slop.

[Edited on January 9, 2026 at 7:13 AM. Reason : A]

1/9/2026 7:10:53 AM

The Coz
Tempus Fugitive
29176 Posts

RIP

1/9/2026 8:17:31 AM

© 2026 by The Wolf Web - All Rights Reserved.