## Raw LLM Responses

Inspect the exact model output for any coded comment.
### Comment
A.I. for healthcare is terrifying because A.I. cannot be 100% accurate. The fact that mistakes made by A.I. are called "hallucinations" does not sit well with me. When you have hundreds of people developing an A.I. model, it's impossible to determine who is liable when something goes wrong. Corporations love this because they can replace their staff and limit their liability at the same time.
Even if this patient had kept the chat transcripts, I assume ChatGPT (the company) would claim that its model hallucinated and they've patched it accordingly. It's slimy that their patch included flat-out denying the existence of the conversation. This only makes sense if chat transcripts are not fed into the model to help train it, which I doubt is the case.
youtube · AI Harm Incident · 2025-11-26T05:4…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
### Raw LLM Response
```json
[
  {"id":"ytc_UgyPcZyyhSKq1VfvBHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzi-_Fap-wdL5Zp4kh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzm_j_Qb58zGyZXA6B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxNm8yFA4J4CcFuZlt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzzrHSrSr_igtZtAd54AaABAg","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw9c5Kf-w5xx5_2djR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyWgwaR3wXVQKu_h7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxoOJMNoWDis0v3TWN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxN0Jtr0BeBhJ8z7hd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzUe9zIi_SobhB4NI54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
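The raw response above is a JSON array with one object per comment, keyed by comment ID, with one value for each coding dimension. A minimal sketch of how such a batch could be parsed and validated in Python follows; the `CODEBOOK` values are inferred from the examples on this page (the real codebook may include additional categories), and `parse_batch` is a hypothetical helper, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the coded examples above.
# Assumption: the actual codebook may define more categories than these.
CODEBOOK = {
    "responsibility": {"none", "distributed", "user", "ai_itself", "company"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "resignation", "outrage", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment ID,
    rejecting any value that falls outside the codebook."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        codes = {dim: row[dim] for dim in CODEBOOK}
        for dim, value in codes.items():
            if value not in CODEBOOK[dim]:
                raise ValueError(f"{comment_id}: invalid {dim}={value!r}")
        coded[comment_id] = codes
    return coded
```

Validating against a fixed codebook at parse time catches the common failure mode where the model invents a label outside the schema, so bad rows surface immediately instead of silently entering the coded dataset.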