Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
All so they can make AI and robots capable of taking human jobs. Consolidating m…
rdc_lpau5n6
the thing about ""AI" is it's just an enlarged version of neural network trainin…
ytr_UgypfZlNR…
We have entered the Age of Psychopathy. Just as well I’ve retrained as a therapi…
ytc_UgybiMnBC…
the sad part is, youre gonna be layed off and told you were replaced by Ai, but …
ytc_Ugy-yPkrv…
As somebody studying to be an artist, I think everybody is being too dramatic ab…
ytc_UgwT09jbU…
I think the biggest problem is that you have read so much of the BS and optimist…
ytc_UgzY4WLqa…
@LucceiaVerres Not at all. AI can't replicate human art. There's much more to ar…
ytr_Ugw07NkBT…
This is the doomerest crap I’ve seen this week on AI lol. You seem to just spit …
ytc_UgxHs9PCu…
Comment
A better statement would be: Imagine a world where PATIENTS, *not doctors* , use AI models to get better, more accurate diagnosis and treatment plans, and the patients are empowered to judge the competency level of these doctors. It is a noble goal to build accurate, fast AI models that replace incompetent and arrogant doctors, and that would allow patients to catch any mistakes of such doctors. It will force doctors to also keep up good performance and competence.
youtube
AI Harm Incident
2024-06-01T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyjLyV58DUvLHmQraF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzETU8XcYSW3E9k4IN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwMKAs0RaTATguPEsh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzaqEDMCbDt2OHkJZ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzPaGAxp7cSWyd46X54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwj4d-u4rExbOHC4iV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwqJBIloVlTOzA5BOd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx5IsQIwTxNES8U6zJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugxf9OY9ClB-z-90_b94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugw93H5nF2sAxOCYRYp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
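A raw batch response like the one above has to be parsed and checked before the per-comment codes are stored. The following is a minimal sketch of that step; the allowed label sets are assumptions inferred from the values visible on this page (the real codebook may define more labels), and `parse_batch` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Assumed label sets per dimension, inferred from the values shown above.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response, keeping only well-formed records
    whose every dimension carries an allowed label."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records missing the comment ID
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one valid record and one with an out-of-codebook label.
raw = (
    '[{"id":"ytc_a","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"outrage"},'
    '{"id":"ytc_b","responsibility":"robot","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"}]'
)
print(parse_batch(raw))
```

Validating against a closed label set like this is what lets a coding UI surface "unclear"/"mixed" fallbacks instead of silently storing whatever string the model emitted.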