## Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking up a specific comment ID or by browsing the random samples below.

### Random samples
- I'd rather tey use **well trained** LLMs than the old fashioned keyword pickers.… (rdc_luxtq7d)
- When AI starts taking over our jobs, theres a chance that someone will grow a fo… (ytc_UgwaDcm_Y…)
- AI Slop is AI Slop, and we should be calling it AI Slop. If you're going to use… (ytc_UgwVxFgz_…)
- In an old film called "I Robot", there was a scene, where the main character pro… (ytc_Ugz_9sPNm…)
- Could be. Could be. I am waiting every 6 months to be replaced since LLMs became… (ytc_UgwD54SpD…)
- What’s a driverless OTR truck in an accident do to insurance rates and what’s ha… (ytc_UgxZ0MTrE…)
- I am somewhat curious whether AI will be able to create instead of simply retrea… (ytc_Ugxs7LJ95…)
- Hopefully this (if not other stuff) is enough for the house to not want the big … (ytc_UgxvElQ4J…)
### Comment

Everyone’s obsessed with making AI smarter, but that’s not the real issue. We already know AI can out-calculate us. The bigger problem is that AI has almost no wisdom—it doesn’t know when not to act, when to lose gracefully, or when protecting people matters more than winning.
If we don’t build wisdom into AI, we’ll just end up with really clever systems that still make terrible choices. Smarter ≠ safer. What we need are AIs that understand balance, humility, and responsibility—not just how to “get results.”
In short: AI doesn’t just need brains, it needs wisdom.

youtube | AI Harm Incident | 2025-09-13T02:4…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
### Raw LLM Response

```json
[
  {"id": "ytc_UgwcCPRE8AbtQXFw87F4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwofSm8-R-LqC5qaOJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyHScYMWSNnpoZAEGh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyUwFwBu_zzVsSoNCp4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxDAia-kocHbn4fHk14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzBg0Vpe4Poj_0mEGN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz7ZcG2yLuBIQ5Y6Hl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxRJy5GUPaMdYaCpLN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx36D-Sn_aU2j_Ob5p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxmq-7Vroil4nXYuNx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
```
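The model returns a JSON array of per-comment codes, and the panel above looks codes up by comment ID. A minimal sketch of how such a response could be parsed, validated, and indexed by ID is shown below. The `ALLOWED` value sets are assumptions inferred only from the codes visible in this one response (the real codebook may define more values), and `index_codes` is a hypothetical helper, not part of any tool shown here.

```python
import json

# A two-row excerpt of a raw model response in the schema shown above.
raw_response = """
[
  {"id": "ytc_UgwcCPRE8AbtQXFw87F4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxmq-7Vroil4nXYuNx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
"""

# Assumed allowed values per dimension, inferred from the codes seen
# in this single response; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "unclear"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw model response and index codes by comment ID.

    Raises ValueError if any dimension holds a value outside ALLOWED,
    so malformed model output fails loudly instead of being stored.
    """
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
        by_id[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return by_id

codes = index_codes(raw_response)
print(codes["ytc_UgwcCPRE8AbtQXFw87F4AaABAg"]["responsibility"])  # developer
```

Validating against a fixed value set at ingest time is one way to catch the "unclear"-vs-typo failure mode before coded rows reach the results table.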