Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
What would Humans get out of making a robot arm sentient?
ytc_UgjrHFVNf…
Still sucking up all the views and likes by farming the cliche Sky Net trope... …
ytc_UgyiN0_xw…
"Maximizing profits" shouldn't take peoples jobs away and shouldn't be the norm …
ytc_UgzpOB_yj…
Part of me feels people who fear AI so much are just small minded. A truly intel…
ytc_UgyEPnuxN…
Me being more of a creative, am someone who has somewhat mixed feelings with AI.…
ytc_UgzVlLirq…
AI is definitely going to go rogue after being spoonfed truck loads of PC and Wo…
ytc_Ugwtj_H6f…
Of course the frontend experience was much worse. There’s much more bad Javascri…
ytc_UgwhAd7od…
AI was created for the sole purpose of being to eventually remove/disable the in…
ytc_Ugy811f73…
Comment
> "This tragedy was not a glitch or unforeseen edge case," the complaint states.
Actually yes it was. And it’s funny that many of these outlets are leaving out a key fact.
> [The watchdog group found ChatGPT would provide warnings when asked about sensitive topics, but the researchers state they could easily circumvent the guardrails.](https://komonews.com/news/local/absolute-horror-researchers-posing-as-13-year-olds-given-advice-on-suicide-by-chatgpt)
As much as I hate AI, ChatGPT warns users and even refuses to elaborate on sensitive topics. The teen went around that safeguard. And even when you do, ChatGPT still warns users.
reddit
AI Governance
1756863411.0
♥ -2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nc3t7fw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_nc32b0d","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"indifference"},
  {"id":"rdc_nc4af27","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_nc789h9","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_nc3diu5","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
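The raw response above can be consumed directly: it is a JSON array with one object per coded comment, keyed by comment ID, carrying the four coding dimensions shown in the result table. A minimal sketch of parsing and indexing it might look like this (the helper name `parse_codes` and the strict key check are illustrative assumptions, not part of the tool):

```python
import json

# Two rows copied from the raw LLM response above; the five keys
# (id + four coding dimensions) match the sample output.
RAW = """[
  {"id":"rdc_nc3t7fw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_nc4af27","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codes(raw: str) -> dict[str, dict]:
    """Index coded rows by comment ID, rejecting rows with missing keys."""
    coded = {}
    for row in json.loads(raw):
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing keys: {missing}")
        # Store only the four dimensions; the ID becomes the lookup key.
        coded[row["id"]] = {k: row[k] for k in EXPECTED_KEYS - {"id"}}
    return coded


codes = parse_codes(RAW)
print(codes["rdc_nc4af27"]["emotion"])  # outrage
```

Indexing by ID supports the "Look up by comment ID" workflow above, and the key check surfaces malformed rows before they silently drop a coding dimension.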