Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Bro just causally drops that he is Marxist in the middle of the interview. WTF?…" (ytc_UgyXg-2BZ…)
- "You want know how dangerous Aİ is, read the book revelation what it sais about t…" (ytc_UgxgoKDYU…)
- "I really only use it for idea visualization, sometimes the mistakes can be cool …" (ytc_UgxZx7Sv3…)
- "these ppl are like the luddites who broke knitting boxes and sewing machines at …" (ytc_UgyKckT3y…)
- "This guys entire argument: An A.I will never become conscious, because if it did…" (ytc_Ugz5iRA-G…)
- "Well just depends on what the bachelor degrees on. Unfortunately there's literal…" (ytc_UgxbO-0qc…)
- "The government should track every death caused by the Tesla autopilot and publis…" (ytc_UgwlgP59r…)
- "It bears repeating- \"AI\" only wows the uninitiated and the greedy. If your reaso…" (ytc_UgxFWh3_I…)
Comment
Yes, AI can be misused — like any powerful tool.
But blaming ChatGPT alone for tragedy oversimplifies what are usually deeply complex, heartbreaking mental health battles.
I’m one of many people who benefited from AI during a time of intense personal and professional struggle. It helped me write, plan, market, and rebuild my small business when I had no one to help me. It taught me things I never thought I could learn.
AI isn’t perfect. But it’s not the villain here.
Let’s focus on building safeguards, yes — but let’s also honor the many silent ways this technology has helped people survive.
youtube · AI Harm Incident · 2025-11-09T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwTQdYPaO7mTBV6wFx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxzfKzRCkKv9TinQG54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwScyw5XLAfeU48tqZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyGWvHo_AbcYD-Xqc54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwzr_uNJRS2Y5YA0tx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxS-fxrqnsiczBLFkJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxONfN4mBI_yHbS_dp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugw01GNaxolLhXGJxEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwYyYt3aw1-Dkg2kXR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxU8SoXHz7uC8Fu4Qd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
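Because the raw response is a JSON array with one coding object per comment ID, the "look up by comment ID" view reduces to indexing that array into a dictionary. A minimal sketch of that lookup (the two rows are copied from the response above; the variable names are illustrative, not the app's actual code):

```python
import json

# Two sample rows copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_UgwTQdYPaO7mTBV6wFx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxzfKzRCkKv9TinQG54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

# Index the array by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Fetch one comment's coded dimensions by its ID.
code = codings["ytc_UgxzfKzRCkKv9TinQG54AaABAg"]
print(code["responsibility"], code["emotion"])  # none approval
```

This matches the "Coding Result" table shown for the comment above: its four dimensions are simply the fields of the JSON object whose `id` matches.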