Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ".....Yes. The AI model gets capital punishment though, not jail time. Your AI …" (ytc_UgzAe6j_E…)
- "It may be more accurate to say the AI has memories, not experiences. Experiences…" (ytc_UgxNSpsc9…)
- "Many have criticized Apple for not investing heavily in AI, as if it were a stra…" (ytc_UgwUpkoXk…)
- "Since I was in my teens I have always said humans will destroy themselves & My f…" (ytc_Ugx15SOaJ…)
- "We had an incident similar to this happen in my hometown back in the early 2000s…" (ytc_UgwSO5Eua…)
- "I see videos like this, but I don't buy it. Every time I use Ai it feels like it…" (ytc_UgyzzuN_Q…)
- "Until they can embed something like the three laws of robotics into AI it will e…" (ytc_Ugxs33mOQ…)
- "Just imagine cars having been driven automatically caused so much of an accident…" (ytc_UgwC_lq9I…)
Comment
I find this grossly inaccurate. Please don't ask an AI model about it's data provenance, moral / ethical reasoning or anything else without expecting it to be IN its data set, and not necessarily anything it actually uses as part of its operation or reasoning to get to any anwsers. This is simply a misunderstanding about how LLMs operate and is extremely misleading and worrying
Source: youtube · Topic: AI Responsibility · Posted: 2025-03-03T16:4… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzrTJUByJXa-kD03fJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxXJnsm66W72jM9aH14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwZgQd7QKDl64YRuCB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2PVsX4QMJDhJm-e94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyaJ-TI16dQ8W6Dmpd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyJgil0IpqH6khDPK94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwQ5yZ3X1GzhLaNOth4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxDMhOIVH1MTpgbWoJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzlYZ4t2NbuhQ2DPlt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwfso7fAf-FJSIst0l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
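Each raw response above is a JSON array of per-comment codes, one object per comment ID, with one value for each of the four dimensions in the coding table. Below is a minimal sketch of how such a batch might be parsed and checked for out-of-codebook values before the codes are stored. The `ALLOWED` sets are inferred only from the values visible in this sample response, not from the project's actual codebook, and `validate_batch` is a hypothetical helper name.

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# The real codebook may permit values not seen in this batch.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and list any values outside the codebook."""
    rows = json.loads(raw)
    problems = []
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append({"id": row.get("id"), "dimension": dim, "value": value})
    return problems

raw = '[{"id":"ytc_X","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"}]'
print(validate_batch(raw))  # → [] when every value is in the codebook
```

A check like this catches the common failure mode where the model invents a code (e.g. a responsibility target not in the scheme), which would otherwise silently skew the coded counts.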