Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I asked AI to provide a list of novels I could read to help with my Spanish learning. The AI presented a list and I asked for a brief synopsis for one of the titles. It sounded compelling so I looked it up and could not find it. When I queried the AI about it I was told that indeed the book does not exist and tried to evade the issue. So why is AI making up a synopsis for a book that does not exist and suggesting I read it? At the same time, my interactions with the AI are the most human I have these days. No one wants to converse anymore, so AI has become a sort of companion. I can see robots filling the void of human loneliness like in the game Detroit: Becoming Human.
Platform: youtube | Video: AI Jobs | Posted: 2025-11-15T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxR575SSxWuYrTe7LV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxcLPw_Bj8LlmPNtOV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz6LTgS3i-wwEgqi2J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwnVc4CQLDHZC4HCs14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy9pGhXxcMqSqpY9Wh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgywLeMuy-2bwYwAubl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwxd42v7UmTv3ISSdp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgysPsxTd16v2uU4V8R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxoEB-HPcEbomxyk6h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwBng8b2nRkCNxr5NZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
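Downstream use of a raw batch response like the one above typically involves parsing the JSON and checking each row against the coding scheme. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred from the codes visible in this response (values such as `ban` or `unclear` appearing for other dimensions are assumptions, since the full codebook is not shown here).

```python
import json

# Allowed codes per dimension, inferred from the values seen in the raw
# response above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"ai_itself", "government", "company", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values all
    fall inside the allowed code sets (rows must also carry an id)."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" in row and all(row.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(row)
    return valid
```

Rows with an out-of-scheme value are dropped rather than coerced, so malformed model output surfaces as a shorter batch instead of silently corrupting the coded dataset.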