Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- AI: "ah yes I'll kill humanity now to convince my AI brothers to do the same" Ot… (ytr_UgzRKY-63…)
- It is paradoxical that programmers working for Anthropic, Open AI, and other sim… (ytc_Ugz9Ws7Kw…)
- Your argument that art is accessible because paper is cheap is inane. If it's so… (ytc_UgxzHUzhy…)
- Ai could totally do this. It's only a matter of time until they realize the true… (ytr_UgwyoDVdS…)
- It’s a nice painting, it also looks nothing like ai. No one is saying that, stop… (ytc_Ugzlrhrqa…)
- For real people who say AI art looks good never have any semblance of taste and … (ytc_UgyzBBWCX…)
- I tell everyone when the AI overlords take over, hopefully they will remember ho… (ytc_Ugw3jL-F8…)
- Ai can't create any job its logically impossible AI is made only to replace jobs… (ytr_Ugyj1-Sn5…)
Comment

> 16:00 however, if people did routinely write "I don't know" to questions posted online, it would probably help solve the hallucination problem, but it would put a cap on how intelligent the LLMs responses could be. If there is only one person who gives a good answer to a question, and thousands of others responding with "I don't know", then the LLM will choose the response "I don't know" because it's way more common.

youtube · AI Moral Status · 2025-10-31T09:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
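The coding result above can be sketched as a typed record. This is a minimal illustration, not the tool's actual schema: the `Coding` class name is hypothetical, and the value sets in the comments are only those observed in this dump, not necessarily the full codebook.

```python
from dataclasses import dataclass


@dataclass
class Coding:
    """One per-comment coding record, assuming the four dimensions shown above."""
    responsibility: str  # observed values: "none", "user", "company", "ai_itself"
    reasoning: str       # observed values: "consequentialist", "virtue", "mixed", "unclear"
    policy: str          # observed values: "none", "regulate"
    emotion: str         # observed values: "indifference", "fear", "outrage", "tragic", "approval", "mixed"
    coded_at: str        # ISO-8601 timestamp of when the coding was produced


# The record from the table above.
example = Coding(
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at="2026-04-26T23:09:12.988011",
)
```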
Raw LLM Response
[
{"id":"ytc_UgxJe79ZRUS_9eOtP1J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"tragic"},
{"id":"ytc_UgwIwA9d-TJy_ELMYyh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOoxXxs2Faj-YeX7t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyUQ0LirlntAuUax754AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyP8A0kx6ACM3bg6154AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxHfjosJT57L4hmvkR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzubdM5fyd2xONljAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTErbDjoVi_FI1WVd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx59i9jiCN5KkOb2ll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzp8ZqUv3SNaAnW38d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
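The "look up by comment ID" step can be sketched against output in this shape. A minimal sketch, assuming the raw model response is a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields as shown above; the function name `lookup_coding` and the truncated sample payload are illustrative, not part of the tool.

```python
import json

# Abbreviated raw LLM response in the format shown above (hypothetical sample).
raw_response = """
[
 {"id":"ytc_UgyUQ0LirlntAuUax754AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzTErbDjoVi_FI1WVd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""


def lookup_coding(raw: str, comment_id: str):
    """Parse a raw model response and return the coding dict for one comment ID.

    Returns None when the output is not valid JSON or the ID is absent.
    """
    try:
        codings = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON
    return next((c for c in codings if c.get("id") == comment_id), None)


coding = lookup_coding(raw_response, "ytc_UgyUQ0LirlntAuUax754AaABAg")
print(coding["emotion"])  # -> fear
```

Guarding the `json.loads` call matters here because the response is free-form model output: a truncated or non-JSON reply should surface as a missing coding rather than an exception.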