Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> "Preferences" can be more accurately described as competing goals. LLMs are designed to do multiple things, occasionally mutually exclusive things. Without specific training for the particular scenario, it's hard to know which goal is going to take priority.

youtube · AI Moral Status · 2025-11-23T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzQAG7wEzGO57aZwhF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw6ZbPl0__wpalFB1t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJa0O0CPsZpBe-5e54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyRlD9rUFdsW8YNWdF4AaABAg","responsibility":"user","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx8EYOQnyrM9sWe0LF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzbcYrXFnRjQytbhOp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyXUA0Gjt7jVWhfsdR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz0kipoojkJLoSjWyJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyEy8GdOX5a1crx1sR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgznE8eTz94uYZDdgJF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
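A raw response like the one above can be parsed into per-comment coding records and sanity-checked against the four dimensions shown in the result table. This is a minimal sketch: the allowed value sets below are assumptions compiled only from the values visible on this page, not the full codebook.

```python
import json

# Values observed in responses on this page; the complete codebook
# may allow more values than are listed here (an assumption).
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"approval", "indifference", "fear", "outrage", "resignation", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response: require an 'id' plus all four coding
    dimensions per record, and warn on values outside the observed sets."""
    records = json.loads(raw)
    for rec in records:
        missing = ({"id"} | set(OBSERVED_VALUES)) - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {sorted(missing)}")
        for dim, allowed in OBSERVED_VALUES.items():
            if rec[dim] not in allowed:
                print(f"warning: {rec['id']}: unexpected {dim}={rec[dim]!r}")
    return records

# Usage with a hypothetical single-record response:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
records = parse_raw_response(raw)
```

Validating immediately after parsing catches malformed model output (missing fields, off-codebook labels) before the records reach the coding-result table.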