Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgzbmMAxk…: "If you try to control AI and make it safe then what's the point of AI? Then all …"
- ytc_UgzpuGk5i…: "AI is so annoying in programming—it is so boring to me but as a programmer it is…"
- ytc_UgxtcaHd9…: "lol I guess a man just needs the face and boobs of a woman to be happy.…"
- ytc_UgztCZ4Kq…: "The so called robots are just being prompted by humans. They then read what was …"
- ytc_Ugy1TR8vZ…: "AI doesn't enable disabled people to help them create art because of the simple …"
- ytc_Ugzd-9mEa…: "The AI taking over the world because it felt like it's art wasn't appreciated :(…"
- ytc_UgwIYOZJJ…: "I vote we get rid of AI all together. The world worked without it just fine, why…"
- ytc_UgzNLaTwd…: "AI does not have feelings, it would need to care to do things in a way we consid…"
Comment

> There's a high likelihood you're going to prison if you use an AI lawyer. AI is far too agreeable to take up the oppositional role required. Even using it to look up case law is tricky because I've looked up video game related stuff and it gives me the wrong answer. Imagine being given the wrong answer when someone's freedom is at stake.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Jobs |
| Posted | 2025-09-09T14:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
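Each coded comment carries the same four dimensions shown in the table. A minimal validation sketch follows; note that the value sets below are only those observed in this page's sample, not the full codebook, and the `validate` helper is illustrative rather than part of the tool:

```python
# Dimension value sets observed in this sample's coded records.
# Assumption: the real codebook may include additional categories.
OBSERVED_VALUES = {
    "responsibility": {"none", "government", "ai_itself", "company"},
    "reasoning": {"mixed", "consequentialist", "unclear", "deontological", "virtue"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "indifference", "resignation", "mixed", "fear", "outrage"},
}


def validate(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed sets."""
    return [
        dim
        for dim, allowed in OBSERVED_VALUES.items()
        if record.get(dim) not in allowed
    ]


# The coding result from the table above passes the check.
coded = {
    "responsibility": "ai_itself",
    "reasoning": "deontological",
    "policy": "liability",
    "emotion": "fear",
}
print(validate(coded))  # []
```

A non-empty return value flags which dimension of a record needs manual review.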
Raw LLM Response
```json
[
  {"id":"ytc_Ugz0jstn73NyBTKo6WN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzUl2c2XkuMn_XImL54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxsKhQ4gbvaflLVc0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwzU2XoAqBRSoSrvLJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwpaARH-aE1MuF8aSt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwRj796esQxCXEkflF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzDh6kzFBk0M9JR3hV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwP50ibyRHsfuTj0qR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwm2gJFlooRs2WNIYN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxQVfBZQVSS4yqCYPd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
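The raw response is a JSON array of per-comment records, which makes lookup by comment ID a matter of indexing the batch. A minimal sketch, assuming the response text is held in a string (the `raw` variable and `codes_by_id` index below are illustrative names, not part of the tool, and the array is truncated to two of the ten entries):

```python
import json

# Raw LLM response: a JSON array of per-comment coding records
# (truncated to two entries for illustration).
raw = """
[
  {"id": "ytc_Ugz0jstn73NyBTKo6WN4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwpaARH-aE1MuF8aSt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
"""

# Index the batch by comment ID so any single comment can be inspected.
codes_by_id = {record["id"]: record for record in json.loads(raw)}

# Look up the comment shown in the detail view above.
record = codes_by_id["ytc_UgwpaARH-aE1MuF8aSt4AaABAg"]
print(record["emotion"])  # fear
```

If the model ever returns a record with a missing or duplicated `id`, the dict comprehension silently keeps the last occurrence, so a production pipeline would want to check `len(codes_by_id)` against the batch size.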