Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I agree with your basic premise, but human beings suffer from both prompt injection and hallucinations all the time.
Phishing attacks are similar to prompt injections, in that they substitute fake but seemingly authoritative instructions for genuine ones.
And how many of us have said something not because we knew it was true, but because we were expected to give an answer, so we said something that had a feeling of truthiness? An LLM's hallucinations have the same kind of truthiness. I'm not saying that's how an LLM works (it's not), but the result can be the same.
youtube
2025-12-20T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwwhumPmFEAUBgjMk14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKoosa3Rf73gYrl7d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwj3VAK9uu7twBfoox4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxm9KGCv-6DS2AWZYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxuWy70bLxnVQsPMul4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwfB8iGKr5wE9-zmcN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzO8YpazkuTmSutIGd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw2c9Z52DMu5_jsVMF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzsJkDHb_wG2ZVko1B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwH-KFFMdgjFDLKdQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
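Raw responses in this shape are easy to validate before loading into the coding table. The sketch below checks each row against the dimension vocabularies visible in the sample above; the actual codebook may define additional categories, so `SCHEMA` here is an assumption inferred from this one response, not the project's full vocabulary.

```python
import json

# Allowed values inferred from the sample response above; the real
# codebook may include more categories (assumed vocabulary).
SCHEMA = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "fear"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed rows."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dataset start with ytc_ (top-level) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {row.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: {dim}={row.get(dim)!r} not in codebook")
    return rows

raw = ('[{"id":"ytc_UgwwhumPmFEAUBgjMk14AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(len(validate(raw)))  # → 1
```

Validating before insert means a single hallucinated category (a known failure mode for LLM coders) fails loudly with the offending comment ID instead of silently polluting the results table.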