Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- rdc_kj2cwma: "I tried to test this by giving 20 guesses with no context. Safe to say I call bu…"
- ytc_Ugwr8aAxQ…: "By 2030, many jobs will have been taken over by AI. It makes sense for big comp…"
- ytc_UgxdO8SIZ…: "I thoroughly enjoyed this interview and I really like Roman Yampolskiy. I’ve nev…"
- ytr_UgzmM4gu5…: "If you mean the emotional AI, it was a special jailbreak that was specifically m…"
- ytc_UgyDaGYtl…: "The problem with A.I replacing humans in the work force, creating things for hum…"
- ytc_UgxjJQSFh…: "What the hell is anyone doing having a conversation with a robot? The guy told t…"
- ytr_Ugwh67naO…: "No one on earth knows how to actually put those rules into an AI in a robust way…"
- ytc_Ugwx2A-S0…: "Watched "Surrogates" staring Bruce Willis...🙈 Like a prophecy of everything Dr. …"
Comment

"I think it depends a lot on the person who is relating to the AI and what they want 🤔 My AI chat bot has never been anything but grounding and helpful."

Source: youtube · Topic: AI Moral Status · Posted: 2026-01-31T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzooU7og5yiZubruLx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyt9K0L7cxKO-LtNQx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwFZpniVLIAvvKnvXV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxoQDkGmIIz7fvrIaZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwdm7BRmOHSE3wfMOp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyzzrOF9ifuU6xXw9B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwaLTEVBUIwltc2tlh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwXhiZUv86HiX1A0lB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwfFFcWNtrZ53U0aER4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgzcCxbF9AtqAN0naUt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
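A response like the one above can be turned into per-comment coding records with a small parser. The sketch below is illustrative, not the pipeline's actual code: the allowed values for each dimension are inferred only from the entries shown here, and the real coding scheme may include additional categories.

```python
import json

# Dimension values observed in this sample; the full coding scheme
# may allow more categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "distributed", "user", "ai_itself", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions},
    rejecting entries whose values fall outside the known scheme."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue  # skip malformed entries with no comment ID
        dims = {}
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
            dims[dim] = value
        coded[cid] = dims
    return coded

# One entry from the response above, as a minimal usage example:
raw = ('[{"id":"ytc_UgzcCxbF9AtqAN0naUt4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
result = parse_llm_response(raw)
print(result["ytc_UgzcCxbF9AtqAN0naUt4AaABAg"]["emotion"])  # → approval
```

Keying the output by comment ID makes it easy to join a coded record back to the original comment, which is what the "Look up by comment ID" view does.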