# Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
Random samples:

- "So relatable, I never left the chat hanging just in case AI hold a grudge on me …" (ytc_Ugxqb-9-8…)
- "is it just me or is this not even scratching the surface? like, I knew all this …" (ytc_UgzwROVTU…)
- "I don't trust those self driving cars! I can't believe people would trust those …" (ytc_UgyIuqFcQ…)
- "Most of what i asked for was returned with a long text of excuses and explanatio…" (rdc_ks4jcj3)
- "Blackbirn definitely met up with her kid at college to bring dome questions to t…" (ytc_Ugy-nNw8d…)
- "Go break AI, I want AI that does my work and gives me free time to do art.…" (ytc_UgzjEEwNl…)
- "i don't think anyone with a brain would deny that AI is being shoved where it be…" (ytc_UgwEtJGsw…)
- "I don't stand against ai. Yes human art has more emotional value but ai is somet…" (ytc_UgydN3ESt…)
## Comment

> There are no long term risks. Llms can't even do the most basic shit correctly

- Source: reddit
- Topic: AI Governance
- Posted (Unix timestamp): 1716097435.0
- Likes: ♥ 2
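The numeric field 1716097435.0 on the card above is a Unix epoch timestamp. A minimal sketch of rendering it as a readable UTC date for display:

```python
from datetime import datetime, timezone

# Unix epoch timestamp taken from the comment card above
posted = 1716097435.0

# Convert to a timezone-aware UTC datetime for display
dt = datetime.fromtimestamp(posted, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2024-05-19 05:43:55 UTC
```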
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
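Each coded record in the raw model response maps one-to-one onto the dimension table above. A minimal sketch of that mapping, assuming the first record in the response (`rdc_l4tad0f`) is the comment shown; the helper name `to_markdown_table` is illustrative:

```python
# Coded record as returned by the model (values copied from the result above)
record = {
    "id": "rdc_l4tad0f",
    "responsibility": "none",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "indifference",
}

def to_markdown_table(rec: dict) -> str:
    """Render the four coding dimensions of one record as a two-column markdown table."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        rows.append(f"| {dim.capitalize()} | {rec[dim]} |")
    return "\n".join(rows)

print(to_markdown_table(record))
```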
## Raw LLM Response

```json
[
{"id":"rdc_l4tad0f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_l4qlm3c","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"rdc_l4qn2sy","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"rdc_l4rdt6d","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"rdc_l4p8b79","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
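The look-up-by-comment-ID view can be sketched by parsing the raw response and indexing its records by their `id` field. A minimal sketch, using two records copied from the array above:

```python
import json

# Raw model response: a JSON array of coded records (subset of the array above)
raw_response = """[
{"id":"rdc_l4tad0f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_l4qlm3c","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]"""

# Index coded records by comment ID for constant-time lookup
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

print(by_id["rdc_l4tad0f"]["emotion"])  # indifference
```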