Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgydzkdUF…: "Easiest way to destroy humanity is to make it turn on itself. AI could spread pr…"
- ytc_UgzzHR58B…: "I stay up until five am just using character ai and I spend all day on my phone …"
- ytc_Ugzp4IT1n…: "I think he is sugar coating it. There will be no jobs at all by 2030. AI and rob…"
- ytc_Ugxww1BeW…: ""You're beginning to sound like Jordan Petersen ..." "I understand your frustrat…"
- ytc_UgwNkHbri…: "One of the things AI can't do is have experiences. It can't be in a car crash, i…"
- ytc_Ugw-pVWBn…: "God I hate how dumbed down main STREAM NEWS is. The whole "can you tell which ve…"
- ytc_Ugy4Nn8fk…: "Yes thats true theres ..no one can compete in natural ..human naturally speskin…"
- ytr_UgzUw_768…: "One could just create a short movie clip cinematically. What exactly is your poi…"
Comment
The Uber self-driving car requires the human passenger to take control of the car every 1 mile? I think they're a little farther away from cutting out the drivers than they want to be.
Source: reddit · Topic: AI Harm Incident · Posted (Unix timestamp): 1491320660 · ♥ 16
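The post time above is stored as a raw Unix timestamp (seconds since the epoch) rather than a readable date. A minimal sketch of converting it to a UTC datetime with Python's standard library:

```python
from datetime import datetime, timezone

# Unix timestamp as stored in the comment record above.
posted = datetime.fromtimestamp(1491320660, tz=timezone.utc)
print(posted.isoformat())  # 2017-04-04T15:44:20+00:00
```

So this comment dates from early April 2017, around the time of Uber's self-driving pilot programs.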
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_dfu26yo","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_dfu8qoc","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_dftvnyo","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"unclear"},
{"id":"rdc_dfu347c","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_dftia6a","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
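Each raw response is a JSON array with one object per batched comment, keyed by comment ID, with a closed vocabulary per coding dimension. A minimal sketch of parsing and validating such a response; note that the allowed value sets here are inferred only from the labels visible in this page, not from a documented schema, so a real codebook likely has more labels:

```python
import json

# Allowed values per coding dimension; inferred from the sample output
# shown above, so actual codebooks may define additional labels.
ALLOWED = {
    "responsibility": {"company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "unclear"},
    "emotion": {"outrage", "indifference", "mixed", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, dropping malformed rows."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue  # skip rows with no comment ID
        # Keep only rows where every dimension carries a recognized value.
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Applied to the array above, this yields five codings; the entry for rdc_dftia6a matches the "Coding Result" table shown for the reddit comment.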