Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment

> Let’s not say never. These things take months to train. For all we know, the next big thing could be cooking somewhere not at OpenAI - especially now that we know there’s money to be made

reddit · AI Harm Incident · posted 1702220732.0 (Unix epoch) · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
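Each coded record carries the four dimensions shown in the table. A minimal sketch of checking a record against the value vocabularies visible on this page (the allowed-value sets below are inferred from the sample records here, not the tool's full codebook):

```python
# Allowed values per dimension, inferred from the records shown on this
# page -- the actual codebook likely defines more categories.
VOCAB = {
    "responsibility": {"none", "company"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "mixed", "outrage", "indifference", "fear"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the vocabulary."""
    return [
        (dim, record.get(dim))
        for dim, allowed in VOCAB.items()
        if record.get(dim) not in allowed
    ]

ok = {"responsibility": "none", "reasoning": "unclear",
      "policy": "none", "emotion": "mixed"}
bad = {"responsibility": "none", "reasoning": "deontological",
       "policy": "none", "emotion": "mixed"}
print(validate(ok))   # []
print(validate(bad))  # [('reasoning', 'deontological')]
```

Records that fail this check are exactly the ones worth inspecting via the raw model output below, since they usually indicate the model drifted from the prompt's label set.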
Raw LLM Response
```json
[
  {"id":"rdc_kcpjcal","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_kcrsqem","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_kcrga57","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_kcpbe0g","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_kcny7sn","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
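The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a batch might be parsed and indexed to support lookup by comment ID (the helper name is hypothetical, not part of the tool; the payload is the batch shown above):

```python
import json

# Raw batch response as returned by the model (verbatim from above).
RAW = '''[
{"id":"rdc_kcpjcal","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_kcrsqem","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_kcrga57","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"rdc_kcpbe0g","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_kcny7sn","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a batch response and key each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(RAW)
print(codings["rdc_kcny7sn"]["emotion"])  # fear
```

In practice the parse step would also need to tolerate malformed model output (truncated arrays, prose around the JSON), which is why the exact raw response is surfaced here for inspection.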