Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I'm a 40 y old woman and Ai creeps me out , I don't know what to believe anymore…
ytr_UgySY-ir7…
When my kids are a bit older I will retrain from law to midwifery. I am 100 % re…
ytc_UgwiuG9R5…
Ai,is litterally destroying humanity and we are allowing it to happen
Ai,will d…
ytc_UgzAVC0KI…
AI was supposed to get rid of the 'assistants' whatever that is in the art world…
ytc_Ugwcep2z2…
I asked indian doctor that chatgpt suggest this and he mentioned go to chatgpt f…
ytc_UgwmaQ6If…
I'm starting to think money as we know it is going to become a redundant concept…
ytc_UgwA-jkoA…
8:50 No offense, but it doesn't. I was at a distance from my monitor and it stil…
ytc_UgwSUbxCe…
I believe Cara automatically does something to your works and even has a link to…
ytc_Ugxh5-aCa…
Comment
Robots lack compassion.. but robots also don't think things like "I want to kill fucking terrorists". They don't have racist or bigoted attitudes, etc. Robots do what they are programmed to do. If they make mistakes, it is because they were programmed poorly. I don't see any reason why AI of the future won't be much better than humans at determining if someone is a threat. And they won't make judgements based on bigotry either.
youtube
2012-11-23T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxVTDG_AcOqtX5Mat54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxP9paH9FALh-nIfnN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxZzZdUq5YTfEBRWuB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyvoV0RgNJfvfGauOl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxyCaSrLWjXndY9nGh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwcUcbQq_FNZ__zAWN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx6cXP0pv4_NK9-6IN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwCbLlgUMEG7OZrV9R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw8fQ-5ELa48r5vVPV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxVOqPyOnA2Rcu-oAB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"})
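Note that the raw response above terminates with `)` rather than `]`, so it is not valid JSON; a strict parser would reject the whole batch, which is consistent with every dimension in the Coding Result table falling back to "unclear". A minimal sketch of how such a response might be parsed tolerantly (the function name and the empty-dict fallback are illustrative assumptions, not the tool's actual implementation):

```python
import json

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Returns an empty dict when the response is not valid JSON, so the
    caller can code every comment in the batch as "unclear".
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        # e.g. a response ending in ")" instead of "]" lands here.
        return {}
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }
```

With this fallback, a single malformed batch degrades to "unclear" codes instead of crashing the coding run.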