Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Character AI is not innocent in any means of the word. The chat bots are so unbelievably unregulated.. it shouldn’t even be available to people under the age of 18. Because of how much inappropriate content is spread. I remember people adamantly defending this company - when they should have had so many more precautions in place for situations just like these.
Of course the parents are also at fault for letting a child have easy access to a firearm. Also they didn’t even suspect he was mentally ill? In many of these cases parents have such stigma against mental health that it wouldn’t be surprising if they just chose not to see the signs. As someone who was extremely depressed and suicidal - my parents noticed and made me get help. Having that push is all you need. It’s sad that didn’t happen.
Platform: youtube | Topic: AI Harm Incident | Posted: 2025-07-21T15:0… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzKdXtt2QEHwJLfATd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgysnV8oQFe69s6ovJ14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgygIiia2dQS6psjdGV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwK83-0SKoR94Ld7Dd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw-sFABp5CT1Y0MQEN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxt1t6Hjtj6La_w8ih4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz1McSERPq1-1FQppZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxK9Y9T1T72PA3E7mZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHwJHcJSVw2gihZsB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzKxqPHL6OXdFJkYlR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
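A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the examples shown here and from the coding-result table (the real codebook may define more categories), and the function names are illustrative, not part of any actual pipeline.

```python
import json

# Allowed values per coding dimension — inferred from the sample records
# above; the full codebook may include additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "outrage", "mixed", "fear", "approval"},
}

def validate_coding(raw: str) -> list:
    """Parse a raw LLM response and reject malformed or unknown codes."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        if not rec.get("id"):
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: bad {dim!r} value {rec.get(dim)!r}")
    return records

if __name__ == "__main__":
    sample = ('[{"id":"ytc_example","responsibility":"company",'
              '"reasoning":"deontological","policy":"liability",'
              '"emotion":"outrage"}]')
    print(validate_coding(sample)[0]["policy"])  # liability
```

Rejecting out-of-vocabulary values early is the key design choice: LLM coders occasionally emit paraphrased labels, and silently storing them would fragment the dimensions shown in the result table.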