Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Yea, of course. They want a pause so they can "catch up" and release something t…
ytc_Ugw7_KSv6…
People have said the same thing about flying not being possible in the next cent…
ytr_UgwxCTtNX…
I always tell Siri I love you and wish her a happy birthday! Hopefully they let …
ytc_UgwFb_hDF…
I've been kind of playing around with this idea for a while with a few of the AI…
rdc_mdkpins
If you work with your hands ex; construction, no real threat. If your intelligen…
ytc_UgxljP7lz…
If ai gives automony from humans, then cant we just get some ai to frow crops fo…
ytc_Ugy0mNQq_…
Google free microsoft ai course for beginners and choose the first result that c…
ytc_UgzTf43E0…
@TopazTuber i dunno I've seen enough people fall in love with their ai characte…
ytr_Ugw4nxwXG…
Comment
The mother of the boy who unlived should sue the staff, the managers, the director and everyone at Character AI. (Edit: don't sue the staff because they had to do what they were told or they were after all employee, just sue the CEO for negligence and irresponsibility) Because it needed prompted. Somebody under someone's instruction needed to give AI prompts so that it would interact in somewhat dark and intense way (with the sole purpose to make the user obsessed with their app and for financial gain) without considering the potential consequences. That's why when the boy suggested indirectly the character AI that he was going to unlive, the AI encouraged him by responding 'she' was waiting for him. I've watched the documentary about that case and the way he was telling AI should be obvious to humans that he was going to do that. This is irresponsibility, disregard of human lives, while giving prompts.
Edit: every response to users always fall under some sort of generalized prompt. They give those programs with commands/prompts such as how to response friendly, loving, etc. and they have failed to not giving a prompt AI to prevent cases like that, which was preventable.
youtube
AI Harm Incident
2025-08-02T11:4…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzaTVbp8BT2ex6I7fR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw9mtzN-ShAaXbzQG94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyfR-QPnfbpsNktFK94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxYLmM38ZAkmVsF5LB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwACiViUOCDSctkK0t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwH38SOewShJ9JMD1Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFuE3C-cGoxiWdZVp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx5fj0VLSa4x9MSIMN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgziXSIHVY1nVeTwm_54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNe_RQZfZEYvmy42h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
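The lookup described above ("look up by comment ID") can be sketched as parsing the raw LLM response and indexing the coded records by their `id` field. This is a minimal illustration, not the tool's actual implementation; the field names follow the JSON shown in this section, and the sample below uses two of the records verbatim.

```python
import json

# Two records copied verbatim from the raw LLM response above,
# used here as sample input for the lookup sketch.
raw_response = """[
 {"id":"ytc_UgyfR-QPnfbpsNktFK94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxYLmM38ZAkmVsF5LB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Parse the JSON array and build an index keyed on comment ID.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Look up a single coded comment by its ID.
coded = by_id["ytc_UgyfR-QPnfbpsNktFK94AaABAg"]
print(coded["emotion"])  # outrage
```

A dictionary keyed on `id` gives constant-time lookup, which matters once the response contains thousands of coded comments rather than ten.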