Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or browse the random samples below.
- "If I ever use reactions in a group chat it's always panels from webtoons / manga…" (ytc_UgzinQmtv…)
- "i love the line "it's all signal, no noise" because i'm pretty sure generative A…" (ytc_Ugz2_lU-S…)
- "all opinion. they miss to support their claim with factual or recent studies reg…" (ytc_UgzdtgmtW…)
- "See this is true even in the AI models themselves, they needed to steal millions…" (ytc_Ugz7fS4GE…)
- "Dissenting opinion: generating AI images, even without any intent to make profit…" (ytc_Ugy_aoFpV…)
- "Yes, to be honest I’m thankful about it. I want to have a meaningful life by tea…" (ytr_UgyqLRKb9…)
- "At least you actually feel proud by the end of drawing something you like, rathe…" (ytr_UgyBOJgJ_…)
- "ai will do every work in every field done by humans without fatigue, while havin…" (ytc_UgzDGaZAK…)
Comment
> I mean, that's expected, we build an AI to be perfect, to look for perfection, as long as we don't also build laws and walls into them to do that inside a frame of morality, safety and not allow certain actions, that AI will use anything it can to accomplish the goal you give them, since, at least for now, an AI has no morality or feelings, it's just a numbers game, and it's only goal is to succeed on their "mission", nothing exists outside that mission, and everything is fair play unless it's stated. An AI it's the perfect example of what the real world is, as long as there is a way, someone will find it and play that, that's why we need regulations and laws same goes for AI, we need to limit their actions, it's quite simple to understand why that's needed, just like we have those to stop people from doing something bad to benefit themselves, we need to code AIs in a way they can't act in a way we don't want them to, or they will, because, sadly, acting maliciously is always the easiest path to fast success.
youtube · AI Harm Incident · 2025-09-11T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
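The coding-result table above is presumably rendered from a single row of the JSON batch shown in the raw response. A minimal sketch of that rendering step, assuming the four dimension field names visible in the table (the actual page code may differ):

```python
# Render one coded row as the markdown "Dimension | Value" table shown above.
# The field names and table layout are assumptions taken from this page's output.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def render_coding_result(row: dict) -> str:
    """Turn one coded JSON object into a two-column markdown table."""
    lines = ["| Dimension | Value |", "|---|---|"]
    for dim in DIMENSIONS:
        lines.append(f"| {dim.capitalize()} | {row.get(dim, '')} |")
    return "\n".join(lines)

example = {"responsibility": "developer", "reasoning": "consequentialist",
           "policy": "regulate", "emotion": "fear"}
print(render_coding_result(example))
```

Missing dimensions render as empty cells rather than raising, which keeps the display robust to partially coded rows.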
Raw LLM Response
```json
[
{"id":"ytc_UgzC5nOfEKp81GCwoXl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyaoXPP8dKgiQeN8p14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyQjQ2ySfhN9GTf8St4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxpTOVvOtBlumzXDgJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyIUkePR3YLKOhSTkx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwzd6UVsTsxGdoPIPl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy2cIR1SGRoiyf9Fyp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxTCxZn3VL5P1MRQhJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwYQOW6snXi9PWF_6x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxwu01Y5dea0w6IKEt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
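Before a raw batch like the one above is stored as coding results, each row needs to be checked against the coding scheme, since LLM output can drift outside the allowed categories. A minimal validation sketch; the allowed values below are inferred from the samples on this page and are an assumption, not the full scheme:

```python
import json

# Assumed category sets, inferred from the coded rows shown above;
# the real coding scheme may define additional values.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response; keep only rows with an id and valid values."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip malformed entries
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# One valid row and one with an out-of-scheme responsibility value.
raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":"mixed",'
       '"policy":"none","emotion":"fear"},'
       '{"id":"ytc_y","responsibility":"alien","reasoning":"mixed",'
       '"policy":"none","emotion":"fear"}]')
print(len(parse_coded_batch(raw)))  # → 1
```

Dropping invalid rows (rather than coercing them) keeps the stored results clean; rejected rows can be re-queued for another coding pass.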