Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean, that's expected, we build an AI to be perfect, to look for perfection, as long as we don't also build laws and walls into them to do that inside a frame of morality, safety and not allow certain actions, that AI will use anything it can to accomplish the goal you give them, since, at least for now, an AI has no morality or feelings, it's just a numbers game, and it's only goal is to succeed on their "mission", nothing exists outside that mission, and everything is fair play unless it's stated. An AI it's the perfect example of what the real world is, as long as there is a way, someone will find it and play that, that's why we need regulations and laws same goes for AI, we need to limit their actions, it's quite simple to understand why that's needed, just like we have those to stop people from doing something bad to benefit themselves, we need to code AIs in a way they can't act in a way we don't want them to, or they will, because, sadly, acting maliciously is always the easiest path to fast success.
YouTube AI Harm Incident 2025-09-11T10:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzC5nOfEKp81GCwoXl4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgyaoXPP8dKgiQeN8p14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgyQjQ2ySfhN9GTf8St4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgxpTOVvOtBlumzXDgJ4AaABAg", "responsibility": "user",      "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyIUkePR3YLKOhSTkx4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugwzd6UVsTsxGdoPIPl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugy2cIR1SGRoiyf9Fyp4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgxTCxZn3VL5P1MRQhJ4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgwYQOW6snXi9PWF_6x4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugxwu01Y5dea0w6IKEt4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"}
]
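A raw response like the one above can be parsed and sanity-checked before the per-comment codes are stored. Below is a minimal sketch; the allowed category values and the `ytc_` id prefix are inferred only from the codes visible in this export, not from the project's actual codebook, so treat them as illustrative assumptions.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this export;
# the project's real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Require a comment id with the expected prefix (assumption: "ytc_").
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue
        # Require every dimension to carry a known category value.
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one record from the response above.
raw = ('[{"id":"ytc_UgwYQOW6snXi9PWF_6x4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_coding_response(raw))
```

Validating against a closed set of categories this way catches the common failure mode where the model invents a label outside the codebook; such records are dropped rather than silently stored.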