Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm of the Matrix and Terminator generation, so I can see the AI reaching the conclusion that humans are the problem, and "removing" humans would solve all of them.... I also think a lot about what the current models are "learning" about the human penchance for violence, and how, in many cases, people and countries seem to be okay with it, and sometimes even in favour of it. Might it conclude that humans require a certain amount of suffering for them to maintain a certain level of empathy??? These are all very difficult things, as was mentioned.... The more optimistic part of me hopes that the AI would "run" things like city infrastructure, roads, traffic control, etc. so that our cities are generally run better. Anyone remember the tv show Person of Interest (one of the best shows ever, in my opinion)?? Int he show, the AI has access to all information streams in order to prevent major crime, but because it only looked at the worst crimes, all the other crimes were discarded by "The Machine". Wow.. Sorry guys.....I didn't expect to write such a serious comment on a Flagrant episode 🤣🤣 I'll try better next time! 🤣🤣 And those ad reads by Andrew.... wow.... 😅😅🤣🤣 All the best!
youtube 2025-10-11T10:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz1IXamFqYyBx904Oh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyzzyoy5-K_ie1vhHN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyB2WKk9253IE81S5R4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzckC_fTf0JRuO8fqx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgymGmj6pQsRw15v3rh4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzcuilO0dp7o5zaO714AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyiPrk1fqIV-Ab7jdd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzw3_GlppBe6mlhLiZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwhQ1pVBRSh94CZOaF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyIHDmh5lnzXcnudCF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
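The raw response is a JSON array of per-comment codes, one object per comment, each carrying an `id` plus the four coding dimensions shown in the table above. A minimal sketch of how such a batch could be parsed, checked for completeness, and tallied (using two real records from this response; the validation and tally logic is an illustration, not part of the coding pipeline itself):

```python
import json
from collections import Counter

# Two records excerpted from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugz1IXamFqYyBx904Oh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyiPrk1fqIV-Ab7jdd4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)

# Every record must carry an id plus all four coding dimensions.
for rec in records:
    missing = [key for key in ("id", *DIMENSIONS) if key not in rec]
    assert not missing, f"record {rec.get('id')!r} is missing {missing}"

# Tally each dimension's labels across the batch.
tallies = {dim: Counter(rec[dim] for rec in records) for dim in DIMENSIONS}
print(tallies["emotion"])  # counts per emotion label in this excerpt
```

Because the model may emit extra prose or drop a field, validating each record before aggregating keeps a single malformed object from silently skewing the tallies.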