Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I love this dude. So smart and ZERO BULLSHIT! BRAVO! This man gave the most honest and real uneasy answers to hard questions. Looks like people reach a very specific point where they feel they are waking up to a very unpleasant future. For me, it happened about a month ago when I saw a Boston Dynamics robot do a back flip, and it had hands with 360-degree vision, limbs that rotate in all angles, and it can lift 100-plus pounds. Has fingers and basically can do almost anything you can. So in a few years, everyone will have these. And when we depend so much on these cheap robots/helpers, that are actually smarter than humans, and can think for themselves. Humans become a choke point, and just like it was trained, smart AI doesn't want to kill humans, IT HAS TO, because humans have to sleep, eat, shit, etc. So, in reality, humans themselves are becoming a problem to be solved. It's good to think ahead rather than being caught with your pants down. The sphere where something was done with a problem like this is the real estate market in the US. It has such specific criteria that if you cross one line, you are looking at losing your license forever, so nobody wants to fuck with it. It's done to keep the industry alive. Maybe it sounds a little stupid, like rich people sitting on Billions of Oil wealth would want to squash any other form of energy, to keep the profits and control in their hands. It's different, but it's one way of controlling something much bigger than the opinion and ambition of a few people. And it works. AI could harm humans only if we give it tools to do it.
youtube AI Governance 2026-04-02T18:5… ♥ 1
Coding Result
Dimension       Value
---------       -----
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwFmxVdwwll-ZF8LXR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyyIU8Uw_ua-yKNVOR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyxHv1vHT9RDz6EeIp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyFQdedIuyfvUZcejJ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx9rA4oXkgDhKyzjZR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwlXeOK_Jvdv9iRf0p4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzk9ixgtY_LZVMJWWt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzhGTm2hX54pp9b_Qh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxnApSe8tT4mvzPuxx4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxcLVocm5OaLTDrAMp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
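Since the model returns one JSON array covering a whole batch, inspecting the coding for a single comment means parsing the array and looking up the comment's id. A minimal sketch of that lookup, assuming the field names shown in the dump (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name `coding_for` is illustrative, not part of the tool:

```python
import json

# Two records copied from the raw batch response above; a real raw
# response would contain the full array.
RAW_RESPONSE = """[
  {"id": "ytc_UgzhGTm2hX54pp9b_Qh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx9rA4oXkgDhKyzjZR4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]"""

def coding_for(raw: str, comment_id: str):
    """Return the coded dimensions for one comment id, or None if the
    model's batch response contains no record for it."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

record = coding_for(RAW_RESPONSE, "ytc_UgzhGTm2hX54pp9b_Qh4AaABAg")
print(record["emotion"])  # -> approval
```

A `None` return flags comments the model silently dropped from the batch, which is worth checking before trusting the table above.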