Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You all act like AI is magically doing "bad things" on its own. Like it has a mind of its own and could suddenly decide to harm human beings or something. It CAN'T. It's HUMANS DOING THAT. That's how these rich dipshits avoid accountability, don't you get it? It's a tool, like any other tool it can be used to elevate human beings, free them up from hard toil and labor or it can be misused and abused to hurt, enslave them. As Jacque Fresco said (they literally consider him to be the father of "SolarPunk") : "Technology is just so many millions tons of junk, unless it enhances the lives of all men." It's the PROFIT MOTIVE that you should be worried about. Machines have no ambition, no gut reaction, they don't want to take over. That's HUMAN PROJECTION stemming from insecurity, scarcity and lack of knowledge about what does it mean to be a human being. Also, if AI "misbehaves" - ever heard about redundancy? Why should there be just one circuitry if there can be 5 to avoid issues like this? Airplanes, elevators, factories, do that.
youtube AI Governance 2025-08-26T17:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugz3qTS819wIgZshvBl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWwj-vUslVBQdFn354AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz4anTgjdsGbSssSZJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwaHX4mvwUBpBLGR8J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzIpceCfSqOdxPm3mx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
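A raw response like the one above could be parsed and validated along these lines. This is a minimal sketch, not the pipeline's actual code: `parse_coding_response` is a hypothetical helper, and the `ALLOWED` sets are assumptions inferred only from the category values visible in this log (the real codebook may define more).

```python
import json

# Allowed values per coding dimension -- assumed from the values seen in
# this log; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "mixed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "mixed", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records, each with
    an 'id' plus one value per dimension) into {comment_id: codes},
    silently dropping any record containing an unrecognised value."""
    records = json.loads(raw)
    out = {}
    for rec in records:
        codes = {k: v for k, v in rec.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in codes.items()):
            out[rec["id"]] = codes
    return out

# Example using one record from the response above:
raw = ('[{"id":"ytc_UgyWwj-vUslVBQdFn354AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')
codes = parse_coding_response(raw)
# codes["ytc_UgyWwj-vUslVBQdFn354AaABAg"]["policy"] is "liability"
```

Keying the result by comment id makes it straightforward to join each coded record back to the comment it describes, as this view does.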