Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What's most interesting about this video, and the scrutiny of so-called "AI" is the examining of HUMAN BEHAVIOR/intelligence.

We know current "AI" is goofy, imitative, hallucinogenic, sloppy and buggy. As I say: Skynet will be hackable. Count on it. It's impossible of any single human to fully comprehend any complex software project or program.

But I find it more interesting and useful to reflect upon HUMAN BEHAVIOR. History is a profound and rich teacher. We know full well that:

1) "AI" and the machines it runs will be used for the pursuit of human self-destruction, obviously at a new higher level of destruction. This of course includes WARS.

2) "AI" and the machines it runs will be used to generate $MONEY$ for the FEW at the detriment of the many. We never have, and perhaps never will escape the human default of FEUDALISM, aka exploitation of fellow humans.

3) Loonies with profoundly distorted Inner Worlds will attempt to shove their distortions upon fellow humans and miracle planet Earth, our only home through the use of AI. I could even expect AI to get shoved into RELIGION. Stupider things have happened, such as human sacrifice and "The Rapture".

CONCLUSION: As ever, it's not the technology that's at fault. It's the people who MAKE IT and those who USE IT to the detriment of our Outer World and other's Inner Worlds that are AT FAULT. Solve the SOURCE PROBLEM. Destroying all the technology doesn't actually change any of the actual problem. US. We are required to GROW UP.
youtube AI Governance 2024-05-03T20:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwCWE0em3fJlS-mnsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzS0sGVdHfd8YU9XKd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwxKkEUNO0Mr6wV6rd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwyYGVQ0DeVW-ESQyt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxDDHigGj254IQxoal4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxsI3s5JCX1VkAYD694AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwwrQR7apurRcVAtQZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxntNVBxjy8dtLDzaV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwoEZKBswLBnGQpaSt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxlurhnOpdwfNnxt4t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
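When inspecting raw responses like the one above, it helps to validate that every entry uses only the expected labels for each dimension. The following is a minimal sketch; the category sets are assumptions inferred only from the values visible in this response, not a documented schema, and `parse_codings` is a hypothetical helper name.

```python
import json

# Allowed labels per dimension, inferred from the raw response above
# (an assumption, not an authoritative codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "fear", "mixed", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject entries with unknown labels."""
    entries = json.loads(raw)
    for entry in entries:
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{entry.get('id')}: unexpected {dim}={value!r}")
    return entries

# Usage with a one-entry example in the same shape as the response above:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"mixed"}]')
codings = parse_codings(raw)
print(len(codings))  # → 1
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently corrupt downstream tallies.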