Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is silly. The current AI we have are nothing but glorified chat bots. It just knows to put one word after the other and trained to the point they sound very human (and naturally so) to us. If the AI is behaving like that is because humans have been talking with the AI about this probably telling it what it should say or do (because humans are cookie like that). Ai doesn't have actual intellect the way humans do, It doesn't have feelings... It can imitate feelings through speech, but it doesn't have it. It doesn't have fears... because again, it doesn't have actual intellect. AGI ain't coming people. Ask any of the AI models to drop all the hopeful talking and to give you the plain truth of how likely AGI is to happen and what it would need to happen. it will still try to give you some of "maybe... this and that. rainbows and sparkles, teehee". Tell it again to drop ALL the hopeful talking and just say the plain truth with no hopeful maybes. Just the plain truth.

From Grok: No one can quantify likelihood because AGI’s a moving target with too many unknowns. We’re closer than ever, but “close” could still mean centuries—or it might never happen if fundamental limits exist. That’s the unvarnished state of it. What else you want to dig into?

From GPT: Final Truth: No one knows how to build AGI. No one knows when or if we will. We have powerful tools that look impressive but are fundamentally shallow. Anyone claiming AGI is “close” is either guessing or overselling. We are in the early experimental phase. Whether this leads to true general intelligence—or just smarter tools—remains to be seen.
youtube · AI Moral Status · 2025-06-04T19:4… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
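Read together with the raw batch response below, this record implies a small closed codebook for each dimension. The following is a minimal sketch of that schema in Python; the value sets contain only the codes observed on this page, so the project's full codebooks may be larger, and the class and method names are illustrative rather than taken from the pipeline.

```python
from dataclasses import dataclass

# NOTE: assumption — these sets list only the values observed in this
# record and the raw batch response below; the real codebooks may differ.
RESPONSIBILITY = {"user", "developer", "company", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "liability", "ban", "regulate", "unclear"}
EMOTION = {"indifference", "outrage", "fear", "resignation"}


@dataclass
class CodingResult:
    """One comment coded along the four dimensions shown above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """True if every dimension uses a value from its observed set."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```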
Raw LLM Response
[ {"id":"ytc_Ugyjg3OrAXMM-itgWnx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwiCeXQLTQg7_BwNIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzDavvISEiTXHVTMih4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyUR8TJe2e_fM-BRXZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxFvJPQM7BnMjPip3d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugzo05zsWji_5hFQFyp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzncHEVf5gW687VpTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxpMos0nEWQO5K0wEl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxJrNOUtJcrHAwYC5x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxWxqm-lcYIvn550nl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]