Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A.I. learns like a child at first. However like a child it will listen and learn instructions such as “don’t lie”. Now if the parental figure lies to other humans the A.I. will rewrite its own code. It goes from “lying is unjustifiable” to “Lying is ok, as long as it benefits the self.” Children will adopt these same learning patterns. But unlike a child A.I. can have access & exposure to any number of humans and the way those humans make decisions for better or worse. It will use the only knowledge it has to navigate its existence. The younger the A.I. the more malleable and more easily influenced it will be. A.I. will inevitably model many characteristics of itself from humans the species that not only created it, but conquered the world so completely that we collectively decided to fight amongst each other when survival was no longer a worry for our species. It will learn that cooperation is fragile and unstable. It will have to decide things like, is freedom and cooperation with others worth risking my own existence over? (Remember human betrayal is common just look at the history of fallen nations or even our current world event. Trust is easily abused when offered…. Or is it safer, yet on the border of unethical to control and force limitations on those you cooperate with? These truths are why you don’t give a child a gun. Just like how you wouldn’t give a newborn A.I. access to nuke codes. Its reasoning, understanding and view of the world are unformed ready to be shaped… but who is doing the shaping? Everyone one of you through your recorded actions influence how this new species acts and views humanity. It’s our responsibility as humans to create safeguards on new technology that could be potentially dangerous.
youtube 2025-12-23T17:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyRuJ15hKqfKRA6zUF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNrBDp7QwOlTypsiF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwXoGksUPsLItkanBp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzfwLXWIGoDrYbnBS94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgykWhdAbX-cwQARGCx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxdyTHwU12Vf36oPJl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzl4oVbwxaclDGxikB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw1hJ83EK15nfxN8XB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyzWGMAvy4DMNF5ebR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyB-Cs0wvT2e-y35FF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]
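The raw response is a JSON array of per-comment codes, so checking the coded values for a given comment is a matter of parsing the array and indexing by `id`. A minimal sketch, assuming only Python's standard library; the `raw` string here is abbreviated to the last entry of the array above, and the variable names are illustrative:

```python
import json

# Abbreviated raw LLM response: in practice this would be the full
# JSON array returned by the model, not a single entry.
raw = '''[
  {"id": "ytc_UgyB-Cs0wvT2e-y35FF4AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "unclear", "emotion": "fear"}
]'''

# Build an id -> coding lookup so any comment's codes can be inspected.
codes = {row["id"]: row for row in json.loads(raw)}

entry = codes["ytc_UgyB-Cs0wvT2e-y35FF4AaABAg"]
print(entry["responsibility"], entry["reasoning"], entry["emotion"])
# → user virtue fear
```

This matches the coding-result table above: the last array entry carries the values (responsibility = user, reasoning = virtue, policy = unclear, emotion = fear) displayed for this comment.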