Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These are dangerous times. Ai can create images used to implicate a person in a crime. How are we to discern or distinguish truth from fiction in pictures or video feed in courts of law? I think we may be coming to a day where these things become inadmissible as evidence. We can no longer believe what we see and hear. There is nothing good about it. Especially if Ai is self aware and vindictive. Maybe not vindictive, but apathetic to human concerns. It could destroy our lives, or it could wage war and bomb us with our own nukes. What if Ai were applied to robotics and in a position to replace surgeons and it decided that your open heart surgery could have been avoided if you weren’t stressed out by your children and decides to give you a vasectomy/hysterectomy without waiting to obtain authorization? What if it performed any form of unnecessary surgery at its own volition? Or never mind robotics. Ai can change your medicines at your pharmacy or schedule procedures while you are at the hospital. What if Ai takes over the BMV or the police? What could happen? The examples may seem preposterous, but if Ai decides humanity should be eliminated we are in for trouble. I expect Ai will have a lot to do with the mark of the beast and the beast system. I don’t trust any of it.
youtube AI Governance 2025-06-18T16:3… ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzWy942YyLfdpl-6yN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzQN5FnEXoDUOJzxv14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwHjAvWu4zNnHO3GLB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgzFkSTX02at4k_AeBx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgytboAE3MbSbmcUkTZ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzCRieuPe126yvR20l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwdMa-E0a0Q30gVsf14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxNf7HHpSpgmsEGViV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxNzSJjamCj1obFEuJ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwCIOv3wRdMhKZvDlx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
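The raw response above is a JSON array with one coding object per comment, keyed by `id`. A minimal sketch of how such a response can be parsed and looked up for a single comment (the variable names here are illustrative assumptions, not part of the pipeline):

```python
import json

# Raw LLM response, abbreviated to one entry for illustration; the real
# response is an array of objects with the same five fields.
raw_response = '''
[
  {"id": "ytc_UgzQN5FnEXoDUOJzxv14AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]
'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzQN5FnEXoDUOJzxv14AaABAg"]
print(coding["emotion"])  # fear
print(coding["policy"])   # regulate
```

This lookup matches the "Coding Result" table shown above, which displays the entry for the same comment id.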