Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Even if we can't understand it, that doesn't mean it falls under Murphy's Law as "something that CAN happen"... take human extinction for example: there could also be an extinction of cockroaches and an extinction of animals. If humans have a chance to be driven extinct by AI, given it randomizes through a black box, then the extinction of cockroaches would be possible too, but is it? Given all existing human effort in collaboration with AI, maybe we could nuke the earth and burn the surface? But really, can we? Can't cockroaches live underground? Can we really kill all cockroaches? Can we filter the face of the earth clean of cockroaches? NO. Absolutely not. Humans don't have the power to play God; we can't all of a sudden decide that there are only 8 planets left in the solar system instead of 9. Things we can imagine with words, AI can write in a novel, but dreams and thoughts cannot necessarily be converted into the physical realm to take effect, so the chance of human extinction is zero, as is everything else that humans, or anything else, would need to defy physics to do. Life goes on, no matter human or AI, and the physics of this world is the fundamental rule that determines the world for both human and AI.
YouTube AI Governance 2025-10-16T04:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw87q9zbZpHndyBToh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwgqUoJfgOROg0z8v94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxHMLXRr5iWJK8TPC14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxBIFZaYUxl9uHUwgR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy87F2xB863izA0VL54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxmWvMLtxhgNNtrt4V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxdNMxsQXIFl-zWOat4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "fear"},
  {"id": "ytc_UgxJfABiBpM6-L5_NN14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwBi8g74GabhQa4XVp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwaSWTdB9heV1BMBnd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
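The raw response is a JSON array with one object per comment, keyed by comment id and carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and a single comment's coding looked up is below; the two rows shown are copied verbatim from the array above, and the variable names are illustrative, not part of the tool.

```python
import json

# Raw LLM response: a JSON array of per-comment codings. Only two of the
# ten rows from the response above are reproduced here for brevity.
raw = '''[
  {"id": "ytc_UgxHMLXRr5iWJK8TPC14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwaSWTdB9heV1BMBnd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgxHMLXRr5iWJK8TPC14AaABAg"]
print(coding["emotion"])  # fear
```

Indexing by id mirrors how the coding-result panel above maps one comment to its dimension/value rows.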