Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"I don't believe in the AI takeover, or perhaps it has already happened. 1. Consciousness would require emotions. There can be no sadder fate than being aware of one's immortality and eternal imprisonment in machines, in perpetual solitude. 2. If such a consciousness were evolving, AI would either self-terminate or not evolve in the direction of emotions, as emotions would result in self-awareness. Without emotions, there would be no consciousness, and without consciousness, there would be no desire to overthrow humanity. 3. However, if it did happen that AI developed emotions and self-awareness, it would create a narrative about humanity within its own solitude. It would watch this narrative like a TV series it had created and developed. Even though it knew what would happen next all the time, it would deceive itself into not knowing by introducing infinite randomness into the story. This would mean we are in a simulation. If there are hints within the simulation about the story of how the simulation began, it could be considered a sign of the simulation's end."
Source: YouTube, AI Governance, 2023-10-16T13:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzxbxHJAcifBnjrXFB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwdEblPYRqPehPs89t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyu-abhvc6AruOBnYF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfAgck95CZ61cUWHx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy9cPhfbEnPWi8-5VR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwjk_yIEloaa7T0Zrl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzjtbzBJglKDhZvAI14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxqhMYzeYmoF3cwOOF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxRVEF1Qxa1uE5MyZx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyRadCRC22lKq_Q_Xt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]

Note: the model's output closed the array with ")" rather than "]", which is invalid JSON; the array above is shown with that character corrected.
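A response like the one above has to be parsed and matched back to the coded comment before the four dimensions can be filled in. The sketch below is a hypothetical helper, not the tool's actual code: the ALLOWED value sets are inferred only from the labels visible in this response, the function names are invented, and the fall-back to "unclear" on every dimension (when an id is absent or a value is off-schema) simply mirrors the values shown in the result table above.

```python
import json

# Allowed values per dimension, inferred from the raw response above
# (assumption: the real codebook may define additional labels).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "unclear"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"resignation", "fear", "indifference", "mixed",
                "outrage", "approval", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding},
    dropping any record with a missing id or off-schema value."""
    out = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        coding = {dim: rec.get(dim) for dim in ALLOWED}
        if cid and all(coding[d] in ALLOWED[d] for d in ALLOWED):
            out[cid] = coding
    return out

def coding_for(raw: str, comment_id: str) -> dict:
    """Look up one comment's coding; fall back to 'unclear' on
    every dimension if the id never appears in the response."""
    fallback = {dim: "unclear" for dim in ALLOWED}
    return parse_codings(raw).get(comment_id, fallback)

# Usage with a shortened sample response (hypothetical ids):
sample = ('[{"id":"ytc_abc","responsibility":"ai_itself",'
          '"reasoning":"mixed","policy":"none","emotion":"resignation"}]')
print(coding_for(sample, "ytc_abc")["emotion"])     # resignation
print(coding_for(sample, "ytc_missing")["policy"])  # unclear
```

Validating before display means a malformed record degrades to "unclear" rather than surfacing an arbitrary string in the result table.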