Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Imagine cheating in a game. You can do anything, endless resources.. thats the point where the challenge ends, the purpose of the game ends, etc. What if AI would be able to wipe out humanity? Would it still have goals? In these scenarios, we reason with evil human goals; like getting more powerful, getting lost of people who are in the way of succes, stuff like that. Would AI reason like that on long term? And what would the ultimate AI goal be anyways?
youtube Cross-Cultural 2025-10-31T19:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwDLAlj0el1CihflVd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz-HKLCOjaZvz3T0xx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxuCsIgt-brSN_rqQh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzrXCYrAANoCRZJG-R4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgziLt9V1_J6hNouxpN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyfECW5P6XU0XXCdzB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyUNNZGQtELR479Zw94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwbN5Xyx2i2apAl3ad4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx6uIyvft34aNc9qyV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwXrm4HtHXHg-fWyNp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
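The raw response is a JSON array with one coded record per comment. A minimal Python sketch for parsing and sanity-checking such a response is below; the allowed value sets are inferred from the values visible in this output, not a confirmed codebook, so adjust them to match the real coding scheme.

```python
import json

# Raw LLM response, abbreviated to the first two records for this sketch.
raw = '''[
 {"id":"ytc_UgwDLAlj0el1CihflVd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugz-HKLCOjaZvz3T0xx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]'''

# Allowed values inferred from this page's output; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "liability", "none", "unclear"},
    "emotion": {"fear", "indifference", "mixed"},
}

def validate(records):
    """Split records into (valid, errors) based on the ALLOWED value sets."""
    valid, errors = [], []
    for rec in records:
        bad = {k: v for k, v in rec.items() if k in ALLOWED and v not in ALLOWED[k]}
        if bad:
            errors.append((rec["id"], bad))  # record id plus its offending fields
        else:
            valid.append(rec)
    return valid, errors

valid, errors = validate(json.loads(raw))
print(len(valid), len(errors))  # → 2 0
```

A check like this catches the common failure mode where the model drifts off the codebook and emits an unlisted label, which would otherwise silently corrupt the coded dataset.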