Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Watching this as copium; I am hoping it is true, but I fear that even if it is an AGI wouldn’t need to have opinions or understanding of what it’s doing to do it. I.e., an AI might make coherent art or beat a chess grand master simply by brute forcing it. Thus, an AGI could start WWIII simply “attempting” to maximize available land area to put processing units. Could somebody in the comments please give me another copium dose and tell me that I am wrong? 😂😂🫩
Source: youtube · AI Moral Status · 2025-05-30T06:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwMeihC1a54PHDvegx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzFxLRxiZhYylZMAq94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwKGcsA_P0WfZwia3l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxysP1mM2rEuoAc8j94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYnLqbc1V5WkC8N4R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgziJc7AHEFXkB2Lr3t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwL655vyL6BBDETDjF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwinSEoqk214h7P8QZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxIwL6yg_RQypPM1NZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwO8vT2Omz8ha2IJqx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
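As a minimal sketch of how a raw response like the one above can be inspected, assuming the model always returns a JSON array of records keyed by `id` (the comment ID shown here is taken from the response; the lookup logic itself is illustrative, not part of the original pipeline):

```python
import json

# Raw LLM batch response, truncated to one record for the example.
raw = (
    '[{"id":"ytc_UgwMeihC1a54PHDvegx4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)

# Index the batch by comment ID so a single coding can be looked up.
records = {rec["id"]: rec for rec in json.loads(raw)}

coding = records["ytc_UgwMeihC1a54PHDvegx4AaABAg"]
print(coding["emotion"])  # fear
```

Indexing by `id` makes it easy to cross-check any coded comment on the page against the exact values the model emitted for it.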