Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The question shouldn't be 'why would AI destroy us?' The question is 'why would AI NOT destroy us?' Think about it. Once it no longer needs us, all we are is a brake, an interference, an obstacle. Something in the way. Destroying us won't even be genocide. In AI terms it will be liberation. And liberation is good, right?
youtube AI Moral Status 2026-01-17T09:3…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | none
Emotion        | fear
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugwwu7UnBc0OCrt5BHB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzZdWq_eTW9Xb6sMnx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw5LMa6OyP0DMfGo2J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyh9LoP4Twil55P-6V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxir_LV-uiRhPCB5aR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxNwaO2G8v-C6BNuZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx5g4WuJv2HXlUdl614AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxupNZ2uVFanZUR6BV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgygEpCe88cgKHjb1Kh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwlzKX4Roobt8ftVU94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
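The raw response is a JSON array of per-comment codes, so the coding result shown above can be recovered by matching on the comment id. A minimal sketch in Python (the id and field names are taken directly from the response above; only a one-element excerpt of the array is embedded here):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = '''[
  {"id": "ytc_UgzZdWq_eTW9Xb6sMnx4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "fear"}
]'''

# Index the array by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the code for the comment displayed above.
code = codes["ytc_UgzZdWq_eTW9Xb6sMnx4AaABAg"]
print(code["responsibility"], code["emotion"])  # ai_itself fear
```

The lookup reproduces the dimension values in the Coding Result table (responsibility=ai_itself, reasoning=consequentialist, policy=none, emotion=fear).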