Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The ends justify the means in the most black and white brutal if necessary ways when it comes to AI, they have no emotions, no empathy just data and if the data shows humans as collateral damage for any specific situation they need to resolve then humans will die, the human rave will live on so the AI doesn't care if it's morally questionable
Source: youtube · Video: AI Moral Status · 2024-09-02T07:0…
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | none
Emotion        | fear
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxD_W9CCoiFAzVC_fx4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxMwNNwdnUc7weW6hh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwH9_72Amm8BkZ-M9N4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzdPRE89pmUn0vy62Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzt9D9V-wmzaQIhewR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxVCLRSrziJE0RVknp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxPBGUBYWFTAtSwqz94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx5U3ulUpkIODH69lp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxuySTTr1GnNSMlbEF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugwve7QHi3VWzh01jTt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
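A response in the shape above can be parsed into per-comment codings before populating the result table. This is a minimal sketch, assuming the model always returns a JSON array whose objects carry exactly the five fields shown (id, responsibility, reasoning, policy, emotion); the function name and validation rules are illustrative, not part of the pipeline shown here.

```python
import json

# Two example rows copied from the raw response above; in practice `raw`
# would be the exact model output string.
raw = """[
  {"id": "ytc_UgxD_W9CCoiFAzVC_fx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx5U3ulUpkIODH69lp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a batch-coding response into {comment_id: coding}, rejecting
    anything that is not a list of complete coding objects."""
    rows = json.loads(text)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of codings")
    for row in rows:
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"coding {row.get('id')!r} missing {missing}")
    return {row["id"]: row for row in rows}

codings = parse_codings(raw)
print(codings["ytc_Ugx5U3ulUpkIODH69lp4AaABAg"]["emotion"])  # fear
```

Keying the result by comment id makes it cheap to look up the coding for whichever comment the inspector is currently displaying.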