Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is a tradeoff between safety and usefulness. An AI that’s not allowed to do anything that could result in human deception would be pretty useless
youtube AI Moral Status 2025-11-09T02:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           industry_self
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgyfCmtRz7UT1gbgipF4AaABAg.APHVMA79GtGAPI2IRvsZ0h", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyfCmtRz7UT1gbgipF4AaABAg.APHVMA79GtGAPPwbaoZSng", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxDmo18c2vvdm1yQ7h4AaABAg.APHHiM7DCVwAPHWQ23DUyR", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgxDmo18c2vvdm1yQ7h4AaABAg.APHHiM7DCVwAPWK3Fl7gKj", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgwYMxdlmmR1k5Cb-0l4AaABAg.APH5_YLb7FSAPNLBq6h-eO", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwYMxdlmmR1k5Cb-0l4AaABAg.APH5_YLb7FSAPNNUqWM5BO", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgwYMxdlmmR1k5Cb-0l4AaABAg.APH5_YLb7FSAPQByGkhMDf", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwYMxdlmmR1k5Cb-0l4AaABAg.APH5_YLb7FSAQocbBizfMN", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_Ugx-pyFjAE_0WygVeJx4AaABAg.APGzG9pVw6PAPWX6qsa6rs", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgxvPFxcUOJ-p6fySU14AaABAg.APGm9otqTaIAPHwcDjXLuX", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]
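One way to inspect a raw response like the one above is to parse it and check every coded value against the codebook. The sketch below is a minimal example, not the project's actual pipeline: the allowed category sets are inferred from the values visible in this response (the real codebook may define more), and the one-record `raw_response` string is a hypothetical stand-in for the full array.

```python
import json

# Hypothetical raw LLM response: a JSON array of coded comments
# (one abbreviated record; the real response holds one per comment).
raw_response = '''[
  {"id": "ytr_example", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"}
]'''

# Allowed values per coding dimension, inferred from this page's output;
# the full codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "industry_self", "liability"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "resignation"},
}

def validate(records):
    """Return (id, dimension, value) triples that fall outside the codebook."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw_response)
print(validate(records))  # → [] when every coded value is in the codebook
```

A check like this catches the common failure mode where the model invents a category outside the schema, which would otherwise silently corrupt downstream counts.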