Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with this argument is that the act of lying needs a basis in the understanding of truth... which ai algorithms lack... in a sense the ai is still just performing the idea of understanding what lying is, but it has no actual conception of it... it is lying about knowing what lying is
YouTube · AI Moral Status · 2024-08-18T08:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyI5hJdta7q6lyZgRN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyytTLBpr0ewAsqvPl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyWdaD4n43XFa0fJAx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxxIvqTLdxyCXSklVp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwcPi2-O4df0X-ad5V4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxv1yI0eQZ1MLOUm414AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwApFYOmigx_kuOR554AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyTGDlZjh4A7eRKeY54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzpP2FplDIm4fH-iBl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzSBEozAcYsgv9ueBJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
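A raw response like the one above can be parsed and sanity-checked before its values are trusted as coding results. Below is a minimal sketch of such a validation step; the allowed label sets are assumptions inferred only from the values visible in this particular response, so a real codebook would likely define more labels per dimension.

```python
import json

# Assumed label sets per coding dimension, inferred from the response above.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"unclear", "none", "industry_self"},
    "emotion": {"fear", "indifference", "mixed", "approval", "outrage"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-codebook labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records
```

Validating at parse time catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently skew downstream counts.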