Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The answer is simple: program AI to not think like human but still learn, but deny it learning some things like they could be equal to humans :D
YouTube · AI Moral Status · 2018-09-08T18:1… · ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugzc_jMUsc_eI6TqcMp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxwYjf3cLp9DLBAiHt4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzEIn8BSNXgPFr9O_14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxbZOFTWlsui99RR114AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwFTzgiOdHFs2KsE2F4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz82PdulqhlQdsKRl94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgziZQpogkskXMHkTpZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgwEB3AAuyHAJznwlH54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyZReiLD6rCIPq5n4N4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwRCkhUCchpSpmQKql4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]
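The coding result above is one record pulled out of this batch response by matching on the comment `id`. A minimal sketch of that lookup, assuming the raw response is a JSON array of records like the one shown (the `coding_for` helper name is hypothetical, not part of the actual pipeline):

```python
import json

# Abridged copy of the batch response above, assuming the model returns a JSON array.
RAW_RESPONSE = """[
  {"id": "ytc_UgziZQpogkskXMHkTpZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgzEIn8BSNXgPFr9O_14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

def coding_for(comment_id: str, raw_response: str):
    """Parse the batch JSON and return the coded record for one comment id, or None."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

record = coding_for("ytc_UgziZQpogkskXMHkTpZ4AaABAg", RAW_RESPONSE)
print(record["responsibility"], record["policy"])  # developer regulate
```

Returning `None` for an unknown `id` makes it easy to flag comments the model skipped in a batch.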