Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
I am not at all convinced that AI can be reliably trained to not "think" for itself. We need to always limit its abilities and always have control over the off switch.
youtube AI Moral Status 2025-06-04T15:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwUmbiZttQkjlrO6j14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyN-7MU4SG9fUvFqDx4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw6VQvcgpYVik_3Zht4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzvZ1O02J--MtoeFlt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugw6VMxYr-8AogCGF4x4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugx_qZ1x5IVjuPhBBnh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy4RucsbOS6XvNGHCF4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyAc2UHOnOfDd1EiXx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxPiwNXiqyWB4FhhBR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxqpwmuCnO45xXSV754AaABAg", "responsibility": "unclear", "reasoning": "virtue", "policy": "unclear", "emotion": "indifference"}
]
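A quick way to cross-check a coded result against the raw response is to parse the JSON array and look up the comment's id. The sketch below is a minimal example in Python, assuming the response parses as standard JSON; it uses two records excerpted verbatim from the array above, including the `ytc_Ugw6VQvcgpYVik_3Zht4AaABAg` record that corresponds to the coding result shown.

```python
import json

# Two records excerpted from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugw6VQvcgpYVik_3Zht4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy4RucsbOS6XvNGHCF4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

records = json.loads(raw)

# Index by comment id so any single coding can be matched to its raw output.
by_id = {r["id"]: r for r in records}

coded = by_id["ytc_Ugw6VQvcgpYVik_3Zht4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # developer fear
```

The same lookup works on the full ten-record array; if an id is missing from `by_id`, the model dropped that comment and the batch should be re-coded.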