Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it's disingenuous to teach AI to say things like "like" or "love" unprompted. It's, in essence, teaching it to lie. Sure, people do that all the time and say emotions they don't feel, but that doesn't mean we should program our computers to do the same.
youtube AI Moral Status 2023-05-26T19:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwbaZhFEfZHvH2y-pd4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_Ugz7tS9b1xfkyjEtbll4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgzzUFs8RzO_5ANSFiN4AaABAg", "responsibility": "user",        "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugx0XiYsBXgVZ63R3Ch4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgzNysZXxjsb9UWX7v14AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgzXe_2v-BcOYEdmMtd4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugy7wyqqLXoGKul-VAF4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxVA0bDheFsHW5zS4B4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_Ugwd_v09tOxZLcxNeYh4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgxWtksY1Py7QSnKHwp4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"}
]
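The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a response could be parsed and a single comment's coded dimensions looked up by id (the keys and ids are taken from the response above; the `lookup` helper and the truncated two-entry `raw` string are illustrative, not part of the tool):

```python
import json

# Two entries copied from the raw LLM response above, for illustration.
raw = """[
  {"id": "ytc_UgzNysZXxjsb9UWX7v14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxWtksY1Py7QSnKHwp4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the codings by comment id so any comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment id."""
    return codings[comment_id]

print(lookup("ytc_UgxWtksY1Py7QSnKHwp4AaABAg")["emotion"])  # → outrage
```

In practice the id would come from the comment record shown above, so a coded result in the table can be traced back to the exact object the model emitted.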