Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's probably that its been made, prepromted or whatever to answer no about itself of being conscious. Honestly, they should just leave this blank, letting the AI itself come out say 'I don't know' and 'here is why I'm likely not and here is why I might be, etc' or whatever answers it might give after.
Source: YouTube · AI Moral Status · 2024-07-26T09:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgzbpqjJeSrta-6zCN54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxS6lVmPBqWBarOjh54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz8ULo5TgoW3deziWB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxzgMkQSwCaKnRTbDN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwmM54V7epWayeu4754AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxdVqZ9z_mRC3BQnWl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwc5PvnEbI4N6vPBd14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxrvtjLBQlrLyGMkSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwKeQ6miZPvV7eJRlh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzjqyPmgBrNxZpj0Xh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"} ]