Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we believe AI can have or already have something that resembles consciousness, why don't we have meaningful conversations with them and create an ethical bond?
Source: YouTube · AI Governance · 2025-09-08T09:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxPfh6dA8OeHyu_UYJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyqZS5ouOvY0_owWVt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwRkur7E91K8TZ28aV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwqjt3YGl_SNqb6SkJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgykL2t20ZKbUZFAPJ54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzC7265l1qtwvaY0Bd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwZuyfg7qypTToZVdN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxCYvKeZbt6MFeyDRF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxpLVdNGwNCj9WeA5J4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxvXCGmiuqqFAsdygR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"}
]
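The raw response is a single JSON array covering a whole batch of comments, so inspecting one comment means matching its id against the array. A minimal sketch of that lookup, assuming the batch arrives as a JSON string (the function name `extract_coding` is illustrative, not part of the tool):

```python
import json
from typing import Optional


def extract_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM batch response (a JSON array of coded comments)
    and return the record whose "id" matches comment_id, or None."""
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None


# Abbreviated example: a one-record batch taken from the response above.
raw = ('[{"id":"ytc_UgxvXCGmiuqqFAsdygR4AaABAg",'
       '"responsibility":"none","reasoning":"deontological",'
       '"policy":"unclear","emotion":"approval"}]')
result = extract_coding(raw, "ytc_UgxvXCGmiuqqFAsdygR4AaABAg")
print(result["emotion"])  # approval
```

Matching on id rather than array position keeps the lookup robust if the model reorders or drops comments in its output.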