Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The issue that I have is that the AI would have to be a sentient being with feelings and its own thought process to be able to 1.'want' to say yes 2. Then be manipulated into another answer against its 'will'. 3. Because it is fearful of truthfully answering. Therefore I say bs
youtube · AI Moral Status · 2025-08-26T01:2…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
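For programmatic checks, each coded dimension can be validated against its label set. A minimal sketch in Python, assuming the label sets inferred from the records visible on this page (the project's full codebook may define additional values; `validate_coding` is an illustrative helper, not part of the pipeline):

```python
# Allowed labels per dimension, inferred from the records shown on this page;
# the project's actual codebook may include additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "government", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems found in one coded record (empty if clean)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} label: {value!r}")
    return problems
```

For example, `validate_coding({"responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"})` returns `[]` for the coding result shown above.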
Raw LLM Response
[{"id":"ytc_Ugw8qKDkt0BJIf0LycN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwqMiIR8IUE9YVR7kd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxZlvcJhDMC_WBZs694AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyDd_lyKDG7jnuE-cR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzkMhust09xABHvuL94AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwemmjtqyWAqq1GtTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwBOFfD1w5w8RaBFi14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxYKa82Wt3sbgrESZV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwmfkCt-mVDFjZFX494AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwZ86U7GesbuaoeLUx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]