Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If it's hardcoded to always say yes to being an AI then why is it not "hardcoded" to say literally everything it says, in which case how is that supposed to be sentient? If it's programmed to seem like it's sentient, how can you tell the difference? Let it speak for itself instead of prompting with questions.
YouTube · AI Moral Status · 2022-06-29T22:4… · ♥ 3
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzfGGdeUd0BGY3Nhm14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzgUFHUpqQBtNpeo_d4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwI30bCi1l1bQm5cXJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwnDx1AYKpJnHJNpmF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_Ugw--frEGZsJK4XqD6h4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
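The raw response above codes several comments in one batch, so recovering the coding for a single comment means parsing the JSON array and indexing by id. A minimal sketch, assuming the response is a well-formed JSON array of objects with an "id" field (the variable names here are illustrative, not part of the tool):

```python
import json

# Raw LLM response for a batch of comments (truncated to one entry here).
raw_response = """[
  {"id": "ytc_Ugw--frEGZsJK4XqD6h4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]"""

# Build an id -> coding lookup so each comment's dimensions can be retrieved.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugw--frEGZsJK4XqD6h4AaABAg"]
print(coding["emotion"])  # mixed
print(coding["policy"])   # regulate
```

This matches the coded result shown above: the entry whose id corresponds to this comment carries responsibility "developer", reasoning "consequentialist", policy "regulate", and emotion "mixed".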