Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So if I say "Yes" if people ask me if I'm an AI, I'm not sentient? That... sounds like a very dumb way to check for sentience. Dude sounds full of it to be honest. A language model making what you perceived as a "joke" says more about your own bias than anything else... Maybe experiment some more before making wild statements? Ask it the same question again maybe? If it makes the same "joke" 10 times in a row, it's a machine that tells a joke when you ask it a specific question. If it gets annoyed at you and tells you you've asked that question 5 times already last week and to stop wasting its time, then you might have some stronger basis for starting to wonder about it being a person.
youtube AI Moral Status 2022-07-30T22:0… ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugy7M451P61dJn0HkZZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy6XC1XCE98hzg7PR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZH4OtbHeDPlX2BMB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwdzMuoRJYcYdWvQ1x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyvOeU4_gObNAwjsZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
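To inspect the exact model output for a given comment, the raw response above can be parsed as a JSON array and filtered by comment id. The sketch below assumes only what is visible on this page: the response is a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys; the helper name `coding_for` is hypothetical, not part of any tool shown here.

```python
import json

# A truncated copy of the raw LLM response shown above (first record only),
# used here purely for illustration.
raw = '''[
  {"id":"ytc_Ugy7M451P61dJn0HkZZ4AaABAg","responsibility":"developer",
   "reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

result = coding_for(raw, "ytc_Ugy7M451P61dJn0HkZZ4AaABAg")
print(result["responsibility"], result["emotion"])  # developer outrage
```

Looking the record up by id rather than by position keeps the check robust if the model returns the codings in a different order than the comments were submitted.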