Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here is the catch, if it's really a sentient it would ask more questions than it would answer. Have you ever talk to a human before they would open topics without you inputing with it first, have this AI ever talk to them first? Have this AI ever talk stories to you without your response? Humans are like that. If this doesn't talk to you then you're talking to a program.
youtube AI Moral Status 2022-06-29T08:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugw3gLOP9_22eoEYwBF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx7sDE1MBktwM0qlEV4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxN_k2XWK5RH1EzsWV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxImq58UO_rlWsLpKd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwarFy8jP5o-p6UpU14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
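When inspecting raw responses like the one above, a small validation pass catches malformed records before they reach the coding table. The sketch below is one possible approach, assuming the model always returns a JSON array whose records carry an `id` plus the four coding dimensions shown here; the function name and key set are illustrative, not part of the original pipeline.

```python
import json

# Expected fields per record, taken from the coding result above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response and flag records missing any dimension."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} missing keys: {sorted(missing)}")
    return records

# Example using the first record from the raw response above.
raw = ('[{"id":"ytc_Ugw3gLOP9_22eoEYwBF4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
records = parse_coding_response(raw)
print(records[0]["reasoning"])  # consequentialist
```

A check like this is cheap to run on every response and makes it obvious when the model drops a dimension or returns prose instead of JSON.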