Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We still don’t have a mechanistic understanding of consciousness (350+ theories, zero working implementations with agreed diagnostics). So the idea that we may have “accidentally” created a conscious AI is bizarre. Accidental discoveries happen when you can measure and recognise the phenomenon. With consciousness, we can’t. It’s like claiming we accidentally built an integrated circuit in 1950 while not understanding electronics — you don’t stumble into a whole functional architecture without knowing what you’ve made.
YouTube | AI Moral Status | 2026-01-29T00:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
 {"id":"ytc_UgzpC_sSTdCABRcpkcB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwZBIHr1lIRG8Da4BZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzlbOgf_YXMfk0KUW54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzhvsPs_HukQQKjnhh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugx907P2HV9Jdpi2lLJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz576o4cXWfR8xOpRp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugxnzx6dnIVgf8w6Okp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx-b0bTRNWx8VKX9d54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwv8EFs_jnNfnSU_ax4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgyNeYV78JpvKIi3PNt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}
]
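A raw response in this shape can be checked before it is trusted as a coding result. The sketch below parses a JSON array like the one above and validates each record against per-dimension value sets; the field names come from the response shown, but the allowed-value sets are only inferred from the values that appear in this section, not from the full codebook, so treat them as placeholders.

```python
import json

# A single-record sample in the same shape as the raw LLM response above.
raw = ('[{"id":"ytc_UgzpC_sSTdCABRcpkcB4AaABAg",'
       '"responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"}]')

# Allowed values per dimension -- inferred from the codings shown here,
# NOT an authoritative codebook (assumption for illustration only).
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "outrage", "resignation"},
}

def validate(codings):
    """Check each coding has a comment id and one valid value per dimension,
    then index the records by id for lookup."""
    for c in codings:
        if not c.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad or missing comment id: {c!r}")
        for dim, allowed in ALLOWED.items():
            if c.get(dim) not in allowed:
                raise ValueError(f"{c.get('id')}: {dim}={c.get(dim)!r}")
    return {c["id"]: c for c in codings}

by_id = validate(json.loads(raw))
```

With the records indexed by id, the table for any coded comment can be reproduced directly, e.g. `by_id["ytc_..."]["emotion"]`.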