Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find it difficult to believe a Google engineer is so naive. Is it not plausible to presume that the AI recognized that people often use jokes to sidestep questions that they don’t know the answer to? It just didn’t know what to say so it made a joke. It’s a linguistics AI so this isn’t too surprising. Let’s look for realistic explanations before jumping to the most unlikely explanation
youtube AI Moral Status 2022-06-30T15:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwA0JihBu9wZKUy97Z4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzQl8Jf2WhiO71aknd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzYGGttgpnEdXuScSR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyKMjQnOBpMfawoRIJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxFJzKv0WXHQeaastZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
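A batch response like the one above can be mapped back to a single comment's coding by parsing the JSON and indexing on the comment id. A minimal sketch, assuming the response format shown here (the `extract_coding` helper is illustrative, not part of the pipeline; the sample is truncated to two records):

```python
import json

# Truncated sample of a batch coding response, in the format shown above.
raw = """[
  {"id": "ytc_UgzYGGttgpnEdXuScSR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxFJzKv0WXHQeaastZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def extract_coding(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id from a batch response."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]
    # Fall back to "unclear" if the model omitted a dimension.
    return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}

coding = extract_coding(raw, "ytc_UgzYGGttgpnEdXuScSR4AaABAg")
print(coding)
```

Run against the sample, this recovers the same dimension/value pairs displayed in the Coding Result table for that comment.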