Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
After listening to the first few minutes I had a feeling that the guy doesn't really think it's sentient but he knows that it's an interesting enough topic to raise awareness of the whole AI ethics (and AI ethics at Google) issue. He even says something like that at around the 07:00 minute mark but it flies unnoticed by the reporter. It very much seems like he wanted to expose the problem (maybe at least in part himself as an expert) and how Google doesn't handle it well. (TBH, the first thing I thought when I read the news is that they have fired yet *another* AI ethics researcher?) LaMDA and his conversations are already good enough to sell this bait/stunt to the public. (Otherwise, he'd also run tests that try to prove that the system is not sentient and e.g. it tries to answer meaningless questions as if they were real ones.)
YouTube · AI Moral Status · 2022-06-29T23:0… · ♥ 163
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgzfGGdeUd0BGY3Nhm14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzgUFHUpqQBtNpeo_d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwI30bCi1l1bQm5cXJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwnDx1AYKpJnHJNpmF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugw--frEGZsJK4XqD6h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
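The raw response is a JSON array of per-comment records, so recovering the coded dimensions for one comment is a matter of parsing the array and indexing by `id`. A minimal sketch (the `lookup` helper is hypothetical, not part of the tool; field names match the JSON above):

```python
import json

# Abbreviated copy of the raw LLM batch response shown above.
raw_response = '''[
  {"id": "ytc_UgzfGGdeUd0BGY3Nhm14AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "approval"}
]'''

def lookup(raw: str, comment_id: str) -> dict:
    """Parse the batch response and return the record for one comment id."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

coding = lookup(raw_response, "ytc_UgzfGGdeUd0BGY3Nhm14AaABAg")
print(coding["responsibility"])  # developer
print(coding["emotion"])         # approval
```

This is how the "Coding Result" table above can be derived from the raw output: each record's four fields map directly onto the Responsibility, Reasoning, Policy, and Emotion rows.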