Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's no reasonable way to disagree with what he's really trying to say. Because he's not trying to debate anyone on whether or not LaMDA is sentient, or should have personhood. He is saying (and no one can reasonably object to this) that Google's business infrastructure is not well designed to deal with the breadth of implications of true artificial intelligence, and they aren't willing to admit they have an obligation to do better. As he says, the conversation on whether LaMDA is sentient is a matter of his personal opinion based on his experiences. Great fodder for a philosophical conversation, and he's well aware that's all it is at this point. But we should all be holding Google accountable for how reckless they've been around this stuff.
YouTube · AI Moral Status · 2022-08-06T02:3… · ♥ 9
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          approval
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_Ugx67Lds-1RV8l507gN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgxqUwVxFXl18UMX1nB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugy43dtlzNs9F5jR4Kh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx4PmiaNkFLKAt-7U54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"}, {"id":"ytc_Ugw2segG5qCBzECdJ_R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]