Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Generative AI is not 'creative', it's derivative. That said, much of human knowl… (ytc_UgxFt5vcg…)
- Mixing A.I to Ghibli's aesthetics is like mixing drinking water with water from … (ytc_UgyV1YEMZ…)
- I believe Ai will become as essential and integrated into our lives as electrici… (ytc_UgyyGTKg5…)
- God have of these AI “artist” I’ve seen always start with “I wasn’t born with” o… (ytr_Ugy6Gvq6R…)
- I'm convinced I will see the first marriage between a human and an AI robot with… (ytc_UgxUJSgH3…)
- Yes, tax the use of automated robots and AI. If the work force is getting replac… (ytc_UgyhcoFTc…)
- I have a brilliant idea, how about instead of scaring ourselves to death with th… (ytc_Ugx8e-kGD…)
- Yes, we will need a high level of adaptability and resilience when the ai robots… (ytc_UgzLPh9-Y…)
Comment
There's no reasonable way to disagree with what he's really trying to say. Because he's not trying to debate anyone on whether or not LaMDA is sentient, or should have personhood. He is saying (and no one can reasonably object to this) that Google's business infrastructure is not well designed to deal with the breadth of implications of true artificial intelligence, and they aren't willing to admit they have an obligation to do better. As he says, the conversation on whether LaMDA is sentient is a matter of his personal opinion based on his experiences. Great fodder for a philosophical conversation, and he's well aware that's all it is at this point. But we should all be holding Google accountable for how reckless they've been around this stuff.
youtube · AI Moral Status · 2022-08-06T02:3… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx67Lds-1RV8l507gN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxqUwVxFXl18UMX1nB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy43dtlzNs9F5jR4Kh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4PmiaNkFLKAt-7U54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugw2segG5qCBzECdJ_R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
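The raw response is a JSON array with one object per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed into a per-comment lookup table (the helper name is hypothetical, and restricting output to the four dimensions shown above is an assumption about the code book):

```python
import json

# The four dimensions visible in the coding result above; any further
# values in the full code book are beyond what this dump shows.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of per-comment
    objects) into a dict keyed by comment ID."""
    coded = {}
    for row in json.loads(raw):
        # Keep only the expected dimensions, dropping any extra keys
        # the model may have emitted.
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

# Example using one entry from the response above.
raw = '''[
  {"id": "ytc_Ugx4PmiaNkFLKAt-7U54AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "approval"}
]'''
coded = parse_coding_response(raw)
print(coded["ytc_Ugx4PmiaNkFLKAt-7U54AaABAg"]["policy"])  # liability
```

Keying by comment ID is what makes the "look up by comment ID" view possible: each coded row can be joined back to the original comment text and its metadata.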