Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or browse the random samples below.

Random samples

- "The investment side of this is wild. Three defense tech companies IPO'd in the l…" (rdc_oh14672)
- "Absolute bullshit, not taking into account all the other things that AI is not i…" (ytc_UgzbAaozf…)
- "I avoid AI as much as possible. When forced to use it it has been terribly unrel…" (ytc_UgyIjF5bV…)
- "we don’t even understand consciousness ourselves yet, and it’s only found biolog…" (ytc_UgzuFRkfY…)
- "It's crazy that instead of AI offering real solutions to real problems they try …" (ytc_Ugwoerbfo…)
- "No the danger doesn't come from AI. It comes from the AI Corporations and their …" (ytc_Ugx3mPod9…)
- "This fight is forever. Your in over your head honor the earth listen to native p…" (ytr_UgxQWTECf…)
- "this along w/ the deepfake visual stuff (like Kendrick's Heart Part 5 video) we …" (ytc_UgwsWquBr…)
Comment
> After listening to the first few minutes I had a feeling that the guy doesn't really think it's sentient but he knows that it's an interesting enough topic to raise awareness of the whole AI ethics (and AI ethics at Google) issue. He even says something like that at around the 07:00 minute mark but it flies unnoticed by the reporter. It very much seems like he wanted to expose the problem (maybe at least in part himself as an expert) and how Google doesn't handle it well. (TBH, the first thing I thought when I read the news is that they have fired yet *another* AI ethics researcher?)
>
> LaMDA and his conversations are already good enough to sell this bait/stunt to the public. (Otherwise, he'd also run tests that try to prove that the system is not sentient and e.g. it tries to answer meaningless questions as if they were real ones.)
youtube · AI Moral Status · 2022-06-29T23:0… · ♥ 163
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
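
The four coded dimensions map naturally onto a small typed record. Below is a minimal validation sketch in Python; the `CodedComment` class is hypothetical, and the label vocabularies are assumptions reconstructed only from the values visible in this record and in the raw response below (the real codebook may define more):

```python
from dataclasses import dataclass

# Label sets observed on this page; assumed, not the tool's full codebook.
RESPONSIBILITY = {"developer", "user", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"regulate", "liability", "industry_self", "none"}
EMOTION = {"approval", "fear", "outrage", "indifference", "mixed"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension holds an unknown label."""
        for field, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field)
            if value not in allowed:
                raise ValueError(f"unknown {field} label: {value!r}")
```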
Raw LLM Response
[
{"id":"ytc_UgzfGGdeUd0BGY3Nhm14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzgUFHUpqQBtNpeo_d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwI30bCi1l1bQm5cXJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwnDx1AYKpJnHJNpmF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugw--frEGZsJK4XqD6h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
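
Since the model returns one JSON array per batch, with one object per comment, the "look up by comment ID" view amounts to parsing that array and indexing it. A minimal sketch, again in Python; `index_codings` is an illustrative name, not part of the tool, and the response is abridged to the first record shown above:

```python
import json

# Raw model output for one batch, abridged to a single record.
raw_response = '''[
  {"id": "ytc_UgzfGGdeUd0BGY3Nhm14AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "approval"}
]'''

def index_codings(text: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array) and index records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(text)}

codings = index_codings(raw_response)
coding = codings["ytc_UgzfGGdeUd0BGY3Nhm14AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate approval
```

In practice raw model output may need light cleanup before `json.loads` accepts it, for example stripping surrounding code fences or leading prose, so a production version would guard the parse and surface malformed responses rather than assume clean JSON.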