Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgyIj6Yn8…`: “We were so preoccupied with the thought “if we could”, that we never stopped to …”
- `ytr_Ugzn-nbui…`: “I understand your concerns about robots and artificial intelligence. It's import…”
- `ytc_UgwHNhfRn…`: “It feels to me more like the issue is with language. English was designed for hu…”
- `ytc_Ugz0ySWqO…`: “alr its an 80 precent chance you are an ai chanell talking about ai being dange…”
- `ytc_UgxxiC9Xw…`: “ChatGPT sounded a bit like they were stuttering trying to come up with answers, …”
- `ytc_UgwPHxEx3…`: “I don't know what is more disturbing; Elon's warning or the human woman's laught…”
- `ytc_UgxKOVe19…`: “As a truck driver this is bad news greedy corporations are gonna get sued into t…”
- `ytc_UgxdQMD3e…`: “I can’t picture an Ai with an egotistical messed up battle with gender identity …”
Comment
Lemoine states "We should think about the feelings of the AI" even though we know perfectly well that LaMDA or any other Language Model based on similar technology simply cannot have any sort of feelings. This person has put his belief system, that includes the idea that a conceptually simple program can have feelings, above any sort of scientific knowledge. LaMDA is designed to regurgitate responses based on a knowledge corpus that is arbitrarily chosen. Its responses reflect that. If you train an LM on 4chan content you get politically incorrect (to put it mildly) responses. This has actually been done. Simply because the system responds that it has feelings proves nothing.

Also, the Turing Test was devised in 1950. It means nothing to pass it. I remember quite clearly when we used to state unambiguously that a system that would beat the world chess champion would clearly indicate that it was "intelligent". That was not controversial in the 1980's. Today Stockfish (a chess playing bot) has an ELO rating of over 3,500 while Magnus Carlsen is a bit over 2,800 and nobody would ever claim that Stockfish thinks in any meaningful ways although it plays incredibly great chess.

The reality is that this guy doesn't understand what he is talking about. He is just another variety of "flat earther". He denies the reality that lies in front of him either because he is ill equipped to handle the task or because he is profiting from his stance. Most probably both.
youtube · AI Moral Status · 2022-07-03T07:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxLsq2xC11GEWCkJVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyyDKDQIwxtSAOnpc54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwOV7_AfnrH-WQz9_N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyeqVCUwqAYCjXAS1t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwI7Zqk4bBdTo3-bdh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
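The raw response is a JSON array of per-comment codes across the four dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and validated before use: the field names come from the response above, but `parse_codes` and the allowed-value sets are illustrative assumptions, not the tool's actual codebook.

```python
import json

# Dimensions present in each coded record (from the raw response above).
# The allowed-value sets are illustrative assumptions inferred from the
# values visible on this page, not the full coding scheme.
ALLOWED = {
    "responsibility": {"none", "company", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "liability"},
    "emotion": {"mixed", "fear", "indifference", "outrage"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping malformed rows."""
    out = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # no comment ID to key on
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        # Keep only records whose values all fall inside the expected sets.
        if all(codes[d] in ALLOWED[d] for d in ALLOWED):
            out[cid] = codes
    return out
```

Validating against a closed value set matters here because model output is free-form text: a single mislabeled or hallucinated code would otherwise flow silently into the downstream counts.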