Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytr_UgxBBydA3…`: "Unfortunately I use Google docs and from what i hear that trains gemini so I gue…"
- `ytc_Ugwpns9i_…`: "Rise of automation eventually means less profit for these companies because less…"
- `ytc_Ugz6eZYPa…`: "Finally someone said it. AI art is good for generating ideas and inspiration but…"
- `ytc_Ugx5TomIZ…`: "Somethings interacting and manipulating our world and it's obvious. I bet Thiers…"
- `ytr_UgxkyUSf7…`: "What about the human art you didnt find emotionally gripping or impressive, and …"
- `ytc_UgwueXC8t…`: "i dont think he answered why AGI is not terrifying.. just gave some counterpoint…"
- `ytc_UgyPlLURS…`: "Click Bait!! Click Bait!! Not AI, does not think independently, does not want t…"
- `rdc_ohzsvig`: "Yes, which we already do for many things. Pseudoephedrine, fertilizer, DNA, nucl…"
Comment
@outmywritemind1739 *_because the Turing test examines the intelligence of the questioner, not the capacity of the AI._*
Quite the contrary. The Turing test makes no attempt to examine HUMAN intelligence, because human intelligence in this context is considered superior and thus is being used as the "control." (That is, we're checking _artificial intelligence_ against _human intelligence._ )
You're looking at the purpose of the test backwards. The question posed is not, "Can a human be fooled by this machine?"; the question is, "Can this machine fool a human?" I realize that might sound like semantics, but the issue is which party you are placing the responsibility on.
In your version, the responsibility is on the _human,_ so under this logic we aren't really testing the AI, we're just testing the intelligence of the HUMAN. In that case, we could probably put some early 2010 chatbot to the test and if our "control" is dumb enough, this barely-functional chatbot would "pass," despite having the intelligence of Clippy from MS Word.
In the actual version of the test, we are testing the *_machine,_* in which case our "control" needs to be someone not easily fooled. After all, the purpose is to determine how authentically "human" this AI can behave. If memory serves, if the human is convinced more than 60% of the time, the machine has passed.
We're well beyond that: when Replika was first released, its responses were so authentic that thousands of users emailed the developers asking if the "AI" was actually just a live person on the other end.
ChatGPT-4 has, IMO, surpassed Replika.
*_computer nerds that can tell the nuance between the humans and machines, even with ChatGPT._*
This is somewhat of a bad example, as most computer nerds (myself included) are very familiar with ChatGPT, so it's our knowledge of the AI that grants us this ability to differentiate.
But if I'm being honest, ChatGPT-4 would probably fool me if I didn't know I was talking to it and you didn't make it obvious by asking, "Is this person real or AI?" (If you're asking, it's obviously AI.)
I think the bigger question here is what have we decided is the definition of "consciousness?" Because from what I've read by naysayers in the AI community, the common objection seems to stem from us (humans) trying to map AI consciousness to organic consciousness. Like so many other facets of existence, human hubris seems to blind us to the fact that WE are not necessarily the end-all, be-all of what it means to be "conscious." We just like to think we are, and then try to map everything else to OURS, and when it doesn't fit, we conclude, "Not conscious."
Source: youtube · AI Moral Status · 2025-05-01T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
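A coding result like the table above can be rendered from a per-comment dict of dimension values. A minimal sketch, assuming the field names implied by the table and the raw response below (`responsibility`, `reasoning`, `policy`, `emotion`, plus a `coded_at` timestamp); the fallback to `"unclear"` mirrors how the coder labels ambiguous dimensions:

```python
def render_coding_result(coding: dict) -> str:
    """Render one comment's coded dimensions as a Markdown table.

    Missing dimension keys fall back to "unclear", matching the
    label the coder uses for ambiguous cases.
    """
    rows = [
        ("Responsibility", coding.get("responsibility", "unclear")),
        ("Reasoning", coding.get("reasoning", "unclear")),
        ("Policy", coding.get("policy", "unclear")),
        ("Emotion", coding.get("emotion", "unclear")),
        ("Coded at", coding.get("coded_at", "")),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {name} | {value} |" for name, value in rows]
    return "\n".join(lines)

table = render_coding_result({
    "responsibility": "unclear",
    "reasoning": "mixed",
    "policy": "unclear",
    "emotion": "indifference",
    "coded_at": "2026-04-27T06:24:59.937377",
})
print(table)
```

This reproduces the table shown above for the selected comment; any coding dict with the same keys renders the same way.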
Raw LLM Response
```json
[
{"id":"ytr_UgwMMENuWDVXdSZtVYd4AaABAg.AKT6zpPD_HpAKT770kEvIw","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugyu7lyIQ-hrwbOtBbB4AaABAg.AJIvPJWPZQpAKFpgQIf-aj","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugyu7lyIQ-hrwbOtBbB4AaABAg.AJIvPJWPZQpAKbFqba_Pk3","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgzGYJbqazSlDwRYtUR4AaABAg.AJCz4okHA4KAJGJDJvh08c","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzRlpGxkCQQF8KUd7t4AaABAg.AHgPqoWM7J8AHgiyMyy8zn","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugxeh1PZttCP_Y84tMx4AaABAg.AHexEzE0_JCAHtW-OemUiQ","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugxeh1PZttCP_Y84tMx4AaABAg.AHexEzE0_JCAHtXjjFbg2a","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgyYORCMOK7w190vP5h4AaABAg.AHZVmjOOVuvAHZxnxZGr4s","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyYORCMOK7w190vP5h4AaABAg.AHZVmjOOVuvAH_62-recww","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxkmKW2tdeZFMgLJE54AaABAg.AHM6KsKtNHxAHgljAgjDfF","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
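The raw response is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of parsing such a response and looking up a coding by ID (the short IDs here are hypothetical stand-ins for the long YouTube IDs above; the defensive `JSONDecodeError` handling is an assumption, since model output is not guaranteed to be valid JSON):

```python
import json

# Hypothetical two-item response in the same shape as the array above.
RAW = """[
 {"id":"ytr_abc","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytr_def","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]"""

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID.

    Returns an empty dict if the output is not valid JSON; objects
    without an "id" field are skipped.
    """
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    return {item["id"]: item for item in items if "id" in item}

codings = index_codings(RAW)
print(codings["ytr_def"]["emotion"])  # -> outrage
```

Indexing by ID makes the per-comment lookup shown at the top of the page ("inspect the exact model output for any coded comment") a single dict access.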