Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
This is more than "some routing issue". Poor Waymo's brain is overheating, and o…
ytc_Ugx1TWqDV…
Perhaps this is why eating an apple from the tree of knowledge in Eden was forbi…
ytc_UgxPcqbFY…
Sold my AI company in 2020 when i saw the Techs CEOs were just men tryna please …
ytc_UgyP3ayau…
2:55 lots of sample points are used
If the USA starts implementing facial recogn…
ytc_UgzFmhvk3…
Well, this is reassuring. This is entirely in line with what I've thought from t…
ytc_Ugxty1NO7…
I DESPISE so called "AI". But if it was all about C3PO, I'd be fine with it. My …
ytc_Ugx6vhTUt…
I wonder if the AI would have done better with a different prompt. …
ytc_Ugyl1iyeH…
It’s possible to do all that without using facial recognition, and if they’re go…
rdc_jck23dx
Comment
I'm not sure it will ever be possible to prove that a machine is or isn't "conscious" in that I agree with the article that we don't even have a particularly strong consensus on what being conscious actually means. About the only actually workable definition of it is "awake, aware, and responding to stimuli (i.e. being conscious is the opposite of being unconscious)" but people want to use the word to mean something else, and nobody seems to really know what that something else even is.
I think as a result a far better standard for us to work around is general intelligence. An agent that can think and reason about roughly any task, make plans and act upon them, deserves our consideration as a person. I think we should be very careful about creating such a machine because we don't really know what the safety or moral implications of doing so are. We could be making a slave, a friend, a benefactor or our own annihilator.
Is Google's chatbot a general intelligence? Not as far as I've heard. it's a sophisticated engine for responding to queries, but it doesn't appear to have an internal model of reality that allows it to make plans and do things it wasn't programmed to do.
reddit
AI Moral Status
Posted 2022-06-15 (Unix timestamp 1655294125)
♥ 22
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_icg0n7o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_icfwvfn","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"rdc_icg0goj","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"rdc_icg04dc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"rdc_icg19wh","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"}]
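The lookup shown above (finding the coded dimensions for one comment ID inside a raw batch response) can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation: the function name `lookup_coding` is hypothetical, and it assumes the raw response is a JSON array of records with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys, as in the sample response shown here.

```python
import json

def lookup_coding(raw_response: str, comment_id: str):
    """Parse a raw LLM response (assumed to be a JSON array of coding
    records) and return the record matching comment_id, or None."""
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        # Malformed model output: no record can be recovered, so the
        # caller would fall back to coding every dimension as "unclear".
        return None
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# Example with a one-record batch in the same shape as the response above.
raw = ('[{"id":"rdc_icg0n7o","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')
match = lookup_coding(raw, "rdc_icg0n7o")
print(match["emotion"])  # indifference
```

Note that an ID absent from the batch (as with `rdc_jck23dx` in the table above) returns `None`, which is one plausible reason a comment's dimensions would all be recorded as "unclear".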