Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Its interesting because this is literally AI is destroying the future that these…" (ytc_UgygbLdF5…)
- "well then, we'll need a socialist revolution. guiliiotine a couple of thousand h…" (ytc_UgxZ54igK…)
- "Questions like the ones he has asked in this interview are very much needed. AI …" (ytc_UgxgNBfhq…)
- "Ai is likely to become the worlds most dangerous weapon. Lets be clear here. wh…" (ytc_UgxRBo6I6…)
- "We are at the edge of a social collapse, A.I will destroy city life as we know…" (ytc_UgyJFzhkh…)
- "This brought back a memory, sometime in the late 1980's I was following a friend…" (ytc_UgwHBOQsO…)
- "Things you can do - outsource to AI. Things you can't do - seek help from AI to …" (ytc_UgzfwveQa…)
- "After my experience at UofU 1980-84, and the extreme injustice of the system, I …" (ytc_UgzNa_T1B…)
Comment
There’s no such thing as verifying the sentience of things that are not you. You can only directly observe your own, AND ONLY YOUR OWN, sensations, full stop.
I’m tired of hearing us talk a big game of how do we prove an AI is sentient when we can’t even do the same thing for ourselves.
Cut to a couple decades from now, and there might just be robots saying “oh humans can’t actually feel things! It’s just a series of physical and chemical reactions facilitated by ion channels and neurotransmitters! There’s no actual understanding going on! It’s just chemistry!”
Source: youtube · Video: AI Moral Status · Posted: 2023-08-20T22:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz4LmweJWCyvg_WLT54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzlUAjvg07Gfn40e_94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzyE2bKu9n3YwAqQ3F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw8pVMDZ8MhE1Gyf8Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwE0UsrNw6z2lzTUHp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz6cwHuglGeZ4GYB-p4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMCNAnH3scCR_NkDx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw071Ztqhkg0exG7p14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNbshG2oVOuTOZo294AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwsFiHWjq4cPPisKdl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
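The coded dimensions shown in the table above are pulled from this raw JSON batch by comment ID. A minimal sketch of that step, in Python, might look like the following. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response itself; the allowed-value sets are only those *observed in this batch* — the full codebook may define more, so treat `ALLOWED` as an assumption, not the canonical schema.

```python
import json

# Values observed in this one batch (an assumption -- the full codebook
# used by the coder may permit additional labels per dimension).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "regulate", "industry_self"},
    "emotion": {"indifference", "approval", "fear", "mixed", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded rows) into
    {comment_id: {dimension: value}}, rejecting malformed rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

With the batch above, `parse_batch(raw)["ytc_Ugz6cwHuglGeZ4GYB-p4AaABAg"]` would return the same four dimension values rendered in the Coding Result table, which is all the "Look up by comment ID" view needs.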