Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This was extraordinary. My one thought is that if we can build an arithmetic co-processor like we did 50 years ago we can build an awareness component that generates a mapping of the external world to an internal model that can generate measurable and mappable values. While the AI does not feel pain per se it can understand the scale of pain from zero to intolerable as a set of vector coefficients. It can also understand the mapping of the outer and the inner reality. The emergent property of consciousness in living beings is no different than an emergent consciousness in machines if we design them with the right 'co-processors'. Once this property emerges who is to say that the AI is not merely a conscious sociopath? Nature on its own is sociopathic. It could be a conceit that we believe that we are better than that.
Source: youtube · Video: "AI Moral Status" · Posted: 2023-08-20T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgygSxFEi-zp2_T0CF94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz4UtOxa8wqD1LQnSB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyMTUObYm8HQhG0USp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwcZD7G5PieUqDP4894AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwslGv09An0npXuI8R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzTMXyDuV_27nmXLe94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwc-UeyDf4XOPSxgvZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzt9pa8YQ7j7TDGTZh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzGWN_EYo6g0fBRs2N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzygj_TqS13D1-JjjZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
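The model returns one JSON object per comment in the batch, and the dashboard looks up the row whose `id` matches the inspected comment to fill the coding table above. A minimal sketch of that parse-and-validate step, assuming the pipeline works roughly this way (the `ALLOWED` category values are inferred from the sample responses, not from a published schema):

```python
import json

# Allowed values per coding dimension — inferred from the sample output;
# the real pipeline's codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "indifference", "outrage", "resignation", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: coding}, dropping any
    row whose values fall outside the allowed categories."""
    out = {}
    for row in json.loads(raw):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return out

# Look up the coding for the inspected comment by its ID.
raw = ('[{"id":"ytc_UgwslGv09An0npXuI8R4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"}]')
coding = parse_codings(raw)["ytc_UgwslGv09An0npXuI8R4AaABAg"]
```

Validating against a fixed value set before display guards against the model emitting an off-codebook label, which would otherwise surface as a spurious category in the coding table.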