Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "The problem with large language model AIs is that they are coded to respond base…" (ytc_UgzRQ0APy…)
- "Autopilot at its best. Elon puts adequately AI will be the next technological re…" (ytc_UgwDfZVJn…)
- SpazzyMcGee1337: "An AI is a computer that can freely think (is sentient) Therefor…" (ytr_UgiWsDgzf…)
- "Asking AI if we can change a function or do a test is insane to me. We should no…" (ytc_UgxfJoUI4…)
- "1 year later. People are able to notice which is A.I. nowadays and they look awf…" (ytc_UgzD0xmhf…)
- "This stuff is so stupid it actually kills me First problem I had with this was t…" (ytc_UgxwXYQSG…)
- "For another perspective on this from an actual Science Fiction universe (Halo), …" (ytc_Ugy1aJjcO…)
- "AI will be NOTHING after the programmers get done with it. Will be just as brai…" (ytc_Ugy3dr6_G…)
Comment
First of all, no LLM is conscious or self-aware because they all respond purely to prompting and don't have a constantly active neural process. Second, all this sky is falling stuff is predicated on the unfounded assumptions that (1) superintelligence by itself can create things like superdeadly pathogens, (2) that companies like OpenAI can embed their agents into critical infrastructure and (3) that there will not be multiple ASI agents that are adversarial to each other's capabilities to affect the real world. These assumptions are super naive.
youtube · AI Moral Status · 2025-11-11T06:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
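The per-comment table above is one coded record rendered as a two-column markdown table. A minimal sketch of that rendering (the field names are taken from the table headers; `render_coding_table` is a hypothetical helper, not part of any published tool):

```python
def render_coding_table(record: dict) -> str:
    """Render one coded record as the two-column markdown table shown above.

    `record` is assumed to carry the four coding dimensions plus a
    `coded_at` timestamp, matching the keys seen in the raw LLM response.
    """
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", record["coded_at"]),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)
```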
Raw LLM Response
```json
[
{"id":"ytc_UgyD_vVgK4lU66Lr9q54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzC5ci0oXYUvBqFe1B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZQjSzkiOzmnrTb454AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgziVby8mv9JCe3Ii9R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5vty5u3LBNGmPlqh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzTgAPXXot1H7fSba14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugz9aRh5H-dWDzkCLvV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy-YPCOCebMWJ9NcuZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy86aQ-y1DSo4yqC294AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx_cFH_A9RtIjRcBJJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
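A raw response like the one above can be parsed into records keyed by comment ID, which is what makes the "Look up by comment ID" inspection a simple dictionary access. A minimal sketch, assuming the allowed category values are exactly those that appear in this section (the real codebook may include more; `parse_coding_response` is a hypothetical helper):

```python
import json
from datetime import datetime, timezone

# Allowed values per dimension, inferred from the codes visible in this
# section (assumption: the actual codebook may define additional values).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "industry_self", "liability", "none", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "mixed", "resignation", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    dict keyed by comment ID, rejecting records with unknown codes and
    stamping each record with a "Coded at" timestamp."""
    coded = {}
    for record in json.loads(raw):
        comment_id = record["id"]
        for dim, allowed in ALLOWED.items():
            if record[dim] not in allowed:
                raise ValueError(f"{comment_id}: bad {dim}={record[dim]!r}")
        record["coded_at"] = datetime.now(timezone.utc).isoformat()
        coded[comment_id] = record
    return coded
```

Validating against a fixed value set catches the common failure mode where the model invents an off-codebook label, before the bad record reaches the results table.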