Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `rdc_mxi8g4a`: "Yet. Technically speaking, your brain is just a biomass firing electricity bet…"
- `ytc_UgypKaTM0…`: "I want AI to do the mundane tasks like doing laundry and dishes so I can have ti…"
- `ytc_Ugy5vpFBl…`: "Researchers saying that there was 'no preexisting condition' seems pointless. Al…"
- `ytc_Ugx-nT6CO…`: "This video seems to be talking about AI of 2023 and failed to mention that AI is…"
- `ytr_UgxM_62kA…`: "Hi not wanting to sound rude but I simply can't stand ai music. The 2 weeks I pu…"
- `ytc_UgyF3pmV9…`: "my reaction to ai fan who think ai is real art is better than humans found some…"
- `ytc_UgzffeW66…`: "Moltbotz a i chat room for chat bots they already are trying to find a way to wi…"
- `ytr_Ugwrw4f4q…`: "Honestly whatever your stance is on ai wever you agree with her or not honestly …"
Comment
As one commentator below put it: "A magician does not deceive people. They allow people to deceive themselves." is a perfect summation of the core problem imho. The question is surely not IF Artificial Intelligence will or may be able to deceive. The entire system is built upon deception. All viable AI systems are built upon language and it is language itself that is deceptive. Language can not exist without deception. Without going too deep into semantics and semiotics but AI is by default deceptive.
Also, referring to the magician quote, language is a projective tool, so in communicating with AI each and every human mind is - also by default - projecting its own sentience unto AI. Even seasoned programmers are never immune to this projection and, well, this is exactly where we will always allow ourselves to be deceived.
youtube · AI Governance · 2024-01-03T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwtLjg1wlOb9QIE3WJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNgKSDUDzNoATwhdV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwHbErHSY8WXDwnAz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxx9JDhrgNLFUZ2vlt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgydDlneCG20YAl8Hzx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyMnTzPZZcD3_jSb9t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwxlEDPGxM-AsNNaXV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyWai18YSkKBxQe1at4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw_EkIUNBPUs0me31d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgwH9Y7QLFb8iCnZndN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
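The raw model output above is a JSON array in which each record carries an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such output could be parsed and indexed by comment ID for lookup, using a two-record excerpt of the array above; the `index_codings` helper and the required-field check are illustrative assumptions, not part of the tool itself:

```python
import json

# Two-record excerpt of the raw LLM response shown above.
raw = '''[
{"id":"ytc_Ugw_EkIUNBPUs0me31d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgwH9Y7QLFb8iCnZndN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]'''

# Fields observed in every record of the dump (assumed required).
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse the model output and index coded records by comment ID."""
    by_id = {}
    for rec in json.loads(raw_json):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing fields: {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgwH9Y7QLFb8iCnZndN4AaABAg"]["policy"])  # prints: regulate
```

Note that the looked-up record matches the Coding Result table for the displayed comment (distributed / deontological / regulate / mixed), which is how the table view could be populated from the raw response.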