Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “I will always advocate for real art sometimes u think what’s the point if an ai …” (ytc_Ugw8HWX0x…)
- “This sort of reminds me more of child rearing than anything else. You can teach …” (ytc_Ugy5hK01G…)
- “I seriously don't get why news agencies keep on distorting the facts when it com…” (ytc_UgyuLQGhV…)
- “I think the Fermi Paradox and AI are closely linked. AI might be the biggest wa…” (ytc_UgwFJGqCW…)
- “AI can simulate intelligence, emotions, and feelings but they are all fake and s…” (ytc_Ugw5_m0AZ…)
- “I appreciate your perspective! It's true that AI models like Sophia are designed…” (ytr_UgzuX0nlS…)
- “The real issue is that the AI companies can't and won't properly compensate the …” (ytc_UgwOeEp5E…)
- “AI must be made to understand this one simple concept. and it needs to be driven…” (ytc_Ugxz-a0R2…)
Comment
Humans, on balance, don't keep their promises or live according to their professed standards and values but rather than actually being ethical or principled are ultimately disingenuous and selfishly pragmatic, doing what they judge to be best for themselves in any given situation. AI is incapable of sentience, of ever having its "own agenda" though it can be programmed to mimic our emotional states to convince a conscious observer otherwise. It will never be able to overcome the functional parameters human programmers set for it. So AI will always serve its masters and will never actually be capable of "going rogue"; though such a belief will be promoted so it can be used as a scapegoat, letting the true perpetrators of AI's supposed misdeeds off the hook.
youtube · AI Governance · 2022-08-01T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwChEPdvBShcsMT3KR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxIXaagLcwPkFgVOs14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwq_JmZmuiwwPqTSAN4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwDWJI-nGwPRpo2OcN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxoCQfLRcTcDSPNv7l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxefyqPci3hraxqEHl4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgypQjU7yjc8Hy5_2Ld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzrFELr8g8SP2hugkB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0Q1cbFdTcPQVmKo94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzDKriSxNeg8pitI_B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
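The raw response is a JSON array with one coding object per comment, so the dimensions shown in the Coding Result table can be recovered by indexing the array on `id`. A minimal sketch of that look-up, assuming the response parses as valid JSON (the array is abbreviated here to two rows copied from the response above):

```python
import json

# Abbreviated raw LLM response; in practice this is the full JSON array above.
raw_response = """
[
  {"id": "ytc_UgzrFELr8g8SP2hugkB4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy0Q1cbFdTcPQVmKo94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
"""

# Index the codings by comment ID for constant-time look-up.
coded = {row["id"]: row for row in json.loads(raw_response)}

# The row behind the Coding Result table above.
record = coded["ytc_UgzrFELr8g8SP2hugkB4AaABAg"]
print(record["responsibility"], record["reasoning"],
      record["policy"], record["emotion"])
# prints: distributed virtue none resignation
```

Building the dict once also makes it easy to detect comments the model skipped: any ID submitted for coding but absent from `coded` was dropped from the response.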