Raw LLM Responses
Inspect the exact model output for any coded comment.
Any coded comment can be looked up by its comment ID. A few random samples:

| Comment ID | Preview |
|---|---|
| ytc_UgxYpeaem… | oooh yeah this AI is not good,because Jesus and Mary they need to destroy this s… |
| ytc_UgzLmIAwS… | The Coca Cola example is funny because that AI ad was widely criticized and prob… |
| ytr_UgzbuTHpQ… | Would they though? Assuming the worst case scenarios, they'd just be another bun… |
| ytc_Ugz6tWHAq… | If AI is better at any humans doing anything and we start using it to do everyth… |
| ytc_UgweGL4Mr… | AI CEOs:"finaly we can replace our human customer services and lets replace our … |
| ytc_Ugx5XkUvT… | Claiming credit for AI imagery is like claiming credit for a painting someone el… |
| ytc_UgwiV381v… | okay awesome!!! but can a robot play the drums, guitar etc without sounding lik… |
| rdc_nt69gcw | Exactly and their only thought is thinking we should stop AI advancements and h… |
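The same lookup can be reproduced offline against the exported data. A minimal sketch, assuming the coded comments are exported as a JSONL file with one record per comment; the file name `coded_comments.jsonl` and the exact record fields are assumptions, not a documented interface:

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded comments by ID (assumes a JSONL export, one record per line)."""
    records = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            records[record["id"]] = record
    return records

# Hypothetical usage: look up one comment by its full ID.
coded = load_coded_comments("coded_comments.jsonl")
comment = coded.get("ytc_UggjVwD_ZOZMU3gCoAEC")
if comment is not None:
    print(comment["responsibility"], comment["emotion"])
```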
Comment (quoted verbatim, typos preserved):

> The thing is about pain. There is no reason for us or AIto programm itself to FEEL the pain, it will make sence to maki them sence it but not suffer from it. The fact that that AI will thin something like "Kill all humans" is kinda exaggerated. It's not like you have to limit real functional AI but you need to make it to be able to learn and make itself better, to gain "experiance" on it's own but in one role like auto-mechanic but not making it to get in a delema like:"Is it worth fixing cars over and over or create perfect car". Give it like half concience so AI wouldn't think "outside the box" and just carry on with his linear duty.

Source: youtube · AI Moral Status · 2017-02-25T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
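The four coded dimensions plus the timestamp map naturally onto a small record type. A minimal sketch; the label sets below are inferred only from the values visible on this page (plus `unclear`, which appears in the raw response), so the real codebook may define more labels:

```python
from dataclasses import dataclass
from typing import Literal

# Label sets inferred from values visible on this page; the actual
# codebook may include additional labels.
Responsibility = Literal["developer", "company", "ai_itself", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "contractualist", "unclear"]
Policy = Literal["none", "regulate", "ban", "unclear"]
Emotion = Literal["indifference", "fear", "approval", "outrage"]

@dataclass(frozen=True)
class CodingResult:
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-27T06:26:44.938723"
```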
Raw LLM Response
```json
[
  {"id":"ytc_UggjVwD_ZOZMU3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugj3D5WE37wArXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgjFrM_C2E3oy3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgiDVdRVVHdhxngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgjniVw29kIWbXgCoAEC","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UghkYCr9dpeSNngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugi4GssI78QMv3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugibwt-ypmpmDHgCoAEC","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgiPeApqlkTUO3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UghvPk48U9EYcHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
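The raw response covers a whole batch of comments as one JSON array, so recovering per-comment records means parsing the array and indexing it by `id`. A minimal sketch, assuming the model usually returns a well-formed array like the one above; the fence-stripping and the usage example are illustrative, not the pipeline's actual parser:

```python
import json

def parse_batch_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM batch response (a JSON array) into an id -> coding map."""
    # Tolerate a model that wraps its output in a ```json ... ``` fence.
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`").removeprefix("json").strip()
    items = json.loads(cleaned)
    if not isinstance(items, list):
        raise ValueError("expected a JSON array of coded comments")
    return {item["id"]: item for item in items}

# Hypothetical usage with one entry from the batch shown above:
raw = '[{"id":"ytc_UgiPeApqlkTUO3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}]'
codings = parse_batch_response(raw)
print(codings["ytc_UgiPeApqlkTUO3gCoAEC"]["policy"])  # -> "ban"
```

In practice a parse failure (truncated or malformed JSON) would typically trigger a retry of the batch rather than a crash; that retry logic is omitted here.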