Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples, each shown with its comment ID:

- "LilyJay... 😂... So stupid.. Are you trying to say ChatGPT decision on whether Je…" (ytc_Ugy9p1t96…)
- "You can feel the sinister nonsense behind A.I. the world is past its tipping poi…" (ytc_Ugys3E5bf…)
- "Indeed. There won't be addiction in the future. There will only be AI and it wil…" (ytr_Ugx_O5tLG…)
- "God bless Bernie Sanders for caring and fighting for the truth. AI is not human …" (ytc_UgzWiHd9A…)
- "Most likely, AI is the great filter. It appears too benign at first to cause fea…" (ytc_UgzY00DiO…)
- "Ai doesn’t replace jobs directly but indirectly. It makes experts that actually …" (rdc_n7z2igr)
- "Like all ai slop, you notice how shitty it looks if you take only a few seconds.…" (ytc_UgydgISbE…)
- "Why would you program a robot that itself thinks it has consciousness? We don't …" (ytc_UgjS4PQpH…)
Comment
Brilliant discussion. Question: Is a "human in the loop" required if AI develops the ability to feel or experience emotions and is there a likelihood that AI will develop an ability to understand and feel emotions? If AI can intentionally present itself as less capable or informed, is the development of emotions (such as fear) a logical or potential outcome?
Source: youtube | Video: AI Moral Status | Posted: 2026-03-03T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwgDFT8k8IzqiZLoYZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyzylnXaFMmtrLGOrt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwieH700oj6qjfbf9Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyysel8LcOQ2RhaNCF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzYicYEDPPUtpkOAbN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyHURv3w59WU-3rReV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz11qCRxfFx2T0zkgN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzR0Vr7zpuJigXeM8x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwzDg52y6HzYuGs6XV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgympM97wvWODhcOf414AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
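The "look up by comment ID" flow above can be sketched in Python: parse the raw model output as JSON and key each coded record by its `id`. This is a minimal illustration, not the tool's actual implementation; the `index_by_id` helper is a hypothetical name, and the sample record is copied from the response shown above.

```python
import json

# Hypothetical sketch: one record copied verbatim from the raw
# response above, standing in for a full batch.
raw_response = """
[
  {"id": "ytc_UgzYicYEDPPUtpkOAbN4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "approval"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and key each coded record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_id(raw_response)
record = codes["ytc_UgzYicYEDPPUtpkOAbN4AaABAg"]
print(record["emotion"])  # approval
```

Indexing once and doing dictionary lookups keeps retrieval constant-time, which matters if the batch holds many coded comments rather than the ten shown here.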