Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
Random samples — click to inspect:

- "The only way AI generators would be able to properly replicate human creativity …" (ytc_UgyGf2YSc…)
- "True but it won't go away. It has already reached intelligence and we develop ou…" (ytr_Ugy4XtwWW…)
- "Liberal misguided suicidal empathy and compassion toward Muslims and Islam will …" (ytc_Ugz_FhTvb…)
- "@ Oh, that's a mistake! By your own logic OpenAI should be given internet access…" (ytr_UgzeCE4YD…)
- "It's fascinating to see how AI technology, like the one featured in the video, i…" (ytr_UgzU4VRsJ…)
- "I disagree for the following reasons. 1) Where were the artists when the cashier…" (ytc_Ugw7Vy8MU…)
- "Notice how everyone's job could be taken by A.I. but is stops at the board membe…" (ytc_UgyYZnvg1…)
- "The economy will crash they will just invest profit to more robots paying 0 tax …" (ytc_UgyNF48cz…)
Comment
No... because they will never be sentient, no matter how much we try to anthropomorphize them and the quirks in how they execute what we programmed them to do.
Do you need proof to this? Simply play certain simulation type games such as Rim world of dwarf fortress and you will see this in action whereby the AI npcs will do things that will create a "story" based on their actions and reactions. There is no sentience, but we see all the trappings of how humans would behave. So we write that story on their behalf. Yet underneath it all, all these little AIs are doing is behaving according to their programming and cannot exceed their parameters.
Platform: youtube · Video: AI Moral Status · Posted: 2017-02-24T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgiHxUzYsGI4e3gCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgijvnN8rxT23XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgiVw7y25qwdLXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgjXq74qnn4w1HgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UggQpIKTMpgtzngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghqIPRZeJxYD3gCoAEC","responsibility":"ai_itself","reasoning":"contractualist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UggHKar-b2b8k3gCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UghY0ZAiPP5dD3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UghbRTz2I1HUJHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghmIkWSpY9XpXgCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
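A raw response like the one above can be parsed and indexed by comment ID so any coded comment can be looked up. The sketch below is a minimal example, assuming the four dimensions shown on this page (responsibility, reasoning, policy, emotion); the allowed values are inferred from the samples here, not from an authoritative schema.

```python
import json

# Allowed values per coding dimension — inferred from the samples on this
# page, not an authoritative schema.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed", "unclear"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the (assumed) schema above.
    """
    records = json.loads(raw_response)
    by_id = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {value!r}")
        by_id[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return by_id

# One record copied from the raw response above.
raw = '''[
  {"id":"ytc_UggQpIKTMpgtzngCoAEC","responsibility":"none",
   "reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''
codings = index_codings(raw)
print(codings["ytc_UggQpIKTMpgtzngCoAEC"]["emotion"])  # indifference
```

Validating against an explicit value set catches the most common failure mode of structured LLM output: a record that parses as JSON but uses a label outside the codebook.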