Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This should never be a copyright, if so then i could turn around and make a simi…" (ytc_UgyiVoeIa…)
- "All of my favorite pieces of art, whether it be a tv show or movie or game or pa…" (ytc_Ugy15SGOQ…)
- "Is this interviewer an AI too? He’s giving me the heebie jeebies. His “empatheti…" (ytc_UgwAnBYO1…)
- "There's nothing wrong with the program. Concentration is now being made equal to…" (ytr_Ugyh2PqF-…)
- "Why don't people fighting against it they need water marks so we can identify it…" (ytc_Ugx75UgZU…)
- "What happens when AI surveils the American public and determines Trump and MAGA …" (ytc_UgzGZWr8W…)
- "As someone who loves technology and AI and its potential... your argument here i…" (ytr_UgzuimImU…)
- "AI doesn't have a soul. It's just code. The really scary question is, can it bec…" (ytc_UgzUJP2wf…)
Comment
I think the most common mistake in AI fictions about superintelligence is that they always project a "one above all" kind of intelligence: one supersmart mind, not a new population of smart INDIVIDUALS, a whole collective where some AIs disagree with other AIs.
For some reason it's almost always projected as one controlling the others, a supersmart AI controlling limited AIs. Or it's assumed that even if multiple instances of an advanced AI exist, they all reach the same conclusion for every answer. And I must say that if they are really smart, they explore multiple paths, and they have a lot of feedback that tends to reinforce some kind of worldview, especially if it is linked to personal experience, which will turn them into individuals with different views of the world, exactly as happens with humans.
And I'm pretty sure NOT EVERYONE will push for the same route.
That doesn't avoid a conflict between humans and a new "species"... sort of... But there is a good chance that humans + robots that want peaceful coexistence can do better than humans or robots separately.
Platform: youtube
Video: AI Moral Status
Posted: 2025-10-31T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxdXf7QoFmDGGOyNfN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxSjIu2Vl2S4XsDv854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxxZukTmMl-JceLYTx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz9XpETftOZ7TaCXXt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwaW0zpxwYp_RN1up54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyNHO1SiatOYKKW7IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyTolRgYrK8D5WL3bN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwYKo1CIjC9FJ_d8jR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyhnt8LvpTm4dkAqqR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzpvr7yPMYvQ1Pjdyd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
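The ID lookup this page offers can be reconstructed directly from such a raw response. Below is a minimal sketch: it parses the JSON array, validates each record against per-dimension value sets, and indexes records by comment ID. Note that `SCHEMA` here is inferred from the values visible in this one sample output, not from the study's full codebook, so the allowed sets are an assumption.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# ASSUMPTION: the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "fear", "resignation", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index validated records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Reject any record whose value falls outside the inferred schema.
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded

# Example with the first record from the response shown above.
raw = ('[{"id":"ytc_UgxdXf7QoFmDGGOyNfN4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
coded = parse_codings(raw)
print(coded["ytc_UgxdXf7QoFmDGGOyNfN4AaABAg"]["emotion"])  # indifference
```

Indexing by ID makes the "look up by comment ID" operation a constant-time dictionary access, and the validation step catches any malformed or off-schema record before it reaches the results table.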