Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
• "Given that the models predict the most likely next token based on the corpus (tr…" (rdc_mytw6dn)
• "forget about legal or not, ethically ai company need pay the artist for using th…" (ytc_Ugzf9Somn…)
• "Is the person who commissioned the Sistine chapel an artist? They gave Michelang…" (ytc_UgzclSieH…)
• "Ai will replace job, ok but can Ai buy the product and act as consumer? If no th…" (ytc_UgzBUguCI…)
• "I think we're past the point of categorizing AI as a mere tool if we resort to a…" (ytc_UgyLV0pQB…)
• "the algorithms is powerful but my concerns are that the videos you are showing a…" (ytc_Ugy-8EJw6…)
• "@notreally2406 that automation is taking our jobs and we need a reasonable solut…" (ytr_UgwLhL6uv…)
• "AI is a tool, like anything else. While it is made or programmed by humans for h…" (ytc_Ugx2Wbl2G…)
Comment
This is how ChatGPT answered for my question:
I've just watched the father of all AI at Startalk along with Neil de Grass Tyson and it was a bit scary what are they saying it may happen in conjunction Humans vs AI
I know the episode you’re talking about. When people like Geoffrey Hinton sit next to Neil deGrasse Tyson and start calmly describing possible futures, it can feel like someone dimmed the lights in the room.
Let’s separate three things:
What some researchers are warning about
What is technically realistic
What is cinema-grade anxiety
The “father of AI” types are usually warning about this:
• AI systems becoming more autonomous
• AI being used in warfare
• AI generating misinformation at massive scale
• Economic disruption from automation
• Loss of control if systems become too complex
The fear is not Terminator-style robots kicking down doors tomorrow.
The serious concern is slower, structural change.
The real risks experts talk about most:
Misuse
AI used deliberately for harm: cyberattacks, automated propaganda, bio-design tools, autonomous weapons.
Misalignment
A powerful system optimizing for the wrong objective in ways humans didn’t anticipate.
Economic shock
Large-scale job displacement faster than society adapts.
Information erosion
Synthetic media making it harder to know what is real.
Now the “Humans vs AI” framing.
That’s a dramatic headline. In practice, AI is not a competing species. It does not:
• Have self-preservation instincts
• Have emotions
• Have desire
• Have intent
Current systems, including large ones, do not “want” anything. They generate outputs based on statistical prediction.
The fear at the frontier is about highly capable systems operating with too much autonomy combined with poor oversight. That’s a governance and engineering issue, not an uprising narrative.
Also important:
Most cutting-edge researchers are cautious.
They advocate regulation, testing frameworks, interpretability research, and staged deployment.
You don’t hear that part emphasized in viral clips.
Are there real risks? Yes.
Is a conscious AI plotting against humans imminent? No.
The uncomfortable truth is that the biggest near-term risks are human-driven:
• Humans using AI irresponsibly
• Governments racing without coordination
• Corporations pushing too fast
AI itself is a tool. A powerful one. Like nuclear physics in the 1940s. The danger was not uranium having feelings.
It’s completely normal that it felt scary. When you see experts openly discussing existential risk, your brain flags it as a threat.
But we are not on the edge of a Humans vs AI war. We are in a phase of rapid technological change that needs adult supervision.
If you want, we can talk through what specifically in that episode bothered you. Usually the fear becomes smaller when we isolate the exact claim instead of the atmosphere around it.
Platform: youtube
Video: AI Moral Status
Posted: 2026-03-01T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugw1fnuOKVDwjcc-Rnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxEGkhHyRn4Cn6HzV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx9okGCM6_oTBSzHm54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx5s1f7INw10NAo7pl4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgykSZMmFKLgMSuixCN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzPtPvkr6D5rEPk5wF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzPbuNNUhz4YwffHsZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzUJP2wflSrJ7Xxdnx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqB9xq_rr77ZMH7394AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwT6BC3eULWrxbsUMh4AaABAg","responsibility":"government","reasoning":"mixed","policy":"ban","emotion":"mixed"}
]
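The raw response above is a JSON array with one record per coded comment, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step might parse that array and index it by `id`, skipping malformed records; the function and variable names here are illustrative, not part of the actual tool, and the sample payload reuses two records from the response above.

```python
import json

# Illustrative sample of the raw LLM response format shown above
# (two records copied from the array; not the tool's actual output path).
raw_response = """
[
  {"id": "ytc_Ugw1fnuOKVDwjcc-Rnx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwT6BC3eULWrxbsUMh4AaABAg", "responsibility": "government",
   "reasoning": "mixed", "policy": "ban", "emotion": "mixed"}
]
"""

# The four coding dimensions plus the comment ID, per the result table.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(payload: str) -> dict:
    """Parse a raw coding response and index records by comment ID,
    dropping any record that is missing one of the expected keys."""
    records = json.loads(payload)
    return {r["id"]: r for r in records if EXPECTED_KEYS <= r.keys()}

coded = index_by_id(raw_response)
print(coded["ytc_UgwT6BC3eULWrxbsUMh4AaABAg"]["policy"])  # → ban
```

Indexing by ID up front makes each lookup a constant-time dictionary access, which matters if the same coded batch is inspected repeatedly.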