Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Honestly as a person who just got a drawing tablet, I couldn't care any less abo…" — ytc_Ugzs3VUHv…
- "AI must program not to harm any human , just only can act like cope to catch hum…" — ytc_UgzFB66Fw…
- "in all honesty.. i do not care.. people talk about Ai being so dangerous yet we …" — ytc_Ugx2u0BiZ…
- "For me AI killing jobs is most important concern. Its happening and this thing i…" — ytc_UgzHt3Rj4…
- "Ex falso quod libet: anything can follow from a falsehood. Simply put, A.I. prob…" — ytc_UgzhnH7Pl…
- "Asked AI how to get good at Bedwars. Was told to "Just lock in fr". Turns out m…" — ytc_UgxdQg6ZH…
- "I just typed in the youtube search "AI to control the narrative" this is the ea…" — ytc_UgxzYJCJ7…
- "dont care, ai is cheaper and a lot faster even tho is a little erratical, with g…" — ytc_Ugwnm91X9…
Comment
I just asked that question to ChatGPT. Short answer: no — I wasn’t directly “fed” or trained on specific books by Neil deGrasse Tyson (or any single author in a targeted way).
Here’s how it actually works:
How my training works
I was trained on a mixture of data, including:
Publicly available text (websites, articles, educational content)
Licensed data
Data created by human trainers
That mix includes general knowledge about science, astronomy, and physics, which people like Neil deGrasse Tyson also talk about—but not his books specifically as a defined dataset.
So why I sound similar sometimes
If I explain something like black holes or the universe in a way that feels familiar, it’s because:
I learned common scientific explanations used across many sources
Popular communicators like Tyson, Carl Sagan, etc., helped shape how science is explained publicly
Important distinction
❌ Not trained on: “Here are Tyson’s books—learn them”
✅ Trained on: Broad patterns of language and knowledge, some of which overlap with ideas he explains
If you want, I can explain things in his style (more cosmic, storytelling, philosophical), or compare how he explains concepts vs. others.
youtube · AI Governance · 2026-04-02T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugx6s8l_YKBo6DzuBul4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzy1xOCST22KeVXTgN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxPIEHyDtrR_IogF_h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwIXvCKS6itFMarjoh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxxt47s-7jnR1jmsH14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx2PG6vzJBceiOmqZd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwFHYg7HjlGd3gQHvx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwYB8HVG8MlmBWRbbp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwcpbO0MTbv1hGarkV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwAaaLUXegpk8aqXVh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}]
```
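A raw response like the one above can be parsed and sanity-checked before the codes are stored. A minimal sketch in Python, assuming the four dimensions from the coding-result table and the value sets observed in the records (the real codebook may allow more categories):

```python
import json

# Allowed values per coding dimension. These sets are assumptions
# inferred from the records above, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "user", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "resignation", "unclear"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dataset appear to use a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present with an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records that fail the check can then be queued for re-coding rather than silently written with out-of-vocabulary labels.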