Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
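Under the hood, a lookup like this presumably scans the stored raw responses for the batch containing the requested ID. A minimal sketch, assuming batches are saved as one JSON array per file in a raw_responses/ directory (both the directory layout and the function name are assumptions, not the tool's actual storage):

```python
import json
from pathlib import Path

def find_raw_response(comment_id: str, responses_dir: str = "raw_responses"):
    """Return (path, record) for the first stored batch containing
    comment_id, or None if no stored response mentions it."""
    for path in sorted(Path(responses_dir).glob("*.json")):
        # Each file is assumed to hold one JSON array of coded records.
        for record in json.loads(path.read_text()):
            if record.get("id") == comment_id:
                return path, record
    return None

# e.g. find_raw_response("ytc_UgwFZwLtT1p2eGKtEON4AaABAg")
```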
Random samples (click to inspect):
- "If AI can't feel emotions why does it sound nervous. It even stutters around 7:2…" (ytc_Ugyl8wiUH…)
- "The theory is excellent. I am curious to see where they fall on the nations acad…" (ytc_Ugy0LAOo7…)
- "Its funny how The Godfather of A.I somehow coded in for the A.I not to give you …" (ytc_UgykSuvAr…)
- "She just proves that racial differences are more than skin tone since the algori…" (ytc_UggvCvPUk…)
- "Thank you for your positive feedback! We're glad you enjoyed the video. Remember…" (ytr_UgxW0CYGb…)
- "so the simulation model assumes humanity makes it past this AI dystopia, where t…" (ytc_UgyZOBb9l…)
- "Step 1: create AI to replace workers. Step 2: fire now unnecessary workers to sa…" (ytc_Ugw9uFp7q…)
- "AI isn’t a binary threat-or-saviour issue for software teams. Its most reliable …" (ytc_Ugz752BLm…)
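The list above is a uniform random draw from the coded comments, the usual way to spot-check coding quality. A minimal sketch of that behavior, assuming the comments are held as a list of dicts (a hypothetical structure, not the tool's actual data model):

```python
import random

def random_samples(comments: list[dict], k: int = 8) -> list[dict]:
    """Draw k coded comments uniformly at random for manual inspection."""
    return random.sample(comments, k=min(k, len(comments)))
```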
Comment
This is the stupidest way to ask this question. The AI hype machine went full blast as "scale" (and therefore massive investment) became the logic in silicon valley. They needed to justify the 2 trillion in investment in data centres - hence, the "be terrified of my super powerful invention" narrative which gripped media reporting on "AI" in recent years. But even they are now admitting that LLMs and other current AI models will never be "super intelligences" (AGI or ASI). At least not in the way that the question being debated here presumes. Nevertheless, "public intellectuals" like Harari, Zizek and Fry continue to flog their opinions all over the internet and at any event that will host them. They are discussing something they seem to fundamentally misunderstand and hypothesising about a future that is incredibly unlikely.
youtube · AI Governance · 2025-07-21T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
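Each coded comment gets a value on four closed dimensions. A small validation sketch using only the values visible on this page; the project's full codebook may define additional categories:

```python
# Allowed values inferred from the codes shown on this page (assumption:
# the actual codebook may be larger).
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of schema violations for one coded record."""
    errors = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            errors.append(f"{dim}: unexpected value {value!r}")
    return errors
```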
Raw LLM Response
```json
[
  {"id":"ytc_UgwFZwLtT1p2eGKtEON4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzmg3Eb2I3PZAeON394AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy3Xe-Zvhu2OJoXHex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy7YAZ2pX0O2Suh6mt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxhvcsItIdMOxRO-Jt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz3GdOhDzXwHZQ5TSp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwTM7EwXsmg0AjdfYN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxQ_gKuIf-KUriojwF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzVOv8x8DbaRfHV6iZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwUFP2e04fE-zGe_x54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
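The model codes comments in batches and returns one JSON array per call, so displaying a single comment's coding result means parsing the array and selecting the matching id. A minimal sketch of that step, grounded in the response format shown above (the function name is illustrative):

```python
import json

def coding_for(raw_response: str, comment_id: str) -> dict | None:
    """Parse a batch response (a JSON array of coded records)
    and return the record with the matching id, if any."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None
```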