Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_Ugypdg3s3…: "Great topic. We have chemistry, biology, thermodynamics, physics and the other s…"
- ytc_UgzrtBuDz…: "AI faked videos and music is evil. It's evil because they are presented as real …"
- ytc_Ugx81y9hm…: "This sounds like the Short Circuit movie from 1986 about a robot being alive. Th…"
- ytc_Ugw5Rdkht…: "The worst people I have noticed in the AI sub reddits, are the ones who have con…"
- ytc_UgxHFrkkm…: "The courts ruling that companies have to work solely for the benefit of the owne…"
- ytc_UgiHxUzYs…: "I could definitely see BMO leading the robot revolution and purging humanity whi…"
- ytc_Ugzw6-JOR…: "People won't self erase because Ai will tell them to do, they will self erase be…"
- ytc_UgynB4tWH…: "Around 30mins into this podcast it sounded so doomsday in a nonnegotiable sense …"
Comment
@Alexander_Kale The fact that you think traditional programmers are a good source about AI and its risks tells me you know nothing about modern AI. (And the fact that you refer to LLMs reductively as a text-completion algorithm. I mean, so are you, but that doesn't exactly describe your whole deal. And both you and LLMs are capable at performing well at novel, complex tasks regardless of your limitations.)
People do this all the time. They invent experts in their head and use those to justify their intuitions. But then someone comes along and tells you that most actual leading experts are extremely concerned about this thing, and you decide that this is not relevant information, because you would like to not be concerned.
Dave is all about deferring to experts. He has done so here as well. I wish he wouldn't have quoted the CEOs so much, because people don't trust them. But the lead researchers at those labs say the same things! As do the most cited scientists in the field, and both co-authors of the standard textbook on AI. In fact, half of all published AI researchers say that there is a significant chance (>=10%) of human extinction from AI. ("Thousands of AI Authors on the Future of AI")
youtube · AI Governance · 2025-08-28T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgxfjUXo_FR_ikQV_O94AaABAg.AMIiksp6iTHAMJxEG3e7hj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxfjUXo_FR_ikQV_O94AaABAg.AMIiksp6iTHAMNyVp0rjJM","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgyC-ZaiZH7aiakI8ZV4AaABAg.AMIiQz-2BE-AMJL24pN8TG","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyC-ZaiZH7aiakI8ZV4AaABAg.AMIiQz-2BE-AMNwVYNoUgv","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzxhyJPjFMsVi_d8wx4AaABAg.AMIhqBOAtBRAMIjv6JBEN7","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyyL7XV_MC4trFI6aV4AaABAg.AMIguPtLmJuANYjJ5A_u99","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgxctM15P1ZgsaHX4LV4AaABAg.AMIeP8YK4mIAMIlxAEmQh4","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwG6Iv7Xr-9JuDYNxx4AaABAg.AMIdteL9JxmAMIgNHX0ab1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwJ4nifqmzvJuYoXj94AaABAg.AMIdhKGxOgYAMSc9sARn7t","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugx5YCRGCoCkjdOM2m14AaABAg.AMId3fhlf7CAMK71jVy4zP","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
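The raw response above is a JSON array of per-comment codes along the four dimensions in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a payload could be parsed and indexed for the comment-ID lookup offered at the top of the page — the function name, variable names, and the shortened example IDs here are illustrative, not part of the tool itself:

```python
import json

# Illustrative payload in the same shape as the raw LLM response above
# (IDs shortened for readability; real IDs are full ytr_… identifiers).
raw_response = """
[
  {"id": "ytr_example1", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_example2", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"}
]
"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(payload: str) -> dict:
    """Parse a raw coding response and index records by comment ID."""
    records = json.loads(payload)
    index = {}
    for rec in records:
        if "id" not in rec:
            continue  # skip malformed rows without an ID
        # Keep only the expected dimensions; default missing ones to "unclear".
        index[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return index

codes = index_codes(raw_response)
print(codes["ytr_example2"]["emotion"])  # → outrage
```

Indexing by the `id` field mirrors the "Look up by comment ID" control: once parsed, each comment's coded dimensions can be fetched in constant time.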