Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Alexander_Kale The fact that you think traditional programmers are a good source about AI and its risks tells me you know nothing about modern AI. (And the fact that you refer to LLMs reductively as a text-completion algorithm. I mean, so are you, but that doesn't exactly describe your whole deal. And both you and LLMs are capable at performing well at novel, complex tasks regardless of your limitations.) People do this all the time. They invent experts in their head and use those to justify their intuitions. But then someone comes along and tells you that most actual leading experts are extremely concerned about this thing, and you decide that this is not relevant information, because you would like to not be concerned. Dave is all about deferring to experts. He has done so here as well. I wish he wouldn't have quoted the CEOs so much, because people don't trust them. But the lead researchers at those labs say the same things! As do the most cited scientists in the field, and both co-authors of the standard textbook on AI. In fact, half of all published AI researchers say that there is a significant chance (>=10%) of human extinction from AI. ("Thousands of AI Authors on the Future of AI")
youtube · AI Governance · 2025-08-28T21:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytr_UgxfjUXo_FR_ikQV_O94AaABAg.AMIiksp6iTHAMJxEG3e7hj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgxfjUXo_FR_ikQV_O94AaABAg.AMIiksp6iTHAMNyVp0rjJM","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"}, {"id":"ytr_UgyC-ZaiZH7aiakI8ZV4AaABAg.AMIiQz-2BE-AMJL24pN8TG","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgyC-ZaiZH7aiakI8ZV4AaABAg.AMIiQz-2BE-AMNwVYNoUgv","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytr_UgzxhyJPjFMsVi_d8wx4AaABAg.AMIhqBOAtBRAMIjv6JBEN7","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgyyL7XV_MC4trFI6aV4AaABAg.AMIguPtLmJuANYjJ5A_u99","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"}, {"id":"ytr_UgxctM15P1ZgsaHX4LV4AaABAg.AMIeP8YK4mIAMIlxAEmQh4","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgwG6Iv7Xr-9JuDYNxx4AaABAg.AMIdteL9JxmAMIgNHX0ab1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgwJ4nifqmzvJuYoXj94AaABAg.AMIdhKGxOgYAMSc9sARn7t","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugx5YCRGCoCkjdOM2m14AaABAg.AMId3fhlf7CAMK71jVy4zP","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]