Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have a lot of issues with this video but I'll focus on one point: "We are at the precipice of AI automating further AI research" By what means? If by AI we are talking about specifically LLMs, an LLM is not at all capable of "coding itself." All LLMs do is try to predict the most likely next token based on its training data, according to the prompt given. By definition LLMs can only spit out average answers because they operate based on probability. This means that LLMs are more like the sum total of all human knowledge rather than some kind of thinking agent. There is no magic here and there is certainly no reason to think some kind of "general intelligence" will arise if we throw more data at it or tune how things are weighted. The only hope humans have at creating some kind of general intelligence that actually would be capable of "coding itself" would be if we came up with a new model, and there are people who are working on this, but there's no reason to think that we are somehow at the "precipice" of AI improvement becoming exponential and destroying us or whatever.
youtube · AI Governance · 2025-08-27T15:5… · ♥ 8
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_Ugwd90MzgToQMXn-1tN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz_Q9TMlGfz7tOvZXt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugwt_XufW9YNHen4OwZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxGXtJQbFJa7fAAtF94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyCYeW-0dcc1esbUXp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"} ]