Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As someone who has worked on AI, both with training data and the actual coding side, people need to understand that Professor Dave's argument here is mostly accurate. We know the mathematics behind Neural Networks, but the modern ones are so complex that we have trouble understanding the inner workings. The connections are always a black box, but we can make rudimentary predictions since it's a bunch of loss calculations. I will, however, say that without significant innovation, ASI probably won't exist. Obviously the current methods are not going to lead to ASI. I don't think Dave is arguing that, though. He's saying that all of these companies are pushing for ASI (which could lead to the innovation necessary) without taking the necessary precautions. What makes AI dangerous isn't sentience at all. Sentience doesn't matter here if the result is the same. Neural Networks have to be trained on some set of data, and that data is all human data. Of course, then, it follows that these ANNs will exhibit certain human behaviors, not because they have sentience, but because they were trained on human data. This is inescapable for any sufficiently robust model. I will say that the experiments he listed are on the extreme side. People are testing the limits of the models. It can be reliably replicated, though, and it is still a serious risk. Now, this is not to say AI will end the world. I don't think that will be the case, but it certainly has the POTENTIAL to, and that's enough to put up guard rails and make sure we know what we're doing before speeding along. This is not something we need to do as a country, but as human beings.
Source: YouTube · AI Governance · 2025-09-04T20:1… · ♥ 56
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          approval
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzID97z5AXW9hUDkNx4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzouOxfVwnVA8KakHJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxHrNU_VlcjCvhXhQF4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none",    "emotion": "outrage"},
  {"id": "ytc_UgyyugTLFhoUquijo_l4AaABAg", "responsibility": "company",   "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyGsj8u5Sny-UK2U914AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"}
]
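The raw response is plain JSON, so the coding for any comment can be checked programmatically. A minimal sketch in Python: the array and the field names come verbatim from the response above, and the lookup assumes that `ytc_UgzouOxfVwnVA8KakHJ4AaABAg` is the id of the comment shown (its values match the Coding Result table).

```python
import json

# Raw model output, copied verbatim from the response above.
raw = """[
  {"id":"ytc_UgzID97z5AXW9hUDkNx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzouOxfVwnVA8KakHJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxHrNU_VlcjCvhXhQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyugTLFhoUquijo_l4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyGsj8u5Sny-UK2U914AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]"""

rows = json.loads(raw)

# Index the batch by comment id for O(1) lookup.
by_id = {row["id"]: row for row in rows}

# Pull the coding for the comment shown above.
coded = by_id["ytc_UgzouOxfVwnVA8KakHJ4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer consequentialist unclear approval
```

The same pattern scales to a full export: load each batch response, merge the `by_id` dictionaries, and join against the comment metadata on `id`.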