Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You want to ban government regulation by the hands of the government. Of course it didn't work. Let's do a quick analysis of the measure from a libertarian perspective. Banning innovations based on fear or morality is a worst thing in the world. The system doesn't have a wider picture of what and why to do. Every regulation is attacking freedom and rights of a person. The thing is, you can actually regulate AI without regulating AI. Ban porn fakes instead of porn deepfakes, even if they are photoshopped, not generated. This will have the same effect, but it exploits existing laws rather than creating new ones. If we start banning research in the field of AGI, then we will simply stop progress without any reason to do so. It is very important for Trump to keep up technologically with China. Uncensored, open source LLMs are extremely important for ordinary people, because the corporate environment is already overly regulated and filtered. On the other hand, banning regulations at the state level takes away states' rights. This is not very consistent for Trump because he said that the repeal of Roe v. Wade expands the rights of the states. The ideal path for the US is to reduce the role of the federal government and expand the states rights. This eventually leads to the freer society that the founding fathers wanted. So I agree with Marjorie on this completely. Therefore, criticism of the law is absolutely valid. However, I would most likely be against removing this measure, simply because any regulation is an absolute evil that kills your personal freedom. People who advocate for the government to break into your private home are the enemies of humanity. A federal ban on AI regulation, but the absence of a ban at the state level is something that I would support with both hands without any "but", however.
youtube AI Governance 2025-07-05T15:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgwFQroNBCLfJX2jp2h4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgywByN6wbnSMQVYGud4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzIOKRVfvShQuIivS54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwrCxgzIP6dlOrb02l4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzfXl72U_FOktRQayZ4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugx43fR7t9uyJg0evfZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwOAr2Q1Qckgs1XvbR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwpmtEvnHg6JDUmSvd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugw_Rn31X76B44SfoIJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyEPwkfYGHS6FyMsZl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"indifference"})
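One plausible reason every dimension was coded "unclear" is that the raw response above is not valid JSON: the array closes with `)` instead of `]` (whether that came from the model or from a copy artifact is not shown here). A strict `json.loads` call on such a string raises, and a pipeline that swallows the error would fall back to default values. The sketch below is a hypothetical tolerant parser, not the tool's actual implementation; the function name and fallback behavior are assumptions for illustration.

```python
import json


def parse_llm_batch(raw: str) -> list[dict]:
    """Parse a batch-coding response, repairing a stray closing paren.

    Hypothetical helper: the real pipeline's parser is not shown in
    this view. Returns [] on unrecoverable input, which a caller could
    map to "unclear" for every coding dimension.
    """
    text = raw.strip()
    # Repair the failure mode seen above: an array closed with ')'.
    if text.startswith("[") and text.endswith(")"):
        text = text[:-1] + "]"
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return []


# Reproduce the malformed shape with a shortened example:
raw = '[{"id":"ytc_abc","policy":"ban"})'  # note the stray ')'
rows = parse_llm_batch(raw)
```

Even with a repair step like this, a row can still fail to match: if the comment's id is absent from the returned array, the coder has nothing to join on and the dimensions stay unclear regardless.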