Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As concerning as AI can be, the truth is that misinformation, lying, editing, and simply normal framing, priming and construction have always been more than adequate to achieve the goal of misleading people. The problem is not that a new powerful tool exists, but that there has not historically been any meaningful regulation of any aspect of the system, and voters are not equipped to navigate the informational landscape, and never were.

Humans don't evaluate facts or arguments independently; they cannot do so, because they lack relevant expertise and knowledge. They determine whom to trust. However tenuous that methodology is in today's world, it always was. Again, it's never quite as new as you imagine.

The big problems in recent times have not been a new trend in information navigation, but rather feedback loops and cognitively available, compulsion-feeding sources of affirmation and conflict. That is to say, people can easily and repeatedly access confirmation of their views, affirmation of their righteousness, and examples of people they want to be angry at. This consistency and availability maintains a high level of emotional arousal, and it diminishes self-reflection and critical capacity, because people are, frankly, too busy engaging in conflict to evaluate their own positions. It makes people more defensive, more anxious, and less informed, because they are more guarded about competing views or values, seeing them as a threat to their strongly consolidated perspective. They also engage more strongly in enclave polarisation activities, such as purity testing within their peer groups, which are more rigorously selected for agreement.

The point is, the problem isn't more sophisticated tools for fooling people.
The problem is that there is very little epistemic authority that people who might disagree on topics can agree to trust, and people in general are less and less interested in attempting to gauge the trustworthiness of sources, because they are more concerned with agreement than validity. Basically, it doesn't matter how you deceive people, if they want to be deceived before you even get started.
Source: youtube · Viral AI Reaction · 2024-02-24T00:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxFCNU8_n4UJKt1WtN4AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzFOtNAGw-VetBWLgp4AaABAg", "responsibility": "none",       "reasoning": "mixed",            "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxV2MsBem071PaXUBV4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgyK5eKapBBzm4v4yA94AaABAg", "responsibility": "developer",  "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgykqKt9-JJTo56xCu14AaABAg", "responsibility": "developer",  "reasoning": "virtue",           "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgxFPPmkPBDpqfLnzwN4AaABAg", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgxV4xRAn2lZjaL6Dfd4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugxt_0B13JpIypJTUmp4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgywDpH7dInIk-Vs3Hl4AaABAg", "responsibility": "company",    "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxzNlx_3ayEn8tcdCt4AaABAg", "responsibility": "distributed", "reasoning": "mixed",           "policy": "regulate",  "emotion": "resignation"}
]
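The raw response is a JSON array of per-comment codings, each carrying a comment id and four categorical dimensions. A minimal sketch of how such a response might be parsed and looked up by id — assuming the schema shown above; the single-entry `raw` string here is an illustrative stand-in for the full model output:

```python
import json

# Illustrative stand-in for the raw LLM response: a JSON array of
# per-comment codings (id plus four categorical dimensions).
raw = """[
  {"id": "ytc_UgxzNlx_3ayEn8tcdCt4AaABAg",
   "responsibility": "distributed",
   "reasoning": "mixed",
   "policy": "regulate",
   "emotion": "resignation"}
]"""

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coding for one comment.
code = codings["ytc_UgxzNlx_3ayEn8tcdCt4AaABAg"]
print(code["responsibility"], code["policy"])  # distributed regulate
```

Note that the entry shown matches the Coding Result table above (distributed / mixed / regulate / resignation), which is how a per-comment result view would typically be populated from the raw array.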