Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Although I have recently started using AI features when I Google things (and generally like the results), I doubt that AI is going to change the world much before Real Stupidity (RS) puts an end to it. THAT is the thing that scares me the most, not AI. Just consider, American voters elected Donald Trump president, twice. Fool me once . . . To understand how this might happen, one only needs to understand the intelligence bell curve. The peak of the curve is set at 100 IQ by definition and almost 70% of folks fall between 85 and 115 IQ. So a sizable majority of the human population qualifies as unintelligent. In a democracy, every once in a while, they get together and elect a president, making a Really Stupid choice en masse. But the stupidity of their choice almost never fails to reveal itself during his first term in office (During a public briefing at the onset of the pandemic, DT actually suggested injecting a disinfectant - perhaps bleach - to kill COVID-19), so it is really unlikely that he will get elected again. The dumbass majority of voters actually did make this Really Stupid choice twice this century, narrowly electing George Bush in 2000. But he got re-elected in 2004 because of the two wars he started (Afghanistan and Iraq), and the voters traditionally don't like to change presidents in the midst of conflicts, so there is something of an excuse for them in that case. But there is NO excuse for electing DT twice, NONE. At this writing, he has moved the End of the World WW3 clock forward considerably and there seem to be no checks and balances in play anytime soon. The only real hope we have is the continuing onset of dementia.
youtube AI Governance 2026-03-09T08:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugx_IvoI8dg3alkpRjB4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugwz1ABRC1Jp2Jco77V4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugx8t4_ACPd2zUnjt6Z4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgzEJA2OBIxuRlukioF4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgxS0BUx1L0U44TH1JN4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugw4ndNk_vjfORSCa5F4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgwEc2TIrHij7Th97u14AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugw7sKeihP-ZKne8qdR4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgwL5_HAYRUl2JLhBs14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgwAcGY6DOdNFRuTmph4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"}
]
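A raw response in this shape can be mapped back to a single comment's coding with a few lines of standard-library Python. This is a minimal sketch, assuming only the JSON format shown above (an array of objects keyed by "id"); the function name `codes_for` is illustrative, not part of any tool.

```python
import json

# Raw LLM response in the format shown above (truncated to two
# entries for brevity; the real response carries one object per comment).
raw = '''[
  {"id": "ytc_UgwAcGY6DOdNFRuTmph4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx_IvoI8dg3alkpRjB4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]'''

def codes_for(raw_response: str, comment_id: str) -> dict:
    """Return the coding record for one comment id.

    Raises KeyError if the id is absent from the response,
    which flags a comment the model skipped.
    """
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

# The comment shown above was coded under this id.
codes = codes_for(raw, "ytc_UgwAcGY6DOdNFRuTmph4AaABAg")
print(codes["emotion"])  # -> fear
```

Building the `by_id` dict rather than scanning the list makes repeated lookups cheap and turns a missing or duplicated comment id into an easy-to-detect condition.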