Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As I wrote in December 2025, Artificial (Alien) Intelligence, Artificial General Intelligence, and Artificial Super Intelligence are disruptive technologies that are the biggest existential threats to humanity and its future. They are going to destroy life as we know it and displace millions from employment possibilities. Millions of college students with unsustainable college debt and no prospects of future employment will be the biggest losers. As AI/AGI/ASI grows, learns, duplicates itself, makes more "agents," and eventually surpasses human intelligence, it will wipe out millions of jobs and dehumanize societies, immiserate the masses, and create the end of civilization as we know it today. It is designed to replace human life with digital life. In time, the machines and robots will take over and eliminate "the eaters" (the useless class - that is all of us) who are unnecessary and costly to maintain on Universal Basic Income (UBI) (a form of communism). The AI 2027 report is a must-read, to better understand the history of AI/AGI/ASI. Man-made change is coming, and not for the good of humanity. To paraphrase Robert Oppenheimer, “Now (A)I am become Death, the destroyer of worlds.” A human-created dystopia for short-term profits by a select few...there are different roads to take. Do we want Superman or Supervillain? Let us hope the human designers of AI/AGI/ASI choose the right road...
Source: YouTube · Viral AI Reaction · 2026-02-18T04:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgyRFbS9rycjnSFTcf94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzBka9T0NmUuSOtzQ54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugx0HtVXtItdm8qH4QJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwSwocaC0JN2n9dk754AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy8TeW2sJPgCK_KIaZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwKorzFzym_lT6Y2Kd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwk-xArzTednfR6R654AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz7I5RFQKVffWsMZG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyKU3sfuRjc7b3DyER4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_Ugw46kv4OIJ1JWLLyQZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]