Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My issue with the truest “weapon” of super intelligence simply exposing us to too much, too fast, too unfiltered…OVERSTIMULATION. It might not be novel viruses, nano tech or taking our jobs that will kill us. Super intelligence isn’t dangerous because of “bad intent.” It’s dangerous because biology itself isn’t built to scale. The hardware fries. And if that stress is amplified outward—through an entire species—you don’t need wars or evil AI to wipe us out. Overstimulation alone is enough. How? By pushing us into adrenal failure, draining neurotransmitters, and destabilizing electrolytes through endocrine overload. I see it every day in my practice: almost every person I test is chemically imbalanced, with adrenals fatigued and cortisol lower than optimal—especially in the mornings. This was not the case 25 years ago when I started practicing. We are all living in some stage of overstimulation, and it’s slowly killing us. If that pace accelerates, we won’t be able to cope or adapt fast enough. That is the physiological bottleneck in regard to humans and AI. No need for novel viruses, nanotech, or atomic bombs—we may simply be overstimulated to death.
youtube AI Governance 2025-09-14T04:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugylb9nQwBDUyUsoEfR4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgzGVWbRCPV5pLcQLCh4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgyhUx98EEnKWtPIg1p4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwZW2iUmpLMVIv8hWZ4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxFVH29PplrH1TnW554AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzAxnC59dVZaxtvynZ4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgypaiNF7ClQVNYpOqV4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugwa7IsdI9SD9DO_AHR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwZhnjcn_Xrjt5bGjx4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugz3pbDCfPaY1yQyePl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "mixed"}
]