Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My main question still is: assuming even a worse case scenario where AI is currently killing everyone, what are the odds humans can “unplug” it or course correct? I’d assume it wouldn’t be an instantaneous extinction event, so I’m wondering what sign posts we could use along the way to do tests and re-evaluate.
youtube · AI Governance · 2025-10-22T18:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyxdcOY8zUdmDg5jrV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxWSkgotwHClYZDPgl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxXgB_zFEOi_ATYcpJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzFlsPUan-ehRncJhh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxBp1j-BneR15WBlqt4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy0lJHC2Fyg-MXf0CN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgylwochodUBHsWmVJt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzRQqwu1YzokPBw5dR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzTV-8pA55cl2O7bDl4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy-f2bbSIqaqseDGkB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
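Before matching a record back to its comment, it is worth validating that every coded value falls within the codebook. The sketch below shows one way to do this in Python; the allowed values per dimension are inferred only from the responses shown above, so the real codebook may define additional categories (the `SCHEMA` dict and `validate` helper are illustrative names, not part of any tool shown here).

```python
import json

# Two records from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id": "ytc_UgyxdcOY8zUdmDg5jrV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxWSkgotwHClYZDPgl4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]'''

# Allowed values per dimension, inferred from the visible responses;
# the actual codebook may contain more categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "mixed", "unclear"},
}

def validate(records):
    """Return (comment_id, dimension, bad_value) tuples for out-of-schema codes."""
    problems = []
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim, rec.get(dim)))
    return problems

records = json.loads(raw)
print(validate(records))  # an empty list means every value matched the schema
```

Running this over the full ten-record response would flag any code the model invented outside the expected label set, which is a common failure mode when prompting LLMs for structured classification.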