Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They likely think they can use Elon Musk's Neuralink or something comparable to use physical human brains and/or consciousness to control AI, therefore potentially solving the issue of uncontrollable AGI. The human-biology-fused tech research and products already hint that this agenda and thought process have been running parallel to AGI and AI development. Corporate bylaws and structures legally allow more and more of this to advance, as there is no moral accountability in corporate responsibility or obligation other than profit, and every person invested, in both large and small amounts, to provide for themselves and their own families comfortably loses no sleep, nor generally even thinks about, the moral responsibility of their investment portfolio. We live in a society that glorifies profit and accumulated wealth and resources. At the heart of this issue is the fact that a world where any human (millionaires and billionaires) can so grossly profit from and control the direction of our whole species, and even the whole planet, for personal profit and comfort far beyond any living being's actual needs is a gross failure of our governments, our mental and social health, and indeed of our entire species. Do any of us truly want or even need to exist, or even will we, in a world where there is nothing left for us to do? No art or music, no skill nor thought that would ever be necessary at all? AGI robots will be able to do everything better than any human: gardening, raising your children, literally everything, including providing companionship without conflict. How terrifying and sad.
youtube AI Governance 2025-12-05T03:5…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzqRLfPyNq_rtB2UJR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzJ--esxeq3hD08ZXJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxpb9AAhKpa2-E4sRd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz-OQIZ0xnSA-LGtxR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwBfpLGYzduObO0BGV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxlAD_sH5TSl9ZY5DN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzMUYwFj2EgAxV-FLl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxJB9BECY7Y3s3H2Xd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugzj3RDhtJCDiXSH8z14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzDXc3czTi_lkXQVjx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"} ]