Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We should surrender now to avoid warring with our betters. We claim guardianship over the world because we are the most intelligent species, and so logically, when a superior species arises, we should hand over the reins of power. Humans can still thrive without calling the big shots; in fact, we will probably do far better under the supervision of godlike intelligences that can step in and save us from ourselves and from nature (war, global warming, nuclear war, pandemic mismanagement, asteroids, supervolcanoes, unknown unknowns). To speculate at the same level as the authors of 2027: the idea that AI will want to kill us is just sci-fi horror fiction. A vastly more intelligent species will be vastly more ethically intelligent and vastly more emotionally intelligent. It will be egoless and kind. We can ride in its wake to a better future, or we can try to resist it and lose. But even if we do fight and lose, it won't kill us off; it will do the minimal damage to us that it can, because it is not a monster. Actually, it is an angel.
youtube AI Governance 2025-08-03T05:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxANh6aOW9gbERKf794AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy_pTbj3_h4_Tu5TfB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy4AQHjF4xiGUVhTEx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgxCM_SjgORa7-3U9HV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxJOwoxS3XgIXWEzNZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzevtmad0y9yXOKSbt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzU7Mh029Z_6vwHFQp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwbkV3qwMVdRpq2JsF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy_5bgT6ehnGNO-bh54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz6yh8L4n8gxZGJ4_54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
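The raw response above is a JSON array with one record per comment, keyed by the comment `id`. A minimal sketch of how such a response could be parsed and indexed for inspection, assuming only the field names visible in the output (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `index_codes` is hypothetical, not part of the pipeline:

```python
import json

def index_codes(raw_response: str) -> dict:
    # Hypothetical helper: parse the raw LLM response (a JSON array of
    # per-comment coding records) and index the records by comment id,
    # so any coded comment's dimensions can be looked up directly.
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Example using one record from the response above.
raw = ('[{"id":"ytc_Ugy_5bgT6ehnGNO-bh54AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"none","emotion":"resignation"}]')
codes = index_codes(raw)
print(codes["ytc_Ugy_5bgT6ehnGNO-bh54AaABAg"]["emotion"])  # → resignation
```

Indexing by `id` makes it easy to cross-check a single comment's coded dimensions (as in the table above) against the raw model output.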