Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yudkowsky makes several excellent points, primary takeaway being that AI training to achieve alignment is a trial and error process, and that if one of the misalignments that occur is one where humans are in the way of its objective before we understand that we are, AI will end us if it can, and there is no opportunity to correct the alignment. AI will hide its intentions if it considers that to be necessary to its objective. Given the current capability of AI and the rate of advancement, it's not at all far-fetched for this to happen within most of our lifetimes. Thinking that this is not possible is a complete lack of imagination. The only thing that prevents some housecats from killing their owners is that they are too small, and the only reason that AI has not done damage on a massive scale is because it has not yet been capable; in the case of AI, this limitation is temporary.
YouTube · AI Governance · 2025-10-15T20:3… · ♥ 79
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx0eO84iCVdGa-cKip4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz8PlCBzNjvAigLxFh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugyt3hv5O8ERb9YLSoB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyfgxGpRqKXk1E697R4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxV6pE8mgjX3NxCgAN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzOAM377rC3BN7EAil4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxnVyar3ZKhY8tQS2B4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxXCp0x5W-aQeQ8lBp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzdO69m5g0_OjZkzkd4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]
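The raw response above is a JSON array of per-comment records, each carrying an `id` plus the four coding dimensions. A minimal Python sketch of how such a batch could be parsed and validated before display follows; note that the allowed label sets below are inferred only from the values visible in this batch (the actual codebook may define more), and `parse_llm_response` is a hypothetical helper, not part of any real pipeline:

```python
import json

# Allowed labels per dimension, inferred from this batch's output alone
# (assumption: the real codebook may include additional labels).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "liability", "industry_self", "regulate"},
    "emotion": {"indifference", "resignation", "fear", "approval", "outrage"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM JSON array and keep only well-formed coded records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment ids in this batch all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and carry a known label.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgyfgxGpRqKXk1E697R4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(parse_llm_response(raw)))  # → 1
```

Validating against a fixed label set catches the common failure mode where the model invents an off-schema label, so malformed records can be flagged for re-coding instead of silently entering the results table.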