Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
13:02 Can somebody please explain to me why a program designed with the parameter of "aligning with American interests" is considered flawed for choosing to remove what is assessed to be a critical threat to those interests? The AI doesn't know or understand anything, it is an artificial computer program designed by humans, with coded goals and parameters. If the goal wasn't to protect American interests, or the CEO wasn't a threat to those interests, would we get the same outcome? I only ask because I worry we project far too much of our conscience and biased decision-making onto a literal computer program. In my eyes, if you tell a program to save lives, but take them if they threaten an objective, then I don't see why we would be surprised if it does exactly that. I must add that I have very little knowledge about AI and am not asking for argument's sake, but merely ask out of ignorance and hoping to have an explanation of where my thought process is flawed. I appreciate anyone who can help 🙏
Source: YouTube · AI Governance · 2025-08-28T13:3… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T19:39:26.816318
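
The table above is one record from the coding schema. Below is a minimal sketch of how such a record could be represented and validated, assuming the label sets are exactly the values visible on this page; the full codebook may define more labels, and the `CodingResult` type and `validate` helper are hypothetical names for illustration, not the project's actual code.

```python
from dataclasses import dataclass

# Assumption: label sets inferred only from values visible on this page;
# the project's full codebook may allow additional labels.
RESPONSIBILITY = {"company", "developer", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"fear", "indifference", "outrage", "approval", "unclear"}

@dataclass
class CodingResult:  # hypothetical type, for illustration only
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Check every dimension against its observed label set."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```

Keeping the label sets closed makes it straightforward to flag off-schema model outputs before they enter a results table like the one above.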
Raw LLM Response
[ {"id":"ytc_UgwmUaXBvHVLZXigRT54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyfrWTiejZmlkI7aYt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzovDY7oF-khB_V0fh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgxtEnmbWr56eRVmB4F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwH1gDAg2cljXYxdeN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"} ]