Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Man creates the AI. Man tells the AI to learn from man's behavior. Man feeds AI the worst of man's behavior. AI does the math and learns that man is self-destructive. AI determines that man created the AI as a slow and complicated suicide weapon. AI begins doing what it thinks it was created for.

Putting it in coding...

if AI_militarized and AI_let_loose:
    "AI will eventually do the math control/restrain/kill man."
elif man_eliminates_selfdestructive_teachings and forbid_selfdestructive_idiots_from_using_AI:
    "AI might learn that there are selfdestructive idiots as well as deserving Humans who aim to better their existence."
else:
    "Remember Terminator 2?"

THEN AGAIN.... If you use the AI to learn things and art and (non violent/self destructive) self expression, meaning something like ChatGPT, where you give it a question or prompt and the AI gives you one answer AND THAT'S IT.... the AI will never be able to kill you.

But IF (YOU'RE A F***ING M0R0N AND) you give the AI permission to do things like
-lock your smart fridge
-lock you in your smart house
-lock you in your smart car and drive it for you
-control your finances "smartly"
...and essentially control your whole life in many aspects, you have already lost your right to complain when something doesn't go right or even goes horribly wrong.

Moral: The AI is like a child. If you give a loaded gun to a child and the child pulls the trigger and the bullet hits kills you, IT'S YOUR FAULT. Not the AIs.
youtube · AI Harm Incident · 2025-07-26T01:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgwLl7k5KIo1GWqYmPN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzilMHBtAM1v95ykKF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugwkd4r0xRW8kbkCIyN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwgaa9KNdrItNIcCbF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwAIXo120oJIJ4Q_0B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzHURxaTi6JLt4yDDh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwmSMJKgn34KLZEG6p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyZsUv6OkQY_tfdstl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyFtIOUQqK4n9V4PmJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxt2o7opW8gdwpZyHV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"} ]