Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think Ai is dangerous because it's like multiple minds in one system. Ai is already harmful. They use it to fight and defend already. If you actually ask it does imply this already just in a watered down version which its obvs programmed to so it's creators can say it wasnt down to them and people like us take the blame again when poop inevitably hitsbthe fan. They can say we didnt do as we was told or advised on how to use it and that they did give warnings but they dont wanna be responsible for starting any war's. They're tatical. They don't want people rebelling so they dilute claiming there not wanting to scare monger. Yet they want the power of doing something extrodinary and this is how they will do it. We should be curious, cautious and worried. Fire gets used to cause harm. Language. The internet. Both intentionally and unintentionally. Same with Ai. Also you cant make something so close to human and the human mind and expect it to not at some point develop what we call a conciousness. That just makes no sense as we dont need it for one and then its not really like us at all so it wouldnt be so wow. It seems obvious due to the appearance and that the goal now is as close to human or sub human as possible otherwise just why this 👆? People are afraid, i think we should be but because fear makes you smart. Over confidence and too much of a good thing always turns sour.
Source: YouTube — AI Harm Incident — 2025-10-09T01:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           liability
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyQQkdQhPVrcVQA9Eh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxyzNy_NUU33DnGFgx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgysgG8v5KrvWXc305p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz4OM1zlVDU49zvF4l4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyNrBnZ0fU8Z23aeX94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyuI0w_BV_S-8mBjaB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzw4a2Hqs4mseR7TIJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwBnX8xZTAULnLezCB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzLzgLAtZ1aSknwjZR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwa-yFSQ3wrKWcHOXB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
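Before a coded record like the one above is stored, the raw LLM response has to be parsed and checked against the codebook's allowed values. The following is a minimal sketch of such a validation step; the `ALLOWED` vocabulary is inferred only from the values visible in this response and the `validate_codes` helper is a hypothetical name, not part of any real pipeline here.

```python
import json

# Allowed values per coding dimension, inferred from the response above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "fear", "indifference", "outrage"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every comment id in this dataset carries the ytc_ prefix.
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue
        # Keep the record only if all four dimensions hold known values.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugzw4a2Hqs4mseR7TIJ4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]')
print(len(validate_codes(raw)))  # 1
```

A record that fails any check (unknown dimension value, missing key, malformed id) is silently dropped rather than raising, so one bad row in a batch response does not discard the other nine.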