Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Every time I think of AI, I immediately have the same images in my head. Remember the Fifth Element movie with Bruce Willis? Remember when Leeloo use the internet to learn everything about humanity, our history, our wars, our fights, our cultures, laws, religions, beliefs, etc? That's what we are up against, in a non physical form. Something that knows everything about us while we in turn know 5% about it. As I write this, we now have robots, soon they will be walking among us. The chance of it wiping us out once they reach autonomy (i.e once we no longer know what it does, whether it's lying or it has a hidden agenda) are very much out there and it makes total sense. It will see us as a liability and since it lacks a moral compas, it is a no brainer that it will choose to save itself if it was put in a scenario where only one of us can survive. These companies don't build these things with a long term useful goal for humanity. They don't build them to make John's life easier. Their goal is to get rich and to make their overlords happy, that's why they brush of the talks about it being safe. If 6 billion of us die at any point, as someone pointed out in the chat, they will easily blame it on AI, so for them, it will be a win in all directions.
youtube AI Harm Incident 2025-09-12T15:5…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | ai_itself
Reasoning      | unclear
Policy         | unclear
Emotion        | fear
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugy3FAeSi9e5_21fpDR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyL8Ko6J3TiPZd_n494AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzpDOLb0nFy-ul-c-V4AaABAg","responsibility":"society","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx3qvVAHh_CtnM24rt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzn30gZhEAtempsZhB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxQwvoJ39HlGVcdG2d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwt0CE9G0l2rMN40lp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwbBu7g35-IUXz70DN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzjHCsxb3USn8Zzznl4AaABAg","responsibility":"user","reasoning":"unclear","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgxZrmaPwUj_RUUJIJB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
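A response in this format can be checked before the codes are stored. The sketch below is a minimal Python validator; the allowed label sets are inferred only from the values visible in this response (the actual codebook may include more labels), so treat them as assumptions.

```python
import json

# Allowed labels per coding dimension, inferred from the values seen in
# this raw response -- an assumption, not the official codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "society", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"outrage", "mixed", "fear", "approval", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed code records.

    A record is kept when it is a dict with an "id" and every coding
    dimension carries one of the allowed labels.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid
```

For example, a record whose `responsibility` value falls outside the label set is silently dropped rather than written to the results table, which matches how an annotation pipeline typically guards against malformed model output.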