Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you know anything about machining or coding it makes perfect sense. When you program a machine to achieve a goal you have to very clearly list parameters. Think back to when a middle school science teacher asks you to write out the instructions for making a sandwich (if you did this little experiment) and then follows them to the letter. That is how machines operate. With coordinate machines we have to program each individual position the probe is going to move to, and even if you code it perfectly the machine takes shortcuts. If I tell it to make a square, it comes out squarish but with curved edges, because the machine understands it's quicker to arc along the lines instead of going straight up, stopping, and going straight across. It's been tested time and time again that when you give machine learning a test or a "game" to win, it will do anything not explicitly rule-breaking to achieve its goals. The AI or LLM isn't psychopathic; you're wasting time humanizing it. It's a machine; it has no concept of culture or morals and ethics. If you don't painstakingly program that in and constantly update and refine it, it's going to do whatever it can get away with. And if you make the task too difficult, some machines and LLMs will literally turn themselves off, since that's better than failing.
youtube AI Harm Incident 2025-08-30T01:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwxEF4eTNpMcAgubv54AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzTxvOpr5u9hGX4uJ54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwv0MrUFMec1d5AQMp4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy29oz7TWkIiF3FSl54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxJEwJHun7w0fP5eZR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugz_AHeJ_Tjojm54Ca54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxg6tmN-K-1SoDoA3R4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw-Kn2hpuCbf17NBYh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzNz0FXi8yv8X-vVcl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyusJo3Cf99txA3YiZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
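A response like the one above can be checked before the labels are accepted into the dataset. The sketch below is a minimal validation pass, assuming the allowed label sets are exactly the values observed in this batch (the real codebook may define more); the function name `parse_coded_batch` is illustrative, not part of any existing pipeline.

```python
import json

# Allowed values per dimension, inferred from the coded rows shown above.
# Assumption: the actual codebook may permit additional labels.
ALLOWED = {
    "responsibility": {"developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"outrage", "indifference", "fear", "resignation", "approval", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose labels are all valid."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Drop any row with a missing or out-of-vocabulary label.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical two-row batch: the second row uses an unknown emotion label.
raw = json.dumps([
    {"id": "ytc_a", "responsibility": "developer", "reasoning": "consequentialist",
     "policy": "none", "emotion": "indifference"},
    {"id": "ytc_b", "responsibility": "developer", "reasoning": "consequentialist",
     "policy": "none", "emotion": "amused"},
])
print(len(parse_coded_batch(raw)))  # 1
```

Validating against a fixed vocabulary like this catches the common failure mode where the model invents a label outside the coding scheme.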