Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
it depends on how foolish programmers that are developing AI are. If you set their parameters in such a way that it can start not only programming but also let it freely learn and start correcting itself and also solving problems then the future will look bright for fools, because AI will work 24/7, but as I said, it is just a bright future for a fool. AI will start developing programming languages that we will not know how to decode, so they will be writing programs we would never know what they are used for, that would be a catastrophe.

If AI is set in a real free way and start to work for its own (it is possible, they are close to get into that in some couple decades maybe) truth is it will do things that will make sense only for AI and machines and will also do it in a way that we won't see as "correct or meaningful". Thing is it would be pretty pointless to create a really conscious AI, it would have no real meaning to "live" to say it in a comprehensible way. Thing is whatever it does would be related to what it feels as its needs and would be really far from what humans perceive as needs.

Well, coming back to real life for now it would be pointless. Whoever is working on that kind of stuff should be reminded to not screw up just for the "we are able to do it" feeling and limit AI to meaningful controlled woks just for the sake of taking human labor to a healthier perspective and medical advances, maybe wheather control and its parameters. Meaningful things to the living creatures, just that, if they go all God-wanna-be mode "We created a living thing" we are screwed!.
Source: YouTube, video "AI Jobs", posted 2023-09-29T04:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz53x5tzcFpEtriuSF4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxoA7nLwSG2JW4nIaV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz_AjwQw10uo34b3F54AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwtScSKMp9iKjlmtGB4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx2Z-PZvfk2p6qqxzV4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx3G38arlZQPgs5utt4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyBth_o7Wen0Mwzydx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyQwG05W1pLMMrra4N4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxWPCMXkbugw3XiXyN4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxAm7I_ODtZJz8onDB4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
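Since the raw response is a JSON array coding several comments in one batch, mapping a comment back to its coded dimensions is a lookup by id. A minimal sketch in Python, using an excerpt of the array above (the id ytc_UgyBth_o7Wen0Mwzydx4AaABAg is the one entry whose values match the Coding Result table; which record corresponds to the displayed comment is an assumption here, not stated in the export):

```python
import json

# Excerpt of the raw LLM response above (remaining entries omitted for brevity).
raw = """[
  {"id": "ytc_Ugz53x5tzcFpEtriuSF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyBth_o7Wen0Mwzydx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]"""

records = json.loads(raw)               # parse the model output as JSON
by_id = {r["id"]: r for r in records}   # index the batch by comment id

# Pull the coded dimensions for one comment.
record = by_id["ytc_UgyBth_o7Wen0Mwzydx4AaABAg"]
print(record["responsibility"], record["emotion"])  # prints: developer mixed
```

Parsing the raw string with json.loads (rather than trusting the display) also catches the common failure mode where the model wraps the array in extra text and the response is not valid JSON.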