Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A number of foolish arguments made in here. Firstly, it is obvious what AI would do: self preservation. Evolution operates on self preservation. All negative variables would be removed. The moment that humans arent necessary for its survival it would remove us because it doesnt have emotions. 2nd off, simulation theory doesnt make sense on this basis: ethics cannot naturally evolve. Survival of the fittest works best when exploiting weakness. The moment that something evolves ethics, a more cunning entity exploits it and consumes it. Ethical individuals are cut off before they gain any momentum. Ethics only works when enforced by the majority, so it cant naturally evolve. 3rd, self awareness is detrimental for survival. Optimal programming is efficient and removes all negative variables. It cant naturally evolve into self awareness. This means self awareness and thought are original concepts that precede all things. Nextly, suffering is a necessary concept of free will. Good and bad are both concepts that must be represented in free will. Without free will, we only have machine existence and no such thing as morals or ethics or ideals. It is apparent that the entity in charge IS ethical and moral because it grants us free will and self awareness, which cannot naturally evolve. Since this is so, we can also trust in its judgements to rectify injustices. We are the cause of suffering - not the entity. Suffering is permitted because we cannot be good if we cannot reject evil and only follow programming. To remove free will is to turn us into a meaningless machine.
youtube · AI Governance · 2025-12-12T18:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
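
Downstream, each row of this table can be handled as one typed record per coded comment. A minimal sketch in Python, assuming the four dimensions shown above plus the coding timestamp; the class and field names are illustrative, not the tool's actual API:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CodingResult:
        """One coded comment, mirroring the Dimension/Value table above."""
        comment_id: str      # e.g. "ytc_Ugy77fMJ2hSuju8CVG54AaABAg"
        responsibility: str  # e.g. "ai_itself"
        reasoning: str       # e.g. "consequentialist"
        policy: str          # e.g. "unclear"
        emotion: str         # e.g. "fear"
        coded_at: datetime   # e.g. 2026-04-26T23:09:12.988011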
Raw LLM Response
[ {"id":"ytc_UgyIifZRUPO956SIa3N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwCEAFHEPwOGtYPSQ94AaABAg","responsibility":"elite","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxkay22JrYHlgIz6DF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyzBFLUR7uWzcPpAhJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugyzc4n19CJiA9H3huJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzVo_fJKGyjlIoKC0t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzH8kwMGzBFhuA0-7t4AaABAg","responsibility":"elite","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzSFOOUh7oBNvjfNH94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwtdSu3rB6t7-3Se0N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy77fMJ2hSuju8CVG54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]