Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's funny how Prof. Tegmark mixes abusive behaviour of AI agents that is explicitly promoted by Elon Musk as an expression of freedom with the development of the technology. And about the self-improvement features of current AI agents: These are self-updating mechanisms similar to update mechanisms in operating systems that are meant to eliminate security threats in the AI agent. What he seems to completely ignore is the fact that the most sophisticated AI project has been published with a warning to not use it because the developer himself stated that at the moment it is not safe to use it. And the unsafety is not because of some threat to mankind, it's because it's just been in development for a mere weekend. And thus at the moment this AI agent (that you can hand over multiple authentication credentials) is not protected against forwarding these credentials to third parties. Now imagine someone showing you his little car building project and he states: Don't ride this in the streets, its brakes aren't ready for that. And the next thing is: You jump in and ride it down the freeway. Is that the fault of the car or that person who just jumped in? And thinking about the state the US is in at the moment: To state that the AI industry has completely messed up with regulations is so awkwardly ignorant of e.g. the fracking industry. Na... I heard enough.
youtube AI Jobs 2026-02-17T17:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyaWqGsS9fJMdjLDkZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzvW_gaWKmJJfA7R-V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyAis_vAVZT5KjXBBZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzQtwdTe8j9VlLbnwh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx2tW-Zp_woI1UX2WJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwDiFc3hlY5GEKROxF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzmp6a2mrJ5RrSbi_R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzhXrwuOjLoHafLn1N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxj_n4kg2-n0AxpIa94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwIJMbu_cRp0VeEyaJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
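A raw LLM response like the one above is not guaranteed to be well-formed, so it is worth validating each coding against the dimension schema before it lands in the results table. The sketch below shows one way to do that; the allowed values are assumed from the codes that actually appear in this batch (the project's full codebook may define more categories), and the `validate_codings` helper name is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this
# batch -- an assumption, not the definitive codebook.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "government"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"fear", "outrage", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    valid = []
    for item in json.loads(raw):
        # Comment IDs in this dataset appear to use a "ytc_" prefix.
        if not str(item.get("id", "")).startswith("ytc_"):
            continue
        # Every dimension must be present and hold an allowed value.
        if all(item.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(item)
    return valid

raw = ('[{"id":"ytc_UgyAis_vAVZT5KjXBBZ4AaABAg",'
       '"responsibility":"developer","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"},'
       '{"id":"ytc_bad","responsibility":"martians",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
print(len(validate_codings(raw)))  # 1 -- the invalid coding is dropped
```

Dropping malformed entries (rather than raising) keeps one bad coding from blocking the rest of the batch; a stricter pipeline could log or re-prompt on rejected items instead.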