Raw LLM Responses

Inspect the exact model output behind the coding of any comment.

Comment
I was amazed by the knowledge of the godfather in this episode. However I was also disappointed by the way he was not always able to have a philosophical contradiction to bring on his view, which is specifically philosophically on his view that AI is already there in thinking. I know things were said like philosophy doesn't have things like the verificationist ways of thinking like cooper, cooper failed in some points, but that's not completely true right? Cooper is also a philosopher believing in such a verificationist view. The question is if AI can get rid of us by lying how do you know he knows better than a human? Humans also lie to get what he wants? How can we trust AI? Do you just need to trust AI like all these religions people who AI people even don't believe? That's why the logical point of view was guiding us, and in maths we found proofs and maths is trying to find the impossible lie so now I can't understand why we should use maths, mostly probability to accept whatever lies from AI. I don't think mathematical truth is what we find most probably but only what we explain in ways we never know so not probable and not in lies. However don't get me wrong I use AI everyday for coding, as I believe it's a really structured situation when it knows lots of things that I could not think about and it does much faster than me. I also know some questions at the end of the discussion reach my thinking so sorry for being a bit offensive earlier.
YouTube · AI Moral Status · 2026-03-14T00:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
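Each dimension takes one label from a small categorical set. The page does not list the full label sets, so the sketch below infers them from the values visible in this record and in the raw response further down; treat ALLOWED_LABELS, and the validate_coding helper, as assumptions for illustration, not the pipeline's documented schema.

# Hypothetical validator for one coding result. The allowed label sets are
# inferred from values observed on this page, not from a documented schema.
ALLOWED_LABELS = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist"},
    "policy": {"unclear", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks well-formed."""
    problems = []
    for dimension, allowed in ALLOWED_LABELS.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} label: {value!r}")
    return problems

# The coding result shown above, as a record.
result = {
    "responsibility": "none",
    "reasoning": "mixed",
    "policy": "unclear",
    "emotion": "mixed",
}
assert validate_coding(result) == []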
Raw LLM Response
[ {"id":"ytc_Ugyk1fT8I8LiJ0Zr9AB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugw6dyxk0eT0krfsMdx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugx-JIGHzJdXvUENVO94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwF4yVY6ViznLY2Uih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyURZ6p8tdpYWdKv3V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz5IOQH6U3qDh6vAg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugzv_SwEn0dZmh8LrLZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyyXB-2gp8Vk8VvWhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzLtEgZyDiOPWS7lMR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzoJtk3rrT5HMkgo-x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]