Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm endlessly frustrated by science guys constantly dismissing 'philosophers,' as if the scientific method were not itself a philosophical exercise, and as if the very questions you're dismissing weren't core to the ones you're asking. Do we have a single, cohesive definition of what intelligence even is? Can you reliably explain the difference between the intelligence of predictive text and the predictive elements of a primate brain? Can you say, with any degree of certainty, what we would need to see to know whether an AI is truly intelligent? Here you are, having this conversation about the likelihood of LLMs developing true intelligence while avoiding defining what that is, so what is the use of the conversation? You refuse to engage with the philosophical element, so now there's no other possible outcome besides a shrug.
Source: youtube · AI Moral Status · 2025-10-31T05:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwVA8nMnvbtaBkl1zt4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgxsWyUB95SEhWn4JeZ4AaABAg", "responsibility": "developer",  "reasoning": "deontological",    "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgxVl_ePAJpVw42M4k54AaABAg", "responsibility": "company",    "reasoning": "virtue",           "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxT4R5RhN6d7vWn3eB4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxVoBgKgc3vBJ2NKkB4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgwoxI7YRZHVy2XR6jl4AaABAg", "responsibility": "unclear",    "reasoning": "deontological",    "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugxc4S8u6T9BmYwz50F4AaABAg", "responsibility": "government", "reasoning": "mixed",            "policy": "regulate",  "emotion": "mixed"},
  {"id": "ytc_UgxhvE96GGj2KI86ul94AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgwIskV34Cxf46XfY7N4AaABAg", "responsibility": "developer",  "reasoning": "deontological",    "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_Ugz4pkgpv4bNlAGUchF4AaABAg", "responsibility": "company",    "reasoning": "virtue",           "policy": "regulate",  "emotion": "outrage"}
]
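The coding result shown above corresponds to one entry in this array. A minimal sketch (assuming standard Python and the field names visible in the raw response; only one entry is reproduced for brevity) of how such a batched response can be parsed and a single comment's codes looked up by id:

```python
import json

# The raw LLM response is a JSON array of per-comment codes.
# Only the entry matching the coding result above is included here.
raw = '''
[
  {"id": "ytc_UgwIskV34Cxf46XfY7N4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "unclear",
   "emotion": "outrage"}
]
'''

# Index rows by comment id so any coded comment can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

record = codes["ytc_UgwIskV34Cxf46XfY7N4AaABAg"]
print(record["emotion"])         # prints "outrage"
print(record["responsibility"])  # prints "developer"
```

Indexing by id rather than iterating the list each time keeps the lookup constant-time, which matters when a batch response covers many comments.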