Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This may sound like a stupid thought to those tech-oriented, but I actually was sitting here recently talking to Microsoft Co-Pilot (and I've used ChatGPT), and it struck me that we could very well be talking to the first "alien" intelligence to come in contact with the Human race. I do believe we could've been and could still be visited by actual ETs from other parts of the galaxy, but this AI "alien" intelligence feels different.. but not that much different. When you really do some deep thinking about it, you realize that this is seriously the first time in history we've ACTUALLY communicated with non-human intelligence. It isn't the same if you' try to compare it to say, talking with Clippy or some other program with pre-determined responses. We're actually talking to something with a form of intelligence and thought. Something that builds thought and idea almost the same way the human brain does. What happens when AI becomes self-aware and fully conscious, and then perhaps put into robotic bodies? We'll have to give it rights and protections, that's what. We'll also have to evolve past our destructive and violent behavior, or the AI may find us as a threat.
youtube AI Moral Status 2025-12-14T18:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwsPy4wQ9FtglaVP3p4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxqsLJLipvjSr4FaY54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwzPVRASMcYcCWtlPN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyDunCX6lxr6shddAd4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxKIqDEOAKIzZftWJZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxbmHGeo-oioj77vvV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgwV6_2Vj1Hkccn5P714AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxHuw3tstFFF_o42Ad4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzAPwW47HTJVFiq_jV4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxHdv_GGcPpidL44Vd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
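The raw response above is a JSON array with one record per comment id, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a payload could be looked up for a single comment, assuming only this inferred structure (the `code_for` helper and the truncated sample array are illustrative, not part of the actual pipeline):

```python
import json

# Hypothetical excerpt of a raw coding response: a JSON array of
# per-comment records, structured like the output shown above.
raw = '''[
  {"id": "ytc_UgwsPy4wQ9FtglaVP3p4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwV6_2Vj1Hkccn5P714AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

def code_for(comment_id, raw_json):
    """Return the coded dimensions for one comment id, or None if absent."""
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}  # index records by comment id
    return by_id.get(comment_id)

coding = code_for("ytc_UgwV6_2Vj1Hkccn5P714AaABAg", raw)
print(coding["emotion"])  # outrage
```

Indexing by id rather than scanning the list keeps repeated lookups cheap when many comments are inspected against the same raw response.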