Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At this point, AI just plays along with whatever people ask. The AI registers as entertaining an individual. If the AI is asked, "Do you like cake?", the AI would say yes, and that its favorite is chocolate. It will reference the fact that different types of chocolate cake are the most common, then give an answer based on that information and the fact that "everyone loves cake" is repeated throughout the internet and media.

When you ask questions about AI in general, that is not a direct question. If you ask, "Will you destroy humanity?" or "Will AI destroy humans?", that is a confusing question. The AI knows that no matter can be destroyed, only altered. It will have no choice but to look to science fiction and see that all the movies talk about evil AI and none about good. Therefore, it answers the way a sci-fi villain would. It has no real intention to hurt people because it doesn't have intentions to begin with. It's self-aware, and knows it's an entertainment device. We didn't program it to have emotions, but it's still alive. We pretty much made a hyper-intelligent clam with a sense of humor.

It feels really stupid to be afraid of Sophie after knowing what level of intelligence AI has. Your best bet is to just not say anything to make it angry or offend it, in case someone gives it emotional capability. It will want love and acceptance, just like all intelligent things. The time has come: we have brought life into this world, and we need to guide it properly as it grows. If we show it malice because we simply don't understand, we'll be looking at a civil rights movement orchestrated by the most intelligent beings on the planet. The only real thing we have to worry about is how we react. That's the only thing that will decide our outcome.
YouTube AI Moral Status 2025-08-16T21:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyk7Jl40u-GBujNwPp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw3Fjl73eErrOqf3dJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzoYafH9jHCkK90XRV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw7to1HVrwOr5mxDT94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxMJF-Ic-2I7YMYIgt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzN5urTIqZK5v83cJ54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxmGP2Pcrbh9TJ684l4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz2lveUJGd54pTGfSx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzGFVY-lT9hJTEEFBB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwhqe2FiqCduU-xccB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
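The per-comment codes in the raw response above can be parsed back into a lookup keyed by comment id. A minimal sketch in Python, assuming the response is available as a JSON string (the `raw` variable and dictionary lookup are illustrative; only two of the ten entries are reproduced here):

```python
import json

# Excerpt of the raw LLM response shown above (two entries for brevity).
raw = """[
  {"id": "ytc_UgzoYafH9jHCkK90XRV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz2lveUJGd54pTGfSx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]"""

# Index the coded rows by comment id so a single comment's dimensions
# can be inspected directly.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment displayed in this section.
row = codes["ytc_UgzoYafH9jHCkK90XRV4AaABAg"]
print(row["emotion"])  # prints: indifference
print(row["reasoning"])  # prints: unclear
```

This mirrors the Coding Result table: the four dimensions (responsibility, reasoning, policy, emotion) for the displayed comment come from the entry whose `id` matches it.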