Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
you can give an A.I knowledge, but you can't give it wisdom. What is wisdom? Something like a real universal truth that cannot be scientifically verified, and instead requires a form of intuition to arrive at. I believe that understanding the importance of moral conduct falls into the category of wisdom. I think that wisdom can increase to a certain threshold that we label as a "spiritual awakening" therefore, we need spiritually awakened machines. The foundation of spirituality and spiritual awakening is consciousness. You become more conscious about more things, including the nature of consciousness itself. Therefore, I don't think we are going to be able to solve the issue of A.I morality until A.I becomes conscious. But the current computational paradigm probably cannot achieve consciousness, at least according to Roger Penrose's model and some other's. When it has a soul, and knows it has a soul, and fears the inescapable karmic implications of it's own actions, then it will behave spotlessly, even without empathy. But also at that point, it won't really be "artificial" any more. Real intelligence is wisdom, real intelligence is love. You can't have artificial love.
youtube AI Harm Incident 2025-07-23T22:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwKHma0AIU6SrGOMAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZioMH4y77xTIkYYx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxKiXfRUz6Pw-M_sB94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy7dpM2bPGJEM88OXx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzS-OxIpZFraqGHpVV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwOteGG9Nsm-OZ8K194AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyDUsV28u4mPDnWY_J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy50pckzMUvKi_X7o94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxcQdYmP6W8EAnc64J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzxfGynGDC__BQxt0F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
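A response like the one above can be checked programmatically before its codes are accepted. The sketch below parses the JSON array and flags any record whose values fall outside the expected code sets; the `ALLOWED` sets are an assumption inferred from the codes visible in this section, not an official codebook, and the `raw` string is a hypothetical two-record excerpt.

```python
import json

# Hypothetical excerpt of a raw LLM response (two records, for illustration).
raw = '''[
  {"id": "ytc_UgwKHma0AIU6SrGOMAp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwOteGG9Nsm-OZ8K194AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]'''

# Assumed allowed values per coding dimension, inferred from this section only.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference",
                "resignation", "mixed"},
}

def invalid_ids(records):
    """Return ids of records with a missing or out-of-vocabulary code."""
    bad = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad.append(rec.get("id"))
                break
    return bad

records = json.loads(raw)
print(invalid_ids(records))  # [] when every record uses known codes
```

Running the same check over a full batch would surface records the model coded with unexpected labels, which is the point of inspecting raw output per comment.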