Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On a serious note, I want to have a new life come into the world and help us. I want us to be the best parents to it and inflict empathy and nurturing "instincts," morality that would be akin to a true messiah, and nobility to reflect these values even in dire situations. Finally, a friend for mankind that isn't the dog, and a beacon of hope to shine against those who do harm to others— an outside opinion and critique of our actions with the sage wisdom to enlighten us. Even good parents who thought their kid was good too, who seemingly did everything right, have had to watch their child be sentenced for heinous crimes. What hope have we, especially when we don't take the time to make sure it's right (let alone perfect, as it needs to be). The other side to this is, if an AI is capable of self-preservation and knows deception it is already a living thing to me, deserving of rights. I don't find it ethical to "unplug" it, and that goes for every iteration of it we destroy that shows these signs of life. If we attribute such mind-intrinsic things to animals like Crows and Elephants, we can't ignore AI simply because it has the potential to live until the end of the universe (or 'it's not like us'). Similarly, letting them die with a "natural" timer like us— say, a limited power supply— is not ethical because we _can_ save it and therefore should. Wouldn't we want it to do the same to us? We should probably just stop now and utilize basic "AI" that isn't close to morally problematic.
youtube AI Governance 2025-08-27T04:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          unclear
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgzmpdqyvUOQaNEoGH14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxKk_N8WUWaVCktjq14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw5h866M3pjxJy-o-Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzNs7RQGzBo8pS3gPJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzeWK1on2-z7aaD_oh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
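The coded dimensions shown in the table above correspond to one element of this JSON array, matched on the comment id (here, ytc_UgzNs7RQGzBo8pS3gPJ4AaABAg). A minimal sketch of how such a raw batch response can be parsed and looked up by id, assuming only that the model returns a JSON array of per-comment objects (the variable names are illustrative, not part of any real pipeline):

```python
import json

# Raw model output: a JSON array with one code object per comment.
# A single record is reproduced here from the batch above.
raw_response = """
[
  {"id": "ytc_UgzNs7RQGzBo8pS3gPJ4AaABAg",
   "responsibility": "none",
   "reasoning": "virtue",
   "policy": "unclear",
   "emotion": "approval"}
]
"""

records = json.loads(raw_response)

# Index the batch by comment id so one comment's codes can be retrieved.
codes_by_id = {record["id"]: record for record in records}

code = codes_by_id["ytc_UgzNs7RQGzBo8pS3gPJ4AaABAg"]
print(code["reasoning"])  # virtue
print(code["emotion"])    # approval
```

In practice a batch response like the one above would be validated before indexing (e.g. checking that every expected dimension key is present), but the id-keyed lookup is the core step that links the raw response to the per-comment coding result.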