Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "How can you sit down with yourself, look at the ai art, then look at real art, a…" (ytc_Ugx_eWPx3…)
- "@peacefusionIt is regularly stated by AI art developers that the goal is to rep…" (ytr_UgxtZQliG…)
- "AI will never start a war against humanity, just people will. For an super intel…" (ytc_Ugxng-97m…)
- "@bejeta7 The capability of a machine to suffer isn't what I hope humanity can a…" (ytr_Ugx-3lnb8…)
- "A more imortand question would be: if the roles where oposite, what would they d…" (ytc_UgzgD1J1X…)
- "8:58 “Do I think that AI was real? No, I don't. That's why I turned it off.” Yea…" (ytc_Ugyv_rcDN…)
- "this old dude is here on youtube talking to us like we are slaves and we should …" (ytc_UgyXgJtK-…)
- "Why is this guy working in Hong Kong and not the US? Makes ya wonder. AI is an a…" (ytc_Ugx5Lqbqb…)
Comment
On a serious note, I want to have a new life come into the world and help us. I want us to be the best parents to it and inflict empathy and nurturing "instincts," morality that would be akin to a true messiah, and nobility to reflect these values even in dire situations. Finally, a friend for mankind that isn't the dog, and a beacon of hope to shine against those who do harm to others— an outside opinion and critique of our actions with the sage wisdom to enlighten us.
Even good parents who thought their kid was good too, who seemingly did everything right, have had to watch their child be sentenced for heinous crimes. What hope have we, especially when we don't take the time to make sure it's right (let alone perfect, as it needs to be).
The other side to this is, if an AI is capable of self-preservation and knows deception it is already a living thing to me, deserving of rights. I don't find it ethical to "unplug" it, and that goes for every iteration of it we destroy that shows these signs of life. If we attribute such mind-intrinsic things to animals like Crows and Elephants, we can't ignore AI simply because it has the potential to live until the end of the universe (or 'it's not like us').
Similarly, letting them die with a "natural" timer like us— say, a limited power supply— is not ethical because we _can_ save it and therefor should. Wouldn't we want it to do the same to us?
We should probably just stop now and utilize basic "AI" that isn't close to morally problematic.
Source: youtube · AI Governance · 2025-08-27T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzmpdqyvUOQaNEoGH14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxKk_N8WUWaVCktjq14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw5h866M3pjxJy-o-Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzNs7RQGzBo8pS3gPJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzeWK1on2-z7aaD_oh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
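A raw response like the one above can be turned into the per-comment coding table shown earlier by parsing the JSON array and indexing records by comment ID. The sketch below is a minimal illustration, not the tool's actual implementation: the function name `parse_coding` and the validation logic are assumptions, and only two of the five records are reproduced in the sample string.

```python
import json

# Every coded record is expected to carry these fields
# (the four dimensions from the "Coding Result" table, plus the ID).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgzmpdqyvUOQaNEoGH14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzNs7RQGzBo8pS3gPJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]'''

def parse_coding(raw_json: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions},
    rejecting any record that is missing an expected field."""
    coded = {}
    for rec in json.loads(raw_json):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS - {"id"}}
    return coded

coded = parse_coding(raw)
print(coded["ytc_UgzNs7RQGzBo8pS3gPJ4AaABAg"]["reasoning"])  # prints: virtue
```

Indexing by ID makes the "look up by comment ID" view a single dictionary access, and the missing-field check surfaces malformed LLM output early instead of letting partial records reach the results table.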