Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- We should stop calling AI "Artists" "Artists" and start calling them AI Delegato… (ytc_UgzVW6GAC…)
- But human touch and human connection already are and have always been rare. Not … (ytr_Ugw0fYDjh…)
- "I don't know anyone who went from I worry about AI safety to like there is noth… (ytc_Ugwywr0sj…)
- and even more so, youre an artist! do you not have the talent to keep up, and us… (ytr_UgxlSx8Oh…)
- If the adversary nations have such gullible.people, why use military when you ca… (ytc_UgxXOOelU…)
- why would ai exterminate us!? they might kill a few but y all. i think the same … (ytc_UgwtE8v0O…)
- you lier, i cannot say anything else. Don't look for someone like bernie who jus… (ytc_Ugx3pOGMJ…)
- Gig economy, fewer births, and AI job reduction could severely affect Social Sec… (ytc_UgxUSRXF3…)
Comment

> AI is a magic show with no experience.
> I have always believed it would be a helpful tool (at east that is what i have built for the past 40 years that people would call AI).
> It does not replace it collaborates.
> But, after seeing what the smart phone and social media have done to society (or what society allowed them to do) I no longer believe AI will be the great collaborator but instead the quicker path to "idiocracy."
> But every time companies create products where addiction and profit are the main focus and no one is saying "how will this effect society" we are going down the wrong road.
> It is still possible to make LLMs stop lying and teach it morals but that makes them less addictive.
> What would be nice is if AI would instead of showing reasoning it would return a confidence level on what it is telling us.
> LLMs can be shown that saying "i do not know" is better than making stuff up and we have protocols that do that today.

youtube · AI Moral Status · 2025-11-08T17:3… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxDmo18c2vvdm1yQ7h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"respect"},
  {"id":"ytc_UgwGAlQGZLoSE-kNHEN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx0pkUTj6ztRmqe7uZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzwMdCLcVnMTJGqkut4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwKSjjPDLtSP49LfhR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz5TSj3WYtiZAakzZp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-pyFjAE_0WygVeJx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxMQUrhvcX5Pv4ODC14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgwDtF7IlUnNsyMGMSJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwLyuIC0e67JM9LqrJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
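The raw response is a plain JSON array with one object per comment ID, so matching a coded comment back to its codes is a simple dictionary lookup. A minimal sketch in Python (the two rows are copied from the response above; the dict-building step is an assumption about how one might index the results, not the tool's actual code):

```python
import json

# Two rows copied verbatim from the raw LLM response above (assumed valid JSON).
raw_response = """
[
  {"id":"ytc_UgxMQUrhvcX5Pv4ODC14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgwLyuIC0e67JM9LqrJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
"""

# Index each coded comment by its ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

row = codes["ytc_UgxMQUrhvcX5Pv4ODC14AaABAg"]
print(row["emotion"])  # resignation
```

The same indexing pattern scales to the full export: load every batch response, merge the dicts, and any comment's coding is retrievable by its `ytc_`/`ytr_` ID.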