Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgypFsKh1… — "I really hope some company will soon build a fully functioning human like compan…"
- ytc_UgyF6TBaJ… — "TOTALLY DESTROYS EVERYTHING EVEN ITS SELF AND NO WAY TOP STOP IT THEN THE PLANET…"
- ytc_UgxbAgo0z… — "e few millions of humans will survive and they will have robot slaves at their s…"
- ytc_Ugx-G_F3s… — "AI art will come and go, people who like it will use it, people who don't wont. …"
- ytr_UgyQqOwKH… — "If you have to wait till spyxfamily fanart to know then you are a year late. He …"
- ytc_Ugx0pzShs… — "When I hear someone say we shouldn’t mess with AI, I know they’re an ignorant fo…"
- ytc_UgyTbqKsB… — "The hammer in my room: / The battle axe in my collection: / My faith in God Almigh…"
- ytc_Ugw0zOhhh… — "The AI want concent? How do he know? And why should we not just program it to no…"
Comment

> I have problem to trust humans. But at least I can sense emotions of them either they good humans or not. With AI is straight no trust whatever they saying. There’s so many movies when robots take control over humans. Looks like it was intended and already been programmed this way. Bye humans.

Source: youtube · AI Moral Status · 2023-06-03T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyOx5h_DdJD23eDLYF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyiVlg1IMz5uGXmnGt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyKDby11wkl6mZFdRx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzepi22EQ6dMV8AjHR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxG4276tUcs1UjHkbV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxRydbOp-9nFgl1wpZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyasjlfkiXVcyBqHyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy19fbPLQvuNUPP1VN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgznnZlVzXJmgsEUhqB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyLcZM59KbrzGLy4wJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
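A batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not the pipeline's actual code; the allowed category sets are inferred from the values visible in the samples above, and the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample records
# above (assumption: the real codebook may include further categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    skipping records with a missing id or out-of-codebook values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            continue  # no comment ID to key the codes by
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        # Keep the record only if every dimension has a recognized value.
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
print(parse_batch(raw))
```

Validating against a fixed category set catches the most common batch-coding failure mode: the model inventing a label outside the codebook, which would otherwise silently pollute downstream counts.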