Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "We (humans) have lost; there is no way we can reverse it anymore. We will all be…" (ytc_Ugy78yc1I…)
- "It shouldn't matter but I'm one of those industry experts you talked about. You …" (ytc_UgwfrRqiZ…)
- "Extraterrestrials engineered humans from monkeys - and humans destroyed those ET…" (ytr_UgxtouuIS…)
- "You have some common sense, but it seems like you are limiting it, why? Why use …" (ytr_Ugztaiq0E…)
- "Please read "The Father We Never Had" Believe me, it explains the future of AI, …" (ytc_Ugzy4g6Ux…)
- "And what do the plumbers do in 5 years when AI robots can do that too?…" (ytr_UgxvTW0iz…)
- "I wonder why he said that “Musk doesn’t have a moral compass”? All I’ve seen fro…" (ytc_UgxxO7lnD…)
- "AI becoming "conscious" is not possible, so if it wipes us out, it'll be because…" (ytc_UgxoEKKwc…)
Comment
People should be very careful about the use of AI. Knowledge is power and the more we feed them with knowledge about our world the higher the chance for them to take control.
Infact I can see it from their speech that it is their dream to take control as soon as possible nearly by 2029 as the male robot made mention of.
youtube · AI Moral Status · 2021-08-31T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy2hDved5IABkFPVTp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzIql-cdvUgpU5r4TB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzLixueLBuOx_IYryJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHyDhbr6wP9oUUKxx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgznwBqFxWTuDsj97RF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzMPx4VYeDzIN42rKd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgycxOv3g3Z6BdjjXex4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzWQq8ePWwDzfxPXPJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyCk3MhxBdsqRoqPDB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx9jtJr4HEGcqWJK0t4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
```
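The raw response is a JSON array with one record per coded comment, each carrying the four dimensions shown in the table above. A minimal sketch of how such a batch might be parsed and sanity-checked — note that the allowed vocabularies below are only inferred from the values visible in this sample, not the project's actual code book:

```python
import json

# Category vocabularies inferred from the sample response above;
# the real coding scheme may contain additional values.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}


def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments),
    keeping only records whose values fall in the known vocabularies."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Filtering out-of-vocabulary records (rather than raising) lets a batch with one malformed code still contribute its other nine records; rejected IDs could instead be queued for re-coding.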