Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "To elaborate, I am not very good at art and cant really find the time to make a …" (ytr_Ugzw2Agx1…)
- "If Nobody earns money anymore, how people should buy products or Services? A Com…" (ytc_Ugx9mhR4n…)
- "The elites should've given him enough fund to continue developing a very smart A…" (ytc_UgwVYaJoR…)
- "Sooner or later AI will become as smarter than humans as we are smarter than dog…" (ytc_UgzvK_evl…)
- "Worry if and when ( A.I. ) ... GETS THE JOKE .... THAT IS A DESTINCTLY ... HUMA…" (ytc_UgzeDP1K1…)
- "So robots teaching our kids 😂 are you kidding me AI is a monster their creating …" (ytr_Ugxn8OlhI…)
- "5:00 I don't think telling people that AI might kill everyone is a good advert f…" (ytc_UgwsUqOk7…)
- "Interesting discussion, one thing that was not mentioned, when it comes to writt…" (ytc_UgyxVzRaf…)
Comment
Humans have all sorts of fear driven objectives and they think a machine will eventually behave like humans and will have existential fears and decide to take over the world . Yes if we deliberately program it this way, then... maybe. However our LLM s do not have self generated objectives, they lack long term memory and continuous learning they don't understand the real world and so on. So we are still far from AGI
youtube
AI Governance
2025-12-04T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzkaHi4i4WbTfYN07J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxnJy42h39uZwlo5jB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXqIvlEBfeXrykYBR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxwdT4i8HUSmgV_svd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzm0F6DA5rfGzctsDp4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxlGobmdj7hFgQcBDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzC_ctZBKXWJZIxhW54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzh8pqiexJcFvZ72FN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxXDZ0Mb03n7h9LcJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz5sO8zIEoVkp8POOx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
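The raw response above is a JSON array of per-comment codes, one object per comment ID, with one value for each of the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked is below; `parse_coding_response` is a hypothetical helper, and the allowed value sets are inferred only from the values visible in this sample, so the real codebook may contain more categories.

```python
import json

# Allowed values per dimension, inferred from this sample response only
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "company", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"indifference", "resignation", "outrage", "approval", "fear"},
}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID.

    Raises ValueError if the JSON is malformed, a dimension is missing,
    or a value falls outside the inferred vocabulary.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded


# Example: one row in the same shape as the response above.
sample = ('[{"id":"ytc_X","responsibility":"none",'
          '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = parse_coding_response(sample)
```

Validating against a fixed vocabulary like this catches the most common failure mode of LLM coders, namely inventing a plausible-sounding but out-of-codebook label, before the row reaches the database.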