Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "We get that the idea of AI and robots can be a bit unsettling! Sophia does have …" (ytr_UgwJlP7ix…)
- "I once had ChatGPT get SUPER defensive when I asked for a list of Republican pol…" (ytc_UgyX4QH3x…)
- "My question is this will AI shoot un-armed blacks in the back? Will it kneel on …" (ytc_UgyqztSC_…)
- "I can’t believe they wouldn’t test for this situation and just build in an alert…" (ytc_Ugw2PVkHW…)
- "This comment is going in his future vids im saying it now this is ai generated b…" (ytc_UgwNUa0Ky…)
- "The real question is what would’ve happened if the human never game that robot h…" (ytc_UgzHFLiar…)
- "This. Using AI generated art as inspiration or refrence for an idea you have is…" (ytr_UgwBF4Ul8…)
- "27:00 Hank keeps mentioning wanting to teach AI to want to not destroy the world…" (ytc_UgztzVvcq…)
Comment
Historically, human's future is pushed by the greed of wealthy and powerful, till something goes wrong, then it gets fixed by the wise. This has happened over and over. We are the (only) species that constantly make the world out of balance, and then we must counterbalance to amend... My question with AI is will it get out of hand to a point that even the wise cannot correct its course? I doubt it.
youtube · AI Governance · 2025-07-10T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw5EWFvhkSeSj526hB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwUqV2FBdjn3s5sAjV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyaerjl2ScACRdrcxt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyWfJmnh8a1ukRdN4J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwCi_itcHGrqSHmR9B4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx97YQBzVZrZFjymMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxM7A_NAtDUnUwUpqV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxILpeyjj0KQiLivyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzWuECOSpZl2RCJJXh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgydE9NMfW-pTwbnFW14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
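
The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch might be parsed and validated before it lands in the coding-result table (the allowed value sets below are inferred only from the samples shown on this page, not from the project's actual codebook, so treat them as an assumption):

```python
import json

# Allowed values per dimension, inferred from the samples above.
# ASSUMPTION: the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "developer", "distributed", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"approval", "fear", "mixed", "resignation",
                "outrage", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response into {comment_id: codes}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in SCHEMA}
        # Reject values outside the inferred codebook rather than
        # silently storing an unexpected label.
        for dim, value in codes.items():
            if value not in SCHEMA[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded
```

A lookup by comment ID, as in the tool above, then reduces to indexing the returned dict with the full `ytc_…`/`ytr_…` identifier.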