Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This feels like denial tbh. Comparing AI to electricity or the internet doesn't …" — ytc_UgzO2zufO…
- "Good. We can take our natural resources back and you all can have your money and…" — ytc_UgyCdVLF6…
- "The issue with China isn't a scare tactic, it is real. Imagine going to war with…" — ytc_Ugy_NQKpe…
- "Just great - fake, unhinged right wing rants. Just what we need to train our AI …" — rdc_l5br9qb
- "@Traco1957 People won't be able to get jobs to make money if A.I is doing all of …" — ytr_Ugyh9F9hK…
- "Very very scary... Humanoid robots can get out of hand at any time... And, human…" — ytc_UgyiMn2fe…
- "Here's a question though, what happens when a self-driving car gets in an accide…" — rdc_czy5d00
- "The idea of granting robot rights is completely at our hands and our choice. We …" — ytc_UgjmPVGmp…
Comment
One thing I didn't understand is why at 31:15 when he is asked to reflect, is he says AI will do good things but call centres will become efficient and he is worried what those workers will be doing. But earlier he is talking that AI will overtake humanity. So why was his reflection about such a small thing? It gives me the impression that his outlook on the grandiosity of AI is not consistent? I know its a small detail but that confused me...
youtube · AI Governance · 2025-06-17T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyBgRQls3umiTBdE4F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzKJrI_YWlwrjnSiLJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz2QRg_VAeLzCNo1yF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDRq6T_QrPQ7hq8BJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxVICtxjQ43YrpOSXt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz0N9kADycqEmpyqZV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwRJRzEc_nMu0Agf-p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzguoMuQhlQugaawYV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzMIPjjYr_QX1s-B7J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw7TKGcPxEOwDMmDx14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
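A raw response like the one above has to be parsed and validated before its rows can populate the Coding Result table. The sketch below shows one minimal way to do that in Python; the allowed value sets are inferred only from the labels visible in this sample (the real codebook may define more), and `parse_coding_response` is a hypothetical helper name, not part of any documented pipeline.

```python
import json

# Allowed labels per coding dimension, inferred from the sample response
# above. Assumption: the actual codebook may include additional labels.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "approval", "fear", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows.

    A row is kept when it carries a comment id and every coding
    dimension holds a label from SCHEMA; malformed rows are dropped
    rather than raising, so one bad row does not lose the batch.
    """
    valid = []
    for row in json.loads(raw):
        if "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Index by comment id, matching the "Look up by comment ID" view.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"mixed"}]')
by_id = {row["id"]: row for row in parse_coding_response(raw)}
print(by_id["ytc_example"]["emotion"])
```

Dropping invalid rows silently is a design choice that favors batch throughput; a stricter pipeline might instead log or re-queue rejected rows for re-coding.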