Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of these random samples:

- ytc_UgwqG7t1L…: "Frankly, I don't see any evidence here that LLM has emotions. Apparently, there …"
- ytc_UgyLpM_8v…: "Fake. In ten years or so, robots might be able to fight like this, but they'll b…"
- ytc_UgwILZRuH…: "The entire interview is very interesting and some points that make you think and…"
- ytc_Ugx1B5Fud…: "my take on this is: i disagree on the framing that "it doesn't feel emotions" be…"
- ytc_UgzJ56np1…: "I think AI has changed all jobs, not just security related ones. However, his wa…"
- ytc_UgxDBynqO…: "And for everybody who thinks their job is safe because AI can't do it...ask your…"
- ytc_UgzP_TjEy…: "Google: We're gonna force you to use AI by making it so you can't shut it off on…"
- ytc_UgzYM9jFz…: "* AI reference images are so useless! It’s theft and horrendous for so many reas…"
Comment
A well done interview on a very important topic. Now I (we) understand more about how neural networks are at the core of the contemporary state and dangers of AI. What was subtly under-covered is how (and who) is making the judgement call to strengthen a connection. The interview provided several of Geoffrey’s preferences, like narrowing the gap between the rich and the poor. He suggests regulating industry, and if I understood correctly, primarily by demanding that businesses must include “being good to society.”
Concepts are thoughtfully and thoroughly covered. Nevertheless, I’ll be looking for more coverage about how good vs. bad is to be determined. Who gets to define good, and what criteria will be used?
youtube · AI Governance · 2025-12-28T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxd15BDlmAO6UgrL7F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy3p-KPUjQci_vs0-94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwjZqSchsLXHRxLTV54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyZdmD-tIDAoWAMb754AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyJ4DtS-2fYJ0xcecR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugws2PRX31Df5aRBrUV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxxBdnd0OJXQoN1ItV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxST-pz6527WYtdDZp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy6IzpEtw-EVhDPd794AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzuXgh4c4TBcpwGi4d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
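A raw response like the one above is a JSON array of per-comment codes. The sketch below shows one way such output might be parsed and validated before the per-comment values are stored. The allowed-value sets are inferred only from the samples shown here; the real codebook may contain more categories, and the function name and example IDs are hypothetical.

```python
import json

# Allowed values per dimension, inferred from the samples above.
# ASSUMPTION: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed rows.

    A row is kept when it is a dict with an "id" field and every
    dimension holds a value from the allowed set.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip rows with no comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example: the second row uses an out-of-vocabulary value
# and is dropped, so only one row survives validation.
raw = (
    '[{"id":"ytc_x","responsibility":"company","reasoning":"mixed",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_y","responsibility":"alien"}]'
)
print(len(parse_codes(raw)))  # → 1
```

Validating against a closed vocabulary like this is what makes the "Coding Result" table above trustworthy: any row the model hallucinates outside the codebook is rejected rather than silently stored.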