Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or start from one of the random samples below (a minimal lookup sketch in Python follows the list):
- "In my opinion, the solution to this is relatively simple, we restrict what text …" (`ytc_Ugy4otCox…`)
- "Between AI and (A)n (I)ndian taking all the jobs here USA is getting fucked for …" (`ytc_Ugx5gFCqC…`)
- "Has anyone got a link to the McKinsey report Karen mentions about the water need…" (`ytc_UgyV3nb_S…`)
- "This development while You tube gives copy right strikes removing funds of socia…" (`ytc_Ugxm6dV9y…`)
- "My comment will focus on Hinton's "Final message" about joblessness being his an…" (`ytc_Ugy4v9-gO…`)
- "From what I know about child development and how the human brain works and grows…" (`ytc_UgxackYgs…`)
- "Oh sure he should have known anyway. Really old man against artificial intellige…" (`ytr_UgyUya2JB…`)
- "00:11:11 this is a valid point. The solution I / Geoffrey propose: good quality …" (`ytc_UgzzAjtTD…`)
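A minimal lookup sketch in Python, assuming the coded records live in a JSON Lines file (`coded_comments.jsonl`, one object per line with an `id` field); the path and storage format are assumptions, not the tool's actual backend:

```python
import json

def lookup_raw_response(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the stored record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Usage: inspect the exact model output for one coded comment.
record = lookup_raw_response("ytc_UgwiPOOL7RWci31byqF4AaABAg")
if record is not None:
    print(json.dumps(record, indent=2, ensure_ascii=False))
```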
Selecting a sample shows the full comment, its coded dimensions, and the raw model response for its batch.

Comment
A lot of the discussion is about short term risks: bias, harmful content, misleading information. We're missing the most important conversation that we should be having: existential risk -- what happens if Artificial General Intelligence is created, and undergoes improvement to become smarter than humans? Humans are the top species on earth because we can think and plan for the future, invent technology, etc. Tigers have sharper claws, but human expansion has made them almost go extinct. When AGI becomes smarter than humans, how do we ensure that it acts in our interests instead of pursuing some goal to the limit, like turning every atom in the universe into computer substrate? Keep in mind, you are made out of atoms. These questions form the field of AI Alignment, and these conversations need to happen more broadly, even in the political sphere.
youtube · AI Governance · 2023-05-16T22:4… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
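The four dimensions take values from small closed sets. A hedged validation sketch in Python, using only the value sets observed on this page (the real codebook may define more):

```python
# Value sets observed on this page; the full codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "government", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate", "industry_self"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} not in {sorted(allowed)}")
    return problems

# The coding result above passes against the observed value sets.
assert validate_record({
    "responsibility": "none",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "fear",
}) == []
```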
Raw LLM Response
```json
[
  {"id":"ytc_Ugz-joppXaxvrKN_qaR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzbWdh0GPN6t9DLPUt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugze3YtvOPJIRlrmqQd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugzueyln2mYHF4vgZUZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxpYIclchsBHo1Yx6V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwveCX6rpZeuV8PmNJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwQmeN99YEvjljCTjt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxEbeFcN4jvJBB5xdh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzYWVcPamogzkuLI2N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwiPOOL7RWci31byqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
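The model codes comments in batches and returns one JSON array per batch, as above. A defensive parsing sketch in Python; the error handling shown is an assumption, since the pipeline's real behavior on malformed output is not documented here:

```python
import json

def parse_batch_response(raw: str) -> dict[str, dict]:
    """Turn one raw batch response into an {id: codes} mapping.

    Drops entries with no "id" field instead of failing the whole batch.
    """
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}") from exc
    if not isinstance(entries, list):
        raise ValueError("expected a JSON array of coded comments")
    coded = {}
    for entry in entries:
        comment_id = entry.get("id")
        if comment_id:
            coded[comment_id] = {k: v for k, v in entry.items() if k != "id"}
    return coded
```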