Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I hate to say it but just subject the policy makers to their own deepfakes. See …
ytc_UgxtlATrd…
4:40 he says.."So, What does one do in such a world", Um how about NOT create AI…
ytc_Ugwcy-Zv6…
People asked the same question — “What’s the point of drawing?” — when the camer…
ytc_UgwUAgnCt…
The hardest stuff is the easiet to automate. Kind of looks like if the more proc…
ytc_UgzuFS1Uo…
Maybe if a few more people saw the movie "Colossus: The Forbin Project" (a FANTA…
ytc_UgyxHN_8T…
I has absolutely horrible at drawing last year, last year I was drawing the most…
ytc_Ugz9zAe_Q…
I like to use character ai. Now what do i do when i ofc need a persona and a cha…
ytc_UgzlQTZFk…
😂a tricky one ☝️ let’s make a good relationship hopefully we won’t be used as ba…
ytc_Ugy1cpKxO…
Comment
I'm glad you continue to discuss this important issue, but I can't help but notice both your guests and topics seem heavily slanted towards the "existential risk" end of the spectrum. It would be beneficial to hear from the other side (who I believe are a majority in the AI community) as well, who think AI is unlikely to pose such risks, but instead causes more immediate challenges like algorithmic bias, privacy concerns and ethical questions around generative AI. I'm thinking of people like Yann LeCun, Andrew Ng, Julian Togelius or Margaret Mitchell.
youtube
AI Governance
2025-11-26T21:4…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
  {"id": "ytc_UgxqmiN8JSgZOpjSgux4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx7VaJ0f4fciCd9UkV4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzSaqkb-Mn6W3USVCx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyHhp7wg1mjviXkSSd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyCHX0l-b4Np2JtlTN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw5yS81M19wpsAthpR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxXkCIQVK7u9960gPt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzrdzLzWdUu0SyAkG94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwnjB-9GKL-THzwuVx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxdANNfWCGEqyOq7kV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
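As a minimal sketch (not the tool's actual code), the raw LLM response above is a JSON array of per-comment codes, one object per comment with the four coding dimensions. It can be indexed by comment ID for lookup, which is presumably how the table view above resolves a single comment's codes:

```python
import json

# One entry taken verbatim from the raw response above; the full response is
# a JSON array of such objects.
raw = (
    '[{"id":"ytc_UgxqmiN8JSgZOpjSgux4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"}]'
)

# Build a lookup table keyed by comment ID.
codes = {item["id"]: item for item in json.loads(raw)}

# Retrieve one coding dimension for a given comment.
print(codes["ytc_UgxqmiN8JSgZOpjSgux4AaABAg"]["emotion"])  # prints "indifference"
```

The same lookup generalizes to any of the dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion).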