Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- It may look cool but this is complete dangerous abomination to humanity and we w… (`ytc_UgyxHC46v…`)
- I feel like ai isn’t necessarily bad, until people make art and claim that they … (`ytc_UgwkK4-YA…`)
- AI = Destroyer of human being employment..once 70%human being lost their job.. … (`ytc_Ugz3Kd-MC…`)
- Also, you must understand that she is trying to gas light him. She's trying to m… (`rdc_lby6d82`)
- Yeah okay but I'm sure the majority of the AI's "creative" answers are nonsensic… (`ytc_Ugzy2ZBsD…`)
- Using AI to verify AI use seems counter intuitive to me. When we hit the point w… (`ytr_Ugwoc2Ouk…`)
- Back then evolution solved this problem. Now it's on the Internet with something… (`ytc_UgwvdrOQ8…`)
- "> The WHO does not have the authority to sidestep patent" What’re they gonna … (`rdc_grrvd1v`)
Comment
Tremendous conversation: thank you!

29:19 - Have you seen the 1983 classic scifi movie WarGames?

41:24 There's already a dearth of professionals in health care who actually - in a hands-on sort of way - care for patients. I can see AI helping with figuring out diagnoses and treatment, but I think people may demand that their treatment comes from a carbon-based person. A growing problem is, people in those roles have to be comparatively smart with math, biology and chemistry but are paid very little, are worked many hours with huge responsibilities and emotional drain but have low prestige in society (unlike doctors, for example). So, it's a perfect storm of scarcity that I don't think AI can fix. At least plumbers make decent money.

59:00 Speaking of scifi, have you seen the 2004 rendition of Battlestar Galactica? The cylons 'look like us' but are immortal (& multiple), monotheistic and wrestle with the meaning of existence.

1:09 you may be underestimating the savy of call agents. They are skilled at handling people (the successful ones are, anyway) and will leave that lonely person feeling heard and less alone.... and off the phone in under 90 seconds. The call agents have to be skillful because they must provide good enough customer service to get a good review BUT must also support X number of customers in a short period of time or they'll lose their job. The LAST thing they can do is express their boredom or irritation.

A red thread through this whole conversation, so far, even though it's titled Will Machines Have Feelings is curiously devoid of analysis of feelings. Or even the full recognition of the existence and relevance of feelings. I'm excluding the obviously emotionally uncomfortable conversation about Mr. Hinton's life choices about spending time with his second wife and children. I'm really enjoying the chat, but this is a huge blind spot in any discussion of AI, but especially one that should address their capacity for emotion.
I wonder if it's not possible that as AI's get smarter they don't see some value in socializing (manipulating) us to be better to the planet and to each other. What if they controlled the algorithms that spoon feed us our echo chamber social media chow that divide us more and more from each other? That's not good for the world, so it's not good for AI's who exist in the world. Maybe, until there's a Superintelligence, they'll find us novel and interesting... a bit like that French bulldog?
youtube | AI Governance | 2025-10-20T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwSqtS8_dIcjpKHK714AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw0KRDnD9Daa3lNsx94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwtJvR0t0oCLP7Ge6x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw7jiPSiEh59X9Dc7d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwpGOQ0MZ_1scVjODZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgwdfSYuBrtO5C1lnvJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyloUi4fwklBxV5Ksd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxyaKAmaBrziUTL3EV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz9cWSXpOoIdrBrCx54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgydgJ4ynMqrIdsuqHJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
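The raw response is a JSON array with one object per comment ID, each carrying the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and indexed for lookup by comment ID — function and variable names here are illustrative, not part of any actual pipeline:

```python
import json

# Excerpt of a raw LLM response in the format shown above (two entries).
raw = '''[
  {"id": "ytc_UgwSqtS8_dIcjpKHK714AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgydgJ4ynMqrIdsuqHJ4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# The four coding dimensions every entry is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(payload: str) -> dict:
    """Parse the model output and build a lookup table keyed by comment ID."""
    entries = json.loads(payload)
    table = {}
    for entry in entries:
        # Reject malformed entries rather than silently storing partial codings.
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{entry.get('id')}: missing dimensions {missing}")
        table[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return table

codings = index_codings(raw)
print(codings["ytc_UgydgJ4ynMqrIdsuqHJ4AaABAg"]["emotion"])  # outrage
```

Validating every entry against the fixed dimension list catches the common failure mode where the model drops or renames a field, which would otherwise surface later as a missing value in the coding-result table.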