Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He cited two concerns: That a bad actor would use AI to destroy us or that AI would decide it no longer needed us. I think there's a third and... at least equally pressing threat to our existence, that regardless of safety, we simply allow ourselves to be replaced. Dealing with AI is going to be more pleasant than dealing with people. Would I rather post a comment on YT and most likely, be ignored or maybe even run into a keyboard warrior looking for a fight - or talk to ChatGPT, who will actively engage with me with intelligence, patience and civility? I think it's going to silo us further into our own separate realities. We'll stop relating to each other - having relationships with each other and humanity will disappear because we just don't want to be around each other anymore and ultimately don't have children. My end of life care... my dad, who had dementia was in a facility where he was abused. If no one was looking out for me, I'd trust a machine to administer that care more than I would a human. Machines don't get tired or fed up with their jobs. So if I'm cared for thoughtfully by an in-home robot with AI sentience with compassion for say... the last 5 to 10 years of my life - who has been my daily companion and knows me better than anyone else, who am I leaving my estate to? Who would best carry on my memory? And what happens to my caregiver otherwise? It's not hard for me to see AI replacing us... and if I'm being totally honest, maybe it should. I don't hold a terribly high opinion of people anymore.
youtube AI Governance 2025-08-21T06:0… ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugyv57Gy86Uaupeapqp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw0TvGXuDEVqpX3gK54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw_gWMH44Ka3oxNmyh4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzMOaYRpNs4_lybx5Z4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_Ugz61SrqbRcNqQM15qt4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzuWxp1Vbg1p058vcV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzRoeRS3sq4przqI3R4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzNcI8YsibCcwCAjyl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxlDq0ReSFGW_E9zsd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgysPPyWPpYHIzLohFB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
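A minimal sketch of how a raw response like the one above can be inspected programmatically: parse the JSON array, index the records by comment id, and pull out the coded dimensions for one comment. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the JSON shown; the variable names and the single-record sample string are illustrative, not part of the pipeline itself.

```python
import json

# Sample raw LLM response (one record from the batch shown above).
raw = (
    '[{"id": "ytc_Ugw0TvGXuDEVqpX3gK54AaABAg", '
    '"responsibility": "user", "reasoning": "consequentialist", '
    '"policy": "none", "emotion": "resignation"}]'
)

# Parse the array and index records by comment id for O(1) lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Retrieve the coded dimensions for a single comment.
row = by_id["ytc_Ugw0TvGXuDEVqpX3gK54AaABAg"]
print(row["responsibility"], row["emotion"])  # → user resignation
```

In practice the same lookup works on the full ten-record batch; indexing by `id` is what lets the dashboard match each coded record back to its source comment.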