# Raw LLM Responses

Inspect the exact model output for any coded comment.

## Look up by comment ID

## Random samples (click to inspect)
- ironic to me that "they" are racing with urgency to make something not even need… (ytc_UgyQNiIbg…)
- Ing, I have been using several techniques with CHATGPT resulting in answers Ai d… (ytc_Ugx2DkdYz…)
- I agree fsd has a long way to go and I agree, cameras are not enough to do the j… (ytc_UgyTuAbb0…)
- Story of human enginuity and eventually, even a CEO can be replaced with an AI. … (ytc_UgwSDA1ZF…)
- the fact I watch an AI talk to people on the internet on a daily basis is not so… (ytc_UgzfOUkUh…)
- I HATE how AI is slowly taking people’s jobs with animation but I love how every… (ytc_UgyYvZw8M…)
- This is so stupid. He is not worried for one. And there’s no such title as the c… (ytc_Ugy6vmAxz…)
- blindly trusting AI and other 'high tech' is a bad idea, could be a very bad ide… (ytc_UgyNklskE…)
## Comment

> Guest says we can't unplug AI, but it's hard to believe that our military or some malign foreign state or criminal actors couldn't destroy dozens of data centers with missiles, bombs or drones (-maybe for ransom -), effectively 'unplugging' our AI masters.

youtube · AI Governance · 2025-09-07T06:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
  {"id":"ytc_UgyJXuuz0ztFVESToHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyMkAR_6iiRde0dafB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz219IR8buATg-TOMN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxpZ-wUFlVBxH724RJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx0G-jUP5OPnctHWUp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwFDNFF3HC5xxHnw9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy6SAoCU5EnnJXPbbl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwNGGJUIb2ABPnoNSh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgznlOnMI4F6_Hb5QeR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy92OLbQTAbjypnI6p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
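As a rough illustration of how a stored raw response like the one above can be re-parsed to look up the coding for a single comment ID (a minimal sketch, not the tool's actual implementation; the `lookup_coding` helper is hypothetical, and the sample data reuses two records from the response above):

```python
import json

# A raw batch response: a JSON array with one coding object per comment.
raw_response = """
[
  {"id": "ytc_UgyJXuuz0ztFVESToHB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx0G-jUP5OPnctHWUp4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding dict for one
    comment ID, or None if that ID is not in the batch."""
    codings = json.loads(raw)
    return next((row for row in codings if row["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugx0G-jUP5OPnctHWUp4AaABAg")
print(coding["policy"])  # liability
```

Matching on the `id` field is what makes the lookup robust to the model returning records in a different order than the comments were submitted.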