Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "So if i used my self driving features and it kills someone, and i go to court, w…" (ytc_Ugzh1BTWw…)
- "We should be more worried about how governments around the world will use the po…" (ytc_Ugwt9RMst…)
- "The Center of AI Studies focused on AI safety. What was revealed, for my unders…" (ytc_UgzqGQXmt…)
- "@gyperman3751 so why should we care about you? You oviously don't care if our jo…" (ytr_Ugy2xbJi_…)
- "Robot 1: ayo where are the box WHERE THE FUK ARE THE BOX / Robot 2: oops the box fel…" (ytc_UgzE4gbTC…)
- "So, what you are saying is that the problem is not really in the AI but in peopl…" (ytr_UgxWWyGsi…)
- "i was trying to chat with alex brightman. keep in mind im only 17 :< the ai chat…" (ytc_UgyCWAHHA…)
- "Maybe Sam Altman will eventually be threatened by one of his own AGI's ....maybe…" (ytc_UgyrZd_pO…)
Comment
I've been wondering how long until you started having videos about AI assisted medical emergencies. I know you're saying it's not an AI problem, but a people problem, but like many things, it's about access and bad policy that makes AI so dangerous. AI is being marketed as an information tool but it has no internal means of judging information truth or false values anymore than a Magic 8 Ball toy can predict the future; all AI does is paste likely sounding word responses which can be copied from anywhere, including the most wacked out fringe sites. A few years ago, at least, a random Google search would at least start by prioritizing trusted sites or Wikipedia before going down into the Trepanation Enlightenment Forums or whatever.
Add in that AI is now in the search engines and being pushed at schools, it is DEFINITELY in part, an AI problem. People might be people and find ways to go way out of their way to ignore safety measures, but this is effectively like turning off all the street lights and being surprised there's an increase in night time pedestrian accidents.
youtube
AI Harm Incident
2025-11-25T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugwbmfc0z5jkkQBz3wV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwwZFyIMV3kevr_83R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzMyC4islCBGf3Z7ud4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyKE-V5wMzM1FP_AQB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy5x-W9K4DFU0tdqyp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwmP6OPShB2VSG5IJ94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx5lDl13ee2wjh2Ee94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy8D7hB7AUCYB_Cb2B4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzeCz5ujbzmBkRsSTB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzJQBPXJcIul-x9CfF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
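A raw response like the one above has to be parsed and checked before the coded values reach the results table. Below is a minimal sketch of that step. The allowed value sets are inferred only from the sample rows shown here; the actual codebook may define more categories, and the function name `parse_coding_response` is hypothetical.

```python
import json

# Dimension vocabularies inferred from the sample response above.
# Assumption: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of rows) and
    index it by comment ID, rejecting rows that are missing an ID
    or that use an out-of-vocabulary value for any dimension."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-row response, in the same shape as the output above.
raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"virtue",'
       '"policy":"none","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded["ytc_x"]["emotion"])  # approval
```

Validating against a closed vocabulary like this is what lets a single malformed or hallucinated label fail loudly instead of silently appearing in the coded dataset.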