Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
"If it's not safe, we're not going to build it" is a mindnumbingly ignorant statement. A knife can easily kill a person, therefore we are not going to build knives. Hammers, cars, piano strings, all can be used to kill people, therefore we are not going to build them. Nonsense. AI and all the other items mentioned clearly have vastly more useful applications than killing people, but they still can be and are used to kill people. And that is the problem. the 0.5% of uses that are harmful to people are harmful, and we need to accept that fact. The problem with AI is the scale of the application of that 0.5% is enormous.
Source: youtube · AI Harm Incident · 2025-07-24T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz8tos3_uAOTriLPjl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0j4sDCpyyC2ULb0p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxKuXTdplk23q0LBsF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx1y7u04sYOGxC0X3h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxmZBrTJEEGxrwRNnZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyLlk8haloC4WTi1994AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwUw-7CRVaZmP0TF614AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwz93NAJrYNr0F453F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgydIgRftN6B0npgGdp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzjg8A2szTCjfMnESl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"indifference"}
]
```
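The raw response is a JSON array with one coding record per comment, keyed by comment ID across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how a lookup-by-ID over such a response could work — the function name `index_codings` is hypothetical, and the two records are copied from the array above:

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (records copied from the example response above).
raw_response = """
[
  {"id":"ytc_Ugwz93NAJrYNr0F453F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz8tos3_uAOTriLPjl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

# The four coding dimensions every record is expected to carry.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codings(text):
    """Parse a raw LLM response and index codings by comment ID,
    skipping malformed records that lack an id or a dimension."""
    out = {}
    for rec in json.loads(text):
        if "id" in rec and DIMENSIONS <= rec.keys():
            out[rec["id"]] = {k: rec[k] for k in DIMENSIONS}
    return out

codings = index_codings(raw_response)
print(codings["ytc_Ugwz93NAJrYNr0F453F4AaABAg"]["policy"])  # industry_self
```

Skipping rather than raising on malformed records keeps one bad LLM output from discarding the whole batch; a stricter pipeline might log or re-prompt instead.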