Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- 🛡️ How Does AI Impact Human Meaning Structures? AI's dominance would force a re… (ytc_UgyRz3hfT…)
- if a human crashes a truck. people say "yea well it happens, they are human". If… (ytc_UgxmMkcus…)
- Thank you for sharing your thoughts. It's true that AI models are developed by h… (ytr_Ugz_xOIQa…)
- I can't believe I just downloaded ChatGPT, and this video pops up in my HomePage… (ytc_UgyfYmo3D…)
- ai art is dumb but why give so much attention to it with this garbage? I don't w… (ytc_UgyYcXgKp…)
- But at least with some of those monopolies we ended up with decent legacy infras… (rdc_oi0jryw)
- I saw a documentary on a system called Skynet, it will take over the world and w… (ytc_UgwU242kw…)
- I dont get it, i can make all sorts of things in microwaves, still makes me a ch… (ytc_Ugw5fmL68…)
Comment
> artificial intelligence isn't dangerous in and of itself, it might become dangerous if placed in something like a tank but inside of a handheld device the worst that can happen is that your phone stops working. People are kinda similar. Your brain can't directly hurt anything, it requires a tool to do it's tasks (your body) and using that tool it can then perform either acts likened to a saint, or take the life of another being. In the end, it is all about the way that intelligence interacts with the world and beings around it
youtube · AI Governance · 2025-06-21T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxHVyeheBhcnEMSjEx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxK8dt5g5CtG9tusMF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyi6DO9Ca6WpfJkq1x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyoVn277vDIxMMZxi54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy64kS0BCVecSoiMJN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx13Xgi1vSpytmU8BJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDi_trf9sYJX-rA914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwJnuF-SYAb0ejmCml4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxQZOtXqDMbp-czDtp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwmFOU8zkOqu3MilRR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
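The batch response above is a JSON array with one row per comment, keyed by `id`. As a minimal sketch of how a "Look up by comment ID" view could map that array back to a single comment's coding row, the snippet below parses the response and pulls out one entry; `lookup` is a hypothetical helper, and the `raw` string is abbreviated to two entries reused from the response above.

```python
import json

# Abbreviated raw model output: two entries copied from the response above.
raw = '''
[
  {"id": "ytc_Ugy64kS0BCVecSoiMJN4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwmFOU8zkOqu3MilRR4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
'''

def lookup(raw_response: str, comment_id: str):
    """Return the coding row for one comment ID, or None if it is absent."""
    rows = json.loads(raw_response)
    return next((row for row in rows if row["id"] == comment_id), None)

row = lookup(raw, "ytc_Ugy64kS0BCVecSoiMJN4AaABAg")
print(row["responsibility"], row["emotion"])  # prints: user approval
```

Returning `None` for an unknown ID (rather than raising) lets the UI distinguish "not coded yet" from a malformed response, which would instead surface as a `json.JSONDecodeError`.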