Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “I wouldn’t think humans would go out without a fight. Any of these scenarios is…” (`ytc_Ugw_nbU8p…`)
- “i dont understand dis hate like lemme use ai bro and no i dont think yall are DI…” (`ytr_Ugyw4-1XA…`)
- “By dismissing the most salient issues as “philosophers can debate” you are missi…” (`ytc_UgzYIsJl_…`)
- “All AI requires a power source so it it gets out of control can we not cut the p…” (`ytc_UgzPTzws_…`)
- “What about the vegans that aren't taking those trillions and quadrillions of liv…” (`ytr_UgyHNjJhf…`)
- “Why do that when I can just draw them the way I want? Also, that's one of the …” (`ytr_Ugxqa-pyL…`)
- “🚨 Artificial Intelligence will NEVER be smarter than Humans. Baruch 3:36 [36]He…” (`ytc_UgzFF3pnI…`)
- “Oh you think at the End AI will just babysit humanity's incompetence :'DD Such a…” (`ytc_UgyEh7014…`)
Comment

> I don’t buy it. People thought computers were going to end all jobs, people thought Y2K was going to end the world. We will work side by side with AI just like we did everything else the Information Age brought.

youtube · AI Governance · 2025-09-06T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
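The four dimensions in this table form a small coding schema. As a minimal validation sketch, the check below uses only the category values observed in this page's examples; the actual codebook may define additional categories, so treat `OBSERVED_VALUES` as an assumption, not the full schema.

```python
# Allowed values per dimension, collected from the examples on this page.
# The real codebook may allow more categories (assumption).
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "industry_self", "liability", "ban", "regulate"},
    "emotion": {"approval", "resignation", "fear", "mixed", "indifference"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty list = valid)."""
    problems = []
    for dim, allowed in OBSERVED_VALUES.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim!r} value: {value!r}")
    return problems

# The coding shown in the table above for this comment:
result = {"responsibility": "none", "reasoning": "unclear",
          "policy": "industry_self", "emotion": "approval"}
print(validate_coding(result))  # -> []
```

A check like this is useful right after parsing the model output, since an LLM can emit a category outside the codebook.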
Raw LLM Response
```json
[
  {"id":"ytc_UgwAZ1MTxSna7HJroaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzCjjcrWrWB5lVHDLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwxKAMCwz8lep7w0714AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0kCVmg1KxqFiIUPd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzJGnxpYCGb25CECjN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugyv6Zc9bth551xMiZ14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxMeKF9dCwDVd6DdY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWQY4tJYAALq70EC94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzThRXluJvW2EFPgvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugxfn2ppd0G_TtROjC94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```