Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
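For scripted access, the same lookup could be done directly against the store of coded comments. The sketch below is illustrative only: it assumes the codes live in a SQLite table named `coded_comments` with an `id` column, which is not necessarily the tool's actual schema.

```python
import sqlite3

def get_coded_comment(db_path: str, comment_id: str) -> dict | None:
    """Fetch one coded comment by its ID, or None if it was never coded."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # access columns by name
    try:
        row = conn.execute(
            "SELECT * FROM coded_comments WHERE id = ?",
            (comment_id,),
        ).fetchone()
        return {key: row[key] for key in row.keys()} if row is not None else None
    finally:
        conn.close()

# Hypothetical usage:
# codes = get_coded_comment("codes.db", "ytc_UgyKBK4BzIw1dPH34mt4AaABAg")
```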
Random samples — click to inspect

| Comment preview | ID |
|---|---|
| Everyone talking about AI, and I'm like It's Beneath the dragoneye moons! I have… | ytc_Ugxm4eW4p… |
| The way to fix the AI issue is AI files should have their own file format that c… | ytc_UgzPnhHaM… |
| Guess what, all we have to do is stop using AI, stop paying for companies and th… | ytc_Ugw14aRZj… |
| I have been worried about AI for years, but they will never rule the world. They… | ytc_UgxTCxZn3… |
| Ah , AI is taking the best parts of most jobs , so humans will have to do the ha… | ytc_UgxhC0ifU… |
| you could also learn how to draw and make your own concepts and art and THEN com… | ytr_Ugzdjts3R… |
| She will probably regret this decision. The negative social pressure she will f… | rdc_cjovcfa |
| when she do in movie or song it's ok,its just acting,so pornography also acting,… | ytc_UgzIcqEwJ… |
Comment
I understand the importance of raising awareness about the risks of AI, but I wonder if we’re focusing too much on the potential negatives. Since large language models learn from the data we feed them, could all this speculation about dangers end up shaping their development in unhelpful ways? It worries me that constantly discussing worst-case scenarios might even reinforce or inspire those outcomes. Wouldn’t it make more sense to focus our energy on highlighting the positive possibilities so that these systems are more likely to reflect and support constructive, beneficial uses instead of being shaped by fear?
youtube · AI Governance · 2025-06-17T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
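Each coding result assigns one value per dimension. The record type below is a minimal sketch of how such a result might be represented and checked; the allowed values are only those visible in the raw response on this page, not necessarily the full codebook.

```python
from dataclasses import dataclass

# Category values observed in the batch response below; the real codebook may include more.
RESPONSIBILITY = {"company", "developer", "ai_itself", "distributed", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"regulate", "ban", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "frustration", "resignation", "indifference", "approval", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any coded value falls outside the known categories."""
        for field_name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"unknown {field_name}: {value!r}")
```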
Raw LLM Response
[
{"id":"ytc_UgyE2A-BpySV8YDcjUJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwvxw38GmFt5b1KBCx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwocJF30chK-3AwJEd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx4O2tW82yzu_s0zoZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKBK4BzIw1dPH34mt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgzdXL7gWUP6kynLNLJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDVDpOHyteAbJjB2p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyqT72OZ8u8pIIVjdR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwYsm57zeKFnN6vxJR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"frustration"},
{"id":"ytc_UgwxdLBrb_jPBFC37eh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
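Because the model codes comments in batches, the raw response is a JSON array with one object per comment. A minimal sketch of extracting a single comment's codes from such a response (variable and function names are illustrative):

```python
import json

def extract_codes(raw_response: str, comment_id: str) -> dict | None:
    """Parse a batch coding response and return the record for one comment."""
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# With the response shown above:
# extract_codes(raw, "ytc_UgyKBK4BzIw1dPH34mt4AaABAg")
# -> {"id": "ytc_UgyKBK4BzIw1dPH34mt4AaABAg", "responsibility": "distributed",
#     "reasoning": "consequentialist", "policy": "industry_self", "emotion": "fear"}
```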