Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgwLfjuIo…`: He's right, AI will inevitably be used for evil purposes. "We have no idea if we…
- `ytc_UgyIXHvlV…`: AI doesn’t learn through experience like people do. AI lives in the abstract and…
- `ytc_Ugzyspp_L…`: No... Specialization will kill you faster. Anyone can train an AI on smaller dat…
- `ytc_Ugz4EDtNr…`: If a driver does something stupid, he is liable to some degree. Those creating t…
- `ytr_UgztCOr4U…`: It looks like you’re having fun with the spelling of Sophia's name! The play on …
- `ytc_UgwhjReZn…`: This is creepy. It’s like The Terminator movie and I-Robot coming to life in a …
- `ytc_UgzQnfNjA…`: Question: what happens when people stop producing art and AI has to train itself…
- `ytc_UgwnF0WoN…`: LLMs seem useful for some things, so long as you stay alert for the hallucinatio…
Comment

> It really is a regulation and ethics issue and not an omg omg let’s all bend over and kiss our asses goodbye because AI=bad and let’s all moral panic about it.

Source: youtube · AI Governance · Posted: 2025-10-06T06:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwyKCDsePj3r0T7LjV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw-W-p97ohipzghIIZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgxA9Sk9pXBFSciAXuJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw77MGemSXhzFUx9sZ4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgykmEHvk7lJEuYpXCV4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
```
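The coding table for a single comment is derived from a batch response like the one above: each element of the JSON array carries a comment ID plus the four coded dimensions. The lookup step can be sketched as follows; this is a minimal illustration assuming the response is a well-formed JSON array with exactly these field names, and `lookup_coding` is a hypothetical helper, not part of any tool shown here.

```python
import json

# One row copied from the raw batch response above; a real response
# would contain many such rows.
RAW_RESPONSE = """[
  {"id": "ytc_Ugw77MGemSXhzFUx9sZ4AaABAg",
   "responsibility": "distributed",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "resignation"}
]"""


def lookup_coding(raw_response: str, comment_id: str):
    """Parse a raw LLM batch response and return the coding row
    for one comment ID, or None if the ID is absent."""
    rows = json.loads(raw_response)
    return next((row for row in rows if row["id"] == comment_id), None)


coding = lookup_coding(RAW_RESPONSE, "ytc_Ugw77MGemSXhzFUx9sZ4AaABAg")
print(coding["policy"])  # regulate
```

Under this reading, the "Coding Result" table is just the matching row for the displayed comment, rendered dimension by dimension.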