Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
The longer time has gone by i have gotten less and less intimidated by AI slop. …
ytc_UgynK_U2A…
AI gon bring alotta light about the crime rates in the black community too, can’…
ytc_UgyBPv0Mc…
Humans are constantly seeking an outside influence to solve the problems they cr…
ytc_UgyUOEoyA…
I’m not afraid of ai to take our jobs. Cuz as soon this happens data centers and…
ytc_UgzILDBHJ…
See all ChatGPT can do really is mimic the way people talk, it can't really thin…
ytc_UgwTwyiey…
Why does everyone get so worked up about Grok becoming Mecha Hitler? EVERY AI ha…
ytc_UgwTyr5Xz…
Art, whether visual, music, or writing, is what makes us human. Creating is perh…
ytc_UgzTzg-4k…
I cannot wait for the luddites to just die off. Electricity isn't going to end h…
ytc_Ugws9w1yI…
Comment
The real story here isn't just that researchers are leaving - it's WHERE they're going. Many are founding safety-focused startups or joining organizations like Anthropic specifically because they believe the current trajectory at major labs prioritizes capabilities over alignment. When the people who understand the technology best vote with their feet, that's a signal we should all pay attention to. The gap between what AI can do and what we can verify it's doing is growing faster than any safety framework can keep up with.
youtube
AI Governance
2026-03-18T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwsVuCA0Ep-9leZtOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwfl8mowoza0wyRupR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwqAmQrBVJlomfWO_d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzmaxfHqz62hKrYR_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUa7tdq_WZL_-yeGd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx_PnjcgrUpTOe-7Pl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgykOz1z3pumU_oYN-B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzpUeeLbpPtaSXysUd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxDGPkvpD6Oj22qCK14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw3X7VaxOReI94Mw5Z4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
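The raw response above is a JSON array of per-comment code assignments across the four dimensions shown in the coding-result table. A minimal sketch of how such a batch response might be parsed and validated is below; the allowed value sets are inferred only from the labels visible on this page (the full codebook may define more), and the comment IDs in the usage example are hypothetical.

```python
import json

# Allowed labels per dimension, inferred from the table and raw responses
# shown above (assumption: the real codebook may include additional labels).
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "ai_itself", "developer", "user"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM batch response into {comment_id: codes},
    dropping any row whose values fall outside the allowed label sets."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if cid and all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Hypothetical IDs for illustration; the second row carries an
# out-of-codebook value and is rejected.
raw = (
    '[{"id":"ytc_example1","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"approval"},'
    '{"id":"ytc_example2","responsibility":"martians","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
result = parse_raw_response(raw)
```

Validating each row before storing it is one way to guard against the model drifting off the codebook; rejected rows could instead be queued for a retry prompt.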