Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
Bro this tech is inevitable also youre forgetting to factor in how many complete…
ytr_UgzEoVkMc…
the ONLY slim chance we have is to boycott every company that replaces humans wi…
ytc_UgwAUhAAi…
I doubt any of these are AI generated dawg. You can't get an Ai to generate such…
ytc_Ugw0M16dH…
Yes, these guys were stupid and deserved to be laughed at.
However!
Consider t…
ytc_UgxVcajFK…
@PeterHansen8 agreed, AlphaFold is the real deal, but Sam Altman is pretending t…
ytr_Ugy1x0_4D…
The “no AI regulations for 10 years” part if the bill that passed in the House w…
rdc_murp97r
Last I checked, ai disturbance was a prime feature, which is so stupid, like you…
ytc_UgxOoBJNl…
I appreciated this talk, and it is a serious matter evolving in the areas of cre…
ytc_UgweUL_c5…
Comment
Isn’t it amazing what you can do with learning when bureaucratic policies are removed, legal requirements to keep disruptive students in class don’t exist, and the standardized testing which sole existence is to make politicians money doesn’t dictate the curriculum? It’s almost as if removing policies created by people who have never worked in a school are keeping schools from being innovative 🤔 so strange.
youtube
Cross-Cultural
2025-04-06T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugzz2YUk9zRTwZFYGrx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz4ytF-Z9vPZwn5qV54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxQ_JTZ9Z3Q86hFUIN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzl8jw8nDsLZdCnehZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwivihcqNVvVXPPRIh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzVOm6pTd1ef4_VxjJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx4qmlBnz307DgPqu94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgznvYfX3qSsICyzkZF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxvqW6N7tt5ahUnAiJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzS80pmemo16E3KHON4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
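A response like the one above can be checked programmatically before the codes are written back to the store. The sketch below is a minimal illustration, not the tool's actual pipeline: it parses the raw JSON array and indexes valid rows by comment ID. The allowed-value sets are an assumption reconstructed only from the codes visible on this page; the real codebook may contain additional values.

```python
import json

# Assumption: allowed values per dimension, inferred from the sample
# response above -- the actual codebook may define more codes.
CODEBOOK = {
    "responsibility": {"government", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"approval", "fear", "outrage", "resignation", "mixed"},
}

def validate_response(raw: str) -> dict:
    """Parse a raw LLM response and index valid rows by comment ID."""
    rows = json.loads(raw)  # raises JSONDecodeError on malformed output
    coded = {}
    for row in rows:
        # Every dimension must be present and within the codebook.
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"out-of-codebook {dim!r} in {row.get('id')}")
        coded[row["id"]] = row
    return coded

# First row of the raw response shown above, verbatim.
raw = ('[{"id":"ytc_Ugzz2YUk9zRTwZFYGrx4AaABAg",'
       '"responsibility":"government","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"approval"}]')
coded = validate_response(raw)
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: each coded record is retrieved with a single dictionary lookup.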