Raw LLM Responses
Inspect the exact model output behind any coded comment. Look a comment up directly by its ID, or browse the random samples below.
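As a minimal sketch of the lookup, assuming the coded records are stored as a JSON array like the one shown under "Raw LLM Response" below (the file name `raw_llm_responses.json` and the helper `find_by_id` are hypothetical, not part of the tool):

```python
import json

def find_by_id(path: str, comment_id: str) -> dict | None:
    """Return the coded record whose "id" matches comment_id, or None."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array of per-comment codings
    for record in records:
        if record["id"] == comment_id:
            return record
    return None

# Example: the first record from the raw response shown below.
record = find_by_id("raw_llm_responses.json", "ytc_Ugy5nzhpBpXHtDITV6x4AaABAg")
if record is not None:
    print(record["responsibility"], record["reasoning"],
          record["policy"], record["emotion"])
```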
Random samples (click any to inspect):

- "Automating jobs is fine as long as new jobs are created to replace those positio…" (ytc_UgymENU3G…)
- "@8:16 LLM stands for Large Language Model, not Large Learning Model. And that's …" (ytc_UgwQOfc5c…)
- "I don't know about everyone else but I am tired of working anyway. I'm tired of…" (ytc_Ugw6vYbf2…)
- "Ai for Reference hit especially close to home for me, because my animation profe…" (ytc_UgzxZ55E3…)
- "@mistuhwhite69 , I feel as though it doesn't matter, because you know that it's …" (ytr_UgyxDmU5T…)
- "This is the most amazing and sometimes amusing interview, have found. Last week…" (ytc_UgzkeLJGw…)
- "@ yeah ceos because people love talking and doing business with an AI voice… no …" (ytr_UgwrBj8ic…)
- "These are chat bots with moving parts programed to respond to specific commands …" (ytc_UgwI4iHCd…)
Comment
I really think focusing AI regulation on AGI is a pointless distraction that obscures the way that AI consistently is used in harmful ways already. Given that "intelligence" isn't really a quantitatively measurable thing (not in its entirety and not with any accuracy) AGI is already relegated to being a buzzword rather than an actual standard anything can be compared to. Meanwhile LLMs are being sold as an alternative to human workers and Sora is making misinformation more prevalent. The people who profit from this harm are a very small group and many already have lifetimes worth of money. it's frankly stupid to be talking about AGI like a) it'll probably exist and b) it's a relevant issue right now. There are real, non-sci-fi issues with the industry that can be adressed.
| Platform | Video | Posted | Likes |
|---|---|---|---|
| youtube | AI Moral Status | 2025-11-01T02:4… | ♥ 4 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
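The coded dimensions map naturally onto a small record type. A minimal sketch in Python, assuming nothing beyond this table and the raw response below: the class name `CodingResult` is hypothetical, and the values noted in the comments are only those observed in this sample, not necessarily the full codebook.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str  # observed: "none", "company", "developer", "ai_itself"
    reasoning: str       # observed: "consequentialist", "deontological", "virtue", "mixed", "unclear"
    policy: str          # observed: "regulate", "liability", "none"
    emotion: str         # observed: "indifference", "fear", "outrage", "approval", "mixed"
    coded_at: datetime

# The row shown in the table above.
result = CodingResult(
    comment_id="ytc_Ugy5nzhpBpXHtDITV6x4AaABAg",
    responsibility="none",
    reasoning="consequentialist",
    policy="regulate",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```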
Raw LLM Response
```json
[
{"id":"ytc_Ugy5nzhpBpXHtDITV6x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugwhy-_ektzjYrwZg3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyY81eIZ9Ht6vm_l8d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzh2fxLGfLTzk2nmJl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw2zUO-efpUZtWy4Ex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyOi8Sl6ZGRdkwZpyd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzlMGwP678Uvk4uTwt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwEDAY-BLwPAV980N14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxUevvTjVxa5Bhhw3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyF8QubCPPM10BS66h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
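Recovering per-comment codings from a response like this is a matter of parsing the JSON array and indexing it by `id`. A minimal sketch, assuming the response text is stored in a file (the file name `raw_llm_response.json`, the function name `parse_raw_response`, and the key check are ours; the required keys are inferred from the sample above):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of codings) into a dict keyed by comment ID.

    Raises ValueError if the response is not a JSON array or an entry is missing keys.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    by_id: dict[str, dict] = {}
    for record in records:
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record missing keys: {sorted(missing)}")
        by_id[record["id"]] = record
    return by_id

# Example: pull the coding shown in the table above out of the raw response.
raw = open("raw_llm_response.json", encoding="utf-8").read()  # hypothetical file
coding = parse_raw_response(raw)["ytc_Ugy5nzhpBpXHtDITV6x4AaABAg"]
print(coding["policy"])  # -> "regulate"
```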