Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
THE AI FACE RECOGNITION IS ONLY WITH A $20 A MONTH PLAN. AND RING ONLY SHOWS WHE…
ytc_UgzOxrEja…
Evert single low ware earner is going to be replaced by a robot in 10 years. 1/2…
ytc_UgxaNKhAc…
I think it would be wise to read the book "pure human" by Gregg braden, this boo…
ytc_UgyqLk5U-…
It’s AI pretty much as his sister says. Where were his parents fight when it mat…
ytc_UgxZ7FfKc…
Corporations want massive profits, and it's regulations that allow corporations …
ytc_Ugw8i3YmZ…
i don't really care if it uses my stuff (usually only my friends and occasionall…
ytc_Ugzq0hGbv…
I have recently started an experiment with Gemini and I can say I'm shocked to w…
ytc_Ugx19c0nG…
@JohnMitchem-e2k well, it's the enjoyment of the process that makes being an a…
ytr_Ugz8ERwex…
Comment
1:16:20 Counterpoint to the "Majority AI View" article: The engineers, PMs, etc who are tasked with taking LLMs from the lab to the market are not equipped to understand the technology. It's a very different beast. I work with a ton of highly competent engineers at one of the big AI companies with large conventional tech branches and the paucity of calculus or linear algebra knowledge alone creates such a barrier for them to deeply grapple with it. They fall back to reasoning about it with analogy to tools they're familiar with, like search, autocomplete, etc. The result is a pretty myopic bias toward assuming it will be like previous technologies.
I do think the alchemy analogy is accurate, but if anyone is the scientist in the room it's the people working on interpretability. The "tech people" who are working on deploying or fine tuning are not the experts whose opinions matter. They might be better informed than the general public, but it's marginal.
youtube
AI Moral Status
2025-10-31T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwGCenfic0DffQynGV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMwUrLPPKGZc7N7gZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxYTqk0c1AMEO-Cn0R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugzvezki_UIzKiot7-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwqT_qp2eypDr9Kwf14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYZAS6C1uYHlECl894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyJjyR6omrJ_AWUSwR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwTHp--dd6C17hBoY14AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyv3k5O2BLJBDFPWJN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz1YYOzpzTlkFa9XrV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
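The raw response above is a JSON array of per-comment records keyed by comment ID, which is what makes the lookup-by-ID view possible. A minimal sketch of that step, using two records copied verbatim from the response (the helper name `index_by_id` is hypothetical, not part of the tool):

```python
import json

# Two records taken verbatim from the raw LLM response above.
raw = """[
 {"id":"ytc_UgxMwUrLPPKGZc7N7gZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
 {"id":"ytc_UgyYZAS6C1uYHlECl894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

def index_by_id(raw_json: str) -> dict:
    """Parse the model output and key each coded record by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw_json)}

codes = index_by_id(raw)
print(codes["ytc_UgxMwUrLPPKGZc7N7gZ4AaABAg"]["policy"])  # liability
```

Note that the `company` / `deontological` / `liability` / `mixed` record here is the same one rendered in the Coding Result table above.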