Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_n3ktgan`: "I've seen quite a bit of hype around claude I'll give it a go next time I'm on t…"
- `ytc_UgyA5ifYI…`: "Great video! Keep fighting the good fight! I find myself having a lot of the sam…"
- `ytc_UgyRmSV73…`: "Absolutely not / If AI was actually correct and unbiased maybe / But that's not t…"
- `ytc_Ugx8ezmoE…`: "People are just mad that their skills are easily replicated by automation. No on…"
- `ytc_UgxIrBarM…`: "In my opinion . The problem isn't AI, fake merchandise, fake foods etc. It's us,…"
- `ytc_Ugz7jjUbi…`: "Will AI recognize when established science information in a situation, proves, n…"
- `ytc_UgyFmxl94…`: "Regarding AI please interview Jason from Archaix. 2040 then 2046 is a reset. Kn…"
- `ytc_UgyNUbox9…`: "The comment that AI will get us better healthcare is dubious. They'll either mak…"
Comment
Okay, let's try to think about it. Let's be optimistic and assume we reach 50% unemployment, not the 99% we're talking about. At that point, how many people won't be able to afford a car? Sales would collapse catastrophically. Therefore, there would no longer be any need to produce many cars, profits would collapse, maintaining a production chain would become too costly, manufacturers would have to raise prices to compensate for the lost sales, and so they would sell even fewer. Producing cars would become pointless and uneconomical. Because while on the one hand, producers are needed, on the other, consumers are needed, those who buy the goods and services offered, regardless of whether they are produced by humans or robots. So wouldn't it be an economic system that would ultimately compensate itself? Humans are needed, because robots and AI are not consumers; without our human needs, artificial entities have no reason to exist. So the problem doesn't actually seem that serious to me. Or am I wrong? Please clarify my doubts!!!
youtube · AI Governance · 2025-10-25T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugyg6U2aA6M7auQodnR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},{"id":"ytc_UgxKQUiPErPeb-aNtVh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},{"id":"ytc_Ugyi1GRimpd9fW7aX_B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_Ugw0q8J-LjqfP3vWEat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_Ugz9TMACLxJaV25b5Hd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_UgzTaQ3KyBMDLOO2eaF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgyJnueEQ8KzVEWoXKF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgxbW2k8OlXDWmGoXP54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},{"id":"ytc_UgwElWTxLgiPk9MhxyF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_UgwIz96HpTIk1y1cIA54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"]}
```

Note the response ends with `"approval"]}` rather than the valid `"approval"}]`: the model closed the object and array in the wrong order, so the response is not valid JSON. This may be why every dimension in the coding result above fell back to "unclear".
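A minimal sketch of how such a raw response could be parsed and indexed by comment ID, assuming Python and the standard-library `json` module (the function name `parse_coding_response` is hypothetical, not part of any tool shown here). A malformed response like the one above raises `json.JSONDecodeError`, and a tolerant parser would return nothing rather than crash, leaving all dimensions "unclear":

```python
import json

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions}.

    Returns an empty dict when the JSON is malformed, so downstream
    coding falls back to "unclear" instead of crashing.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    # Index each record by its "id" field, dropping the id from the values.
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

# A response with the bracket transposition seen above fails to parse:
bad = '[{"id":"ytc_abc","emotion":"approval"]}'  # note "]}" instead of "}]"
print(parse_coding_response(bad))  # → {}

# A well-formed response supports lookup by comment ID:
ok = '[{"id":"ytc_abc","responsibility":"none","emotion":"approval"}]'
print(parse_coding_response(ok)["ytc_abc"]["emotion"])  # → approval
```

Keying the result by comment ID mirrors the "Look up by comment ID" workflow above: given an ID such as `ytc_UgyNUbox9…`, the coded dimensions for that comment can be retrieved directly.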