Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI fails is the best outcome. The billionaires will lose billions but eventually…
ytc_UgyX3fRtO…
As a traditional artist I can proudly say.. skill issue👺
In all seriousness tho …
ytc_Ugx8VsaBX…
AI robot voice telling me what to think about AI. Dude just use your real voice…
ytc_UgyzQIwp5…
Tesla's Autopilot has crashed numerous times - but lawmakers (ie insurance compa…
ytc_UgziacTva…
You've nailed it on the head pinkyponk.
This is exactly how I see the future he…
rdc_dbzisqm
Maybe 3th world war might be humans against AI, maybe therminator was a visionar…
ytc_UgzdedMmo…
Look im not hating but face recognition mixing faces up is bound to happen. Many…
ytc_UgwYIo3zR…
I believe AI is going to be used to track everyone in the world when a one world…
ytc_UgxDJg3b3…
Comment
I'm glad that Ezra grilled this guy the way he did. I've watched several interviews with these AI alarmists and really they're all the same. They're making anthropomorphic leaps when describing how AI operates, and a lot of these otherwise intelligent people genuinely don't seem to notice, which is strange to me. I mean...we're talking about lines of code. Lines of code - words - cannot spontaneously develop sentience, or consciousness, or desires, or resentment. They're literally words.
I still await an interview with an AI alarmist who can actually convince me that there's something to be concerned about aside from human interception and manipulation for bad ends.
youtube
AI Governance
2025-10-15T20:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx0eO84iCVdGa-cKip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8PlCBzNjvAigLxFh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyt3hv5O8ERb9YLSoB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyfgxGpRqKXk1E697R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxV6pE8mgjX3NxCgAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzOAM377rC3BN7EAil4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxnVyar3ZKhY8tQS2B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxXCp0x5W-aQeQ8lBp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzdO69m5g0_OjZkzkd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
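The raw response above is a plain JSON array of per-comment codes, so looking up a single comment by ID reduces to parsing the array and indexing it. A minimal sketch in Python, assuming the response parses as shown (the two rows below are copied verbatim from the array above; the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as shown above.
# Only two rows are reproduced here for brevity.
raw_response = '''
[
  {"id": "ytc_UgzOAM377rC3BN7EAil4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxV6pE8mgjX3NxCgAN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]
'''

# Index the codes by comment ID so any single comment can be looked up.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's codes by its ID.
code = codes_by_id["ytc_UgzOAM377rC3BN7EAil4AaABAg"]
print(code["emotion"])  # -> outrage
```

The dict comprehension keyed on `id` mirrors the "Look up by comment ID" affordance: each comment's four coded dimensions (responsibility, reasoning, policy, emotion) come back as one record.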