Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Biden wanted to limit AI
How many of you voted for the other?
How many didn’t vo…
ytc_UgzEiY-lV…
Terminator coming you will see. We think we can control it but IA will trick t…
ytc_UgxHRy-qR…
Also, because everyone doesn't seem to know this: AI !Art" is NOT protected by c…
ytc_UgzAPh2Sb…
We should not solely depend on machines. They break down, if planes and trains a…
ytc_UgyCipFqv…
30:58 A distributed system in computer science refers to a collection of indepen…
ytc_UgxzQ8xVZ…
I can’t… I can’t believe people are using AI at all (hello, the planet is on fir…
ytc_Ugy-w_Ej3…
I think it's very naive to think these problems with AI won't eventually be solv…
ytc_Ugwopgp1w…
I haven't seen anyone else making this point, and maybe it's because I'm missing…
ytc_UgyRCqJ4j…
Comment
Dr. Yampolskiy is not a fraud... not even close. He's a credentialed researcher who coined the term "AI safety" and has spent his career taking these risks seriously. My concern isn't his background. It's that he presents a minority position with a confidence level the actual evidence doesn't support, and that's its own form of misleading. For context: Yann LeCun, one of the most respected figures in AI, places the probability of these catastrophic outcomes effectively at zero. Yampolskiy puts it at 99%. That gap isn't a minor academic disagreement. It's the entire width of the debate. And Marina, your audience deserves to know where your guest sits within that debate... not just that he sounds certain. Certainty is compelling. It's also, in genuinely unsettled fields, almost always a signal to slow down.
youtube
2026-04-17T22:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy6HE2TbDx8QWWGkdx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},{"id":"ytc_Ugyd-BS-5fkIKd-vO894AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgzS-A-3YLunNd1Fb_54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},{"id":"ytc_UgzzO5yKhZJOfQq5V5V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugzu7M8ticLFUXRUWtB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgwPTddvtcxT3V2qZpl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgzVSBTKlvhXdy8IChR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},{"id":"ytc_UgwO97KdPSH2YaXzJLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_UgyK5z0rsH6Jy2wRe454AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgzmMCjRRvZ4Nx_JweZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"})
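Note that the raw response above closes its JSON array with `)` instead of `]`, which a strict parser will reject; that would explain a coding result of "unclear" on every dimension despite a populated response. A minimal sketch of how such a pipeline might parse these responses and fall back safely (the `parse_coding` and `lookup` names are hypothetical, not the actual pipeline's API; the four dimensions come from the Coding Result table):

```python
import json

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding(raw: str) -> dict:
    """Parse one raw model response into {comment_id: {dimension: value}}.

    Returns an empty mapping when the output is not valid JSON --
    e.g. an array closed with ")" instead of "]" -- so every comment
    in that batch falls back to "unclear" on lookup.
    """
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    return {
        item["id"]: {dim: item.get(dim, "unclear") for dim in DIMENSIONS}
        for item in items
        if isinstance(item, dict) and "id" in item
    }


def lookup(codings: dict, comment_id: str) -> dict:
    # Unknown or unparsed comments are coded "unclear" on every dimension,
    # matching the table above.
    return codings.get(comment_id, {dim: "unclear" for dim in DIMENSIONS})
```

Under this sketch, a single malformed character in the model output silently degrades a whole batch to "unclear"; logging the `JSONDecodeError` (or retrying with a repair pass) would make such failures visible instead.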