Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The day I'm told by a customer to shut up and put the fries in the bag, when I w…" (ytc_UgzzDRpLv…)
- "After watching this interview, I asked ChatGPT whether the conversation was accu…" (ytc_UgyNDaMih…)
- "Small note. It’s not that chatbots change throughout the day, it’s that they use…" (ytc_Ugz7On4Ft…)
- "Bruh, I know a lot more about NL Healthtech - IT IS AS WORSE AS AMERICAN TECH …" (ytr_UgwVcLNGg…)
- "Once you eliminate the work force by giving the jobs to A.I, who are they delive…" (ytc_Ugw6M-1iy…)
- "AI isn’t killing bachelor’s degrees, it’s liberalism, lack of critical thinking,…" (ytc_UgzLdUgqJ…)
- "I swear between this, weaponized politics and undeclared civil wars that manifes…" (ytc_Ugzlbis0_…)
- "You raise an interesting point! The dialogue highlights that while AI like Sophi…" (ytr_UgwQCtMRN…)
Comment
Computer development always worked exponentially (Moore's law, 1965 already). People like Hinton knew that when they started developing AI 50 years ago. Knowing this, it wouldn't have needed a genius to predict not only the emergence of a working AI in a few decades, but also that it will become much smarter than humans at least 40 or 30 years ago, and prepare for that. They didn't, because they didn't care, or because they were not smart enough to realize it, or they were bribed to forget about that. Now we have to face our potential extinction, and old people trying to explain themselves and to beg for forgiveness.
The scary thing is, that even it AI generally turns out to be benevolent to us, we all would end up without anything meaningful to do as work. Humans would degenerate within a few generations to some kind of roaming apes, pampered by robots. Phew ... I'm really happy that I'm closing up to 70 right now.
youtube · AI Governance · 2025-09-08T15:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzirJYpapHpTI3oSvV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx1XWPHwrei-YQRUO94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzKg7LlvRFqH_FECZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzknBUmybOJvk10Rk14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgymWHZChAa3x7RJvcp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxuem5FVz9AQ2T8I7l4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzLBJE-SFH9e6-Rgjx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzYsso-mkKK8bEVJMZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwjAlGlYWTjChStgbd4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxpeXLQAIr9-NMAFdV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
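The raw model response above is a JSON array with one object per coded comment, keyed by comment ID. The lookup-by-ID view shown at the top can be reproduced by parsing that array and indexing it. A minimal sketch, assuming Python with the standard-library `json` module; the `index_codes` helper name is hypothetical, and the sample response here is shortened to two entries with the same field names as the response above:

```python
import json

# Shortened raw model output: a JSON array of per-comment codes,
# with the same fields as the full response above.
raw_response = """[
  {"id": "ytc_UgzirJYpapHpTI3oSvV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx1XWPHwrei-YQRUO94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]"""

def index_codes(response_text: str) -> dict:
    """Parse a raw coding response and index each row by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = index_codes(raw_response)

# Look up one comment ID and read off its coded dimensions,
# mirroring the "Coding Result" table above.
row = codes["ytc_UgzirJYpapHpTI3oSvV4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

With the full response pasted in place of the shortened sample, the same lookup recovers any coded comment's dimensions from its ID.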