Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- AI and Androids will destroy 90% of jobs, in a very short period of time, homele… (ytc_Ugw2k65LU…)
- I agree with the broad concerns but not the proposed solutions. Many of them cou… (ytc_Ugxg8v4Om…)
- Guy "What is the best way to microwave this food?" AI "Do not bypass the microwa… (ytc_UgyPNnMo4…)
- The implications of this technology are going to necessitate implementing some f… (ytc_UgzFl-Sup…)
- 0:03 yeah that’s bullshit the only reason I ever tried ai “art” was because ther… (ytc_UgxD2KNVH…)
- Your silly anti-ai argument crushes as soon as someone would not tell you that i… (ytc_Ugy6uqmJy…)
- Fascinating, if highly depressing discussion. I'd like to think humans will get… (ytc_UgwfI1vnL…)
- Waymo is dangerous and needs to not be supported. Think about it, this can get h… (ytc_Ugzu99oWv…)
Comment
They had me right up into the Simulation argument, then the train came off the rails. Their argument is one of conflicting outcomes; live forever, but AI becoming self-aware means you can't earn an income? Well isn't that special? No one in government, irrespective of party or affiliation, is smart enough to figure out how to create a sustainable society where income is not important, and the private sector can't allow it, because how will they derive and drive wealth? Another interesting thing about Dr Yampolskiy is how he tilts toward a religious solution, or at least touches upon it in a non-destructive or detrimental way. "All we have to do to live forever is live long enough to cure the disease that is death." Or as he suggests (the religious solution), is to die from this simulation, and live forever in another attempt.
youtube · AI Governance · 2026-03-30T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx6O0jKnc-aFVb2cRN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylxtVI4V-sY4d2tht4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxvbMzY4drb4b8h-aR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzFNh0IiPUOY-OFGgt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx4C3yTpmNYCI2Er6B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxJHT5RBWeqvKxss8F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxusFNv7Cw79Vlnwzl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxeWRsAB03c6aEAtyd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwBwcGPU4x0eRukGCp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyT00TUtBYCbBsaGdJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
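Because each raw response is a JSON array of per-comment code rows keyed by `id`, looking up the codes for any one comment reduces to parsing the array and indexing it by ID. A minimal sketch of that lookup, assuming the field names shown in the response above (the `RAW_RESPONSE` string here is an abbreviated stand-in for a stored model output, not the tool's actual API):

```python
import json

# Abbreviated stand-in for a stored raw LLM response (same shape as above).
RAW_RESPONSE = """[
  {"id": "ytc_Ugx6O0jKnc-aFVb2cRN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzFNh0IiPUOY-OFGgt4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw response and index its coded rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgzFNh0IiPUOY-OFGgt4AaABAg"]["emotion"])  # outrage
```

In practice you would build this index once per batch response and consult it whenever a coded comment needs to be traced back to the exact model output.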