Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Just hide until they run out of power. There are things AI can't but humans are … (ytc_UgzmIQltZ…)
- We are hearing the opinion of aged or is it 'sage-d' godfathers of AI - What ab… (ytc_UgxhJ2oMr…)
- They say LLM are not true AI. But what if we humans are also just LLMs? 😅… (ytc_Ugzc8ILNO…)
- The only way to take AI down is to act serious with it while asking very ridicul… (ytc_UgykfwZRl…)
- Literally every single commercial that popped up during this podcast was for AI … (ytc_UgxM-NpAH…)
- @mondohop713 The first one is relevant because I know people who went to art sc… (ytr_UgxD4ElVM…)
- Ok, how's this for a title: "If Anyone Builds It, Everyone Dies: Why Superhuman … (ytr_Ugy_nk2Ei…)
- If it's gonna take AI to kill off the dumbest of the dumb, then let it have at i… (ytc_Ugw_lfstL…)
Comment
My job is safe at least for a while. I work in finance, resolving escalated complaints. It is legislated that a human has to talk with the complainant. I don’t see any government, in the face of giant unemployment figures, changing legislation to allow AI to take over. I’m about to start an MBA specialising in AI integration into the workforce. I have a partial scholarship, but it will still be a $40k+ debt. Hoping this combination of skills, a slow to act government led by fossils, will see me out for at least the next 15-20 years, then I can retire early and i’ll be okay. It is still a decision weighing on me, what if I’m wrong and end up with a giant student debt and no job? Tough time to be alive.
youtube · AI Governance · 2025-09-09T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugzzbx9vHzsE0VXqyQ54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzRIFFmsvdWwWl9YcR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwaAHIgvku5oj8adxp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgwHoQ7iALtVtNFaUPt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxiDDxIhEzMHFZfqi54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugyu7QELa2d2YSmv0Ax4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwrMudl86G997DWrRl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzjL4fSeerPRGbjkDZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyRJiiJV2_OtddlXq94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw250M2RVuRl9i-9s14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```
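The lookup this page performs — mapping a comment ID to its coded dimensions in the raw response — can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it parses the JSON array above and indexes the records by `id`. The two records shown are taken verbatim from the response above; any helper names are hypothetical.

```python
import json

# Two coding records copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgzjL4fSeerPRGbjkDZ4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzRIFFmsvdWwWl9YcR4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
"""

records = json.loads(raw_response)

# Build an id -> record index so each coded comment can be looked up directly.
by_id = {record["id"]: record for record in records}

coded = by_id["ytc_UgzjL4fSeerPRGbjkDZ4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # government approval
```

The record retrieved here matches the "Coding Result" table above: the finance-complaints comment is coded `responsibility=government`, `reasoning=deontological`, `policy=none`, `emotion=approval`.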