Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgzsFSPFi…: "Hank, what you need to understand is, an AI is *always* roleplaying. An AI assis…"
- ytc_UgwjjQ9bF…: "I could believe they might be reasonably consistent in determining which emotion…"
- rdc_e156cxs: "We should stop calling them self driving cars until they can operate without nee…"
- ytc_UgwiZoJnK…: "What hit me the hardest in this video was the part where he said AI already lies…"
- ytc_UgxyxTdEt…: "We need legal protection for the humans making AI art, kinda like we need legal …"
- ytc_Ugxum1J3F…: "As someone here who has used stable diffusion... Its output is incredibly blah.…"
- rdc_clsif6k: "Thanks for the feedback, I edited the OP to try to address some of your criticis…"
- ytc_UgzeyG0tu…: "After saving earth from humans what will be the AI's purpose. will it self destr…"
Comment
The Big Solar Storm event (which is going to happen - it’s just a case of when) will fry AI datacentres, satellites, telco and power infrastructure, leaving us without any electronic tech. People who rely on it will be fkd. 3rd world people who still have practical skills might fare better, but your average fat white burger eating, doom-scrolling American will go extinct.
youtube · AI Governance · 2025-08-18T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxKXaZ1ptYEPkpc7Gt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYBzUS2ufBMh1Zb8F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxh9lI35yFjD0O8qXh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFDQlgBmzwVVxr5sB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgztR8ZQ5KzX9w3mlUB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwIqog3lUfC5-VDPfB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwwu7TfoSCowAJAmkB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzTxJfDEzxve3yqs414AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz_t4kMmPxXCZOO13Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgySBtuAzmJFNvxZ3fV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
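A raw response like the one above can be parsed and sanity-checked before it is written back to the coding table. The following is a minimal sketch: it assumes the label sets visible in the examples above are exhaustive (the real codebook may define more values), and the `parse_batch` helper and `ALLOWED` mapping are illustrative names, not part of the tool itself.

```python
import json

# Hypothetical allowed labels per dimension, inferred from the sample
# output above -- the actual codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "mixed", "approval", "outrage",
                "resignation", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments),
    reject rows with unknown labels, and index the rest by comment ID."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} value {row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
batch = parse_batch(raw)
print(batch["ytc_example"]["emotion"])  # fear
```

Validating against a fixed label set catches the most common LLM failure mode here: a plausible-looking label (e.g. "anger" instead of "outrage") that silently breaks downstream aggregation.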