Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "This is why AI isn’t intelligent. In the end it’s all curated by human programme…" (ytc_UgwiBW8qp…)
- "Algorithms work by collecting and reapplying previously made art. Most of the ar…" (ytc_UgzP40ANr…)
- "i wonder if companies will instead stay away from AI art for the reason that it …" (ytr_Ugwi8rlPV…)
- "AI won’t replace software engineers in actuality, but it will replace them in th…" (ytc_Ugz1A950a…)
- "How will a driverless truck make tight turns, etc... i dont see how it can go co…" (ytc_Ugw0PIkF6…)
- "So if all jobs are going to fully automated then who will buy their stuff?…" (ytc_UgwcfE7fU…)
- "So since i have an RTX card I can do all this Glitz cannon AI art stuff. Its li…" (ytc_UgyrG85Dm…)
- "learn or commission an artist. Ai is bad for people, and it's bad for the enviro…" (ytr_UgwS3f2HO…)
Comment
AI and AGI are the biggest existential threats to humanity. They are going to destroy life as we know it. As it grows, learns, duplicates itself, makes more "agents," and eventually surpasses human intelligence, wipes out millions of jobs and dehumanizes societies, AGI will create the end of civilization, in time, as the machines take over and eliminate "the eaters" who are unnecessary and costly to maintain on Universal Basic Income (UBI). AI 2027 presents a bleak future...man-made change is coming and not for the good of humanity, with the goal is eliminating all humans. To paraphrase Robert Oppenheimer, “Now AI am become Death, the destroyer of worlds.” A human-created dystopia for short-term profit by a select few...
youtube · AI Jobs · 2025-08-21T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyQYU3UapTR6yMLNjh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzuGdJJ7nfJKID1Qkh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz8IcCaDE2Vu1S2pEZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzTxUZ4pYiSLZgjkv54AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwDrlq7qpK51cmwb1J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxSkssnQRHE9mUeMF14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw14ik_AE9zTNAlQBB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugyqo3ZsTCwsrJqtEoV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyXf5KBUnMCRa0bkY14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyre2sVS6cBLabtr1V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
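As a sketch of how a raw response like the one above could be validated and indexed for comment-ID lookup: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the sample output, but the allowed value sets are inferred from this single batch and may not cover the full codebook; `parse_batch` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the sample batch
# above; the real codebook may define more codes.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index the rows by comment ID,
    rejecting any row whose dimension value is outside the codebook."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Example lookup on a one-row batch (hypothetical comment ID):
raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}]'
coded = parse_batch(raw)
print(coded["ytc_x"]["emotion"])  # outrage
```

Indexing by `id` makes the "look up by comment ID" view a dictionary access, and failing fast on unknown codes catches model drift before bad labels reach the results table.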