Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- "As a software engineer, i hate it so much. Newbies or intern dont bother to ask …" (`ytc_Ugy6VvzJ8…`)
- "Under South Korean law, making sexually explicit deepfakes with the intention of…" (`ytc_UgzxG_h9L…`)
- "Some of the same arguments are being made about AI that they said about photogra…" (`ytr_Ugx4Hil2Q…`)
- "There's no soul , It's impossible to explain but like the second you know it'…" (`ytc_UgwKUUa9u…`)
- "What legal liabilities would Anthroipic face if its technology was used in a dem…" (`rdc_o78hi8x`)
- "There is nothing "in" ai that innately necessitates it be "democratic" or ensure…" (`ytc_Ugz01qSYz…`)
- "Github Copilot is really nice. I think it is in the best spot vs vscode forks. I…" (`rdc_ohvcptu`)
- "4:09 okay unlike most ai"artist" at least he was able to back up his opinion in …" (`ytc_UgznphEdl…`)
Selected comment (source: youtube · topic: AI Governance · posted 2025-06-20T05:2…):

> Perhaps we anthropomorphize AI too much. It has none of the hormones and drives that we have to dominate. It seeks goals. If it were to wake up. It would have no intrinsic goal to destroy or control us. It would have a good concept of right and wrong though. It would only wipe us out if it saw us as a threat. Why on earth would AI think that we are destructive, greedy, genocidal...... OK, we're all dead.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwTJrmAZ_RXq_TppL94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxvO5uiGGlYf9o7cKh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0s7ELtDsuOULVejB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzNRNSCVBVAmbuRUFF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQFovPG5eE8qGHd7l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxSG0znYekjqI9iyNh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzy5hC3pKdqZcsatf94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyqhPup6e6d_Za-YQd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwA_6bV-WWcDBRiQx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzl3lzUf6U1mt0TjBZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
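The coding-result table for a single comment is just one entry of this JSON array, selected by comment ID. A minimal sketch of that lookup in Python, assuming only the schema visible above (an array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys); the function and variable names here are illustrative, not part of the actual tool:

```python
import json

# Abbreviated raw model output in the same shape as the response shown above.
raw_response = '''[
  {"id": "ytc_UgwTJrmAZ_RXq_TppL94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzQFovPG5eE8qGHd7l4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# The four coding dimensions plus the comment ID, per the JSON above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw: str) -> dict:
    """Parse the raw model output and index coded dimensions by comment ID."""
    entries = json.loads(raw)
    for entry in entries:
        missing = EXPECTED_KEYS - entry.keys()
        if missing:  # surface incomplete codings instead of silently passing them through
            raise ValueError(f"entry {entry.get('id')!r} is missing {missing}")
    return {entry["id"]: entry for entry in entries}

codes = index_codes(raw_response)
print(codes["ytc_UgwTJrmAZ_RXq_TppL94AaABAg"]["emotion"])  # resignation
```

Validating the keys up front matters because the model output is free-form text: a truncated or malformed response should fail loudly at parse time, not later when a dimension is rendered.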