Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Ain't no way they think 3hr work on digital art is the same as ai. Ai will liter…" (ytc_UgwXcAw6Q…)
- "At 5:01 the man Robot 🤖 said what are you talking about I think gargle ? Or whoe…" (ytc_UgxArRWSM…)
- "its sentient alright. a demon is posing as a.i. They can possess this like they…" (ytc_Ugydau8dV…)
- "The programming and algorithms are no that impossible to do nowadays. Any govern…" (ytc_UgxDS7Q6p…)
- "Once AI replaces most human labor, then the global elite can begin culling the r…" (ytc_UgwOxV7Ld…)
- "E: [starts to explain scenario where AI could be dangerous]: ...different events…" (ytc_Ugyh3qy4H…)
- "Maybe this is why millions of women & children disappear. To have their faces pe…" (ytc_UgwQjmeOB…)
- "Amazing talk really appreciate your TedX Talk. I have been in healthcare for 35 …" (ytc_UgyejeUg0…)
Comment
I believe it’s far more likely that open source AI will enable fanatics and cults to carry out devastating attacks on both people and our infrastructure. A small terrorist group could design a biological weapon such as has never been seen before. They could disrupt global supply chains and banking systems. The earth cannot support more than about 10% of our current population of eight billion without our infrastructure.
youtube · AI Governance · 2025-08-04T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwIoOom2BIMEZaFyul4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy0sVOPz5Dnty7vX4R4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgySeEHytqv5Wnbuu8p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgxousR2AulTVP2RCSl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxCmEFgLqh5ioKVVN94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-F95BK1Gz-x5WNi54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyevSIntYjChKIbEkN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzzCMH-7gdUDZFQlr94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"excitement"},
{"id":"ytc_UgwS9OqywTFCWBgmgBh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyPgzrKDjokIv7El_l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
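A batch response like the one above has to be parsed and checked before its rows can populate the per-comment coding tables. Below is a minimal validation sketch in Python; the allowed values per dimension are inferred only from what appears in this sample (the project's actual codebook may define more categories), and the function name `validate_batch` is a hypothetical helper, not part of any shown pipeline.

```python
import json

# Allowed values per dimension, inferred from this sample output alone;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "government", "user", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "indifference", "disapproval", "resignation",
                "mixed", "excitement", "outrage"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid rows by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid or not cid.startswith("ytc_"):
            raise ValueError(f"missing or malformed comment id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {row.get(dim)!r} for {dim}")
        # Keep only the known dimensions, keyed by comment ID.
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# One row from the sample response above:
raw = ('[{"id":"ytc_UgwIoOom2BIMEZaFyul4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(validate_batch(raw)["ytc_UgwIoOom2BIMEZaFyul4AaABAg"]["emotion"])  # fear
```

Rejecting the whole batch on any out-of-vocabulary value is a deliberately strict choice; a production pipeline might instead quarantine bad rows for re-coding.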