Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below:

- I doubt that anyone (who has some basic understanding of how software is created… (ytr_Ugxrma_9j…)
- This AI bubble rush is going to lead us down to another Horizon IT scandal situa… (ytc_Ugxb3ILGQ…)
- China even said they dont use thier ctv facial recognition system to catch crimi… (ytc_UgyNrNcJF…)
- Is the insufferable AI writing intentional here? 'It's not just X, it is Y' no l… (ytc_Ugwz4dpRM…)
- Some countries in the world could do with western cultural bias of fairness and … (ytc_UgxeW9kUW…)
- does a human kill ants to wipe them out ? just rememeber this is an AI making th… (ytc_UgyB4eHmJ…)
- This is a Really honest video. I’ve seen all of this first hand. Even in a tech … (ytc_UgwYYYr86…)
- "AI art is accessible compared to traditional art"... Me with the humble pen and… (ytc_UgxlkwJqx…)
Comment
AI should be controlled in a similar way to nuclear arms. It should come with the same level of concern and severity. Yes, like atomic energy, AI can be extremely useful and it can help us advance, but it can also be extremely dangerous in the wrong hands. It's already spelling bad news for people working in the creative industries like voice actors, music producers and visual artists alike. The AI race is basically the new space race and the new arms race. If one country has it, then the other has to outdo them. In the process, systems are getting more and more advanced and there is little control over who can possess such technology. Anyone with a home PC can train an AI to do all sorts of things, including impersonate others, swap faces on images and create 'fake news' and false information. AI-generated spam can be so convincing that even the most savvy people can fall foul of it.
There needs to be a global treaty on AI control and its proliferation, in a similar way to how we have treaties which prevent nuclear arms proliferation. These treaties should limit AI use in areas like defence, finance, medicine (although its application in medical and scientific research should be allowed with stringent controls in place) and government. AI should be centred around human interests and things that benefit us; it does not have a place on the battlefield, for example, where advanced AI systems could be used to commit war crimes with plausible deniability for the offending party.
Source: youtube · Video: AI Moral Status · Posted: 2025-06-08T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz6iwdnKdcUE2DKv054AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzfCXo6_G3kF_LNXDZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyO3tUSXDuTYIK6iJl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyg8zIBwCUtYHEZYuV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxKNhKtPojWz-13TZZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYS6PtrkGJV17QiT14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwv3seHZKYuRorO2pZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgydX5R84ERgbBnZeTR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx6ppjIBSQqpeAINEd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_UgyK2RjAOqG-T5XItJh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
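A raw response like the one above has to be parsed and validated before its codes can be trusted, since an LLM can emit labels outside the codebook. The sketch below shows one way to do that in Python. It is a hypothetical illustration, not the tool's actual pipeline: the `SCHEMA` sets are inferred only from the label values visible in the responses on this page, and the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above (assumption: the real codebook may include more labels).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows.

    A row is kept when it is a dict with an "id" key and every coding
    dimension carries a value from SCHEMA; anything else is dropped.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Example: one well-formed row is kept, one out-of-schema row is dropped.
raw = (
    '[{"id":"ytc_A","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"ytc_B","responsibility":"alien","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
print([row["id"] for row in parse_coding_response(raw)])
```

Dropping invalid rows rather than repairing them keeps the downstream counts honest; a stricter variant could instead log each rejected row's ID for manual recoding.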