Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:
- `ytr_UgxVI52o8…`: Chemistry also doesn't tell us how it works. It had to be researched so we could…
- `ytc_Ugyg-C4cM…`: How about we have AI replace these souless, greedy cock suckers at the executive…
- `ytc_Ugg_qn7yj…`: Give rights. We don't want a purple lady sniping robot celebrity figures from ro…
- `ytc_UgxkIh2eg…`: It was already losing value before AI. AI is here to clean up the mess that "di…
- `ytc_UgwOVTniO…`: So this channel is just for fear mongering? Got it. AI is a computer program. …
- `ytc_Ugx7mBJc_…`: The only good thing to do with AI is turn it off, and keep it off. It may be pr…
- `ytc_UgwVDf2Xg…`: I'm not in denial that this is the future, but I have to say I am resentful arou…
- `ytr_UgjxCutHJ…`: I think the idea stems from the fact that we want smarter and smarter robots/AI …
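
The random-sample picker above can be approximated with a few lines of Python. This is a sketch only: the `comments.json` file name and the `id`/`text` record fields are assumptions for illustration, not part of any documented interface.

```python
import json
import random

# Load the coded comments; file name and record shape are assumed here.
with open("comments.json", encoding="utf-8") as f:
    comments = json.load(f)

# Draw eight comments at random, mirroring the sample list above.
for comment in random.sample(comments, k=8):
    preview = comment["text"][:80]  # truncate long texts for display
    print(f'{comment["id"]}  {preview}…')
```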
Comment

> AI is not conscious and so it has no desire or wishes. so it will not "want" to kill us. If it does, it will be because it is told to or it's data will drive it to. It is not clear how you make something conscious and none of the scientists in this video can tell you how. The danger then comes from human misuse of the technology which just does what you tell it to do in its unrestricted form. It is up to citizens to hold AI companies accountable and see through the hype they encourage around AI. Most of them like to position themselves as gatekeepers to this tech, when in fact when explained properly any average person can well understand how it works.

youtube · AI Governance · 2023-12-31T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
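
The record behind this table can be modeled as a small dataclass. A minimal sketch: the field names follow the table and the raw response below, while the example value sets are inferred only from what is visible on this page.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the four dimensions in the table above."""
    id: str
    responsibility: str  # e.g. "user", "ai_itself", "distributed", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "mixed", "unclear"
    policy: str          # e.g. "none", "regulate"
    emotion: str         # e.g. "indifference", "fear", "resignation", "approval"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```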
Raw LLM Response
```json
[
  {"id":"ytc_UgzDyfwk9HibtxrCzIx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzAyproHIrSdizcfER4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHSN-hxaeG5t9RlZV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyVXzl_QNFa5V0fmNJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-vK8R4T_XfigaeTp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw1fSxXCyw9zwMBLZ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwfwbP9YkYtoEX0FV14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwhGshfD0JlhVBNaRp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgweuiImBq4PQVwaXXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzVcc6qv62fXgPjK814AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
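
Because the model returns one JSON array per batch, the ID lookup described at the top of this page reduces to parsing that array and indexing it by `id`. A minimal sketch, assuming the raw response is saved verbatim to a file named `raw_response.json`:

```python
import json

# Parse the raw LLM batch response shown above; the file name is an assumption.
with open("raw_response.json", encoding="utf-8") as f:
    batch = json.load(f)

# Index the coded rows by comment ID for constant-time lookup.
by_id = {row["id"]: row for row in batch}

row = by_id["ytc_UgzDyfwk9HibtxrCzIx4AaABAg"]
print(row["responsibility"], row["emotion"])  # -> distributed fear
```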