# Raw LLM Responses

Inspect the exact model output for any coded comment: look it up directly by comment ID, or pick one of the random samples below.
## Random samples

- If I had a dime for every time I got an ad for something AI well watching this I… (ytc_Ugyi2sOtb…)
- Why is it not possible to automatically add a watermark to every YouTube video w… (ytc_UgxsLkL6y…)
- I fear that this conversation could very well be the defining evidence and justi… (ytc_Ugzm39767…)
- AI will not be who destroys humanity, it's the elites who are creating the ethic… (ytc_UgxxAeD-0…)
- There is no big difference between Ai and human mind on some sort of level, Hum… (ytc_UgxipCxuH…)
- "Is AI Coming for Your Job?" It already did. The owner was already retiring, so… (ytc_UgzgpzwS4…)
- Pretty sure this is going to happen regardless of various fears. Given enough t… (ytc_Ugie-lUnu…)
- Yea no this guy is an atrophysicist and AI is really not his field...i dont thin… (ytc_UgzqH8w4O…)
## Comment

> I like the argument of large misaligned social structures in the debate of AI safety: humanity created governments, corporate entities and other structures that are not really aligned with human values and they are very difficult to control. Growing food and drug industries resulted in epidemy of obesity and deseases caused by it. Governments and finantial systems resulted in huge social inequalities. These structures are somewhat similar to AI in the sense that they are larger and smarter than every individual human and at the same time they are "alien" to us as they don't have emotions and think differently. These structures bring us a lot of good but also a lot of suffering. AI will likely be yet another entity of this kind.

youtube · AI Governance · 2023-06-26T16:4… · ♥ 99
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
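Each coded record can be checked against the codebook before it is stored. The label sets below are hypothetical, inferred only from the values visible on this page; the real codebook may define more values.

```python
# Hypothetical label sets inferred from the coded output shown on this page;
# the actual codebook may allow additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "government", "user", "distributed"},
    "reasoning": {"mixed", "consequentialist", "virtue", "unclear", "deontological", "contractualist"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of validation errors for one coded record."""
    errors = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            errors.append(f"{dim}: unexpected value {value!r}")
    return errors

# The record shown in the Coding Result table above.
record = {"id": "ytc_UgzA8QT364rRklCbe8h4AaABAg",
          "responsibility": "distributed", "reasoning": "contractualist",
          "policy": "regulate", "emotion": "mixed"}
print(validate(record))  # → []
```

Running the check on every record in a batch catches labels the model invented outside the codebook before they contaminate downstream counts.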
## Raw LLM Response

```json
[
  {"id":"ytc_Ugz-xaGPm3D8c0ixwBJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwpCXSpz_jjcNwXPVZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyWT2UpaskQUMAayqZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyJMOcfjVCpVbEKq7p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw038Sm5-hO9QbDRQt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz-SmocC08gAzk5kgp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzA8QT364rRklCbe8h4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugx9jAOzKSBkQ2GH8K54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwt7LuF1KC8pyqZBbN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxF1j0N3Xrp1OOO34N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
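The look-up-by-comment-ID feature amounts to parsing the raw response and indexing the batch by `id`. A minimal sketch, using two of the records shown above:

```python
import json

# The raw model output is a JSON array of coded records, one per comment.
# Two records copied from the batch above for illustration.
raw_response = '''[
  {"id":"ytc_UgzA8QT364rRklCbe8h4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugw038Sm5-hO9QbDRQt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}  # index for comment-ID lookup

# Fetch the coding for one comment by its ID.
coded = by_id["ytc_UgzA8QT364rRklCbe8h4AaABAg"]
print(coded["policy"])  # → regulate
```

Building the index once per batch makes every subsequent lookup O(1), which matters when a run codes thousands of comments.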