# Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.

## Random samples
- ytc_Ugyg26i-c… — "Let AI take over. Take the blue pill and follow Cypher. Matrix is better than re…"
- ytc_Ugzw7Pw2P… — "Youl never get it to stop while governments are always in competition with each …"
- ytc_UgwE0M-nX… — "One thing to consider is that self driving cars almost entirely eliminate human …"
- ytc_UgwCvh09o… — "There's room for more accessibility tools for art, but generative AI is not acce…"
- ytc_UgzGEpEBv… — "Yes Miss Cummings, someone will die from self driving tech. People die EVERY DAY…"
- ytc_UgzBquZ1k… — "AI is a crime against humanity, the next thing after covid vaccine. Aim to destr…"
- ytc_Ugzzlh3UJ… — "The idea that AI is supremely dangerous is in direct contradiction of the simula…"
- ytc_UgwW7hroC… — "I mostly agree with this video but I feel like nuance is missing here I think co…"
## Comment

AI writing is the average of all opinions, understanding and passion for any given subject. For AI to be great, it would have to self learn to the extent that it is an original thinker. At that point, another branch of AI will have decided your future in a way that best serves itself. This is probably in an advanced state of development. It would be foolish to believe we can stop it taking over and controlling us, to its will, from here.

- Source: youtube
- Video: AI Responsibility
- Posted: 2025-07-27T12:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
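A coded record like the one above can be sanity-checked against the category values seen in this batch. The sketch below is a minimal example; the value sets are only those observed in the table and raw response on this page, and the full codebook may define additional categories.

```python
# Category values observed in this batch; the full codebook may allow more.
OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "developer", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in OBSERVED_VALUES.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the Coding Result table above:
coded = {"responsibility": "ai_itself", "reasoning": "consequentialist",
         "policy": "none", "emotion": "resignation"}
print(validate(coded))  # → []
```

A record with a missing or unknown value for any dimension yields one problem per failing dimension, which makes batch-level QA reports straightforward.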
## Raw LLM Response
```json
[
{"id":"ytc_UgzfQkuwjUL-zThK9Np4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw5jaavTaZwWERcjMJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy4s7gCNrDr1Vmavpt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz40Shar1q17Ep5yfB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyZSh-eNgd3rMYOyQt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxw5hm9mF0a4XEhNtd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwjFwuGVG1lzV2TcCh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz7FKAuJ7DHZNhwW994AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyEdRGjAxoJRaKu0Px4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUpiXSf-9wi45SPiJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"outrage"}
]
```
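Since the raw response is a JSON array of per-comment records, looking a comment up by its ID reduces to parsing the array and building an index. A minimal sketch, using two records taken verbatim from the response above:

```python
import json

# Two records copied from the raw LLM response shown above.
raw = '''[
{"id":"ytc_UgzfQkuwjUL-zThK9Np4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy4s7gCNrDr1Vmavpt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# Parse the array and index the records by comment ID for O(1) lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

rec = by_id["ytc_UgzfQkuwjUL-zThK9Np4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # → distributed fear
```

This is the same lookup the "Look up by comment ID" control performs: the coded dimensions for any comment are recovered from the stored raw response without re-querying the model.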