Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response directly by comment ID.
Random samples — click to inspect

- "Wow these comments really don't understand the revolution that's coming. I see a…" (ytc_UgwCdo-QH…)
- "are you a kid? The entire porpuse of the video is show and talk about AI generat…" (ytr_UgwdAYy5N…)
- "I like all of your creation and this gibli art is far better than AI ones…" (ytc_UgwYsW7qW…)
- "The machines will be immune to all Bio Weapons that are stored in facilities acr…" (ytc_UgyhET_KY…)
- "the more I think about it AI is just auto-complete only instead of my phone usin…" (ytc_UgzpAVemy…)
- "Guess what AI stans, your favourite shows and movies that you watched as a child…" (ytc_UgygYTMMn…)
- "Just so everyone knows, each prompt you ask ChatGPT, even if it is a simple "tha…" (ytc_UgzO4mY-G…)
- "It is easy with Rephrasy AI! It is an efficient AI tool to humanize AI generated…" (ytc_UgytWvxl5…)
Comment

> I was just experimenting with generative AI for images and music before watching this, and now I can’t stop thinking about what happens once we move from creative models to truly superhuman reasoning models or systems. The idea of autonomous AI interfering with things like global financial infrastructure, etc feels both fascinating and genuinely unsettling—threatening even. It’s fun to play with these tools, but there’s a real edge to it too. At what point will corporations actually take the experts seriously about the risks we’re heading toward, now that we're seeing a tad of AI's sinister potential?

Source: youtube · Topic: AI Governance · Posted: 2026-03-17T04:1… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxFBh7sICuefzof2kN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwxTBPE9D4iYHNQevF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwzGmtADaJEY6k8qmd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOFfzGwQ6MfDj6MdJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwFprXAsM6XYxs3UQl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx9UBQQH62TXoM3hoV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyuvbCVMFKVZiHmmA14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbFeUalc0LjZuDAhF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwNWIG2RB08sYWiEpt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyIP835IEjrJ4_2SXB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
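A batch response like the one above can be turned into the per-comment coding table by parsing the JSON array and indexing it by comment ID. The sketch below is a minimal illustration, not the actual pipeline code: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown, but the function name is hypothetical.

```python
import json

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coded comments)
    and index each coding record by its comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# A one-record excerpt of the batch response shown above.
raw = '''[
  {"id": "ytc_UgxFBh7sICuefzof2kN4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]'''

codings = index_codings(raw)
# Look up one comment's coding by ID, as the dashboard does.
print(codings["ytc_UgxFBh7sICuefzof2kN4AaABAg"]["emotion"])  # fear
```

In practice the raw model output may contain surrounding text or malformed JSON, so a production version would want to validate each record against the expected dimensions before indexing.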