Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.

Random samples:

- "Another option: AI won’t profile people and pull them over because of their colo…" (ytc_UgyCUxWkq…)
- "I totally feel the urgency around monitoring AI recommendations! It’s crucial fo…" (ytc_UgyF7m8iE…)
- "Robots assist with mining, oil drilling, nuclear, solar. Plus, AI is smart enoug…" (ytr_UgwzjK8EM…)
- "Spiderman: Into the Spiderverse, Spiderman: Across the Spiderverse, Kung Fu Pand…" (ytr_UgxF_XAuY…)
- "I watch videos while i eat my dinner and GOD this guy was like one of the worse …" (ytr_UgxMN5rcI…)
- "On the one hand I'm worried about artists getting shafted by ai. But on the othe…" (ytc_UgzeiVzjE…)
- "The scarey part is that this might work because most of the folks who control sp…" (ytc_UgwJaMxYc…)
- "exactly. Even if you aren't learning specific techniques, your still learning st…" (ytr_UgyyDx3fT…)
Comment
My question is and has been, to what end would AI do all these things. What would be it's motivation. Comparing it to human motivations like the need for money to buy food to survive, kill someone else so that they don't kill you in the case of wars etc, what would be AI's motivation to wipe out humanity? It does not eat, cannot be killed, has no emotion. So wipe out humanity then what? Unlike 'aliens' who have biological needs for survival (just giving an example) AI has no motivation other than what it mirrors of its users and/or creators. Please make me understand. Thanks
youtube · AI Governance · 2025-06-16T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz3zzyEG5V68b3yGjh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxCRgWpo1KFa49Zaj14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxLmHOY9xh-ckjFXrF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxSUpib1hcRSwdrVQ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxmoRo7KfUvI8YKBQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwbkmUCxPIA6RSM2OJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwocjrzEwsqLjn51814AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQ9iocCvn77xmtO3V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwp3htsGjG1Y9fKDph4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_tRF9MjH7Kx8szIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
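Because the raw response is a plain JSON array of coded comments, it can be re-indexed by comment ID with a few lines of code. The sketch below is a minimal, hypothetical example, not part of the tool itself: the file name `raw_llm_response.json` and the helper `index_by_comment_id` are assumptions made for illustration, and the looked-up ID is simply the record in this batch whose values match the Coding Result table above.

```python
import json

# The four coding dimensions shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_by_comment_id(path: str) -> dict[str, dict[str, str]]:
    """Load one raw LLM response (a JSON array of coded comments) and
    return a mapping from comment ID to its coded dimensions."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }


if __name__ == "__main__":
    # Hypothetical file containing the JSON array shown above.
    codes = index_by_comment_id("raw_llm_response.json")
    # Look up the record whose dimension values match the Coding Result table.
    print(codes.get("ytc_UgxmoRo7KfUvI8YKBQ14AaABAg"))
    # -> {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
    #     'policy': 'unclear', 'emotion': 'fear'}
```

Keeping the lookup keyed on the stable comment ID (rather than on comment text) is what makes it possible to trace any row in a coding table back to the exact model output that produced it.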