Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "in cybersec we always say to not send any ai model your work sensitive data…" (ytc_UgyljUkiF…)
- "What exactly is the AI race? What are we racing towards? Even his executive orde…" (ytc_UgxBiAqeX…)
- "The real risk isn’t AI turning on us. It’s us trusting it enough to stop questio…" (ytc_UgytoNt93…)
- "I agree with Dan Martell and Luke Belmar. Some degrees are just a waste of money…" (ytc_Ugw1oZ4qL…)
- "So I watched this with intrigue, and it's easy to sit there. Getting scared by i…" (ytc_UgxECd8tS…)
- "Hey @deepwell6057, thanks for your comment! You've got me blinking faster than a…" (ytr_UgwDP5yAr…)
- "## Analyzing Human Subjectivity and the Experience of AI **The proposition that…" (ytc_UgyuoLZlX…)
- "The problem is how an AI would interact with the world is fundamentally differen…" (ytr_UgxFSQlH5…)
Comment
So on the one hand he says AI will destroy us, and on the other hand he thinks that people should protest it, but they should do it legally and peacefully.
But that doesn't make sense, if AI really is going to destroy us we should do anything we can to stop that. right? Not just peaceful protest, with a high chance a failing...
So I think he doesn't really believe AI will destroy us. Else he would take a much more hardline stance.
Neither do I by the way. I think AI will bring much more good than bad to the world, as most technology does.
youtube · AI Governance · 2025-09-04T15:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
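Each coded dimension takes a value from a small closed label set. A minimal validation sketch is below; the allowed labels are only those observed in this batch of responses, not necessarily the full codebook, and `validate_coding` is a hypothetical helper name:

```python
# Label sets observed in this batch; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "company"},
    "reasoning": {"none", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"outrage", "resignation", "indifference", "mixed",
                "approval", "fear"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is well-formed."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The record behind the table above:
print(validate_coding({"responsibility": "none",
                       "reasoning": "consequentialist",
                       "policy": "none",
                       "emotion": "outrage"}))  # []
```

A record with an out-of-set value (e.g. `"emotion": "joy"`) would come back with one problem string per bad dimension, which is how a malformed model response can be flagged before it reaches the table.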
Raw LLM Response
```json
[
{"id":"ytc_UgytmEmZgQdo-jFVzWd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgztJERQ1zXAwiIF9pF4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwlto0MVmFcDGsNf1J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGHs5_Pwav6UtWdLN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyggsZW-p7XP-VgdG94AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx7VOFco2GoZ08djsF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyxc90j8mDv1aiAMKd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8wuIoBeTn185Q_8B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzEAY5qhFcVCkxJZgl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGEucBKZTrkUZTwcF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
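The raw response is a JSON array of coded records, and the lookup-by-comment-ID view above presumably indexes it by the `id` field. A minimal sketch of that parse-and-index step (the `index_codings` helper name is an assumption, not the tool's actual API; the sample data is truncated to two records from the response above):

```python
import json

# Two records taken verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgytmEmZgQdo-jFVzWd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgztJERQ1zXAwiIF9pF4AaABAg", "responsibility": "none",
   "reasoning": "none", "policy": "none", "emotion": "resignation"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID.

    Raises ValueError if the response is not a JSON array of records
    carrying an id plus all four coding dimensions.
    """
    records = json.loads(raw_response)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    by_id = {}
    for rec in records:
        if "id" not in rec or any(d not in rec for d in DIMENSIONS):
            raise ValueError(f"malformed record: {rec!r}")
        by_id[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgytmEmZgQdo-jFVzWd4AaABAg"]["emotion"])  # outrage
```

With the response indexed this way, "look up by comment ID" is a single dictionary access, and a response that fails to parse (or drops a dimension) surfaces as an error instead of a silently missing row.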