Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- Hi there, I have used ai as a resource for inspiration. Not often, but sometimes… (ytc_UgzhJ2vrb…)
- I dont get why in first place is there selfdriving cars... If you cant drive, le… (ytc_UgykFYWYy…)
- I've done art for a game studio. We have one rule about AI, "you are allowed to … (ytc_UgzVHSHzu…)
- so theyre mad Ai is "stealing" their stuff, but then "steal" the art styles the … (ytc_Ugx50Qfp5…)
- Always modeled as hot chicks. We know what the first application of the tech wil… (ytc_UgxTZJwue…)
- I’ll be honest, if AI artist want to compete, let them. Treat it like any other … (ytc_UgwXrKEZ6…)
- What LLMs have shown is that we don’t need CEOs and billionaires. Their jobs can… (ytc_Ugy9jZhal…)
- We get that the idea of AI and robots can be a bit unsettling! Sophia does have … (ytr_UgwJlP7ix…)
Comment
I doubt AI could ever have emotions like desire, anger, fear. Thus I doubt AI would ever 'want' to end humanity. It's capable perhaps of rationally deciding to do so.
Wow, what a pompous statement, so glibly declaring that Elon Musk has no moral vision. He's repeatedly laughingly mocked conservative figures. He admitted he consumes the BBC, Guardian, & the NYT, & clearly he believes those media outlets are reliable.
He seems too glibly political, very far left. He's mentioned several times 'profit' in a disdainful way. Shouldn't a scientist be apolitical, not so clearly biased?
youtube · AI Governance · 2025-09-10T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwEIZlzL0A-G4XVTzJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy6uTkNC3tF0QMvhg54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxeeJV8m6iAqlAd3Vl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxtyYtGLS1IyJxxGqF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgydmFg1susEVqLSxo14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwOQ9O1rYu1hZpEekB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz6OjOLJWYQj57-4zF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQJb9aRTsUYjYDaX54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwfr7x9Izo75P6ntIF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxIguoouo8_Ysf8WCF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
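The raw response above is a JSON array with one record per comment, keyed by comment ID, with the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). As a minimal sketch of how such a response might be parsed and looked up by comment ID, assuming only the field names visible in the JSON (the `parse_coding_response` helper and the "unclear" fallback are illustrative, not part of the tool):

```python
import json

# The four coding dimensions visible in the table and JSON above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records)
    into a dict indexed by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Keep only the known dimensions; a missing field falls back
        # to "unclear" (an assumption, mirroring the table's values).
        coded[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return coded

# Example with one record copied from the response above.
RAW = """[
  {"id": "ytc_UgwEIZlzL0A-G4XVTzJ4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "approval"}
]"""

coded = parse_coding_response(RAW)
print(coded["ytc_UgwEIZlzL0A-G4XVTzJ4AaABAg"]["responsibility"])  # company
```

A lookup that returns no record for a given ID would mean the model skipped that comment, which is worth flagging before merging results into the coding table.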