Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- ytc_UgyATppEI…: "A.I. will not benefit humankind, cannot create more equality among human beings,…"
- ytc_UgwvCzD5k…: "While it is essential to consider these concerns, it is also important to recogn…"
- ytc_UgxQBjnkA…: "Even with AI they can't help but lie and be racist? How to be a loser 101. Every…"
- ytc_UgwI_7_j0…: "I’m sorry but why would AI try to take over the world? The real danger is if our…"
- ytc_Ugx966xqh…: "Sounds like this guys is talking about the future of the humanity. There is noth…"
- ytc_UgwWULqHj…: "Doritos locos. It's no wonder ai couldn't tell it's a doritos bag when you see a…"
- ytc_UgwbhZOkw…: "There should be a law that all AI content has a caption at the beginning, statin…"
- ytc_UgzYBacwm…: "All experts expect that AI will be conscious in the future... When the future co…"
Comment
The question is, who actually wants this future? People see the glamourous side of futuristic AI from movies where they assist in all ways and do the boring jobs while the humans go to work and so on and it looks great. But in reality, it will be the reverse. The bots will take the humans jobs, and the boring jobs become the jobs for us humans. Leading to a vicious revolution of job loss and poverty. Affecting those who have lost their jobs 1st, but then affecting the millionaires of today that rely on those people to purchase their products for their business to thrive. Alarming. We had a great balance, where IT was there as a tool created by us to help and entertain us, not replace us.
youtube · AI Governance · 2025-10-08T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
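The four coding dimensions in the table can be modelled as a small typed record with validation. This is only a sketch based on the category values visible in this sample; the actual codebook likely defines more categories than appear here:

```python
from dataclasses import dataclass

# Category sets observed in this sample only; the real codebook may be larger.
RESPONSIBILITY = {"none", "distributed", "ai_itself", "government"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
EMOTION = {"approval", "mixed", "fear", "resignation", "indifference", "outrage"}

@dataclass
class Coding:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str  # only "none" appears in this sample, so it is not validated here
    emotion: str

    def __post_init__(self):
        # Reject values outside the observed category sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion}")

# The coding shown in the table above.
coding = Coding("ytc_UgxTtEkRTddB2gc--e14AaABAg", "distributed",
                "consequentialist", "none", "fear")
print(coding.emotion)  # fear
```

Validating at construction time catches any coding the model emits outside the expected label set, rather than letting it silently enter downstream tallies.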
Raw LLM Response
```json
[
  {"id":"ytc_UgzbcXCuHnjr2u7JPFR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx1OeyMIHrBRkFpJfl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyymEPoHjznUjS7nQl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzUkDFg4EAeRWiK9fp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxL6f6ezyQGRdN4lKx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJrOjz91_1GnEUOtl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxHaUoBzlPN-YFDVz54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzRp2PkNfa7FFqpcnh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxGv9E33z5PqYXASKV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxTtEkRTddB2gc--e14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
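Because each model response is a JSON array of per-comment codings, looking up a coding by comment ID reduces to parsing the array and indexing it on the `id` field. A minimal sketch, using a hypothetical two-row excerpt of a raw response rather than the full batch above:

```python
import json

# Hypothetical excerpt of one raw LLM response: a JSON array of codings.
raw_response = '''
[
  {"id": "ytc_UgzbcXCuHnjr2u7JPFR4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxTtEkRTddB2gc--e14AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

# Index the codings by comment id so any single comment can be inspected
# directly, as the "Look up by comment ID" view does.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgxTtEkRTddB2gc--e14AaABAg"]
print(coding["emotion"])  # fear
```

Building the dict once gives O(1) lookups across repeated inspections, which matters when many batched responses are searched for one comment.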