# Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
## Random samples

- "I am genuinely a polite person so I am always being polite to AI and message tha…" (`ytc_UgziC_Rcy…`)
- "It's stupid to ignore a reasons why people are so angry about ai, for sake of in…" (`ytc_UgwBDtf6d…`)
- "you guys are acting like this is proof ai is a bigger threat to humanity than hu…" (`ytc_UgzBiiL04…`)
- "We've all heard and seen what dangers films like the Terminator have shown about…" (`ytc_UgzwICy5a…`)
- "as a software developer I work with chatgpt every day, but I have a habit of ask…" (`ytc_Ugxxt7tXR…`)
- "Detroit become human takes place in 2038 and I, Robot in 2035. These time frames…" (`ytc_Ugx9g1XGv…`)
- "The people defending ai art are either jealous of actual artist skills, or too l…" (`ytc_UgzV9sGcQ…`)
- "So, there will be a need to implement a basic income. And we certainly don't nee…" (`ytc_Ugw0P2fAZ…`)
## Comment

> I was studying AI over 15 years ago in college, AI is not new but the development is new. The problem is no one wants an ethical person telling them what they should or should not. Capitalist countries are coddling the greed of the super rich to the detriment of humans and society.

youtube · AI Governance · 2025-09-08T16:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
{"id":"ytc_UgwgeFiZW1lp2hAi6ep4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw4K20UutP1SST9wkx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyWTk-utfFMepOEoIF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw574Z_STh8dwTTXTV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4kdh0YZcXO5wfwtV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxG8jMS0O9ykWX0ksR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxxtLxiGtMcCciWYzx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzK92t7MS9bMLkw0uF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw4KnaFCGlYfPzIySB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgySfIA8rB-10r3Qhi54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
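The batch response above is a plain JSON array of records keyed by comment ID, so the "look up by comment ID" view can be sketched with nothing more than the standard library. This is a minimal illustration, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the sample response shown above, and the two embedded records are copied from it.

```python
import json

# A truncated copy of the raw LLM batch response shown above (first two records).
raw_response = """
[
  {"id": "ytc_UgwgeFiZW1lp2hAi6ep4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw4K20UutP1SST9wkx4AaABAg", "responsibility": "company",
   "reasoning": "contractualist", "policy": "liability", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM batch response and index its coded records by comment ID."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

# Look up the coding result for one comment by its ID.
codes = index_by_comment_id(raw_response)
result = codes["ytc_UgwgeFiZW1lp2hAi6ep4AaABAg"]
print(result["responsibility"], result["emotion"])  # company outrage
```

In practice the raw model output may not be valid JSON (truncated arrays, stray prose around the brackets), so a production version would wrap `json.loads` in error handling and log unparseable responses rather than crash.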