Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If AI takes control of actually humanity people can't work can't get paid can't …" (ytc_UgzzKsXKn…)
- "Welllll…if the people speeding it up were not around..it would slow down. People…" (ytc_Ugy7NEQ5W…)
- "Do your job, don’t call in sick, don’t take vacations. Just like AI does and you…" (ytc_Ugy2g1BM8…)
- "Share your thoughts. What you think artificial intelligence will help us for bet…" (ytc_UgyxFdQXD…)
- "Scenario... AI ruled the world at one time, killed everyone (because humans got …" (ytc_UgzptcmsA…)
- "ive never met someone who uses ai art that acts ike this, and fyi, im one of tho…" (ytc_UgzFTKzEG…)
- "The problem with AI is that by developing it we would degrade ourself from the t…" (ytc_UgjPJM6Jn…)
- "I dunno...if we reach the point where AI becomes sentient, I'd rather have it as…" (rdc_dy5cptz)
Comment

EU always going against innovation. AI is gonna be everywhere and the fact they’re trying to destroy and regulate AI doesn’t mean in Europe will be better. It’ll mean that less people will make money to move to continents where AI is used and literally means the other continents are gonna develop MASSIVELY X1000 Compared to Europe. Literally always doing the worst for innovation ffs

youtube · AI Responsibility · 2025-01-26T16:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx3i_ee2UwzIhjafE14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2BMRvO6c4vcZUwz94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyndJx21DzVhCU6FdB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyXFWugRwdi2rGIeex4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx9pkU3jkcgYCFtk094AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxo24lYjGLevusMH_d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzYwGZU8FJ-OGYtlIB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxb5HnWh1qsgj74oFZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzCZeGSdI6s0QcKsCh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgylgHjLdAuvUKS-OKN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
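A raw batch response like the one above can be turned into per-comment codes with a small parsing step. The sketch below is illustrative, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown here, but the allowed label sets are inferred only from the values observed in this sample and the coding-result table, so the real codebook may contain more labels.

```python
import json

# Allowed values per dimension, inferred from the observed output;
# the actual codebook may define additional labels (assumption).
ALLOWED = {
    "responsibility": {"none", "government", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "mixed", "approval", "fear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Entries with a missing id or an out-of-vocabulary value in any
    dimension are dropped rather than stored, so downstream analysis
    only ever sees valid codes.
    """
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        codes = {dim: entry.get(dim) for dim in ALLOWED}
        if cid and all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded
```

Dropping invalid entries (rather than raising) keeps one malformed item from failing the whole batch; the dropped IDs could instead be collected for re-coding if stricter handling is wanted.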