Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Thats how AI gives away. Emotions are ultra dynamic. The more you amplify emotio…" (`ytc_UgxBjvz1U…`)
- "I can understand automating things at a restaurant, the grocery store, whatever…" (`ytc_UgxXpxmC7…`)
- "So AI thinks like a human brain? I know some pretty dumb people. What makes u…" (`ytc_Ugz_GQBFQ…`)
- "bro caluclator still need of human to operate it and if you talking about trackt…" (`ytc_UgyacZBbX…`)
- "If we all lose our jobs .. how we gonna afford all this stuff AI will make ?…" (`ytc_UgxWCVNFS…`)
- "I think capitalism’s (especially neoliberalism) existence and humanity’s thrivin…" (`ytc_UgweKuWFd…`)
- "tbh I dont care if I robot can make something better than a human and I like it …" (`ytc_Ugzt4_3yL…`)
- "No one wants AI. We need to ban these data centers and heavily regulate AI. Peri…" (`ytc_UgxUvZa0r…`)
Comment

> So all im hearing is the way our society is set up where companies aim only for profit at the expense of literally anything and governments who set rules for everyone but themselves, and they are both in charge of how ai is being developed. Cool we're in for a ride ladies and gents, buckle up and tune out - ai's main reach is via tech, for now. stay plugged in or tune out. Choice is yours.

youtube · AI Governance · 2025-06-16T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwmrm_v7xJWVqG3ogd4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwhHJfZYvUNi9ttUGp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwtjN8i_dTUh_D-vrl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyCeOWyN8dixl8xRl54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZO9NdPBMtQGvT3jp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxK-HCZi8QJsrw_Ce14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxLuCmgRnV1WArzUTp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxYilfM7LRQp2m0sSN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxYDiRrPCZVK16zIft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyaMlPCTpC2DANnBjt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
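The per-ID lookup above can be sketched in a few lines of Python: parse the model's JSON array, check that each record carries the five coding dimensions, and index the records by comment ID. This is a minimal sketch, not the tool's actual implementation; the helper name `index_by_id` and the two-record sample response are illustrative, though the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown above.

```python
import json

# Illustrative raw LLM response: a JSON array with one object per coded comment
# (two records excerpted from the full response above).
raw_response = """[
  {"id":"ytc_UgwtjN8i_dTUh_D-vrl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwhHJfZYvUNi9ttUGp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]"""

# The five fields every coded record is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse the model output and key each coded record by its comment ID."""
    records = json.loads(raw)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
print(coded["ytc_UgwtjN8i_dTUh_D-vrl4AaABAg"]["emotion"])  # resignation
```

A real response may also be wrapped in markdown fences or contain malformed JSON, so a production version would strip fences and handle `json.JSONDecodeError` before indexing.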