Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click a row to inspect):

- `ytc_UgzPCf1W7…`: "Correct me if I'm wrong, but there has been no protection made towards ai (such …"
- `ytc_UgyGE5lTU…`: "14:27 Hey! im a little bit disabled! i dont feel comfrtable to talk about it, bu…"
- `ytc_UgzDuFTVE…`: "I have ADHD. My teachers would punish me for not being able to sit in my seat b…"
- `ytc_UgwFqUvHx…`: "Id be interested to see a study on just AI teaching through apps and internet vs…"
- `ytc_Ugy7L8wVg…`: "Tesla autopilot is great, but it makes people loose concentration on the road as…"
- `ytr_Ugyyk-Rxc…`: "@benjplaysbass its an ai bro who hasnt watched the vid so they left an angry co…"
- `ytr_UgyLfTtZm…`: "AI is faking it till it makes it. Just needs a few more GPUs brah.…"
- `ytc_UgydzDZXI…`: "I think the way that we are thinking about AI is just incorrect. There are a lot…"
Comment
The state should have a leading role in democratizing AI, allowing human beings to judge where they would like AI applied and developed. There is no reason to trust corporations or our current government to handle AI in a way that isn't malign. One can see this in the WGA/SAG strike, where studios are attempting to license low paid actor's image in perpetuity while offering a day's pay. In fields where there aren't unions to mediate the individuals choices, exploitative standards can be set before society even knows they have happened. This will be a defining development for human society. AI has an incredible range of positive practices. But on our current course its only function will be to extract value from us, for the sake of its corporate controllers.
youtube · AI Governance · 2023-07-20T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw-7ZEL9hrvztaPiMJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwgwdA86jUKrmJmWR94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwnBA39oCgbfTw7NbZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugyg-6LiBLujU_NI1qZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyhKEfqOIIhg1krpCp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxyYcVM-pNzmi5mJZ14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyIACjU5tJLHxxqEBt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgywPNbhIXkqbD73kyF4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugwmcn2lyzFZ1-gKyV14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy-nNw8dGnaG71cyDx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
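A raw response like the one above can be parsed and sanity-checked before the coded values are stored. The sketch below is a minimal, hypothetical validator: the sets of allowed values are inferred from the labels visible in this page (`government`, `contractualist`, `regulate`, `fear`, etc.) and are not an authoritative coding scheme.

```python
import json

# Assumed coding scheme, inferred from the values visible in the raw
# response above; the real scheme may include other labels.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse the model's JSON array and keep only records whose
    values all fall inside the assumed coding scheme."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: the record that matches the coding result shown above.
raw = ('[{"id":"ytc_UgyhKEfqOIIhg1krpCp4AaABAg",'
       '"responsibility":"government","reasoning":"contractualist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded[0]["policy"])  # regulate
```

Records with any out-of-scheme value are dropped rather than corrected, so a malformed model output surfaces as a missing row instead of a silently wrong code.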