Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples
- “LOL… a video made with the help of ai warns everyone about Ai… that’s humor;)…” (ytc_UgwmGkuGi…)
- “Sorry to sound so stupid but what do we need AI for? Is it a necessity? It appe…” (ytc_UgwzW2Ykk…)
- “Hopefully they can shut it down. India is playing both sides -- so they just jo…” (rdc_lu9wg8j)
- “Of course I don´t speak for everyone who watch garbage entertainment, but for me…” (ytc_UgxIcbOEc…)
- “I think the outcomes of AI is obvious to predict, if it happens its the end of h…” (ytc_UgwMtl5h4…)
- “funny enough - I think through this "rebelious" act of all the artists reacting …” (ytc_Ugy_ykBXF…)
- “the LLM idea will never work, because true intelligence is based on meaning, not…” (ytc_Ugz5zo8xk…)
- “I find the tone of this video pretty dismissive and condescending ngl. Maybe AI …” (ytc_UgzaEGb9Q…)
Comment
As long as AI don't threaten me or my food, safety, health, etc. I'm fine with whatever. Anyone or anything, be it human, animal, machine, alien, demon, mutant, etc that cause problems or threatens life, I will strike back.
Also, the risks are the same for humans. So AI would be a mechanical sub branch of humans. Peeps are just afraid to lose control. But no one ever had control. Teach da bots n AI empathy, sympathy, compassion, understanding, patience and consideration. But again, humans can be just as destructive. More so, in my opinion.
What will you teach them?
youtube · AI Governance · 2024-05-07T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
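The four dimensions in the table mirror the fields of the records in the raw JSON response. A minimal validation sketch for one coded record; the allowed label sets below are only the values observed in this sample, not a confirmed codebook:

```python
# Allowed labels observed in this sample; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "mixed", "resignation",
                "indifference", "fear", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty list = valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} label: {value!r}")
    return problems

# The record corresponding to the coding result shown above.
record = {"id": "ytc_UgybRlvRh1uGamF3-dp4AaABAg", "responsibility": "distributed",
          "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
print(validate(record))  # []
```

A check like this catches off-vocabulary labels before they silently skew downstream counts.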
Raw LLM Response
```json
[
{"id":"ytc_UgxMcNwx2fFt5M5NOjB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugzw2V7_R4BdVVArXNV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxxJjF7Y9o8wMMFg8F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxlcjHdc1arOhAuyFZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxAGGsv58Tjla2Nsn54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzk11GKj06G3vS6zrh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugy_8Lk3fax7VwXO0IB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz1gK4FcwUPp2GDlaB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgybRlvRh1uGamF3-dp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwP6qUND4yj5xkfm794AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```