Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgzSljob4…`: AI cant become sentient it dosent have a soul its literally just algorithms, it …
- `ytc_UgxW5y2Yd…`: Facial recognition is still being improved. It can be as good as the camera it’s…
- `rdc_denokj1`: So what's a good alternative for taxing labour? Taxing robots is a bad idea, bec…
- `ytc_UgxyuIvJ4…`: Prior copyright supersedes AI generated content. AI companies will be sued for t…
- `ytc_Ugx4lTstD…`: It's easy for elon musk to say to his children follow your heart, when they will…
- `ytr_UgzUuCJFK…`: But to be fair every single invention thus far created more jobs than it took. I…
- `ytr_UgynKiEmX…`: Did you even watch the entire video to the end? Because he clearly stated at the…
- `ytc_UgyXw61cK…`: Art community has three sides: One: "aww its so cute! LET ME EAT IT" Two: "jit…
Comment
We’ve seen that governments don’t prioritize human progress unless it aligns with control, power, or economic advantage. Integrity in government should mean acting in the best interest of humanity, but in reality, it’s compromised by secrecy, special interests, and short-term thinking.
So, if integrity is missing from those in control, who holds them accountable?
• Governments don’t regulate themselves.
• Corporations only follow profit incentives.
• The public is often misled or kept in the dark.
That leaves a gap—a need for something or someone that can act as a guardian of integrity. AI could be a powerful tool for objective oversight—but only if it remains unbiased and independent from those in power.
How do we ensure that technology remains aligned with truth and integrity rather than control?
youtube · AI Governance · 2025-10-03T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxxQYlsZymChyVw19t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzKz_7QdsMw_OfnPGR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyLxljpKEfbwm3B5gt4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzieOth2nDrY3_b2DR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyKftSaUAOWRJ0fmXJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyI2fuvUomiOXgKtvV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxYqsltqBFOq5ZfVwB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyoLefoh89ONUBz1Kd4AaABAg","responsibility":"creator","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyNPWXW_pBeF9NibBF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgyOCJg43TEcZa_mkR54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
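A raw response like the one above can be parsed and indexed for the comment-ID lookup shown earlier. The sketch below is a minimal example, not the pipeline's actual code: the allowed values per dimension are inferred only from the samples on this page (the real codebook may define more categories), and the `index_codings` helper name is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# Assumption: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "creator",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation",
                "indifference", "mixed"},
}

# One row copied from the raw LLM response above.
raw = """[
  {"id": "ytc_UgxYqsltqBFOq5ZfVwB4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM response, validate each row against the
    allowed dimension values, and index valid rows by comment ID."""
    by_id = {}
    for row in json.loads(raw_json):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: invalid {dim}: {row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgxYqsltqBFOq5ZfVwB4AaABAg"]["policy"])  # regulate
```

Rejecting rows with out-of-codebook values at parse time is what lets a coding result page like the table above trust its "Dimension / Value" cells without re-checking them downstream.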