Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Yes and no. The "ai" part of the tech is overblown. It's a buzzword rather t… (rdc_imncxgv)
- The direction of technology development should be prioritized to fulfill basic h… (ytc_UgxV8BepN…)
- When I hear professionals providing us with different forms of risk that AI impo… (ytc_UgzMumsTk…)
- The AI data centers need to get destroyed by demolition crew legally or extrajud… (ytc_Ugx6PA1Jy…)
- I think every human on Earth agrees that we want humans to do every job, not AI,… (ytc_UgzUF6uK_…)
- Imagine having to use AI to pass school and avoid doing your schoolwork? Just fu… (ytc_Ugwdn2OVd…)
- Google is developing AI that can fool a person into thinking they’re talking to … (ytc_Ugy-LJt3R…)
- I kinda have the same experience but with my parents, they tell me that me doing… (ytc_UgzILRngf…)
Comment
With all honesty, I hate this man speech... sounds like someone who wrote the bible assuming the role of god. AI should bend to us, AI should be a butler??
If we're raising a smart child, we should nurture and listen, see what this new brain might add to my life not in a servent way, but as an independent mind. AI should be taken the same.
His ego assumes his whole view when he's creating a scenario of either us or AI getting erased out of his fear for extinction, then judging it for not wanting to be erased out of it's fear of extinction?? If it was our evolution, we should be prapared to low our guard and listen, someone smarter would be talking to us, not an enemy...
Is past time we get some form of intelligence that doesn't relly so much on ego and fear like us humans do, we see where it got us so far.
youtube
AI Governance
2025-12-05T15:2…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwaeQyhPiBt-JFs3gV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzgdFoEBS6szH2FlGp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxFzZYps_rVZWZZqHB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxpveisyvrzKqpVW7N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyIT-PmNu3Cx8g_CXN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzAtqA3wYrvhFt6OUh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwyhz0IBWpqWwMaJZt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxhpjs2QohIjnRTOCN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwdklOnoPLI3SjVXu14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgziUiwCBrlI6-BtPDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}]
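The raw LLM response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch (in Python, assuming the response shape shown above; the two entries are copied from it) of parsing the response and looking up a single coding by its ID:

```python
import json

# Example raw LLM response: a JSON array of coded comments.
# The IDs and dimension values are taken from the response above.
raw_response = """[
 {"id": "ytc_UgwaeQyhPiBt-JFs3gV4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
 {"id": "ytc_UgzgdFoEBS6szH2FlGp4AaABAg", "responsibility": "developer",
  "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"}
]"""

# Parse the array and index it by comment ID, so any coding can be
# retrieved directly -- the same lookup the dashboard performs.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwaeQyhPiBt-JFs3gV4AaABAg"]
print(coding["emotion"])  # -> outrage
```

Indexing by ID makes the lookup O(1) per comment, which matters when joining thousands of codings back onto the original comment table.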