Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The corporates have been dreaming about using AI, or AI-related solutions, to replace the bulk of their workforce. They have neither the financial resources nor the human ability to succeed in this, barring a few. To keep up this false pretext, they exit people who can foresee what is happening and keep the ones who cannot or will not, to keep up the lie. Even before AI can usurp the majority of corporate operations, these organisations would have self-destructed. One danger of AI is the excessive 'human' exaggeration of the potential of this technology in the general workspace, and of how far any corporation can go with AI, at this time, to either reduce costs or increase productivity. The damage has already started.
youtube · AI Governance · 2025-06-17T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
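Each coded comment carries one value per dimension. A minimal validation sketch for such a coding record, using allowed value sets inferred from the codings visible on this page (hypothetical — the real codebook may define more categories):

```python
# Allowed values per dimension, inferred from the codings shown on this page
# (hypothetical — the actual codebook may differ).
ALLOWED = {
    "responsibility": {"company", "developer", "government", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "fear", "mixed"},
}

def invalid_fields(coding: dict) -> list:
    """Return the names of dimensions whose value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding from the table above:
coding = {"responsibility": "company", "reasoning": "deontological",
          "policy": "unclear", "emotion": "indifference"}
print(invalid_fields(coding))  # → []
```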
Raw LLM Response
```json
[
{"id":"ytc_UgzwsVxz7jlVBKWgxBB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzh0P7ZKanYm-20qk14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxP97lWtU-KGH8mMa14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwPNdOQFAUk4AumdlR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwiZFX3s6I5TlshOvB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyFdU4rebB6DN4L1mV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgywjLti0-nmtY6LnPR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyBNm8bsgTzH_y-4Mx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw04gAgC4apu6riHyR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyzbXHrCswjf-lghGl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
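The raw response is a JSON array of per-comment codings, so retrieving the coding for a given comment means indexing the batch by `id`. A minimal sketch, assuming the field names shown in the response above (the IDs here are stand-ins, not real comment IDs):

```python
import json

# A batch response in the same shape as the raw LLM output above
# (placeholder IDs — real IDs look like "ytc_Ugz...").
raw = '''[
  {"id": "ytc_example1", "responsibility": "company",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

def index_codings(raw_json: str) -> dict:
    """Parse a batch coding response and key the records by comment ID."""
    rows = json.loads(raw_json)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

codings = index_codings(raw)
print(codings["ytc_example2"]["policy"])  # → regulate
```

Keying by ID this way is what makes a "look up by comment ID" view possible over batched responses.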