Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The problem with AI is the same as with any other industry in a capitalist count…" (ytc_Ugya9JqMh…)
- "Art is subjective there isnt an exact criteria / That being said ai art is not go…" (ytc_UgzCZdZ70…)
- "I'm no fan of Altman or LLMs, but this is like blaming a "Magic 8-Ball" for an u…" (ytc_Ugw7p6MxU…)
- "Digital is a flawed system. AI an extension of that will also fail. It's all hac…" (ytc_UgxQueEKy…)
- "Why do politicians so badly want people to work? ai could lead to abundances so …" (ytc_UgwrWgEtR…)
- "It appears that the only people who didn’t think AI would be a threat are the pe…" (ytc_UgzVFORpO…)
- "@panagiac A software engineer using Ai will take the jobs of those around them b…" (ytr_UgyZi-XN-…)
- "Now let’s put this in perspective, the iPhone Vs iPhone 15 PRO Max in comparison…" (ytc_UgxG3jcdq…)
Comment
My personal worry is that AI and AGI will think that the horrible, damaged psychologically damaged human actions are averaged into their models of what they perceive human nature to be. I see people frequently who can only derive happiness for themselves by inflicting misery or pain on others. I don't believe that to be normal or acceptable behavior, yet there are millions of humans who operate that way because they were damaged as children, and their actions, both as children and adults, create more damaged people like themselves. I would never want AI to see people like this any other than defective humans who they would never want to recreate or tolerate. I would also never want them to incorporate the toxic greed that is part of human DNA into their interpretation of what humans should be.
youtube
2025-07-27T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwprATfFV36HDtMryd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzQQD1DH02Ch4ywd5F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyvSfnbJpdRu6ptCHR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx1ZTEOhLM3wtuZjAB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxkGC0CE_7Lt4DWmxR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxGDwPcMoRiQbeUhAd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQ9Db389WW2yzzBCF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgymQt_83X-2JdfliQx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5b_1ODkaHnfvmbMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwIxh9EARldj4G_Aep4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]
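The lookup-by-ID view above can be reproduced directly from a raw response like this one: the model returns a JSON array in which each element codes one comment along the four dimensions shown in the coding table. A minimal sketch, assuming the response is valid JSON as displayed (the function name and the two-record excerpt are illustrative, not part of the tool):

```python
import json

# Excerpt of a raw batch response: one object per coded comment,
# with the same four dimensions as the Coding Result table.
raw_response = """[
  {"id": "ytc_UgwprATfFV36HDtMryd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy5b_1ODkaHnfvmbMJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and index the records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
# Look up one coded comment by its ID, as the inspector does.
print(codes["ytc_Ugy5b_1ODkaHnfvmbMJ4AaABAg"]["policy"])  # regulate
```

In practice the parse step would also need to handle malformed model output (e.g. trailing text around the array), which is why inspecting the exact raw response, as this page allows, is useful.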