Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgwvERMwT…: "Ai art is good art" is the new rage bait, don't feed it attention we all know i…
- rdc_n0kshvf: I don't quite get it, but Gen Z/Alpha seem to REALLY HATE AI... Particularly on …
- ytr_Ugw1EFk7G…: Exactly, but good luck trying to convince the majority of the GOP though because…
- ytc_Ugx6iqUj6…: hmm lets see… why did I start art.. OH YEAH ITS CALLED A FUCKING HOBBY! MAKING A…
- ytr_Ugyvea0Pk…: Only 1 job will be left - become cute and cuddly, so you can be a pet for your A…
- ytc_UgysD9p3x…: 6:03 its even more bold as hayao miyazaki has went against the use of ai art…
- ytc_Ugz-SybTU…: Just as Will Smith stated in "I-Robot" "Lights and clockwork" Life is special,…
- ytc_UgxCHEe9g…: It all matters on how people define "robot", If they are just unfeeling machines…
Comment

Has anyone considered that AI developers have to develop a set of World Morality weights and then hope that once developers use those morals in the development of their agents, what's going to happen when (borderline) agents work with each other and produce a combined distortion of the original task. AI Should NEVER make decisions about anything until the World comes up with a Moral set of values that is agreed to by ALL. In other words never. Systems can be corrupted by the average fault of many components but each component when checked passes the tests, therefore providing a blameless platform to do intentional harm like component viruses built to combine to create a threat but each individual piece doesn't pose a threat by itself.

Source: youtube
Posted: 2025-10-09T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyAMGLYBaoHJDVr3A14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwN-evorx6RHjXAU4Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyhNFOnH0AMhcnUpB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzS7gNgHADUEG5HNXF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzGIRYcO7M9fyTyEWx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwvM82dmJvjQ32kCHV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxyTb_Yx8GWkiW8WQ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwf8ONw1UL2MhuhvIp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzBHjG_fSCKe1ly1YB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPju0cCZqXN6oRYCd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
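Because the raw response is a plain JSON array of records keyed by comment ID, the "look up by comment ID" step amounts to parsing the array and indexing it by `id`. A minimal sketch in Python, using an excerpt of the response above (two of the ten records; the second one matches the Coding Result table shown for the selected comment):

```python
import json

# Excerpt of the raw LLM response above (two of the ten records).
raw_response = '''
[
  {"id":"ytc_UgyAMGLYBaoHJDVr3A14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwvM82dmJvjQ32kCHV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
'''

# Index records by comment ID for constant-time lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

coding = records["ytc_UgwvM82dmJvjQ32kCHV4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → developer deontological regulate fear
```

The dict comprehension assumes every record carries a unique `id`; duplicate IDs would silently keep only the last record, so a production version would want to check for collisions.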