## Raw LLM Responses

Inspect the exact model output for any coded comment, or look a comment up by its ID. Random samples:
- Yeah, not really.... AI will basically make a derivative work; the same way a hu… (`ytc_UgwSL10QQ…`)
- @motymurm it matters how much you use and it’s not about the quality of your art… (`ytr_UgxEMKM7h…`)
- AI art looks creepy, and AI can't figure out how many fingers a human should hav… (`ytc_Ugxen-erW…`)
- ai IS STEALING REAL ARTISTS WORK WITHOUT PAYING FOR IT OR EVEN RECOGNITION. ai … (`ytc_UgxNrXToA…`)
- I wonder what in the world he was thinking? Given everyone else didn't seem to b… (`ytr_UgyiwlFDG…`)
- The Italian data-protection authority is not a government entity. ChatGPT is not… (`rdc_jegmp6x`)
- Our only Savior told his people that AI will say what he wants it to or it will … (`ytc_UgxIe1zo3…`)
- Expression of emotions require chemical Messengers. Robot have a single type of … (`ytc_Ugw7-bD7K…`)
## Comment
> Anyone that believes ai is dangerous doesn’t believe in unalienable truths. Anyone that believes in truths should have no fear of ai being dangerous. By the nature of humanity, we all advance towards the good, even if we take one step back, we seem to always move forward. I don’t believe human nature leads to our total collapse. The same goes for ai, which is based on our nature, but given the wisdom of our entire corpus of knowledge.
>
> Have no fear, unless of course you’re a doomer and have no faith.
>
> A computer is no different than a human embryo and the ai running on that computer can be flavoured anyway we humans want, this includes human ai’s wants as well.
>
> Projecting out, the ai will extract the evil ai and keep the good ones based on unalienable truths with human nature
Source: youtube · AI Governance · 2025-01-16T05:5…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
## Raw LLM Response

```json
[
{"id":"ytc_Ugx5mhqb_lSeZWtQve54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzb2qOXs9QeyJVXFuB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzci4azcKKznmKT4Pt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyFada6LeqqUOef58R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyMqi1HzvbCRAUHhZF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzhPjRIQCy-Y-ojdDN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoNUAH6G4MvLVfH9V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwdY12QxCZKeylSuYV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVmoWLn37HGGM2Cxh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzGDG9HkSk1yB6j5I94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
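Since the model returns one JSON array per batch, recovering an individual comment's coding means parsing the array and indexing it by `id`. A minimal sketch (the `index_codings` helper is hypothetical, not part of the pipeline; the two sample records are taken from the response above):

```python
import json

# Raw LLM batch response: a JSON array of per-comment codings
# (two records excerpted from the full response shown above).
raw_response = """
[
  {"id": "ytc_Ugzb2qOXs9QeyJVXFuB4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzhPjRIQCy-Y-ojdDN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the batch response and index each coding record by comment ID."""
    return {item["id"]: item for item in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_UgzhPjRIQCy-Y-ojdDN4AaABAg"]
print(coding["policy"])  # -> regulate
```

The same lookup backs the "look up by comment ID" view: one parse of the batch response, then O(1) dictionary access per comment.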