Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgxZR0blN…` — "Friends daughter has a driverless car. Video looks at eyes to make sure driver s…"
- `ytc_UgxjE5wF-…` — "the ONLY A.I.'s I'll defend is Neuro-Sama and Evil the rest I don't care but tho…"
- `ytc_Ugz_3b-yd…` — "Ooh. . REFUGEES ARE AI ENGINEERS ...IF NOT WHAT ARE THEY BRINGING TO OUR ECO…"
- `ytc_UgwFJND1_…` — "I believe there is one thing that most people are missing, if everyone loses the…"
- `ytc_UgwIoc5f_…` — "It was clear this is going to happen. Give men a programm to deep fake, theyll m…"
- `ytc_UgxKXKSGl…` — "Hey AI is Goverment invention. Like, that is ok to use it, but.... SOONER OR LA…"
- `ytc_UgyzZoqQs…` — "Heres a language that worked for centuries... Unless you get my Signature in per…"
- `ytc_UgyAc7NoC…` — "So true I get so flirty and freaky with the ai’s not realising the staff are wat…"
Comment
i think the issue with ai ist the goal is too broad and doesn't include specifics like "be helpful to the user"
what if the user wants an atom bomb then you need to change it:
"be helpful to the user, without harming anyone"
what if the user asks if he should kill someone trying to kill him?
you see the issue becomes how much reason can you put into a prompt and data, because that's what they lack, reason.
maybe they should say "be reasonable"
but then that will also include what "reasonable" means on reddit.
youtube · AI Moral Status · 2025-12-15T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugy08cRqfdWrfiPvMfR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5XwfLhOgBo9WKKuR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwADlEM6OFCHxRLhCN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxn60-oigQPBiW8Umx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxarHxDLb0wO3Oi_cV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9lrwYkfafZVwn8th4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzCG0MF8m37sHu0Nil4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzb4gUvOBUau98PxIJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzbqkVZKD_jtAdABWp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzmnPLy-8m8qRGaBUp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}]
```
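A raw response like the one above can be parsed and indexed by comment ID so that any coded comment can be looked up directly. This is a minimal sketch, not the tool's actual implementation: the function name `index_by_id` is hypothetical, and the four dimension names are taken from the Coding Result table above (the full codebook may allow more values than shown here).

```python
import json

# Two rows copied verbatim from the raw LLM response above, used as sample input.
RAW_RESPONSE = """[
 {"id": "ytc_UgwADlEM6OFCHxRLhCN4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
 {"id": "ytc_Ugx9lrwYkfafZVwn8th4AaABAg", "responsibility": "ai_itself",
  "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

# Dimension names as they appear in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and map comment ID -> coded dimensions."""
    coded = {}
    for row in json.loads(raw):
        # Skip malformed rows rather than failing the whole batch.
        if "id" not in row or not all(d in row for d in DIMENSIONS):
            continue
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

coded = index_by_id(RAW_RESPONSE)
print(coded["ytc_UgwADlEM6OFCHxRLhCN4AaABAg"]["policy"])  # liability
```

Indexing by ID, rather than relying on the order of the array, is the safer choice here because the model may drop or reorder items relative to the batch it was given.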