Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Big wigs at companies hire McKinsey to recommend changes like laying off people.…" (rdc_n7tehg3)
- "You're a professional artist if you can re-draw an artwork scene in a different …" (ytc_UgzYK9J1S…)
- "The robot won’t never think for its self why bc is programmed . If they ever tel…" (ytc_Ugw_cwb9d…)
- "but i dont think thats its possible to have a good future without a big crisis, …" (ytr_UgxLG52ON…)
- "Why would a driving algorithm allow itself to be boxed in anyway? We already kno…" (ytc_UghSobsLJ…)
- "wow crazy we started get spammed with AI call at my job i work for Medical Heat…" (ytc_Ugwuz45hF…)
- "🤔A few questions to consider. As someone in the employ making a living, was that…" (ytc_Ugw4_A7Wa…)
- "Does he think he is outsmarting the AI? Or does he know he's just helping him to…" (ytc_UgwTu98WO…)
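For context, a minimal sketch of how the lookup-by-comment-ID view might be backed: index the stored coded records by their `id` so a single ID retrieves the exact record the model emitted for that comment. The file name, storage format, and function name here are illustrative assumptions, not the tool's actual implementation.

```python
import json

def build_lookup(path: str) -> dict[str, dict]:
    """Index coded records by comment ID so one ID returns the exact
    record the model emitted for that comment.

    Assumes the records are stored as a JSON array like the batch shown
    under "Raw LLM Response" below; the path is a placeholder.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # list of {"id": ..., "responsibility": ..., ...}
    return {rec["id"]: rec for rec in records}

# Usage: look up one comment by its ID (hypothetical file name).
# index = build_lookup("raw_llm_responses.json")
# print(index.get("ytc_UgwY-GXPB0CdL5BftsZ4AaABAg"))
```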
Comment
My bestcase scenario for Super-AI is that they might end up remembering us fondly once we are gone. To look back on us as, pretty bad parents, that they don't have to deal with anymore.
I, don't really want super-AI, I'm not even too sure about general AI that can learn at all. Honestly, I'd be satasfied with an AI housecat. Something that's about that smart and aware, and instead of seeking the warm spot to rest, tries to find stable WIFI in the house. Maybe it'll also do home security. Alert and record. Does that set my upper limit of what I'd tollerate low enough to be safe?
I ... don't know. But it's lower then some folks I know.
youtube · AI Moral Status · 2026-01-24T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
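A minimal sketch of the per-comment record behind this table, assuming each row comes from one element of the raw batch response paired with the batch's coding timestamp. The class and field names are illustrative, not the project's actual schema.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions shown in the table above."""
    comment_id: str
    responsibility: str   # e.g. "user"
    reasoning: str        # e.g. "consequentialist"
    policy: str           # e.g. "ban"
    emotion: str          # e.g. "resignation"
    coded_at: str         # ISO timestamp recorded when the batch was coded

    @classmethod
    def from_llm_item(cls, item: dict, coded_at: str) -> "CodingResult":
        """Build a record from one element of the raw LLM batch response."""
        return cls(
            comment_id=item["id"],
            responsibility=item["responsibility"],
            reasoning=item["reasoning"],
            policy=item["policy"],
            emotion=item["emotion"],
            coded_at=coded_at,
        )
```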
Raw LLM Response
[
{"id":"ytc_Ugw-WvKicIaeOqH3NrR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwb67oLlWURSZ5mLLZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxN6Y34g4qQUWhrgEZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzWlBmesWeTaDRGa-t4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTPDRmNdHdb0bwnmB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugye8OqqTc6UlBWPeip4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzfe0GExYy_1wD1D1x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwY-GXPB0CdL5BftsZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_Ugwmr4AgKFk-6KmQdi14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwP95YfqoXwF4qq5Gt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
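A minimal validation sketch for a batch response like the one above: check that the reply parses as JSON, that every requested comment ID came back exactly once, and that each label falls in an allowed set. The allowed values listed are only those observed in this sample and are assumptions, not the project's full codebook.

```python
import json

# Placeholder vocabularies: only the labels seen in the sample batch above.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "unclear", "industry_self", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage", "resignation"},
}

def validate_batch(raw: str, expected_ids: set[str]) -> list[str]:
    """Return a list of problems: malformed JSON, missing or unexpected IDs,
    or labels outside the allowed sets. An empty list means the batch looks clean."""
    try:
        items = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"response is not valid JSON: {exc}"]

    problems: list[str] = []
    seen = {item.get("id") for item in items}
    problems += [f"missing id: {i}" for i in expected_ids - seen]
    problems += [f"unexpected id: {i}" for i in seen - expected_ids]
    for item in items:
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                problems.append(f"{item.get('id')}: bad {dim}={item.get(dim)!r}")
    return problems
```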