Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "could this be the reason that some older Alexis have starting spitting out gibbe…" (ytc_UgxmyG9pC…)
- "Back in the 1960s, Professor Joseph Weizenbaum of MIT wrote the simple chatbot E…" (ytc_UgyG3vkzx…)
- "AI is an oxymoron. you cannot have something artificial with any intelligence, …" (ytc_Ugwe32ILL…)
- "Garbage in garbage out. The AI will reflect the data it has been fed, since al…" (ytr_UgxetQysD…)
- "I can tell you that people in East Asia also make up a large portion of those wh…" (rdc_n7w772l)
- "My art has gotten alot better over the past few years due to consistent sketch p…" (ytc_UgybLRAq9…)
- "Great. I was considering returning to dA. Guess it’s time to delete my entire ac…" (ytc_UgyfA6e7E…)
- "Oh for Pete’s sake. No AI does something it wasn’t trained and prompted to do. G…" (rdc_n74e7em)
Comment
Well duh. We wrote a shit ton of literature depicting ai as evil human-hating being that wants to overtake the world
What do you think it'll depict itself as when researching how to conduct itself in certain prompts
It's like giving a toddler fries and pizza all their toddlerhood and then shunning them when they only eat fries and pizza later on
Source: youtube · AI Moral Status · 2025-12-17T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxbgDN2QSEuhzg8JO54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgymYelTXpBPZk3HQlt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzDOBz27ZCB10qMmYh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxaenuJUb9S_Fw538d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzF4E1gp3dctorqpYR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwtJtVJFH_timlrsVV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzhoSXRYMXPFT2AkFJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJ6qb-OZrGVHpIax54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx03lhpALwKG2A-oep4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyGG-argubfWLl9bnJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
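The lookup-by-ID flow above can be sketched as follows: parse the raw LLM response (a JSON array of per-comment codings) and index it by comment ID. This is a minimal sketch; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown above, and the two sample entries are copied verbatim from it. Which entry corresponds to the displayed comment is not stated, so that mapping is not assumed here.

```python
import json

# Two entries copied from the raw LLM response above.
raw_response = """
[
 {"id":"ytc_UgxbgDN2QSEuhzg8JO54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwtJtVJFH_timlrsVV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the raw response and index each coding entry by its comment ID."""
    return {entry["id"]: entry for entry in json.loads(raw)}

# Look up one comment's coding by ID, mirroring the inspector's lookup.
codings = index_codings(raw_response)
coding = codings["ytc_UgwtJtVJFH_timlrsVV4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# distributed virtue none mixed
```

A real lookup would load the full response for the batch containing the requested ID; malformed model output (non-JSON or missing fields) would raise here, which is where a production version would add validation.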