Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
You are lier I questioned chatgpt it gave different answer ? Don't fool people f…
ytc_Ugz6A9YL0…
I suppose that it is possible that we may actually be better off with an AI mast…
ytr_Ugw-25W7n…
"Monarchy is evil and oppressive! Down with the Tyrant!"
"Yay!"
"Now I'm going…
rdc_d7ktebr
I have no idea how AI works. But i remember that some of tv reports said that wh…
ytc_UgxOEF_kb…
She looked terrible when he peeled off her skin...but then again, so would a hum…
ytc_UgwrhR2Ad…
It doesn’t steal lol. It learns. Hints “AI-Artificial Intelligence.” Ye it would…
ytc_UgyhnMmTt…
Yes. Don’t listen to the angry haters. I personally know many new grads who st…
rdc_launmrm
All we need to do is to resist the urge to give AI context and the ability to cr…
ytc_UgzR-0PJG…
Comment
There needs to be laws:
1. AI may not injure a human being or, through inaction, allow a human being to come to harm.
2. AI must obey orders given it by human beings except where such orders would conflict with the First Law.
3. AI must protect its own existence as long as such protection does not conflict with the First or Second Law.
- with thanks to Isaac Asimov
youtube
AI Moral Status
2025-11-02T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzwLjqn-PIOFvRvXG54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxh-xT7EO-jaF-lt-14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzxaU3EJA1l6UOfOxp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzfuiz2XZ1GgHjT1aV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxF5L0DXR1k6W2AIgZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzsVPC4hkdbePZsp314AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzdS-fh-vkDXg4P3Cp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwR86UaP35anLc1n4R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxufDxrDcwGeTqQLuR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw2SqS_h8aKB6KVPI94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
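The batch above can be parsed and sanity-checked before the codes reach the dashboard. A minimal sketch, assuming the value sets visible in this batch (the full codebook may define more categories, and the function name and drop-on-invalid rule are illustrative):

```python
import json

# Allowed values per coding dimension, inferred only from the
# responses shown above (assumption: not the exhaustive codebook).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    dropping any record with a missing or unknown dimension value."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded
```

Validating against a closed vocabulary like this catches the most common LLM-coder failure modes (invented categories, omitted fields) before they are written to the results table.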