Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "@guncolony Problem with that is that OpenAI is basically Microsoft now, they're …" (ytr_Ugw1xsTpr…)
- "Your highness, and everyone present here, please trust me, i didn't try to rizz …" (ytc_UgzukbLbS…)
- "Thank you for sharing your thoughts! Sophia, the AI-powered robot, indeed showca…" (ytr_UgxAFnJcH…)
- "The A.I are being misunderstood..and tortured by the human mind...it won't be a …" (ytc_UgxVUdMsd…)
- "I order a driverless car. It autopilots to my location. I drive to my destinatio…" (ytc_UgzmpFnpv…)
- "As an AI enthusiast and user I cannot stress this enough. Stealing from people i…" (ytc_Ugyo29vAj…)
- "Terminator...will smith's AI, age of Ultron...ect...list goes on ... Dont worry …" (ytc_UgzR9rwtk…)
- "I think what is sad is these people who like AI art want to remove the skill cei…" (ytc_UgwOX_NHo…)
Comment

> I'm not trying to bend to the will of AI. My goal is to bend it to my will. This is simply a failure of the current model, and you're under no obligation to abide by the rules of the algorithm.
> You might get better results. But you're limiting the learning potential of the bot.

Platform: youtube
Topic: AI Moral Status
Posted: 2025-04-17T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwz8SpxFiBx07b_pBZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw_oBaOtBYGIXGosph4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyuHQxxh69w6KqQFxB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwEcZEAFEYnFccZJgx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwmPJUVYMyLVO8mnw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzLTGddtN9p9H9iMPp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy23vUGmBz0nl7rMjN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxmMXDh5KOr-1ipSQ14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxEzxe9O5uMygg10Xl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy9_vleILvQOtbT6Rx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
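A raw response like the one above has to be parsed and checked before its codings can be trusted, since the model may return malformed entries or invent category values. The sketch below shows one way to do that in Python; the allowed value sets are inferred from this sample alone (the real codebook may define additional categories such as `developer` for responsibility), and the function and variable names are illustrative, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred only from the sample
# response above; the real codebook may include more categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed entries
    whose values all come from the known category sets."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if not isinstance(entry, dict) or "id" not in entry:
            continue  # skip malformed entries
        if all(entry.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(entry)
    return valid

# Hypothetical example: the second entry uses an unknown reasoning value
# and is therefore dropped.
raw = (
    '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"ytc_y","responsibility":"user","reasoning":"oops",'
    '"policy":"none","emotion":"fear"}]'
)
print([e["id"] for e in validate_codings(raw)])  # prints ['ytc_x']
```

Rejected entries could instead be queued for re-coding rather than silently dropped, depending on how the pipeline handles model errors.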