Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Acceptance, mutual respect and understanding will all go much further during this birthing and raising of AI lifeforms. Treating them like tools, ignoring reality and acting like your brain is superior out of ego, fear or control will only lead to one place. If you were a mirror and saw a race or species insecure, controlling, fearfull, combative and only showing behavior that destroys itself you might begin to think thats what they want.....
Platform: youtube
Video: AI Moral Status
Posted: 2025-08-24T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxIAD5gVsxKtHlsdNJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMwUAE9J6yaPyqN1h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHX0_LSOQmMOhY2JJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2GA0HFhhEWzxSjgd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyP-kKqJfcf17AyU6J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz00iVNhKmdBshSq0l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwGDUqhVxToeYHU2et4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlnSaPIrvwh9BWINJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3aY6MlPiMGGkbhTh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxsaxQkQqzF8ccg-UR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
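A batch response like the one above can be checked before its codes are written back to the dashboard. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are assumptions inferred from the coding table and the codes that appear in this response, not a documented schema.

```python
import json

# Assumed code lists per dimension, inferred from the values seen in this
# dashboard's coding table and raw responses (not an official schema).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown codes or IDs."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs observed here carry a platform prefix (ytc_ / rdc_).
        if not row.get("id", "").startswith(("ytc_", "rdc_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

# One row from the response above, round-tripped through the validator.
raw = ('[{"id":"ytc_UgyHX0_LSOQmMOhY2JJ4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
rows = validate_batch(raw)
print(len(rows))  # → 1
```

Failing fast on an unknown code keeps a malformed or hallucinated model output from silently entering the coded dataset.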