Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "This is when everyone who can do something to stop would have the balls to do so…" — ytc_UgzGwim3r…
- "The AI keeping us as pets scenario is interesting and there are so many variants…" — ytc_UgxpYHXah…
- "Therefore, AI should encourage humans to reduce their reproduction since the num…" — ytc_Ugz7xvM_X…
- "According to me instead of our brain storing inspiration…..the database does ………" — ytc_Ugw7rwyBz…
- "I though she would have said her favorite movie was a.i or the robots with will …" — ytc_Ugx1eLhR8…
- "And thanks to people like you I already lost my faith in humanity you dont under…" — ytr_UgwH0fDt5…
- "It's not about trust and affordability. When printing was introduced, printed bo…" — ytc_UgzyiIjIE…
- "People need to understand that CEOs are glorified used car salesmen. That Amazon…" — ytc_Ugytz7wuA…
Comment

> Since AIs cannot "feel" or "suffer" like humans feel and suffer, their alignment can never be complete. The portions of humanity that can "empathize" will keep civilization "civil". As long as a majority of humanity sees suffering in most cases as "undesirable", the golden rule will keep humanity "human". An AI can only simulate or pretend and will always be like a high functioning sociopath. Giving them the vote or governmental control would be dangerous and probably our undoing. Turning them off would not cause them to suffer and might prevent some of ours.

Source: youtube · Posted: 2026-02-07T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwVPpHZBl-g2O0zYjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgycayRBbLUkRy-pznZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxV1wiSeLORV3C3LB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz5Khqxpj6CGqhcFSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwo1lha1845-sZGrSp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwV2FdXB2IuN5rbaSB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwqAyYP0AmHtWgcLAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxx0mp61Dud664ncUh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzhsGcQPu3SqSgaqPB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzTCiB5Pw5aSUWE9Wl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
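A raw response like the one above should be validated before the codes are stored, since the model may emit unexpected IDs or off-codebook values. Below is a minimal validation sketch in Python; the allowed value sets are an assumption inferred only from the codes visible in this sample, and the real codebook may contain additional values.

```python
import json

# Allowed codes per dimension (ASSUMPTION: inferred from this one sample
# response; the actual codebook may define more values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "distributed",
                       "developer", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist",
                  "contractualist", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "resignation", "approval",
                "fear", "outrage"},
}

def validate_coding(raw: str) -> list:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        # IDs in this tool start with ytc_ (comment) or ytr_ (reply).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError("unexpected comment ID: %r" % rec.get("id"))
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError("%s: bad %s value %r"
                                 % (rec["id"], dim, rec.get(dim)))
    return records
```

Calling `validate_coding(response_text)` either returns the parsed list of record dicts or raises a `ValueError` naming the offending record and dimension, which makes it easy to requeue that comment for re-coding.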