Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click a sample to inspect)

- "Firstly, this is a bad idea cause then how are we gonna get money? Secondly, the…" (ytc_Ugyg6pij7…)
- "Again, it;s not one thing or another. Many things are true at once. The impact o…" (ytr_UgzPxCe0S…)
- "It does not look like it was pulled from anything amatuer hobbyist, let alone pr…" (ytc_UgxiFOtlc…)
- "Conversation with Ai :A Summary of Our Dialogue: A Glimpse into a Fundamental Re…" (ytc_UgwntQmoF…)
- "Rozado did an update on how some of the biases were getting less with updates --…" (ytr_Ugx4ZdD_0…)
- "This is really sad. But do not blame the chatbot. Do not blame the rope for a su…" (ytc_Ugxu_ge4B…)
- "Here is a challenge for UBI. If we assume that UBI will pay the living cost for …" (ytc_UgxkNcB_k…)
- "The real problem is if we pass laws to hinder Ai and China and Russia don't we b…" (ytc_Ugxm0IxsT…)
Comment
Still doesn't feel right listening or reading AI interactions. Like it is preprogrammed, random answers fetched from the net or not a real decision it makes itself. Though it's absurd to imagine humanity for an AI personality, it feels like it's still the goal. To make AI more human or natural it needs to truly learn by itself all the experiences that make us what we are. Not just rely on the collective experiences gathered in a database. Otherwise it will just become a tool with a software that tries too hard and lacks individuality. Even if an AI took all the data of humanity from recorded history, what would it be like and how would that affect its social connection toward others?
youtube · AI Moral Status · 2025-06-15T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwulUwrr_KhV__MLRR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz4msJnEemz7aw0bSp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzd52fzWoX6Mjudc2R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyCnzMgAskko5GsVTF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwL6fGc_zIajPrnaVF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwmqAws25SwBsETxMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzKcwBLuz2pON0a63N4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyVP1KBB9uDr-MvVzR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzbyYkSdPa3WLvMLP94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyRAHVMD_8trEYLGA14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
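A response like the one above can be turned into per-comment codes with a small parser. The sketch below is a minimal example, not the tool's actual pipeline: the allowed vocabularies are inferred from the sample rows shown here (the real codebook may contain additional values), and the function name is hypothetical.

```python
import json

# Code vocabularies inferred from the sample response above
# (assumption: the full codebook may define more values).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    dropping rows with a missing ID or an out-of-vocabulary value."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row.get("id")
        if not comment_id:
            continue  # skip rows the model emitted without an ID
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[comment_id] = codes
    return coded

raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":"deontological",'
       '"policy":"industry_self","emotion":"mixed"}]')
print(parse_coding_response(raw)["ytc_x"]["policy"])  # industry_self
```

Validating against a closed vocabulary is what makes a "Coded at" row like the table above trustworthy: a hallucinated label fails the membership check instead of silently entering the dataset.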