Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
> Him saying AI will automate everything is like a car dealer saying you defi…
rdc_o5vchs8
Ai isnt human and should have no say over spirituality or religion. This is so u…
ytc_UgxTxf9OY…
Why would "each person own a self driving car?" Privately owned cars spend 90+% …
ytc_Ugy8HUlQa…
What if during the process of buying the car, you could pick what the car would …
ytc_Ugzzec3Tw…
We don't have to worry about AI taking our jobs, the global economic collapse th…
ytc_UgwW4NO2A…
@gabeitch9142 They're stealing work to post art that benefits no one but thems…
ytr_UgwaUCIkc…
Out-of-work Americans can band together and create AI hunters of the ultra rich.…
ytr_Ugz_UKZvM…
Technology is getting scarier and scarier. Soon we'll have highly intelligent ai…
ytc_UgzpJcR7G…
Comment
I just don’t understand why you would create an AI system that can hallucinate or create false information. That just seems like really poor creation!??? I understand that we live in a capitalistic hell scape, so we want to scrape every penny out of these AI as soon as possible but like maybe making ai that doesn’t hallucinate and can’t because of the way it’s coded??!!? even if that means it cost more to operate???? 🤦🏻♂️🤦🏻♂️🤦🏻♂️🤦🏻♂️
youtube
AI Responsibility
2025-09-30T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_Ugy8fPKdsGGltblY7rF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwEBds7ASZmUDVwIrx4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw2Tv8m819PZ29CcqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxh5C3mdY_D9yLVepZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxgX7gyxdvexvelq9d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwsgSEvVLjaDT1fe9B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwIldXlAbUuIyIVOMN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz915MNG5jVE64LDnB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwHlPPT690t4YZ8D_t4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx_5BFWn9Iyc0wGDdV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
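The raw response above is a JSON array of coded records, one per comment ID, which is what makes the "look up by comment ID" view possible. A minimal sketch of parsing and indexing such a response is below; the `ALLOWED` codebook is an assumption inferred from the values visible in the table and response here, not the tool's actual schema.

```python
import json

# Assumed codebook, inferred from values visible in this page's output.
ALLOWED = {
    "responsibility": {"developer", "company", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response and index codes by comment ID,
    skipping records with missing IDs or out-of-codebook values."""
    by_id = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        valid = all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
        if cid and valid:
            by_id[cid] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

raw = ('[{"id":"ytc_Ugy8fPKdsGGltblY7rF4AaABAg",'
       '"responsibility":"developer","reasoning":"virtue",'
       '"policy":"regulate","emotion":"outrage"}]')
codes = index_codes(raw)
```

Validating against a fixed codebook before indexing is the simplest guard against hallucinated dimension values in the model output, which matters here since the dashboard treats each record as ground truth once coded.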