Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect
- @Echonian You're completely right. I want to get a career in A.I development and… (`ytr_UgiwChM3E…`)
- WHOOO decided to make this $hit, like why what is the purpose of a scary a$$ rob… (`ytc_UgyNsZvb1…`)
- The scariest part is not the technology or the robot or the AI It's the people t… (`ytc_UgyTQc-3d…`)
- Give it 5 years and the robot will turn on the human. I'll be in my cabin in the… (`ytc_Ugz06RZRs…`)
- As if modern art wasn't bad enough. I've always despised modern art. In 2009 may… (`ytc_Ugwr3VpL4…`)
- https://www.youtube.com/watch?v=I44_zbEwz_w AI iterates every month - not every … (`ytr_UgwdYRsmr…`)
- Waymo has had 2 crashes in the previous 1 million miles? Tesla cars drive collec… (`ytc_UgwoyjD1C…`)
- Well it works for scientific papers and other fields too. A single source is pla… (`ytc_UgzA--ZfJ…`)
Comment

> When I heard AI might soon understand its own mind, I thought about companies who don't even know if AI mentions them. AICarma could be a game-changer for staying informed.

youtube · AI Moral Status · 2025-06-24T22:1… · ♥ 601
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx-jc1e7u7DevJtdtF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy8J1f8AedTSmDkidx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwyC8DnI4f6M4qTJvZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOgPK9SmEXBBvYcCF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxQ0-X4_aSG7mhOugZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyZ31blsKtrVpq3rnB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzjG4GjgZ3W1kqmjUx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw1jPfmdNjZl-mOAMR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz5VENrwkkp8uRO5h94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzqRJ0NZrv4jO2DNGV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
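A raw batch response like the one above is a JSON array of per-comment rows keyed by `id`, so the "look up by comment ID" view amounts to parsing the array and selecting one row. A minimal sketch (the function name `lookup_by_comment_id` and the shortened two-row sample are illustrative; the field names and example values come from the response shown above):

```python
import json

# A two-row excerpt of a raw batch response in the format shown above.
raw_response = """
[
{"id":"ytc_Ugx-jc1e7u7DevJtdtF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyOgPK9SmEXBBvYcCF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
"""

def lookup_by_comment_id(response_text, comment_id):
    """Parse a raw batch response and return the coding row for one comment ID, or None."""
    rows = json.loads(response_text)
    return next((row for row in rows if row["id"] == comment_id), None)

row = lookup_by_comment_id(raw_response, "ytc_UgyOgPK9SmEXBBvYcCF4AaABAg")
# This row carries the same dimension values as the Coding Result table above:
# responsibility=company, reasoning=consequentialist, policy=industry_self, emotion=approval.
```

Scanning the list is fine for a ten-row batch; for large runs the rows would more naturally be loaded once into a dict keyed by `id`.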