Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
No it will not. We have been using customer service bots from a long time. Did …
ytc_Ugz0cdZYl…
There's one thing AI will never be able to do - it cannot produce something that…
ytc_Ugz7RFYMU…
I have worked on AI models for a living, and I fully support your argument. A lo…
ytc_Ugy31JSg7…
Same thing with people saying they’re afraid of AI becoming sentient. You’re pro…
ytc_UgxPVBc1b…
That's an interesting observation! The design choices for AI like Sophia often f…
ytr_UgxEhhZHo…
Ai is just regurgitating knowledge on the internet. People in the trades have t…
ytc_UgxFVIJ4V…
Every instance of the conversation with AI is brand new. AI is stateless, so it …
ytc_UgxJrIPN8…
The guy robot is def a narcissist, if you don't see the empathy from the woman y…
ytc_UgzqUJFQq…
Comment
AI can have personhood when it solves all the world's problems of hunger, homelessness, and war. Until then, it isn't real; it just thinks it is.
youtube
2026-02-07T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwoXsJ8CyjpeEBxVzx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw2BeeWtYDTXDgD6jl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzJy525o4uk1w82cuN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgznHUSheQH6F3n7Ax14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxFFabcKC_5Z6HbKD94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz8hfacgTX1MD5xG-J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwsEEpJ8IufH9nmqW94AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzlzf_pNsr_xM91t7d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzsxJyXfmmOllUhnDB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw2D9w2kKvEuv8D39p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
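The raw response above is a JSON array of coding rows, each keyed by a comment ID and carrying the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing such a response into an ID-indexed lookup might look like the following; the function name `parse_codings` and the requirement that every row carry all four fields are assumptions for illustration, not the tool's actual implementation:

```python
import json

# The four coding dimensions observed in the table above.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of rows) into a dict
    keyed by comment ID, validating that each row has all dimensions."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        missing = DIMENSIONS - row.keys()  # any dimensions the row omits
        if missing:
            raise ValueError(f"{cid}: missing dimensions {sorted(missing)}")
        coded[cid] = {k: row[k] for k in DIMENSIONS}
    return coded

# Hypothetical single-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["responsibility"])  # → ai_itself
```

Indexing by ID like this supports the "Look up by comment ID" workflow: a coded result for any sampled comment is one dictionary access away.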