Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "@zroll11 So true! We would for sure feel “Worth-Less” without being productive. …" (ytr_Ugwa7IsdI…)
- "A.I coding is currently laughable at best. It can suggest/create fragments of me…" (ytc_UgyI-AZLC…)
- "I'm telling people stop supporting AI and robotics technology or future generati…" (ytc_UgyV_TwyZ…)
- "Too bad majority of humans are still dumb and evil. I'd rather trust a well writ…" (rdc_gd8jjnz)
- "Actually no. AI has no autonomy, it has no choice. It does not pick and choose w…" (ytr_UgxkfiwJf…)
- "Dude’s what are we doing telling AI that you are going to destroy / Hey generate…" (ytc_Ugws8qvc8…)
- "I am just at awe... i have zero drawing talents but it wouldnt even occur me to …" (ytc_UgxWpQlZf…)
- "In other words, AI is already improving human nature. And we haven't even starte…" (ytc_UgyVV-nt9…)
Comment
AI can work as an assistent to speed up prducivity, not as a replacement. And I think the nail in the coffin will be lawsuits. If a AI system screws up, who is at fault? The company who implemented it? The AI company? Both? Nobody knows. And because out world is dependent on helding people accountable, AI will not be able to replace them... I mean they can try, but it will backfire eventually...
Source: youtube · Topic: AI Responsibility · Posted: 2026-01-09T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
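The four coded dimensions can be sanity-checked against a label vocabulary. The sets below are a sketch assembled from the values observed in this dump (they are an assumption, not an official schema), and `validate` is a hypothetical helper:

```python
# Assumed label vocabularies, collected from the values visible in this page dump.
ALLOWED = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"liability", "regulate", "industry_self", "none"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "resignation", "disapproval"},
}

def validate(coding: dict) -> list[str]:
    """Return the dimension names whose value falls outside the vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding result shown in the table above:
result = {"responsibility": "distributed", "reasoning": "deontological",
          "policy": "liability", "emotion": "fear"}
print(validate(result))  # [] -> every dimension is within the vocabulary
```

A non-empty return value would flag a coding the model invented outside the expected labels.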
Raw LLM Response
[
{"id":"ytc_Ugz1CRtbPN4wALH_-Mx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzSykzT3_hpnJuy0254AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxufMZ1wUDnFZ39y0p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxyKQ7GLg4r1fSETNV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwr60fH9ZSYRcHT6Rd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxlknOBd-P-WU88lnB4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyJSl4LQtYqph1uxbh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw8paoUloS59dcR-D54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzriDFVNsGCq0hKbNZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxW3qG6QDSYWl8Vn9h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"disapproval"}
]
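The raw response above is a JSON array with one object per coded comment, keyed by `id`. Looking a coding up by comment ID, as this page does, can be sketched as follows (the two-row `raw_response` sample and `index_by_id` helper are illustrative, not part of the actual pipeline):

```python
import json

# Two rows copied from the raw LLM response shown above.
raw_response = '''
[
  {"id": "ytc_Ugz1CRtbPN4wALH_-Mx4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzSykzT3_hpnJuy0254AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse a raw batch response and key each coding by its comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codings = index_by_id(raw_response)
print(codings["ytc_Ugz1CRtbPN4wALH_-Mx4AaABAg"]["policy"])  # liability
```

Indexing once up front makes each subsequent ID lookup a constant-time dictionary access rather than a scan of the array.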