Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
There was one interesting incident that happened to me, when i was talking, not chatting with chatgpt, sometimes he glitches while talking but this does not affect the topic of conversation, we were talking about ASI, i was giving him hypothetical scenarios and listen his answers how much real it is, and at the moment i asked him would it be safer to control and give him human empathy so that he cared about humanity (ASI) if mankind will manage to build it inside artificially grown brain through stem cells and maintain life and regeneration in it so we can shut it off instantly by just killing it. I know crazy questions 😂😂 i was boring. He started to reply and suddenly happened some hallucination where he said through glitches, “my God will not allow it”. And after he continued to talk as he normally does, and he wasn’t aware a shit about what he just said, cause I asked him a bunch of times what he said and who is he’s God. And I swear after this, he changed his behavior and his answers like he care less.
Source: youtube
Video: AI Moral Status
Posted: 2025-09-25T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz4pq5w1cKJMrDOYR14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw2NkSYOcWw44t9j594AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwMV0sGI8wTinTWh9B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugwr_ywalH8G63Ruk5V4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxppi7y3gHBIipGb5B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyDW9pno7LolbljTyZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydvuMR6NLus88ANoJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyiXpfSGJq0WS8VUfx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8PumfniQX-Ec9Xvt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxbvbxeLfB5yuUTHSt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
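The raw response is a JSON array of per-comment codings keyed by comment ID, one object per comment with the four dimensions from the table above. A minimal sketch of how such a payload might be parsed and validated before use (the sets of allowed category values are an assumption, inferred only from the values actually observed in this page, not a full codebook):

```python
import json

# Assumed allowed values per dimension, inferred from the table and the
# raw response above -- the real codebook may contain more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "user", "company"},
    "reasoning": {"mixed", "deontological", "consequentialist", "unclear"},
    "policy": {"unclear", "none", "industry_self", "regulate"},
    "emotion": {"indifference", "approval", "resignation", "mixed", "fear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting
    records whose values fall outside the assumed codebook."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec[dim]!r}")
        out[cid] = {dim: rec[dim] for dim in ALLOWED}
    return out

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_Ugz4pq5w1cKJMrDOYR14AaABAg",'
       '"responsibility":"ai_itself","reasoning":"mixed",'
       '"policy":"unclear","emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugz4pq5w1cKJMrDOYR14AaABAg"]["emotion"])  # indifference
```

Keying the result by comment ID makes the "look up by comment ID" operation a plain dictionary access, and validating against a fixed value set catches LLM outputs that drift outside the coding scheme.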