Raw LLM Responses
Inspect the exact model output for any coded comment. Records can be looked up by comment ID; a set of random samples is listed below.
- `ytc_UgwG7QNjH…` — "Humans need to care for children, a robot will not suffice. No robot coming to c…"
- `ytc_Ugw6N06L1…` — "It's a good thing bing AI is completely benevolent and would never harm it's loy…"
- `ytc_Ugwg_-c1W…` — "Is there a law on music saying it needs to label AI creations? There should be i…"
- `ytr_Ugy7HWs0Q…` — "@itchyhichi Who got fired from their dream job by a robot? Did you get fired and…"
- `ytr_Ugx7Jr6d4…` — "Humans using ai for bad is the second existential risk per individual imo. The f…"
- `ytc_UgjFzJM8I…` — "If something, i.e. a human, robot or any other life-form is capable of fully und…"
- `ytr_UgwzGmtAD…` — "How about confronting the human with facts? He will be embarrassed. Confront an…"
- `ytc_UgzXL0hWv…` — "I made hundreds of thousands of dollars—yes, you heard that right—just by binge-…"
Comment
"What do AI developers actually do. Write codes in a computer? That creates an AI being that is basically a digital entity that now can learn and learn from what we expose it to and also other AI exposes it to?"
youtube · AI Governance · 2026-01-06T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw14NTioCF0nVy3gv14AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwmPA_xI5KiSNc0i3t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxjESaWG1hOEfUh8gN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyfE_5Gr_Sm9frj2Wh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjmCPykmCH4BMK3z54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxlbkYZfl30QIBMkDZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwL4K1cMCHqH4T5goF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzInpcQACCVJPizJE54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugw5f5RpNm0BWFTvc3F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgzqtJ73gFo8lo6wMKZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
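Each raw response is a JSON array of per-comment records keyed by `id`, with one value per coding dimension. A minimal sketch of parsing and validating such a response before storing it — note the allowed values in `SCHEMA` are inferred from the samples shown here, not a confirmed codebook, and `validate_response` is a hypothetical helper:

```python
import json

# Allowed values per coding dimension; inferred from the responses above,
# so treat this codebook as an assumption.
SCHEMA = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "unclear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the schema.

    Raises ValueError on malformed structure, missing keys, or values
    outside the codebook, so a bad batch fails loudly instead of being
    silently stored.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

raw = ('[{"id":"ytc_Ugw14NTioCF0nVy3gv14AaABAg",'
       '"responsibility":"developer","reasoning":"unclear",'
       '"policy":"unclear","emotion":"unclear"}]')
coded = validate_response(raw)
print(coded[0]["responsibility"])  # → developer
```

Keeping validation separate from storage also makes the lookup-by-ID view above cheap: validated records can simply be indexed into a `{id: record}` dictionary.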