Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "How does this dumb ahh lady call an ai bot racist it's not even a person nor did…" (`ytc_UgxaJJG8r…`)
- "If there's one thing ai bros wouldn't use ai on, it's being a keyboard warrior 😂…" (`ytr_UgyLiB95I…`)
- "I believe he had damning testimony/evidence that he was going to provide in a fe…" (`rdc_m1zq71j`)
- "saying we should stop advancing AI and Robotics because ppl will lose jobs is li…" (`ytc_UgjbtOR2O…`)
- "I like AI, what I hate is that everyone is trying to put AI in everything, Im a …" (`ytc_Ugxmk8fxu…`)
- "Humans are so cowardly conditioned by a productivity-obsessed mindset that they …" (`ytc_UgzEhAXlB…`)
- "I wish you luck but I think it's a lost cause. Your time might be better spent l…" (`ytc_Ugz8oiP_V…`)
- "where is Global Ethical AI committee. rich vs poor divide will increase with …" (`ytc_UgxtFGpPx…`)
Comment

> AI needs to be quality checked ... by a human. Maybe make the process faster, and maybe fewer specialists will be needed. But thinking AI can replace a human, especially in health care, is a high-risk (100%) proposition. AI will always make more mistakes than a trained human. No matter how good the AI becomes, it will never be human and will never bring the vital context that only a human can bring.

youtube · AI Jobs · 2025-07-25T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwsNf6hyyX-sshxpoF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz_6YjngRtx9inRUOZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwVRBke2fJM_fk5vVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUba5NpCaSYplmPCh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwHaYi171Xurt7rqDx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFTqpqxuIMLoa_fcR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwqvgQud3hddcspw354AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyc9Zgi1rRGPjTnPg94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxsHODJpaYktnDtp3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwad10UGPYBoMyA2yx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
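A minimal sketch of how a response like the one above can be turned into the per-comment coding result shown in the table: parse the JSON array and index the records by comment ID. The `raw_response` string here is an abridged two-record stand-in for the full batch response; the function name `index_codings` is illustrative, not part of any real tool shown on this page.

```python
import json

# Abridged stand-in for the raw LLM response above (two of the ten records).
raw_response = """
[
  {"id":"ytc_UgwsNf6hyyX-sshxpoF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwad10UGPYBoMyA2yx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
"""

def index_codings(raw: str) -> dict[str, dict]:
    """Map comment ID -> coded dimensions, dropping the redundant id key."""
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codings = index_codings(raw_response)
# The record for the selected comment matches the Coding Result table:
print(codings["ytc_Ugwad10UGPYBoMyA2yx4AaABAg"])
# {'responsibility': 'developer', 'reasoning': 'deontological', 'policy': 'liability', 'emotion': 'outrage'}
```

Indexing by ID rather than list position makes the lookup robust when the model returns records in a different order than the comments were sent.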