Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "For some reason people wonder why AI is so evil. It's extremely simple: we creat…" (ytr_Ugzy5TgRW…)
- "How is it irresponsible? You're worried people might take it seriously? From wh…" (rdc_hmaerp6)
- "I'm having a creative renaissance with AI. I've made over 20 D&D homebrews thus …" (ytc_Ugx5h2D0o…)
- "I am an artist myself. There's just *no comparison* to the feeling when you just…" (ytc_Ugwi04AHx…)
- "You bring up a profound point! AI, like Sophia in the video, is always learning …" (ytr_UgwgBVR3n…)
- "oh, this is the guy that made his fortune selling that scam product pyramid sche…" (ytc_UgyipCD25…)
- "Is AI dangerous or are we humans? We have never been so close to a devastating n…" (ytc_UgwUbFrMp…)
- "Nothing seems impossible anymore. It's disgusting the expert mode is absolutely …" (ytc_UgwCBss1I…)
Comment
> Wrong, healthcare is where they started, outsource, off-shore, then automation on coding and transcripts, then EHR with Dragon, which is AI, I now live in poverty after 30 years in my career. HIM departments are now outsourced and/or automated. This will impact many other professions, anything with a keyboard and mouse will be pretty much nonexistent. IT jobs, pharmacies, just so many will be wiped out. It does not matter to them if it is accurate. I lived it. It steals your years of work by the millions, and makes it something, we can't compete. Healthcare already has robots already including in the operating room. They will keep a few to double check it and keep it running, but the rest are toast.
youtube · Cross-Cultural · 2025-10-10T23:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyVxBZVTnGNIX6vjWp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxN9r5lPVLj7UiFVjp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxj8OyCfcix2IvmSPV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy1KJxm02KlX5s5m8l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugybf5CfVH6oOFj7dep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNklEZU8723dRLPYN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFKGMpLkYOQ14FJT94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwnA6US6M8LsW2nnpN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwhbcOti53mBUK22Ix4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxR9FrlHFc4c9Hd1Th4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
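A raw response like the one above can be parsed and indexed by comment ID before it feeds the coding-result view. The sketch below is a minimal example of that step; the dimension names come from the table above, but the allowed-value sets are assumptions inferred from the visible samples, not an authoritative schema.

```python
import json

# Allowed values per dimension, inferred from the samples shown above
# (assumed; the real codebook may define more labels).
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments),
    validate each dimension, and index the records by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec[dim]!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_UgyVxBZVTnGNIX6vjWp4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"none","emotion":"mixed"}]')
print(parse_codings(raw)["ytc_UgyVxBZVTnGNIX6vjWp4AaABAg"]["emotion"])  # mixed
```

Rejecting out-of-vocabulary labels at parse time is what makes the per-comment lookup above trustworthy: a malformed model response fails loudly instead of silently appearing in the table.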