Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below.
- "I think the problem with wage insurance is that it still assumes that there will…" (`ytc_UgxIkhOUi…`)
- "@ neither I am, I just find funny people who say AI is worse for people when its…" (`ytr_UgxtIVfKa…`)
- "This video will age like 🍰, in a decade, AI will replay a huge amount of trivial…" (`ytc_UgxnnaqDs…`)
- "All the people arguing in favor of AI are the ones who want to say oh I’m an art…" (`ytc_Ugxy771jW…`)
- "AI runs on big computers in a room! They have power cords! Unplug the damn thing…" (`ytc_UgxP-U_X4…`)
- "> If a person studies another's style, then copies that style to make their o…" (`rdc_kyyy61t`)
- "Autopilot / FSD has existed for what.. 8 years? It's still a work in progress, …" (`ytc_Ugzwgbnz-…`)
- "I use AICarma's weekly email digests to keep up with daily AI mentions-it makes …" (`ytc_UgysIqlAe…`)
Comment

> How come no one is speculating AI could do drug breakthroughs for mental health/genetic engineering? Imagine an injection made of a mix of hormones, enzymes, mRNA, and pharmaceuticals that is immediately euphoric and non-addictive, and raises everyone's IQ 40 points over the course of a few months? Maybe Benjamin-Button back to your 20s over the next few decades?

youtube · AI Governance · 2025-12-05T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy6JV8I7_LTM8Iu1YN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgziKsYoYPlh2sXQjcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwv64ZRQmtgeq0y0v54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzkbkpXalG6cxy1ilF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy8IKg8Gw90VZw6jZ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxmLlalKgbtRnGFRY14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwBrU885sb0ZwQFVWh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyX-XktitTYcYS-U7h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwdFWt6Bfmig2qG2Jt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwIRhB3L5agL9Dm_794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
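A raw response like the one above can be turned into a per-comment lookup table with a short parse-and-validate step. This is a minimal sketch, not the tool's actual pipeline: the allowed code values below are only those observed in this sample response, and the real codebooks may contain more labels.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (assumption: the real codebooks may define additional labels).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "government"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"mixed", "approval", "fear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response into a dict keyed by comment ID."""
    coded = {}
    for row in json.loads(raw):
        # Reject any code value outside the expected label sets.
        for field, allowed in ALLOWED.items():
            if row[field] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {field} code {row[field]!r}")
        coded[row["id"]] = {f: row[f] for f in ALLOWED}
    return coded

# Look up a single comment's codes by ID (first row of the response above):
raw = '[{"id":"ytc_Ugy6JV8I7_LTM8Iu1YN4AaABAg","responsibility":"none",' \
      '"reasoning":"unclear","policy":"none","emotion":"mixed"}]'
coded = parse_llm_response(raw)
print(coded["ytc_Ugy6JV8I7_LTM8Iu1YN4AaABAg"]["emotion"])  # mixed
```

Keying the result by comment ID is what makes the "look up by comment ID" view above cheap: each inspection is a single dictionary access rather than a scan of the raw response.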