Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "i love ai. 🤖 i will always also be an infographics fan, it will not be disrupte…" (`ytc_UgwlAL5sJ…`)
- "I'm only 11 minutes in and Geoffrey mentioned his friends with the two extremes.…" (`ytc_UgytFYuE8…`)
- "'Smarter than we are', not 'Smarter than us'. AI at least never makes that mista…" (`ytc_Ugyt-clNj…`)
- "OK I appreciate and understand the concern about the environmental impact these …" (`ytc_UgxJOwvYL…`)
- "LLMs are overhyped crap. I can't wait for the day when idiots will use LLMs to b…" (`ytc_Ugw6D-p7T…`)
- "I would say AI can become a bigger presence in schools/teaching, but in elementa…" (`ytc_Ugxq-MZ0h…`)
- "I always say please and thank you and I say hello I begin a prompt - and I alway…" (`ytc_Ugyf-xcr8…`)
- "That's what happens when you create an AI girlfriend. Be careful not to get a vi…" (`ytc_UgxBRbXb5…`)
Comment

> Also that's going to be your best friend in the future like Jarvis or Samantha once nobody gets off their phones anymore... oh wait..., we're already there... All my life every A.I. program imagined had a mandated "Prime Directive" 🤖that would prevent it from harming humans yet that mandate concept no longer exists. Military and police use has no governing safety laws built in yet they could easily be as dangerous as any regulated firearm. I'm planning on using it as virtual actors, but I dont want it on all my devices yet with every phone sw update I have no choice

| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2025-06-13T15:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
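Each coded row assigns one label per dimension, so a coding result can be sanity-checked against a closed label set before it is stored. A minimal sketch, assuming the label sets below (inferred from the values visible on this page, not an authoritative codebook):

```python
# Hypothetical label sets inferred from the values shown on this page.
CODEBOOK = {
    "responsibility": {"government", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def invalid_fields(row: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if row.get(dim) not in allowed]

# The coding shown in the table above passes cleanly.
row = {"responsibility": "government", "reasoning": "deontological",
       "policy": "regulate", "emotion": "fear"}
print(invalid_fields(row))  # → []
```

A row with an out-of-codebook value (e.g. a hallucinated label from the model) would come back with the offending dimension names, which makes it easy to flag those comments for re-coding.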
Raw LLM Response
```json
[
  {"id":"ytc_Ugw5CYW6jw_U-E4qH1x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwxsL1RrunjGre4J-14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzu3Tsfh4ahnkXwul54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy7zi30SQBsT60Gm554AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx0k2Nl0HfSLBTCA7J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyW4chDXDplgqo0IU94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyFn4mfpawnqavsXRx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwK2CNEfJknfKoMUYh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1tPtwD31Ux5LHwdN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQPAiZB9jmMRyhfjJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
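The raw response is a JSON array with one object per comment, so "look up by comment ID" reduces to indexing the parsed array by the `id` field. A minimal sketch, assuming the response has already been captured as a string (the two rows here are copied from the array above):

```python
import json

# A captured raw LLM response: a JSON array of per-comment codings.
raw_response = """
[
  {"id": "ytc_Ugzu3Tsfh4ahnkXwul54AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw5CYW6jw_U-E4qH1x4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

record = codings["ytc_Ugzu3Tsfh4ahnkXwul54AaABAg"]
print(record["policy"], record["emotion"])  # → regulate fear
```

In practice the model may return malformed JSON or drop IDs, so a production version would wrap `json.loads` in error handling and cross-check the returned IDs against the batch that was sent.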