Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment directly by its ID, or pick one of the random samples below.
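A minimal sketch of how such an ID lookup could work offline, assuming the pipeline stores one JSON-lines file of raw responses per coding batch (the `responses/` directory and record layout here are hypothetical, not the tool's actual storage format):

```python
import json
from pathlib import Path


def find_raw_response(comment_id: str, responses_dir: str = "responses") -> dict | None:
    """Scan stored batch files and return the record that coded `comment_id`.

    Assumed (hypothetical) layout: each *.jsonl line is
    {"ids": [...], "raw": "<exact model output>", "coded_at": "..."}.
    Adjust to the pipeline's real storage format.
    """
    for path in Path(responses_dir).glob("*.jsonl"):
        with path.open(encoding="utf-8") as fh:
            for line in fh:
                record = json.loads(line)
                if comment_id in record.get("ids", []):
                    return record
    return None


# Example: look up the comment inspected below.
# find_raw_response("ytc_UgzOennDBTlMcVu_o894AaABAg")
```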
Random samples — click to inspect

| Comment (truncated) | ID |
|---|---|
| yep. today i spoke to one of my friends who works at morgan stanley and he says … | rdc_mox7wti |
| Duh. That's why they're working on vision models and incorporating sensor experi… | ytr_Ugy5nh2ns… |
| I'm learning CS50P now and will end next month, after that, should I learn this … | ytc_Ugwei4Tjy… |
| Yeah, but its almost 50k a year. This is only available to children of the wea… | ytr_UgyTIS-V7… |
| In 10 years every dude be going out with the hottest chick and be getting marrie… | ytc_UgxLNSoBt… |
| Cool? Lol AI is way too positive sounding. I'd rather talk to a robot that sound… | ytc_UgyYEAIM9… |
| When someone makes bad art they critize it because it's not good enough for thei… | ytc_UgxWgeP4n… |
| I dont see why we are trusting self driving cars. Theyre man made, thus not perf… | ytc_UgweGjhaD… |
Comment
This is entirely overblown. We don't have anything close to "AI". We have machine learning algorithms which the best of can barely drive a car down a city block successfully. They failed entirely in making any level of useful predictions as to the course of the pandemic. Calling it Artificial Intelligence is marketing hype. And even in applications where machine learning is used extensively, its always under the supervision of a human operator.
For an "AI" to be a real threat, it would need actual human level intelligence, the ability to self-reproduce (including the entire chain of manufacturing from mine to factory), and for humans to have, for some bizarre reason, given up all control mechanisms over it. You won't see this in your lifetimes.
Source: youtube · Posted: 2021-12-04T13:4… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
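The table above is one record in a fixed four-dimension coding scheme plus a timestamp. A sketch of that record as a small Python dataclass, using only the label values that actually appear in the raw response below (the class name is illustrative and the real codebook may allow more values):

```python
from dataclasses import dataclass

# Label values observed in the raw response below; the actual codebook may be larger.
RESPONSIBILITY = {"government", "company", "developer", "user", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "ban", "liability", "none", "unclear"}
EMOTION = {"fear", "urgency", "outrage", "indifference", "approval", "unclear"}


@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table (illustrative names)."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO-8601 timestamp, e.g. "2026-04-27T06:24:59.937377"

    def __post_init__(self) -> None:
        # Reject labels outside the observed value sets.
        assert self.responsibility in RESPONSIBILITY
        assert self.reasoning in REASONING
        assert self.policy in POLICY
        assert self.emotion in EMOTION
```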
Raw LLM Response
[
{"id":"ytc_UgwB8e4SxvLEYFfxt2x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwhbYCiDCfNyOYD7u94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"urgency"},
{"id":"ytc_UgzOennDBTlMcVu_o894AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXKJtHap4eF-4WBMx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw_LtJkExu3qEnJPHl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy7mclkMWUgju_FTBl4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy-ahrHJS1eOzv0mdd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzIgjNGuaJtOlR0oeh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugz8HNjU1kpfpqI87sh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxl4myIhddW14VFHmJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
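The raw output is a JSON array with one object per comment in the batch; the row shown under Coding Result is the entry whose `id` matches the displayed comment. A sketch of how that extraction might be done, assuming the model's reply parses cleanly as JSON (a real pipeline would likely need extra handling for malformed output):

```python
import json
from datetime import datetime


def extract_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a batch response and return the coding for one comment.

    `raw_response` is the exact model output shown above: a JSON array of
    {"id", "responsibility", "reasoning", "policy", "emotion"} objects.
    """
    entries = {item["id"]: item for item in json.loads(raw_response)}
    coding = dict(entries[comment_id])  # raises KeyError if the model skipped this comment
    # Stamped locally here for illustration; the real pipeline may record it differently.
    coding["coded_at"] = datetime.now().isoformat()
    return coding


# Example: extract_coding(raw, "ytc_UgzOennDBTlMcVu_o894AaABAg")
# -> {"id": ..., "responsibility": "company", "reasoning": "consequentialist",
#     "policy": "none", "emotion": "indifference", "coded_at": "..."}
```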