# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Random samples
- Thanks for sharing your thoughts! It's definitely a complex issue when it comes … (`ytr_UgxJ7uIhN…`)
- Always get a laugh out of ChatGPT just giving up on David. But it's even better … (`ytc_UgwIKa7ix…`)
- ai stells other art and makes it into one big thing for ai art idk abt it tho bu… (`ytc_UgzlYwLXW…`)
- I can't remember if it was Midjourney or OpenAI (maybe both), but I have heard a… (`ytr_Ugx-5A8vI…`)
- This seems a silly take when compared to your car example. Cars were truly awful… (`ytr_UgzTcWf2k…`)
- Hot take: AI art is IP and will be by law in the next couple years.… (`ytc_UgxsLT0q-…`)
- Next time you do this I recommend uploading them to a dataset site so you can be… (`ytc_Ugz9duvX4…`)
- She's smart like AI, only she used her knee to hold up the laptop. 🤓… (`ytr_UgxorUebP…`)
## Comment

> what we have now isn’t true AI, it’s closer to a VI, a virtual intelligence. A VI is a computer pretending to be concious through advanced machine learning and simulation algorithms. it able to apologize to you because the computer has learned that’s what you do when you can’t do something for someone, but it cannot actually feel sorry or regret anything because it has no internal thought processes, its just a machine that’s been taught to produce similar results to a human in social contexts. It can’t use logic or empathy or produce its own results, only draw from vast databanks

Source: youtube · Video: AI Moral Status · Published: 2024-09-03T02:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
```json
[
{"id":"ytc_UgxD_W9CCoiFAzVC_fx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxMwNNwdnUc7weW6hh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwH9_72Amm8BkZ-M9N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzdPRE89pmUn0vy62Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzt9D9V-wmzaQIhewR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxVCLRSrziJE0RVknp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxPBGUBYWFTAtSwqz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx5U3ulUpkIODH69lp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxuySTTr1GnNSMlbEF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwve7QHi3VWzh01jTt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
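The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a response could be parsed and validated before storing the codes, assuming the four dimensions shown in the table above; the allowed-value sets here are inferred from the examples on this page, and the real codebook may define more categories:

```python
import json

# Allowed values per coding dimension (assumed from the examples on this
# page; the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "industry_self", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "approval", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}.

    Rows with a missing dimension or an out-of-vocabulary value are
    dropped, so malformed model output never reaches the database.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id", "")
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if cid and all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded
```

Keying the result by comment ID makes the "look up by comment ID" view above a single dictionary access per comment.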