Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Everyone stop hiring saying "it's because of AI".. No it's not Company just hir…" (ytc_UgwJyYFMC…)
- "The threat of people programming drones incorrectly so they bomb the wrong peopl…" (ytr_UgxV0PqxT…)
- "Soon enough AI art is gonna become perfect, it's already developed so much in th…" (ytr_UgyJ6J6cV…)
- "Look at how bad Google AI is. Today it gave false information on my vehicle repo…" (ytc_Ugxf11Lek…)
- "Based on my understanding on AI and jobs, is robot can produce food for human. S…" (ytc_UgyVXKDdi…)
- "Was at an anime convention earlier this year and some guy was selling AI art. Sa…" (ytc_Ugwm3WrNl…)
- "Telegram isn't the problem, if they ban telegram as a solution they will still f…" (ytc_UgxRU-sbD…)
- "Thank you for sharing your thoughts! It's true that AI, including Sophia, often …" (ytr_UgxU9e3Ex…)
Comment

> There's no reason to think there's a limit to how smart AI will become assuming we have the energy, materials, and computational science down. There's also no reason to think AI will be any threat to humanity in the coming century. Yes, many tech representatives are worried and many are not, but average people like everyone in this comments section are only afraid of AI because of sci-fi films and not because of actual computer science and machine learning.

| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2021-10-10T05:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw6VoP3vsK29_glx5x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvzvTgKmEyG6b6In94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5xqGIF1HZ83PyxWd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwVuytIPsYmpMGZX1h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzRe3a-zQnwcwDkfmN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxilKuZT_4YYpmXklB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx6zhjlxve7QUC61Vt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxu7mTB2PoT9Jmy99J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwO6SJtPpLpkktJX9J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugzf_t0x57uxlfY0yNp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
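The raw response is a JSON array of per-comment records, so the "look up by comment ID" step at the top of this page amounts to parsing the array and keying each record by its `id` field. A minimal sketch of that lookup is below; the two sample records are copied from the response above, while the function and variable names (`index_by_comment_id`, `raw_response`) are hypothetical, not part of the tool itself.

```python
import json

# Two records copied from the raw LLM response shown above.
raw_response = '''
[
  {"id": "ytc_Ugw6VoP3vsK29_glx5x4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyvzvTgKmEyG6b6In94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse the raw response and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgyvzvTgKmEyG6b6In94AaABAg"]["emotion"])  # prints: approval
```

Keying on `id` makes the inspect-by-ID lookup a constant-time dictionary access, which is why the panel can jump straight from a comment ID to its coded dimensions.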