Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Right. After watching the entire video here's my take:
Even IF it never get's all that smart and intelligent... you can already see the damage THIS tech can do RIGHT NOW. It is very capable of creating and fueling division. It is already capable of fooling large groups of people. It can already create convincing evidence for things that aren't true. I don't think AI needs to reach a level even close to super intelligence to be capable of ending all human life on the planet. On convincing enough faked nuclear attack on a country like korea / russia or america and we will cause that extermination event ourselves.
youtube
AI Moral Status
2025-10-30T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxxuL0rIDRv6S4onAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxV2YgRxgdc1F1hK-R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxxMcFp938sqEB2x6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx51tCuxt7S0BiUp614AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwYohzxjxoYmuBkcrV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugy1pg6e_fFmqKOJTHF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxFZpLLvJEtoqFWd654AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwRwTJYJFvGhe5WBGd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxhB5pcpXVKzCtGOUx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-BFV-_V6K0ci-9zt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}]
```
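Each record in the batch response carries a comment `id` plus the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion). The sketch below shows one way such a response could be parsed and a single comment's coding looked up; the `lookup` helper is hypothetical, not part of any tool's actual code, and the embedded records are two rows copied from the raw output above.

```python
import json

# Two records copied verbatim from the raw LLM response above,
# standing in for a full batch.
RAW_RESPONSE = '''[
{"id":"ytc_Ugy1pg6e_fFmqKOJTHF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwYohzxjxoYmuBkcrV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"}
]'''

# The four coding dimensions reported for every comment.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_response: str, comment_id: str) -> dict:
    """Parse a batch coding response and return the coded dimensions
    for one comment ID (hypothetical helper for illustration)."""
    rows = {row["id"]: row for row in json.loads(raw_response)}
    row = rows[comment_id]  # raises KeyError if the ID was not coded
    return {dim: row[dim] for dim in DIMENSIONS}

coded = lookup(RAW_RESPONSE, "ytc_Ugy1pg6e_fFmqKOJTHF4AaABAg")
print(coded)
```

Indexing the parsed list by `id` first makes repeated lookups cheap and makes missing or duplicate IDs easy to detect before rendering a coding-result table.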