Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
So this relates closely to my own sci-fi novel, Synthesis. I've always thought that the whole robot apocalypse scenario was a little unimaginative, so what if AI starts to behave like an actual race of people?
Now, why would AI do that? Well, it wouldn't if the reason we make intelligent technology is to make human lives easier. If it's just a matter of means to ends, of making tools, then the more intelligent that technology becomes, the more tools turn into slaves. Slavery has a history of culminating in violence.
But if you're creating AI for its own sake, i.e. not to serve any purpose towards human beings but just to see if you can replicate human or humanlike intelligence, then it becomes a very interesting undertaking. This is the only way we should approach AI: we either make it for its own sake, or we don't make it at all. If humans create intelligent machines strictly as means to our ends, then we'll end up in a situation where we've delegated all responsibility to our technology and we'll be left only with the illusion of power.
Source: youtube · 2015-07-30T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgjSWtWNngVsjHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ughf1hqoTutyqngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugi5WLDTl3NlX3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UggopqM2M_sbrHgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ughy4iVWinVhmXgCoAEC","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgiveUrRxNI0_ngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UghFF0zjhR0XSngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghurI4Ad49yDHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjXsosqvOJLJngCoAEC","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugi4o4GuPLIlcHgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
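A batch response like the one above can be parsed and indexed by comment ID before the per-comment codings are written back to the coding table. The following is a minimal sketch of that step; the helper name `parse_codings` and the validation logic are illustrative assumptions, not part of the tool, and the two entries in `raw_response` are copied from the response above.

```python
import json

# A raw LLM response is a JSON array of per-comment codings;
# each object carries the comment ID plus four coding dimensions.
raw_response = """
[
  {"id": "ytc_UggopqM2M_sbrHgCoAEC", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ughy4iVWinVhmXgCoAEC", "responsibility": "government",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_codings(text):
    """Parse a raw batch response and index the codings by comment ID.

    Raises ValueError if an entry lacks an ID or any coding dimension,
    so malformed model output is caught before it is stored.
    """
    codings = {}
    for entry in json.loads(text):
        comment_id = entry.get("id")
        if not comment_id:
            raise ValueError(f"entry without comment ID: {entry!r}")
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{comment_id}: missing dimensions {missing}")
        codings[comment_id] = {d: entry[d] for d in DIMENSIONS}
    return codings


codings = parse_codings(raw_response)
print(codings["ytc_UggopqM2M_sbrHgCoAEC"]["responsibility"])  # developer
```

Indexing by ID (rather than by position) means a lookup like the "Coding Result" table above does not depend on the model returning entries in request order.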