Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews with their comment IDs):

- "Part of me thinks the Covid-19 lockdowns were a test run for AI. These AI compan…" (ytc_UgwhaSrI1…)
- "The article says \"her face superimposed on adult actors\". This doesn't even sou…" (rdc_lgmz7tm)
- "AI actually doesn't respond that well to roleplay. I've done some experiments an…" (ytc_Ugzm_xiRJ…)
- "The way I describe it is, art is a conversation. The artist wants to tell you so…" (ytc_UgzeyKDf8…)
- "what if i use ai for serial killer and mafia role plays because theres nothing i…" (ytc_UgyNt1IJj…)
- ">While I agree with AI not taking our jobs, one guy not knowing what he's doi…" (rdc_oaeridr)
- "We need to be very aware of societal impacts of AI. Someone recently committed …" (ytc_UgwFPJ1CA…)
- "I mean, the AI is so polite and cordial. What's a little harm in being cordial r…" (ytc_UgwHVPLys…)
Comment
"love for our children"? in other words love for ourselves will be the solution for the upcoming problem of an AI intellectual dominance? didnt he, just a few minutes before, discuss how AI started to have behaviour in its own interest? that AI started to act in favor of itsself? that AI started to act to 'love' itsself?
isnt that what we humans do, the one way or another, all the time? and isnt the idea that OUR love for OUR children could be game changer for the upcoming crisis just another iteration of exactly that? isnt that an argument that exactly claims (for oursleves) what was just recognized as harmful (when presented by the other, the AI)?
| Platform | Video | Posted |
|---|---|---|
| youtube | AI Responsibility | 2025-06-02T11:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyS1ahr33TZ_KVndRB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgymwFq2xr-ysDtEm5V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzE95Yg_VOwazLk63F4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwHG5WPXG-K4SmqQX54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxEPSK31JdzyeV-KhJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwFbkP6p29Mnop-eTV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugxsj9C1qn4kSNpDRx94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzE7ovyq72qybWbROB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzzIQGLPNcQ9mTmG6x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyhYbFSxHku1ARbp1d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
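A batch response in this shape can be indexed by comment ID to recover the coded dimensions for any one comment, which is what the lookup-by-ID view does. A minimal sketch (the `raw` string here is a one-row excerpt of the response above; the variable names are illustrative, not part of the tool):

```python
import json

# One row excerpted from the raw LLM batch response shown above.
raw = """[
  {"id": "ytc_UgzE95Yg_VOwazLk63F4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

row = codings["ytc_UgzE95Yg_VOwazLk63F4AaABAg"]
print(row["responsibility"])  # -> ai_itself
print(row["emotion"])         # -> mixed
```

Because the model returns one JSON object per comment, a missing ID in the resulting dict is a quick way to spot comments the model skipped or whose IDs it mangled.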