Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- From another site on the same subject: "Such a ban against deadly technologies i… (rdc_dwvuu51)
- "I think there was some video editing in this interview because ChatGPT went for … (ytr_UgyruNG6v…)
- "easiest way? just throw it through walter writes ai humanizer—bypasses every det… (ytc_UgwjSm6LW…)
- "Imagine if we applied this much effort with AI on agriculture to be able to feed… (ytc_UgyXLB-Yh…)
- "Okay, moral and ethical stuff aside... is it really that bad that people like th… (ytc_UgzaW5Dtk…)
- "I totally disagree. I would bet a lot of money on that AI is not going to be our… (ytc_Ugzx1Y_gg…)
- "there's actually also some really good reasons to not use generative AI at all: … (ytc_Ugw3XIkD2…)
- "We are trained on data, and we "think" by predicting the "next best word". So ba… (ytr_UgzuSj4A4…)
Comment
More on this: The blog "AI Weirdness" has a post titled "ChatGPT will apologize for anything" that demonstrates this. The author starts a conversation with an LLM by demanding that it apologize for something it never did, and it gives a deep and sincere apology, including an explanation of why it thought it was a good idea at the time and how it'll do better next time. When she asks it to apologize for something absurd and impossible, it picks up on that, assumes the conversation is meant to be humorous, and gives a delightfully silly apology that builds on the absurdity of the situation. It quickly becomes clear that in both cases, all it's doing is roleplaying. By extension, when it apologizes for something it really *did* do, like accidentally deleting the code repository you told it to work on, it's almost certainly roleplaying in precisely the same way.
youtube · AI Moral Status · 2025-12-16T18:0… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytr_UgyGGsy-Cc7bvs5zlWR4AaABAg.ASrQ3_JZF5nASriwQPD-TO", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzR-0PJGnzbvV61HSh4AaABAg.ARrnMzyzSp3ARtyMrc1fbe", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugwn8qj6IYR7McEx7EJ4AaABAg.ARLgqxzPxLDARLgsSPqZSA", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwVxmo4X_hgu3PncmF4AaABAg.ARIwloZMbAhASqkFgk3Wyf", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyHL_P9RKGxs5Xz2JV4AaABAg.ARFYlJwJVZAARFZ7bS6dQy", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytr_Ugy7AdZ5QN-ymetkA8B4AaABAg.AR5jLKQn_fWAR5jqpCK1zG", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugy7AdZ5QN-ymetkA8B4AaABAg.AR5jLKQn_fWAR5ju9n4obw", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgwGjcwuCAXJBOSJhFF4AaABAg.AQnr6aIZonGAQns2SabEzX", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgwzagCkWVDZSnfAQHV4AaABAg.AQbwBEncofwARgev8HZeQO", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgwzagCkWVDZSnfAQHV4AaABAg.AQbwBEncofwASz05Z0qKU4", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
```
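A raw response like the one above is a JSON array of per-comment codes. A minimal sketch of how such a response could be parsed and sanity-checked in Python: the dimension names come from the response itself, but the allowed-value sets below are only inferred from this one sample (plus the coding table above), not from an authoritative codebook, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Value sets observed in this sample response and the coding table above.
# These are illustrative, not the full codebook.
OBSERVED_VALUES = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"mixed", "unclear", "consequentialist", "deontological"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "mixed", "approval", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw coding response (JSON array of per-comment codes),
    requiring an 'id' on every record and warning on unseen values."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in OBSERVED_VALUES.items():
            value = rec.get(dim)
            if value not in allowed:
                print(f"warning: {rec['id']}: unexpected {dim}={value!r}")
    return records

# Usage with a one-record example (hypothetical id):
raw = '[{"id":"ytr_example","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]'
codes = parse_coding_response(raw)
print(len(codes), codes[0]["emotion"])  # prints: 1 indifference
```

Keeping the validation as warnings rather than hard errors lets a batch of codings load even when the model emits an off-codebook value, which can then be reviewed by comment ID.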