Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This time she hasn't convinced me. "If you are working with a character, and you want that character to eagerly help you, it's probably better to be polite".
Err... no. AI is not 'eager' to help you. It will help you anyway because that's what it was designed to do.
Incidentally, I have already asked Chat GPT the question. The answer it gave me was that it practically makes no difference, however losing the "please"s, "thank you"s, and "could you possibly"s will just speed things up a tiny little bit. The clearer and more concise a prompt, the better. And that the only advantage in talking politely to AI is that it may make me feel better, so it's fine if I want to.
Source: youtube · AI Moral Status · 2025-03-26T14:5… · ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxJiTGbw0MOWbob3Zt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzM1Yz-I6I8OgnRDMp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgznecFxQIHTssqnSZN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzxbaz6MszYGbE55Ah4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzwyLwzQbqrzo8nhDB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyWdnmSt5_IhnnG9Kh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyRrzejXJiDOAcPtVl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwVLF7gnBAfal0Qump4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwUm82tnZaVq5W5s8F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwd2_5L8ufmSz0oxf54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
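A raw response like the one above can be parsed and sanity-checked before it is loaded into the coding table. The sketch below is a minimal example, not the project's actual pipeline; the allowed values per dimension are assumed from the labels visible on this page (the real codebook may include more), and the record ID is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the values seen on this
# page (responsibility/reasoning/policy/emotion). Assumption: the real
# codebook may allow additional labels.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"mixed", "approval", "indifference", "resignation", "fear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records) and
    reject records with missing keys or out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec[dim]!r}")
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"mixed"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # mixed
```

Validating up front means a malformed or hallucinated label fails loudly at ingest time instead of silently appearing as a new category in downstream counts.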