Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Hmm.. 🤔 makes ya wonder how far ahead the military is on this stuff? They're usu…" (ytc_UgweXdT2Y…)
- "The lesson: Code AI to respond from the varieties of vice and we get BS. Code AI…" (ytc_Ugzax4ABD…)
- "😂😂😂 6:28 …and also I'm an actor. Well there you go buddy. You lost your job beca…" (ytc_UgyMKHZGx…)
- "Bill gates...a major shareholder in microsoft...heavily invested in AI demands t…" (ytc_UgxWGyHvS…)
- "AI Artist is like calling a butcher to be a neurosurgeon for cutting lamb and p…" (ytc_Ugx1ly_90…)
- "His response with a "plumber" was in my opinion was just to make the point of ge…" (ytc_UgytYiZrq…)
- "Gonna be crazy when they realize AI can do middle management jobs better than de…" (rdc_oac23n3)
- "Seems a good chunk of AI anxiety comes from we human being's addiction to FALSE …" (ytc_UgwAHtK05…)
Comment
Here is a conclusion from Gemini when I asked if using please and thank you costs money. Conclusion:
While adding politeness words does incur a slight increase in computational cost and processing time (measured in tokens and energy consumption), the data suggests that it can be a worthwhile "cost." The potential benefits include:
- Higher quality, more accurate, and more comprehensive AI responses.
- Improved user satisfaction and a more natural interaction experience.
- Reinforcement of positive communication habits.
Therefore, while "please" and "thank you" add a small, quantifiable cost, they often contribute to a more effective and beneficial AI interaction, potentially saving time in the long run by reducing the need for follow-up prompts or corrections due to unclear or biased responses. Sam Altman's sentiment of "tens of millions of dollars well spent – you never know" highlights this trade-off between immediate computational efficiency and the broader value of human-like interaction and improved output.
youtube
AI Moral Status
2025-07-03T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxWct-DMktSzO0FFp54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw-_AMeqC33hBX4PWN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBLX5twu4irnY15GZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzt_0dLnlIxoTy0Zex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwefRvn6ca3LMVTLYR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy0JqEk16j_DQ14WKd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8ls1m8GdAiwOtOfB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwx53YVL2QHIHAYltJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgywAXWpYaInx54B-AJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAn2by2C84FDrz47x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
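A raw response like the one above can be parsed and sanity-checked before the records are stored. The sketch below is a minimal, hypothetical example: the allowed category values are inferred only from the sample output shown here, and the actual coding scheme may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (assumption: the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record must be an object with a comment ID and only
        # recognized values in every coding dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
print(parse_coding_response(raw))  # → the single valid record
```

Dropping malformed records rather than raising keeps a single bad line in the model output from discarding the whole batch; a stricter pipeline might instead log rejects for re-coding.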