Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Claiming AI art is legitimate art is like trying to say gaining muscle through s…" (ytc_UgxMuxuGa…)
- "Time not to have kids, our future kids will suffere for sure because of AI, stop…" (ytc_UgwkQaMqW…)
- "@aaronratliff338 I would agree with you for a lot of white collar jobs, but I th…" (ytr_UgwIZUfCh…)
- "Escalated to what? They started Kashmir by shooting live rounds into crowds. J…" (rdc_f1ytw90)
- "The difference is when something bad goes wrong in an airplane, your chances of …" (ytr_UgxC9wr1g…)
- "For me as a parent who wants to homeschool, this will be a game changer, revolut…" (ytc_UgwYOAPub…)
- ">When I call my bank for a resolution on an issue, one of the most grating pa…" (rdc_jrp8zo7)
- "As usual Krystal doubts Elon's sincerity. Elon is saying AI is going to be trans…" (ytc_Ugxz552Kr…)
Comment
I remember seeing an article talking about this idea. They said that it has been proven with tests that it can be far more effective, for example, to ask ChatGPT and similar LLMs to explain science topics as though they were a character on Star Trek. It was just an example, but it said that they got more accurate responses with prompts like "as Geordi La Forge, explain time dilation to me" than with the more simple "explain time dilation to me."
And the differences affected both the accuracy of the answer as well as it being easier to understand. It makes sense for the explanation to be easier to understand, but it was also more accurate. I know I'm repeating myself there, but that part is just... Insane.
youtube · AI Moral Status · 2025-03-29T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugypz4OjZ0jW5IfO8aV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyNWKaZU_zQz4yZEfZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvBRDpl15SoOwoU5x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw-28Pguv3pTcJwMcd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzx5pBbD6AqIXSC_NF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmUu0611qPiG_CPg54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyuAgab2CWGhi9AkZV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7zchXWFTJIGkirXN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzBdv_eHpZDy3tMkox4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxxM6Wav-_i7irpT8d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"}]
```
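A response like the one above can be turned into the per-comment lookup this page offers ("Look up by comment ID") by parsing the JSON array and indexing it by `id`. A minimal sketch, assuming the raw LLM response is a valid JSON array of records with the fields shown above (the two records here are copied from the sample response; the variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# The record shape (id, responsibility, reasoning, policy, emotion)
# matches the sample response above.
raw = """[
  {"id": "ytc_Ugypz4OjZ0jW5IfO8aV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxxM6Wav-_i7irpT8d4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Index by comment ID so a single comment's coding can be fetched directly.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgxxM6Wav-_i7irpT8d4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # deontological outrage
```

If the model occasionally returns malformed JSON, wrapping `json.loads` in a `try`/`except json.JSONDecodeError` and flagging that batch for re-coding is a straightforward extension.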