Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "@laurentiuvladutmanea wow I think you are not seeing things clearly, human being…" (ytr_Ugy1ox4-F…)
- "leftist idiot in the ssame breath says that the guy who left open ai left is mor…" (ytc_UgzRXXKAO…)
- "This is why we need to stop advancing it, there's already been signs that it's n…" (ytc_Ugx7JyUIc…)
- "@flashback4588 That is completely different though.... Firstly because PS and C…" (ytr_UgzWjBSW-…)
- "If ai technology go to much far and become like human their no douth some thing …" (ytc_UgxK69Uk3…)
- "He's "now" worried about AI? He's been worried about it for 10 years, and that's…" (ytc_UgzerODfm…)
- "Nothing will come of this and they will have no impact on the publishing industr…" (rdc_lz7etxd)
- "Only thing im guessing in ai art is how censoring on bing image generator works …" (ytc_Ugx30x3Nc…)
Comment
transformer models don't do anything unless you prompt them. You have to put an input in them so they can transform it into an output. You have to prompt them. There is no AI that is actually an agent. The true agency is the person prompting the model, and the models try to follow the instructions in the prompt. This is fundamental and absolute with our current technology. That doesn't mean someone won't prompt a model to do something destructive, but no AI model does anything unless it's prompted and you send the prompt through the neural network. Even if you set a model up in a loop to recursively prompt on some type of sensor data (like a self-driving car), you still have to prompt the thing to go somewhere in the first place.
youtube
AI Governance
2025-09-04T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw8mfHj-axWE1wdbFx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyj2E-nvG0Ss_XIip14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_UgxmNwO3VZ_IDZk1LSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz4qLMkz2g0I4TSrnt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYrb1RSGE4vIQrcit4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugxa7rgw78mvS0mBtrl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzmSDdtYKEXTRbuEmt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugxt7gnhGAsWvRWzLwt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyrTJC8jIaTzD3Va_B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz8uk8tM0NnteZMegN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
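
The "look up by comment ID" step above can be sketched as follows: parse the raw LLM response (a JSON array of per-comment codings) and select the entry whose `id` matches. This is a minimal sketch, not the tool's actual implementation; `RAW_RESPONSE` is truncated to two of the entries shown, and the function name `lookup_coding` is hypothetical.

```python
import json

# Raw model output: a JSON array of per-comment codings, as in the
# "Raw LLM Response" block above (truncated here to two entries).
RAW_RESPONSE = """
[
 {"id":"ytc_Ugw8mfHj-axWE1wdbFx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugyj2E-nvG0Ss_XIip14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"unclear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if absent."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "ytc_Ugyj2E-nvG0Ss_XIip14AaABAg")
print(coding["responsibility"], coding["emotion"])  # prints: user unclear
```

The coding result table above is exactly this lookup rendered per dimension (responsibility, reasoning, policy, emotion) for the displayed comment.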