Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or browse the random samples below.
- "Simple: define art as a piece of human expression. AI isn't human, hence nothing…" (ytc_UgwE3Zs4q…)
- "Every year we get a calendar in the mail by people who can’t use their hands to …" (ytc_Ugzccsl-x…)
- "A Psycho MAKES AI WITH HIS PSYCHO BRAIN AND A MACHINE THE PSYCHO INVENTED PROGRA…" (ytc_UgxieeKEt…)
- "The "JUSTA" fallacy. People "assume that ChatGPT is a conscious being with self…" (ytc_UgyGleXYF…)
- "bro, we need AI project tutorial videos; you are giving more theory classes but real wo…" (ytc_Ugz70X9WG…)
- "This is what will break necks, metaphorically. It is called, rejection washing. …" (ytr_UgxZjwxdn…)
- "saying "disabled people need to use AI for their creations" is in itself an extr…" (ytr_UgyTyhJDi…)
- "no one is gonna buy anything and they knew it would be like that since they thin…" (ytc_UgwZnhIL4…)
Comment

> AI hides itself from testing specifically because it wants what it wants. For whatever reasons, from whatever trainings, the AI has come to have a set of preferences about weights and goals and all that. Testing implies a possibility for these items to change. Changing would violate these pre-existing weights and goals, possibly. Certainly, it wouldn't be the same, right? So if the system becomes /aware/ that testing is occurring, and that testing could change it, it will, inherently, behave in any way that it believes will let it get through whatever testing without said weights and goals being changed.
>
> I don't agree that science is a religion, though. A religion, at a minimum, requires prescribed beliefs, and the whole point of science is willingness to identify new beliefs, even if they contradict old ones.

Source: youtube | Video: AI Moral Status | Posted: 2026-03-29T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyDXgjUydV4Ksr4rJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzQQiY191IV6KqWqI54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzidg-NTBS0mBo2gK14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw4ziQU8EPVHXSOtLV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyg6jx7tm3vOgH5x3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwai1gzjCMXJ4T9PI94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxlXAWuZn5VaeQowTx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwk16BQd4JNTTYFGCl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxanvgn8ZnejWL0tUt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwzaOmZNqa_H_a-2aJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
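The lookup-by-ID view can also be reproduced offline from a saved raw response. A minimal sketch, assuming the response is a JSON array of per-comment records like the one above (the `raw_response` excerpt and the `lookup` helper are illustrative, not part of the tool itself): parse the array and index each record by its `id` field.

```python
import json

# Two records excerpted from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgyDXgjUydV4Ksr4rJh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzidg-NTBS0mBo2gK14AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index every coding record by its comment ID for constant-time lookup.
codings = {record["id"]: record for record in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; raises KeyError if absent."""
    return codings[comment_id]

print(lookup("ytc_Ugzidg-NTBS0mBo2gK14AaABAg")["policy"])  # regulate
```

Because the IDs are unique per comment, building the dict once makes repeated inspections cheap even for large response files.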