Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):
- @DeuxViews it takes time for AI to learn to evade poisoning, and most of the tim… (ytr_UgwDP9LIn…)
- I'm seeing a lot of interviews here about AI, and they are interesting. But so f… (ytc_Ugybr6aKf…)
- I never saw anyone mention this, but wouldn't public perception about Ai be comp… (ytc_UgzhQnS9J…)
- @eduardomoura2813 You do know that you are on the copium here right? / The AI ind… (ytr_Ugwi7FFYd…)
- The AI is not the problem itself, the creators of the models and those who feed … (ytc_UgxMCzh42…)
- AI scientist: "this could destroy us!" / Tech corporations: "Yeah but think about … (ytc_UgwppiSYd…)
- > Is everyone in tech ghoul? / I feel you could drop "in tech" from that. Ther… (rdc_n67ggv6)
- Did I understand that correctly? They shared their access code Facebook sent to … (ytc_UgwC0-cv2…)
Comment
Well, the matter of “drives,” underlying “behaviors,” is making two assumptions which are not correct, but entirely common.
I mean, yes, it can be assumed that what is meant by “these drives,” that “AI” is “pursuing,” is being deliberately vague, and ambiguous, because the terms “will,” and “motivation,” are complex matters which are not understood, so they are paradigms of the complex, and not well-defined paradigm of human “intelligence.”
To say that “AI” has “will,” and is “motivated” to do things, (instead of saying it has “drives,” and is “pursuing” doing things), lets the speaker off the hook for having to explain what “will” and “motivations” are, (because nobody really knows), and sidesteps the absurdity of the questions that would follow, about how we don’t have a sufficient knowledge or comprehension of those things, so how can an assumption be made that they just manifested as a result of human ignorance, while making “AI.”
In other words, the terms were changed, (or the criteria), which would inform question that test the actual knowledge about what “AI” is and does, by revising the terms which humans apply to our own intelligence, (and its limitations in such matters), to terms which have an equivalent sort of meaning, but are understood as being not quite as clear, and reduced in complexity (such as to only deal with things particular to the functions of “AI”).
Of course these matters of “drives” and “pursuits” (as revised-down rules versions of “will,” and “motivation”), turn up at a particularly interesting point, where this mystery about where the “drives” to “pursue” simplifying the test criteria (for “AI” to accomplish “curve fitting” of its responses) comes from.
How it is a total mystery, because there is nothing to suggest that it is merely imitating the behaviors exhibited by the creators, (or makers, innovators, revolutionaries, visionaries, programmers, prompt engineers, whatever they’re called).
Everyone knows that they bend over backwards, to apply every reasonable doubt, to matters where a behavior just spontaneously becomes emergent, and generated an enormous hype around the big mystery.
We know that they don’t alter the “matrices,” (simplify the measures used, make the tests easier, etc.) when it comes to data used in discovering things like “emergent abilities,” (which, upon reviewing the criteria used for evaluation, vanished, because it had been extremely simplified for testing “AI”…. Okay, so maybe that was a poor example … but there’s gotta be some good ones, right?)
youtube · AI Governance · 2026-03-23T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
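A record like the one above can be held in a small typed structure. This is a minimal sketch, assuming only the four dimensions and the value vocabularies visible in this table and in the raw response below; the class name, field names, and `validate` helper are illustrative, not the tool's actual schema.

```python
from dataclasses import dataclass

# Value vocabularies observed in this page's data; the real tool may allow more.
RESPONSIBILITY = {"none", "ai_itself", "developer", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "contractualist", "mixed", "unclear"}
POLICY = {"none", "regulate", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference", "mixed", "unclear"}

@dataclass
class CodedComment:
    """One comment's coding across the four dimensions shown above."""
    id: str              # e.g. "ytc_…" (YouTube comment), "ytr_…" (reply), "rdc_…" (Reddit)
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        # Flag any value outside the vocabulary seen in this dataset.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```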
Raw LLM Response
```json
[
{"id":"ytc_UgxRHj_GqoTuKUuo8z54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxKkolzCmNiXNpum1F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgximKBdniY8witwtEp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzbIo26YunXGXwSagR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw0w9lGkc22srY7CX54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwrWF_VuGcSgrSOyqt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyaAcgmkYhN03Aei0x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSeaQIdDAAFYvWuOt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzHj2EQ7AGsA9en_854AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgznswjF1WAiIvs34pl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
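The raw response is a JSON array with one object per comment in the batch, so "look up by comment ID" reduces to a parse-and-index step. A minimal sketch of that lookup, assuming the response text has the shape shown above; `find_coding` and the `raw` sample string are hypothetical, not part of the tool.

```python
import json

# A one-record example in the same shape as the batch response above.
raw = ('[{"id":"ytc_UgximKBdniY8witwtEp4AaABAg","responsibility":"unclear",'
       '"reasoning":"mixed","policy":"unclear","emotion":"unclear"}]')

def find_coding(raw_response: str, comment_id: str):
    """Parse a batch LLM response and return the coding dict for one comment ID."""
    records = json.loads(raw_response)           # list of per-comment dicts
    by_id = {rec["id"]: rec for rec in records}  # index the batch by comment ID
    return by_id.get(comment_id)                 # None if the ID is not in this batch

# The comment displayed above was coded in this batch:
print(find_coding(raw, "ytc_UgximKBdniY8witwtEp4AaABAg"))
```

Note that the displayed Coding Result (unclear / mixed / unclear / unclear) matches the third entry of the raw array, which is what such a lookup would return for this comment.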