Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- I'm a Teacher's assistant at university, basically I'm a laboratory instructor. … (`ytc_Ugz-euLHb…`)
- @mr.mikesart7111 I haven't found the ai to duplicate anything other than style… (`ytr_Ugz05qpL-…`)
- I suspect that AI models are padding out conversations asking unnecessary questi… (`ytc_Ugy-hHVeQ…`)
- Could? It will. The gap between rich and poor is about to become astounding. AI … (`ytc_UgwhpR-1z…`)
- Art by definition is the personal expression of beliefs, thoughts, or emotions--… (`ytr_UgyezDmfn…`)
- Mark my words ai is very useful for future because this generation is able to do… (`ytc_UgwcMerdt…`)
- i did not expect a comment about gooning to be one of the best ones going agains… (`ytr_Ugyk-8b5F…`)
- we can program our own ai to be good ... put the constitution in it and see what… (`ytc_UgxnLYBBB…`)
Comment
Thanks for sharing, but I think you are off the mark. VR and blockchain don't really contribute to productivity; I'd argue that the internet and Excel are comparable technologies, but LLMs have an even more direct effect.
You could draw some comparison to the dot-com bubble for what happens when the hype goes bust, but if you want to argue that valuations are already overinflated, I don't think you understand the implications of AGI. If a company popped up with AGI, it's realistic for its value to be on the scale of the entire global economy. That's the promise behind an LLM bubble; I'd say the market is still within the realm of reality at this point.
> serious issues regarding security and accuracy that cannot be easily fixed because they're intrinsic to the technology.
As for this criticism, we are successfully reducing the impact of the problem, for example through context engineering, tool usage, deep thinking, subagents, and foundation-model improvements (e.g., GPT-5 hallucinates less and says "I don't know" more often). Not to mention "problem engineering" (lol) as people figure out appropriate use cases for these models.
> OpenAI is going into optimization mode
I'm sure profitability is a motive here, as you mentioned, but at the same time there are other reasons why GPT-5 is what it is. The big one is that reasoning turned out to be a much bigger deal than people expected. o1-preview is essentially the big jump from GPT-4o to GPT-5. OpenAI seems to have been pushing in the scaling direction for GPT-5 until the success of reasoning models, as indicated by GPT-4.5, which was probably intended to be GPT-5 when development started. o3 had to be released to stay competitive with other companies but was not polished or optimized enough to be called GPT-5.
Essentially, this apparent slowdown is actually caused by companies increasing the pace at which they launch models to remain competitive. GPT-5 is the consolidation of 15 months of improvements; it's a
Source: reddit · AI Jobs
Posted: 2025-08-09 (Unix timestamp 1754772623)
♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_n7u47ul", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7u2g5m", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n7u6q8o", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_n7scvyy", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_n811k2h", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
```
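The raw response is a JSON array with one record per comment, keyed by `id`, which is how the "look up by comment ID" view resolves a record to its coded dimensions. A minimal sketch of that lookup (the `lookup` helper and variable names are illustrative, not part of the tool):

```python
import json

# Raw batch response as shown above; each record codes one comment
# along four dimensions: responsibility, reasoning, policy, emotion.
raw = '''[
  {"id": "rdc_n7u47ul", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7u2g5m", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n7u6q8o", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_n7scvyy", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_n811k2h", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]'''

def lookup(records, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
coded = lookup(records, "rdc_n7u2g5m")
print(coded["emotion"])  # → indifference (the value shown in the Coding Result table)
```

Matching a Coding Result table to its record is then just selecting the row whose `id` matches the comment being inspected; unknown IDs fall through to `None` rather than raising.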