Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "It’s sad and most certainly frustrating for to see people say all this kind of s…" (ytc_UgzplsGCZ…)
- "anyone who thinks auto pilot in this day and age is true autopilot that negats t…" (ytc_Ugxr3KETI…)
- "Unless we can envision ourselves as a singular humanity on this planet, the end …" (ytc_UgxtX0Wzh…)
- "they need to stop now and figure out all the answers before they bring ai on ful…" (ytc_Ugyg3wXqd…)
- "13:16 Yea man, totally ignore that access to devices and technology are expensiv…" (ytc_UgxhK4p6n…)
- "Most people don't even need AI to control their brains. Propaganda will do it fo…" (ytc_UgzVVYzie…)
- "realistically the problem isnt the program. its the people. were looking at it l…" (ytc_UgykFpIr8…)
- "I think part of the push AI gets isn't just from AI bros, but also big companies…" (ytc_UgylK15XT…)
Comment
... decent chatbots can use metadata to stop hallucinations. And then it can make sure the info outputted matches the reference data (that just costs more tokens per request).
So kinda gaslighting ppl into thinking that the systems are not as capable. Yes AI models by themselves hallucinate but structured outputs and AI don't really hallucinate as you can simplify the information the AI needs to process dramatically.
The labs have stronger models and agents by the way. Why make them release faster due to inaccurate press?
youtube
AI Jobs
2026-03-23T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzq9a9jZkhuk-8-hSF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwPtz384PEC7Gs7mkl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyvGq7hsBYuRzxXsUh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwlWWQgRRtfVNXbVA14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxckQxh9xgl5ruxQGR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzeweMvak4uit-XQc94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyI9M62YLOAuyOZEvV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx85i8uz7Rq9h0X0s94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyjzprzZ-873V8QH9Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx3RY8M0Wk-Pugbfd54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
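The raw model response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for lookup by ID (assuming this exact schema; the sample array below reuses two rows from the response above and is not the tool's actual code):

```python
import json

# Two rows copied from the raw LLM response shown above.
raw = '''[
  {"id": "ytc_Ugzq9a9jZkhuk-8-hSF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyjzprzZ-873V8QH9Z4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# Parse the array and build an id -> coded-dimensions lookup table,
# which is what a "look up by comment ID" view needs.
codes = {row["id"]: row for row in json.loads(raw)}

# Fetch one comment's codes by its ID.
row = codes["ytc_UgyjzprzZ-873V8QH9Z4AaABAg"]
print(row["responsibility"], row["emotion"])  # company outrage
```

In practice a model's output may contain malformed or duplicate rows, so a production version would validate each object against the expected keys before indexing.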