Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I feel like my profession will be okay... but this is really scary for my young …" (ytc_Ugx3-YmHn…)
- "What if in the future there is a girl that makes amazing art. She becomes the mo…" (ytc_UgzezS08D…)
- "I'd love to blame the Tesla, because after all if you are going to pass, then do…" (ytc_Ugy9ZekYx…)
- "Ai won’t replace 50% of white collar jobs. It will replace 99% of white collar j…" (ytc_UgyOuyGBJ…)
- "I'm surprised ChatGPT hasn't just finally told Alex to stay away from train trac…" (ytc_UgxevPi79…)
- "I wouldn't even file something that my intern thoroughly researched and wrote wi…" (ytc_UgwxpE-rD…)
- "This is the problem with the masses talking about AI. You don’t actually ever wo…" (ytc_Ugxtb3Fz1…)
- "tried one recently, looked natural but Winston AI still flagged a few weird tran…" (ytc_UgyvUD-kg…)
Comment
I read the article you showed. That article is literally a fork found in the kitchen. The participants had the AI write the essay, and then the judges asked questions about the essay. How the hell are they supposed to know if they haven't read the essay?
In the abstract they make the 4 months sound meaningful: “The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months”.
But on page 23 it says: “The study took place over a period of 4 months, due to the scheduling and availability of the participants.” So the time period itself had no significance.
Clearly you haven't read the article closely, because in session 2 they asked them to write an essay again. Guess what happened? The LLM group was able to, quote, do better.
In fact, on page 46 the article finds that the LLM and Brain-only groups had about the same word distance. Here is the quote: “The averaged distance showed that essays generated with the help of Search Engine showed the most distance, while the essays generated by LLM and Brain-only had about the same averaged distance”.
Anyway, in my opinion the article is biased and kind of empty, not good enough to cite as research. And the crazy part? The article was not even published. It has value, but it is not peer reviewed and has not made it into a journal. Still, the article is new; maybe it will.
Btw, only 18 participants were there for the 4th session, which is a significant drop. The same 18 participants is not a big enough sample.
Also, they did not measure the participants' skill at writing an essay.
They also only measured connectivity, not other cognitive measures. Cognitive measures vary, so weaker connectivity does not mean worse cognition; they noted this themselves, btw.
206 pages and 90-ish figures of analysis, but no clear explanation; somehow the conclusions are still presented as almost guaranteed.
Btw, the abstract mentions no limitations.
They also randomly decided to do a side quest and ask participants whether they “own the essay”.
Oh, and the researchers' jobs? They go like this:
- Eugene Hauptman: “Eugene is a faith-centric technologist, a serial entrepreneur, angel investor, advisor, and mentor.” (I took this from about.me.)
- Ye Tong Yuang: math and neuroscience student at Wellesley
- Jessica: designer
- Nataliya Kosmyna: AI researcher
- Xia Hao Lia: designer
- Iris: data scientist
- Pattie: media arts and science professor at MIT
- Ashley Vivan: I can't find anything about her
What a terrible video, do better. At least actually read the article, not just the abstract.
Source: youtube · Posted: 2025-10-24T14:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzEZxg3VsjNhx1fr6d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyEM5UovISn5iIU95l4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzrDUY5LVE6L12wfYh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy1GFvPPlFPfjGCEfh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzHEK-gw8WqSvSC3ql4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx7HN_wzRokp4nJux94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzqUvPCNk8ZGsI5CO14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwoeyALO1fRlSWGrVx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxh3juXUez34d-iDhV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyq0VIhvyOLVmRt07x4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
```
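Since the dashboard joins these raw records back to individual comments by ID, a minimal sketch of the parse-and-validate step, assuming the response is a JSON array with the five fields shown above. The `ALLOWED` sets are inferred only from the values visible in this one response; the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the coded samples shown
# above (an assumption -- the full codebook may include more categories).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "unclear", "liability"},
    "emotion": {"outrage", "approval", "fear", "indifference"},
}

def parse_coding_batch(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID.

    Raises ValueError if a record lacks an id, misses a dimension, or
    uses a value outside the inferred codebook.
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        by_id[cid] = rec
    return by_id

# One record copied from the raw response above.
raw = (
    '[{"id":"ytc_Ugy1GFvPPlFPfjGCEfh4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}]'
)
coded = parse_coding_batch(raw)
print(coded["ytc_Ugy1GFvPPlFPfjGCEfh4AaABAg"]["emotion"])  # outrage
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: each truncated `ytc_…` key in the UI resolves to one validated record.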