Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Certain parts of the us are now enacting laws like this. Anything the government…
ytc_Ugxt72jGL…
@Agente13840 "cry more" "luddite" "cope"
finally decided to learn the concept …
ytr_UgyrU2Xbm…
I am an artist and Im
Really scared of AI i dont want it to take my art……
ytc_UgyIZuUPg…
We were so preoccupied with the thought “if we could”, that we never stopped to …
ytc_UgyIj6Yn8…
In mere seconds, an AI can spit out art that looks exactly like a specific artis…
ytc_UgwMt4VDZ…
@Waysed7330 90% of everything is going to be garbage. Total AI generated slop ev…
ytr_UgywOVb4B…
Humanity is always looking for a way to destroy itself we started with clubs to …
ytc_Ugx_4tfvf…
sounds about right. i cant wait for A.I just to go full troll.
idiot: how do u…
ytc_Ugxo_xfir…
Comment
I think this is a general problem, if funding has bad incentives, management gets bad incentives, staff gets bad incentives and students typically go path of least resistance. Before AI, they copied stuff or got someone to do it for them.
AI is just a tool, and LLM-s are kind of stupid, because they don't understand or verify stuff, it is pretty frustrating and can be verified in an area you yourself are an expert in.
I like AI that automates repetitive stuff, like yes I can manage any kind of citation format, but it is a chore, and you still have to verify, because it can mess up, but if it manages the reference list with the numbers, you'll appreciate it when you have to add something to the 22. place from 60+ (with the et al. format this is a non-issue). It could be very good for looking up things from lists, like filters in databases do. AI can be good in pattern recognition too, yay for cancer research, airport screenings, and many other applications. It also can do a lot of specific stuff way faster than humans, but just like mechanical failsafes, it needs properly managed.
On the other hand I think current AI, especially LLMs can't be used to write articles or review them properly, so they shouldn't be used in this way. Could it catch bad grammar? Sure. Could it catch plagiarism? Yes, but it is stupid, so the same authors with similar references can take up a big part of the similarity budget. Does it understand stuff? No. Does it understand math? No, with some caveats. Can it translate? Yes, it is getting pretty good at it, with some limitations. Can it get you resources or give a summary of data? Yes, but it can hallucinate and straight up tell stuff that is not true or does not exist.
People should stop trying to use the advanced version of autocorrect rolled together with a huge library as an expert. Often it is at the level kind of like a telephone enquiry call center, where a random, possibly underpaid guy looks up stuff on Google, but without agency, and tending to make things up.
AGI on the other hand is general intelligence, that would be able to do a lot of stuff, but would pose a lot of interesting ethical questions. Like will you get JARVIS or HAL 9000. But it is way cheaper than workers, so I expect AI will spread in the academia, call centers, wherever, even if it is worse in many regards.
youtube
2025-08-01T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwbmOxkNk15GEiihgt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxUIjFewDKnLZmqRKV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwNC0fZeicw9bSY5eF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCk5qutZaOwEJ3k8J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz_Pd63NZAhbhwUetB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlzhePXTGdBg4McWt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxqAVqpP4eXN_JMpr94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxlZA-AgN6e7CBzUUd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw_tfL0RAoW_mnpXfd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAwV8aCD2L4JEI-J54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
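The model returns one JSON array per batch, with one object per comment ID, so looking up a coded comment reduces to parsing that array and indexing by `id`. A minimal sketch of that lookup, assuming the response parses as plain JSON (the sample rows are taken from the batch above; the function name `lookup_coding` is hypothetical, not part of the tool):

```python
import json

# Two rows copied from the raw model response above; a real response
# would carry the full batch.
raw_response = """
[
  {"id": "ytc_UgxlZA-AgN6e7CBzUUd4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzCk5qutZaOwEJ3k8J4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a batch response and return the coding row for one comment ID,
    or None if the model did not emit a row for that ID."""
    rows = json.loads(raw)
    by_id = {row["id"]: row for row in rows}
    return by_id.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgxlZA-AgN6e7CBzUUd4AaABAg")
# coding["emotion"] == "resignation", matching the Coding Result table above
```

Building the `by_id` index once per batch also makes it easy to spot IDs the model dropped or invented, by comparing the index keys against the IDs that were sent in.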