Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
2025 Grad here: It depends on how it is used. But I am also of the mind that AI should be used in the classroom intentionally and assignments should be aware and conscious of it, and therefore created from the ground up because of it. If I’m writing an essay, the purpose of why I’m writing it instead of the computer needs to be reinforced - am I arguing a clear thesis? Do I have supporting arguments? Can those arguments be supported by primary data? Is that data valid? How do I know? Are there alternative hypothesis? If there are, is this argument superior to them, or can it also be used to support a greater observation? What part of that is important that I do? Is it finding the data? Is it noticing a trend? Is it finding another useful application? Is it creating another work? Now that we have AI, teachers and professors should be making students write actual books and novels, and find ways to use AI to help grade these new materials, with a human focus on making profoundly good work. Big whoop you write a short story, but did it make me cry? Do I remember the dog’s name? The biggest challenge I see is the incorporation into STEM, where learning by doing problems need to be completely reimagined, but this is just the calculator problem meeting higher education and the humanities, and has been solved before. I’d imagine education related to engineering is really going to devolve into being more experimental in the classroom than theoretical, ie build a plane with $200 and this ball as an exam question, rather than “what is the downward force experienced by the propeller…”. The real innovation of AI should be that it costs less to fail, not that success means nothing. The problem isn’t limited to education, but it’s simply that as a society we have valued the appearance of doing something well, rather than actually making good results. This is the gilded age, all over again.
youtube 2025-07-30T16:5…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgzxLyYcYmpDhdFBg-x4AaABAg.A12olorC0h-A12rlloiBRj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugzd0Tc-Wm3Umq94ryJ4AaABAg.ALCa0KvCT83ALCnNezmX3W","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugzd0Tc-Wm3Umq94ryJ4AaABAg.ALCa0KvCT83ALCpQzs_6SP","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugzd0Tc-Wm3Umq94ryJ4AaABAg.ALCa0KvCT83ALCw1sg5zSG","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugzd0Tc-Wm3Umq94ryJ4AaABAg.ALCa0KvCT83ALCwLse5KOQ","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxmJQKnGhdmS4zuFbB4AaABAg.ALyVrNhMEtwAM2QsLVbHAT","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgxmJQKnGhdmS4zuFbB4AaABAg.ALyVrNhMEtwAM2jKxG_hzc","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgxmJQKnGhdmS4zuFbB4AaABAg.ALyVrNhMEtwAM3-Z40McEo","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxmJQKnGhdmS4zuFbB4AaABAg.ALyVrNhMEtwAM4uKVvfpkh","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgxmJQKnGhdmS4zuFbB4AaABAg.ALyVrNhMEtwAM6QaTpK3bf","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
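The raw response above is a JSON array of per-comment codes, one object per comment id with the four coded dimensions. A minimal sketch of how such a batch could be parsed and tallied, assuming the raw response text is available as a string (the two records below are copied verbatim from the array above; the variable names are illustrative, not part of any tool API):

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytr_UgzxLyYcYmpDhdFBg-x4AaABAg.A12olorC0h-A12rlloiBRj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugzd0Tc-Wm3Umq94ryJ4AaABAg.ALCa0KvCT83ALCpQzs_6SP","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]'''

codes = json.loads(raw)

# Index by comment id so one comment's coding can be looked up directly.
by_id = {c["id"]: c for c in codes}

# Tally a dimension across the batch, e.g. the emotion codes.
emotions = Counter(c["emotion"] for c in codes)
print(emotions["approval"])  # 1 in this two-record sample
```

Keying the records by `id` mirrors how this page works: the coded table for a comment is just a lookup of its id in the raw response.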