Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "This guy seems to know what he's talking about but one question though.. if we a…" (ytc_UgyLeGgnE…)
- "The best part with call centers is that customers will do anything possible to a…" (ytc_Ugy7glpcN…)
- "They wouldn't be investing so much in AI if it wasn't going to replace people…" (ytc_Ugz2MhW6z…)
- "KILL AI ..............while you still can. Or would you rather wait until it de…" (ytc_Ugx9KqPZ7…)
- "training your own LLMs for such purpose would need much more than just company's…" (rdc_mjz1dio)
- "@jeffcard1A I agree that, ultimately, it's a "people" problem but someone, some…" (ytr_UgyhoHKm5…)
- "@prakharupadhyay4564 “Yes, you’re absolutely right. We all know how the Industr…" (ytr_UgynIUIkg…)
- "The irony is Anthropic probably had no idea this happened. Their web crawlers an…" (ytc_Ugy05vLzW…)
Comment
I'm an English professor. I have some suggestions for students for how they can use AI to be helpful without simply using it to generate text: my favorites are instructing ChatGPT to respond without using full sentences (it'll make suggestions in phrases and words in a bulleted list rather than sentences and paragraphs, which can be enough for a clue but not enough to copy/paste) and prompting it to ask questions that will help students where to go next without the model generating the work itself, like "Ask me questions that will help me figure out how to start this assignment" (we work on coming up with good prompts to do this in class, so AI can help us learn, not help us AVOID learning). I understand that sometimes students get a little lost and just need an idea of where to go next when they're writing, and besides, there's no closing Pandora's box, so we'd better start building a way forward. That doesn't mean I'm okay with generated writing... I read student writing all day long, and I use ChatGPT myself, so I can generally notice the difference. When I do, I don't get upset and I don't accuse students of using it, I just ask them to help me understand how they wrote their paper and I explain why I thought it could have been at least partially generated. I know there'll be times when I'm wrong, and I don't want students to think I believe they're not intelligent enough to write their own papers--it's not that I think they aren't "smart" enough to come up with something, because ChatGPT doesn't usually generate BETTER papers than my students do. It depends on specific circumstances, but usually student writing is WAY better. I just want my students to be able to do what ChatGPT can't, because that's what's going to secure a good career for them after graduation. We don't have to worry about AI taking our jobs if we recognize and nurture our own originality... it can't replicate that.
Examples of ways I've used AI to help ME, though: "How can I improve this writing prompt for clarity?" "What are some possible questions students might have about this grading criteria?" "What are some ideas for ways to make this activity more interactive?" I did once ask my students to let me know if they would be okay with me trying grading using AI to see if it would be useful, and most students gave me permission to try, knowing it would supplement my grading and not replace it... but I stopped testing after two papers. I hated it. I think that was a year ago.
youtube
2025-08-01T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy01fFLthOtS85caax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx86YNRzl0Gmk8OyV14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyUf2Sm-hv_UWpgE8N4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwwaCrZ6AL5ipN2kft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwAdfQejPdy4tHKlF94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxxntC3ReW-b4h3vLh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwHjsxNipHHcF_BZjt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxjrs-J8Pxhf9BPB8h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz_BYE9LiJAeAh-Ill4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdvSQeR0r7DM8P5lZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
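A raw response like the one above is a JSON array of per-comment coding objects, one per dimension in the table (Responsibility, Reasoning, Policy, Emotion). Before accepting a batch, it is worth checking that each entry parses and uses only known category values. The sketch below is a minimal validator; the allowed values are inferred only from the samples shown on this page (the full codebook may define more categories), and `validate_batch` is a hypothetical helper name, not part of any library.

```python
import json

# Allowed values per dimension, inferred from the samples on this page.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "industry_self", "regulate"},
    "emotion": {"fear", "approval", "outrage", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and drop malformed or out-of-schema entries."""
    entries = json.loads(raw)
    if not isinstance(entries, list):
        raise ValueError("expected a JSON array of coding objects")
    valid = []
    for entry in entries:
        # Collect every dimension whose value is missing or outside the codebook.
        problems = [
            f"{dim}={entry.get(dim)!r}"
            for dim, allowed in ALLOWED.items()
            if entry.get(dim) not in allowed
        ]
        if not entry.get("id"):
            problems.append("missing id")
        if problems:
            print(f"skipping {entry.get('id', '?')}: {', '.join(problems)}")
        else:
            valid.append(entry)
    return valid
```

In a pipeline like this, silently dropping bad entries and logging them (rather than raising) keeps one malformed object from discarding an otherwise usable batch; the skipped IDs can then be re-queried individually.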