Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
With all due respect, it's hard to talk intelligently about AI if you don't use it. I agree with much of what both of you say about using AI to "write," and also about the ethics of how your work can be used to train AI. I'm a software developer and an aspiring writer, and I burn through vast numbers of tokens a day doing my work, both software and, to a lesser degree, writing. Do I use AI to write something for me? H#!! no. That's my favorite part. But there is so much more that AI is useful for.

* Create your ideal editor persona: provide instructions for the sort of editor you would like to work with. You can leave it as an AI, which I do sometimes, or tell it to be a person. Sometimes I make mine something like: a professor of philosophy and comparative religion who helps writers...
* Or, I could download all of the transcripts from your (Brandon Sanderson) lectures and say: here's my book. Help me break down the plot and character arcs of the major characters to see...
* Or: you're a science fiction and fantasy nerd. Your favorite books are LOTR, ... Here's my book. Tell me what you think.

Now you can have a discussion with a thoughtful persona who can offer surprisingly insightful comments on your work. The bonus is that it's there 100% of the time, and you don't have to wait a month for your friend to read your book and give you vague feedback.

One thing I've found is that you need to give the AI *some* persona and really drive home that you want fair and critical feedback. The one weakness AI tends to have is that it can be too much of a "yes-man" and just praise everything you do. This has been the hardest thing for me to get right in the system prompt, but I think I've finally figured it out. Just recently the AI pointed out a flaw, something I might never have noticed because I have 100% knowledge of my universe.
I thought I had expressed enough that my character's actions made sense, but after the AI pointed it out, I saw there were gaps in what the reader could know, and so there would be confusion. This is the sort of thing that really good beta readers will point out. But at what cost, and how long would it take? I was able to fix this flaw during the draft phase, not a year later when beta readers may, or may not, notice it.
youtube 2025-06-27T21:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzSLJe6eYx7Zb1kI2l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxcYkG_hpBL44Md7cV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzoOwY7e1vuMdDy0gR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxS_Jsre_2lX7nD-Ud4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzWjJqw5GJj2CzdYrx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw58VDZoEd63XqLeBt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx8WnjpgLhGnttb49Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwCb_c4Mee6KR4lDlx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwEZRtcd_uEj1crGKR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugynk3668n3mP1Z7Htt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
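The coded dimensions shown above correspond to one entry in this array. A minimal sketch of how such a raw response can be parsed and indexed by comment id, assuming the model output is valid JSON (the ids here are copied from the response above; the two-entry excerpt is illustrative, not the full batch):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes.
raw = '''[
  {"id":"ytc_UgzSLJe6eYx7Zb1kI2l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxcYkG_hpBL44Md7cV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

# Index the rows by comment id so one comment's dimensions can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_UgxcYkG_hpBL44Md7cV4AaABAg"]
print(row["reasoning"])  # mixed
print(row["emotion"])    # indifference
```

Indexing by id makes it easy to cross-check the rendered "Coding Result" table against the raw model output for any single comment.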