Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
AI models can be amazing, but they're not all the same and they don't all behave the same way. It's a bit like walking into a library: you want a book but have no idea where to look, so you ask a librarian for help and guidance. She is what you could call the expert, but that depends on her expertise; she might know history but not politics, understand English but not French, or be a mathematician yet absolutely useless at creative writing, playing role-playing games, or impersonating a character (being a bot). There are so many variations. So you have the front end of a model, then the experts (a model usually has quite a few, and you can customise how many you would like to use), then the data (the library). A larger model might not have the data you specifically want, so larger is not always better. But you can take a model that specialises in your field, like creative writing, and supply it with examples you would like it to use; it can then use your data as its reference library while still being the expert specialised in the field you need. It's complicated unless you understand the fundamentals of AI models: how we teach them, how they learn, and ultimately how we interact with them. The amount of bull I read about AI is crazy, but make no mistake: they are the future.
youtube AI Moral Status 2025-07-05T09:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyZJi4ZtG3LIco3r8J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyfR5hMl6EEn31KKsp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxJ6jqHKQC0StBCjfl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxKWRt6_TkOBZ1Ag054AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzf-oEotI7oowA5gqN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy5y0O6b2JAtdX6mkV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy0l4r2pO1Ww6-55kV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzEtJ7qemS8gkBUfZ14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugz1cVNzD-Yim7yNexV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzmJOhUuTEWGidP2EJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
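The coding result shown above corresponds to one record in this raw response, keyed by comment id. A minimal sketch of how such a lookup might work, assuming the field names in the JSON above; the `lookup` helper and the `"unclear"` fallback are illustrative assumptions, not part of the actual pipeline:

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id":"ytc_UgyZJi4ZtG3LIco3r8J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzEtJ7qemS8gkBUfZ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(records, comment_id):
    """Return the coded dimensions for one comment id (hypothetical helper)."""
    for rec in records:
        if rec.get("id") == comment_id:
            # Fall back to "unclear" if the model omitted a dimension.
            return {d: rec.get(d, "unclear") for d in DIMENSIONS}
    raise KeyError(comment_id)

coded = lookup(json.loads(raw), "ytc_UgzEtJ7qemS8gkBUfZ14AaABAg")
print(coded)
# {'responsibility': 'developer', 'reasoning': 'consequentialist',
#  'policy': 'industry_self', 'emotion': 'approval'}
```

Matching by id rather than by position makes the join robust if the model returns the records in a different order than the comments were supplied.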