Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with this is the P in GPT means "Pre-Trained". Which means it doesn't use current data to train itself. It uses past data. And you can ask chatGPT when it was last trained. Try asking a GPT model about a world event that happened yesterday - it won't know.
youtube AI Moral Status 2025-06-04T03:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzMLYgHuhNBBnbIaU14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxB2sPCpjG2Vc070qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwUzG8buSjJoJnm9fR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgymCddPjGQRuKlVZiZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxRZZ6NdMirjEBAxYt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxaCHztzQjkaH9UoG14AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyxkAvu_1ZnWDNB3VV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzkTJUw6wQkJNduW5h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw9VcvYL4cT3UGe_Zp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzMlWOKfNAdKXjOuQR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
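The raw response is a JSON array of per-comment codings, so the stored coding result can be cross-checked against it programmatically. A minimal sketch in Python: the comment ids, dimension names, and values are taken from the dump above, but the validation logic itself is illustrative and not part of the original tool.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (truncated here to two entries from the dump above).
raw = """[
 {"id":"ytc_UgzMLYgHuhNBBnbIaU14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgxB2sPCpjG2Vc070qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]"""

# Index the codings by comment id for quick lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Expected values from the "Coding Result" table above.
expected = {"responsibility": "none", "reasoning": "unclear",
            "policy": "unclear", "emotion": "indifference"}

record = codings["ytc_UgxB2sPCpjG2Vc070qx4AaABAg"]
mismatches = {k: (expected[k], record[k])
              for k in expected if record[k] != expected[k]}
print(mismatches)  # an empty dict means the table matches the raw response
```

Checking each stored result against the raw JSON this way catches silent divergence between what the model returned and what was persisted, e.g. after a parsing or id-mapping bug.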