Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He assumed the AI program, knew his trick question was a trick question. He gives no basis for this assumption. The program simple detected subjectivity (no objective answer). The program answered with the most plausible answer, for which it was programmed to do. So if we got programs asking questions, when their not programmed to do so, and if their being funny, when their not programmed to do so, then we got something to talk about. Jedi-ism is tagged as actual religion, true religion, .... then the answer might actually make sense to a program.
youtube AI Moral Status 2023-04-17T04:4… ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxoPRd9t0nOZJZiuoB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxQD0SVIts0wCZ2_CJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyOBM-yyDR_kVQCbxB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw9O7ZNzNzlt_kbzUt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyHpNbGJL0Sx7yhVcB4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]