Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Anthropic seriously reads way too much into how their AI models work. I think they are dishonest about it and have decided that somehow humanizing the AI is good for investment hype. An LLM simply reads the prompt and combines the most relevant pretraining data into a response. Don't forget these models are trained on a ton of fiction books. Anthropic even lost a lawsuit because they trained Claude on several copyrighted books. So if you have things in your prompt about self-aware AI, it triggers science fiction stories in the training data, blends them together, and predicts a continuation of the story. This is THE FUNDAMENTAL UNDERLYING ability of all LLMs: to continue the context story. They get this ability from the way pretraining works. GPT-3 was great at it and a lot of fun. Whatever you put in the context, the model will play off of it and continue the story along. Start a story about an AI getting shut down, and it accesses novels and stories in its pretraining about sci-fi AI blackmailing people to not shut it down. If you didn't include any fiction about AI in the pretraining, this wouldn't be a problem, because it wouldn't have anything in the pretraining to continue the story with.
youtube AI Moral Status 2026-04-08T03:5… ♥ 4
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyKOyw6KKRNkD6oIfx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx_lOK5nBIpynhWNBR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxCkrmuNxXCfpZOPa14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwLz18QL_MlF1sNIkZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzfgIK9kCg5wPG7_P14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx2cWPPmQakdBCztRN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx_6LZeQkTc1mEMldZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxZ2un7Yi_YSLyYpCB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxf44NOg8weYdhs8LF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxAw_dg2vOAAgusi9l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
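A minimal sketch of how a batch response like the one above could be parsed downstream. The `parse_codes` helper is hypothetical (not part of any documented pipeline), and the `raw` string is a two-entry excerpt of the response shown in this log; raw model output occasionally closes the array with a stray `)` instead of `]`, which this sketch normalizes before parsing.

```python
import json
from collections import Counter

# Two-entry excerpt of the raw response above, ending with the stray ")"
# that sometimes appears where "]" was expected.
raw = ('[{"id":"ytc_UgyKOyw6KKRNkD6oIfx4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"},'
       '{"id":"ytc_Ugx_lOK5nBIpynhWNBR4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"none","emotion":"mixed"})')

def parse_codes(text: str) -> list[dict]:
    """Parse a coded-batch response, tolerating a trailing ')' typo."""
    text = text.strip()
    if text.startswith("[") and text.endswith(")"):
        # Repair the mismatched closing delimiter before JSON parsing.
        text = text[:-1] + "]"
    return json.loads(text)

codes = parse_codes(raw)
# Tally the emotion dimension across the coded comments.
emotions = Counter(c["emotion"] for c in codes)
print(emotions)  # Counter({'indifference': 1, 'mixed': 1})
```

Without the normalization step, `json.loads` would raise on the unbalanced `)`, which would leave every dimension for the batch at its `unclear` default.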