Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just for anyone curious, many people now run LLMs (Large Language Models) locally on their own equipment. You can run pre-trained AIs on your PC right now for free. I run Ollama with Open WebUI in a docker container. It runs on a Poweredge R720 with a 3060 Ti and I get almost instant responses from most of the AI models. Some larger models take a couple seconds but it works without issue. It's SUPER cool. I'd recommend it to anyone with a home server/homelab.
Source: youtube | AI Moral Status | 2024-03-23T18:2… | ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwprTIedyyKgBEYaTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyJnt8Ct9Uizd7oHaB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx0hGax6to04s8-A-Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwIIWZFiloXtIsXfnt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7IO3qY6-i8cHL5cF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgylJHJDUYjTW4HKrJJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyUX0k4nyE_x4n01lx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyds865Skm_9p_U08B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzXeeEl_-BZxi_JTrZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQ_crdfoIObzb3GXF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
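A raw batch response like the one above can be turned into per-comment codes by parsing the JSON array and keying each record on its comment id. This is a minimal sketch, not the tool's actual implementation; the function name `parse_coding_response` is illustrative, while the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the JSON shown.

```python
import json

def parse_coding_response(raw: str) -> dict:
    """Map each comment id to its coded dimensions."""
    records = json.loads(raw)
    return {
        r["id"]: {k: v for k, v in r.items() if k != "id"}
        for r in records
    }

# Example with one record from the response above
raw = ('[{"id":"ytc_UgzXeeEl_-BZxi_JTrZ4AaABAg",'
       '"responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')
codes = parse_coding_response(raw)
print(codes["ytc_UgzXeeEl_-BZxi_JTrZ4AaABAg"]["emotion"])  # indifference
```

Looking up the id shown in the coding result yields exactly the dimension values displayed in the table.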