Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think llms are a dead end, but they're not the whole enchilada. As a way to generate output that sounds good they work quite well, but they need to be fenced in by tools that can allow the system to access hard data and check itself. Llms + MCP servers and perhaps some other pieces of the puzzle are probably where we'll end up, but it's a bit too early to predict which parts those will be.
Source: YouTube · AI Responsibility · 2025-10-02T10:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyZD5SIXq0wNRALfyt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx5bEin4fVL6T_tQTZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxLmqdjmW43JJ9JHTV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyRVGdr_n8bFiUShg94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzaO2UU8dQiBM8e5Rt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxBgsxIwDw7ZDR3uyx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugw1IwZoxRIDnDYxbC14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxKFUP0J_xKMbbdItd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxOXjt-24Fa61t8dQV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzQAE9dfsk7ppQl1Hh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
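The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions. As a minimal sketch of how such a response can be parsed back into per-comment codes (the helper name `parse_coding_response` is hypothetical, and only one entry from the array is reproduced here; field names match the response shown above):

```python
import json

# One entry from the raw model response above, kept verbatim.
RAW = ('[{"id":"ytc_UgyZD5SIXq0wNRALfyt4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Map comment id -> {dimension: value}, skipping malformed entries."""
    out = {}
    for item in json.loads(raw):
        # Keep an entry only if the id and all four dimensions are present.
        if "id" in item and all(d in item for d in DIMENSIONS):
            out[item["id"]] = {d: item[d] for d in DIMENSIONS}
    return out

codes = parse_coding_response(RAW)
print(codes["ytc_UgyZD5SIXq0wNRALfyt4AaABAg"]["emotion"])  # -> resignation
```

Validating the presence of every dimension before accepting an entry guards against partially formed model output, which is common when a response is truncated.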