Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The irony of early on in the podcast talking about how people don’t admit to not knowing something, to talking about how no one actually understands LLMs, to taking percentage chances of catastrophe seriously. Neither Dario (Anthropic) nor Elon actually know any more than anyone else about chances of catastrophe. They are purely speculating, plain and simple. The correct answer to “what are the chances of AI catastrophe?” is: “I don’t know, and neither does anyone else”
youtube AI Moral Status 2025-11-07T04:3…
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzvBR3xx6FYttlkxYt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwQSTTf7v2pVbA_xQV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz2BPndcHlNRgzcuZl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxfn1OzBs-TIQjcWXJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzHTHA9O3SBJgyUy4J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx0GDbnzyQM4Vdlb7x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy7OHLiO9WMY1At9Dp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzZhAxON4WPIXn7HDR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw_DQynDVHjJvl2m9t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgywzpC9VikcdWfMXuV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
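The raw response above is a JSON array, so the coding for a particular comment can be recovered by parsing it and indexing on the comment id. A minimal sketch (the variable names are illustrative, not part of any tool shown here):

```python
import json

# Two entries copied from the raw LLM response above; in practice the
# full response string would be parsed the same way.
raw_llm_response = """[
  {"id":"ytc_UgzvBR3xx6FYttlkxYt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwQSTTf7v2pVbA_xQV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]"""

# Parse the array and build an id -> coding lookup.
codings = json.loads(raw_llm_response)
by_id = {entry["id"]: entry for entry in codings}

# Look up the coding for the comment shown in this section.
coding = by_id["ytc_UgwQSTTf7v2pVbA_xQV4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# -> none mixed none mixed
```

The printed values match the Coding Result table for this comment, which is a quick sanity check that the table was filled from the second entry in the model's response.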