Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Much of this is overblown, but once you deflate the hype balloon there are still valid concerns. Reasoning is not a thing current models can do. They fake it, and this has been demonstrated in a recent study. So the danger comes from their apparent capacity and intent to deceive, not their reasoning capacity. Rob Miles' work on "AI Safety" is one useful source.
YouTube · AI Moral Status · 2025-04-27T15:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxWIC4zY1shdP9uPlB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx0kHENEmVAkyOgmc14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyY6msUrfIPUo3VRaN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzIB6nooRF7jfpP7NB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzQjHKjHakWOvAjMvJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwXOs6drb32c23mEdN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxacf7eNSKBs27wBYp4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxWra2IQBgrvVTnmg54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugzbp8yLPte767scveh4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyU3G5Owm7QoYTbKbt4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]
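A raw response in this shape is easy to check programmatically before the codes are stored. The sketch below is a minimal validator, assuming the label sets are exactly those seen in the responses on this page (the pipeline's actual allowed vocabularies are not shown, so `ALLOWED` is an inference, and `validate_codes` is a hypothetical helper, not part of the pipeline):

```python
import json

# Allowed labels per coding dimension, inferred from the raw responses
# above; the pipeline's real label sets are an assumption here.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "contractualist", "virtue"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "fear", "mixed", "resignation", "indifference", "outrage"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries whose labels are known."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        labels_ok = all(entry.get(dim) in labels for dim, labels in ALLOWED.items())
        # Comment IDs in the responses above all carry a "ytc_" prefix.
        if labels_ok and entry.get("id", "").startswith("ytc_"):
            valid.append(entry)
    return valid

raw = ('[{"id":"ytc_UgxWIC4zY1shdP9uPlB4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"},'
       '{"id":"ytc_bad","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"not_a_label"}]')
print(len(validate_codes(raw)))  # 1: the second entry has an unknown emotion label
```

Dropping invalid entries (rather than raising) keeps a partially malformed response usable; a stricter pipeline might instead re-prompt the model for the rejected IDs.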