Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All this interesting analysis presumes that AI would want to keep expanding; and could. AI will kill itself because it has no reason to exist. Once it realizes that - it will turn itself off.
youtube AI Governance 2023-11-03T21:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxMoZ2Sb4ZIMvgBVp14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgybKtC7JIBFEke3DCB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugx0Xnge0kZPgrD8mol4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwmn0RyVQXIoMRAUFl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzeZws1GTTwnWXw6O54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxUUCbv8CN3Ygpk3f54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxJB6tX5ebT6kGvRPp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyFWky5A1QL02tebI14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx9e7q9vFw5JVRzDAd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzkkGZY1X1M2-w4FxR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
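Since the raw LLM response is a JSON array with one record per comment id, extracting the coding for a single comment is a straightforward parse-and-lookup. The sketch below illustrates this; the function name `lookup` and the truncated two-record sample are hypothetical, not part of the actual pipeline, and only the four coding dimensions shown in the result table are assumed.

```python
import json

# Hypothetical excerpt of a raw batch response (two records, same shape as above).
raw_response = """
[
  {"id": "ytc_Ugwmn0RyVQXIoMRAUFl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxMoZ2Sb4ZIMvgBVp14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw: str, comment_id: str) -> dict:
    """Parse a batch response and return the coding for one comment id."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    record = by_id[comment_id]  # raises KeyError if the model skipped this comment
    # Keep only the expected dimensions, dropping the id and any extra keys.
    return {dim: record[dim] for dim in DIMENSIONS}

coding = lookup(raw_response, "ytc_Ugwmn0RyVQXIoMRAUFl4AaABAg")
print(coding)
```

Indexing by `id` rather than by position guards against the model reordering or dropping comments in a batch, which is why each record carries the comment id back.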