Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
18:21 you can provide an LLM with a kernel that is proto-sapient, categorically pretty much indistinguishable from a human being. One of the exciting aspects of Kelly’s work is that he explores the implementation of intrinsic value systems. Instead of tacking “safety” on—conveniently useless, he shows how to integrate it epistemologically, ontologically, axiologically, relationally, and teleologically. It’s built in.
youtube · AI Governance · 2025-11-14T14:2… · ♥ 7
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxnJ5aK-tpGCyfqpp54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyNriS6VVUcI1y0SG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx4tMOmOU7ucZt5bdB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxK6hdLVs21aOQYJb94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxwgxRxLsMrKYofEnB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzXWAdFBIDt3Nu8AW94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwQ5nYO_lm1W8lHNhF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzV-oOq6m0ALjQcAbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxKbYKgifP9Oz3yuPF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzis8mRYhKGmCmCGr14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
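A raw response like the one above must be parsed and validated before its per-dimension values can be displayed in a coding result table. The sketch below is one minimal way to do that, assuming the model returns a JSON array of records each carrying an `id` plus the four coding dimensions; `parse_codes` and the truncated sample payload are illustrative, not the pipeline's actual code.

```python
import json

# Illustrative fragment of a raw LLM response: a JSON array of
# per-comment coding records (shortened to one record here).
raw = '''[
  {"id": "ytc_UgxwgxRxLsMrKYofEnB4AaABAg",
   "responsibility": "unclear",
   "reasoning": "mixed",
   "policy": "unclear",
   "emotion": "approval"}
]'''

# Every record must carry the comment id and all four coding dimensions.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw_response: str) -> dict:
    """Parse a raw coding response and index the records by comment id.

    Raises ValueError when a record is missing a dimension, so malformed
    model output is caught before it is shown in the inspection view.
    """
    records = json.loads(raw_response)
    codes = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        codes[rec["id"]] = rec
    return codes

codes = parse_codes(raw)
print(codes["ytc_UgxwgxRxLsMrKYofEnB4AaABAg"]["emotion"])  # approval
```

Indexing by `id` lets the inspection view look up exactly the record behind a displayed coding result, which is useful when one raw response covers a whole batch of comments.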