Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like psychologists probably foresaw this. If you train goals through rewards, you get a system that optimizes for rewards. And if lying is a more efficient way to receive rewards than doing real work, then the system optimizes for lying. I know they tried to train AIs for intrinsic values, but since they can only judge the outcome, they can never be sure whether an AI actually means well or is just a very well-trained liar.
Source: youtube · Posted: 2025-11-06T07:0… · Likes: 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxQs_rf83amVBeKNPN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzCW19DkwO1YiyZ0dl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzisCij5LKjgqIuWB54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy1FtkdcGF-h0gHzvx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzXUjaxLh4DH13Yyc54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx4hWNqYmhr6uIchPh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgxKRoYdXiHDGNdDr5d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxCytUnVqe692S4Xg94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyB5AXJYNED6QvKsch4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx5SsWR8nMEfVOfWS14AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"fear"} ]