Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLM’s are supposedly flawed and world models are needed. Everyone thinks Apple is behind in AI, but their FlashVLM at 85x to first token and 3x smaller computer done on 8 H100 versus 64 for other models needs to be explored more. Since FlashVLM needs a LLM. Why not add Spike Mind LLM to Flash since Spike Mind is compact and efficient. FlashVLM fits on a MacBook Pro today. Probably Spike Mind LLM and Flash do not need big cloud data centers and can run locally!
youtube AI Responsibility 2025-09-30T19:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz9DxiPNwGFUp7R8TF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzC0OgDS7p--s2oKop4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx2_Kf8stGE5ax_gfN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyow-KhocEEKFLLnbp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwUTLl_uYMANYUapFN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxWsaBb-31XmMJ7jcV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwYx8P3iUB9o5kGRr54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzm870iGHsaC3CKZ694AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwkThCYEFlqpOE8opt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxZgyJJDk4I2te_b8N4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]
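The coding result shown above for this comment can be recovered from the raw response by parsing the JSON array and indexing the entries by comment id. A minimal sketch, assuming the model output is a valid JSON array of objects as shown (only the first entry is reproduced here for brevity):

```python
import json

# Excerpt of the raw LLM response: one code object per comment id.
raw_response = """
[
  {"id": "ytc_Ugz9DxiPNwGFUp7R8TF4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]
"""

codes = json.loads(raw_response)

# Index the parsed codes by comment id so any comment's coding
# result can be looked up directly.
by_id = {entry["id"]: entry for entry in codes}

result = by_id["ytc_Ugz9DxiPNwGFUp7R8TF4AaABAg"]
print(result["responsibility"], result["emotion"])  # none approval
```

Looking the entry up by id rather than by position keeps the lookup correct even if the model returns the objects in a different order than the comments were submitted.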