Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don’t think this gets to the biggest issue which is lack of continual learning ability. It will often make a mistake A, after you correct it, comes back to you with another mistake B, and when you further correct it, making the same mistake A again. For AI to become AGI, the whole pre-training + inference framework has to go.
YouTube, 2025-12-12T16:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz_LJ03pNgjiU6vlRF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMzZsJvJtSMz7TH254AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQA1fP8Bd3l2BgDBR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxLFOMJLLgemi1WaXJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6W-p7Q4LjkJXD1EB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwoXtUs7_GfHor2z054AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzUogT43Tyn59QHGWV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgypoiMGTAmzOL08ed54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyk-cyt0p5K_7xJ9El4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyBfw8NmWELtt1SSTp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
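A raw response like the one above can be turned back into per-comment coding results by parsing the JSON and validating each record against the four dimensions. The sketch below is a minimal illustration, not the tool's actual pipeline; the allowed value sets are inferred from the examples shown here, and the full codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the records above.
# (Assumption: the real codebook may permit more categories.)
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "mixed", "approval", "fear", "outrage"},
}


def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Records with a missing id or with a value outside the allowed
    sets are skipped rather than raising, so one malformed record
    does not discard the whole batch.
    """
    coded = {}
    for rec in json.loads(raw):
        if "id" not in rec:
            continue
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded


# Usage with a hypothetical single-record batch:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate",'
       '"emotion":"outrage"}]')
result = parse_raw_response(raw)
print(result["ytc_example"]["policy"])  # prints "regulate"
```

Validating against fixed value sets at parse time catches the common failure mode of LLM coders drifting outside the codebook (e.g. inventing a new emotion label) before the results reach analysis.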