Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You forgot one thing that has stayed the same for 40 years. Artificial neural networks (and, by consequence, LLMs) basically cannot _learn_ addition, since it is a completely un-statistical concept. That means they cannot come up with abstractions, let alone new ones. For whatever reason, human brains _can_. We are a far cry from what we should call "intelligence".
YouTube · AI Responsibility · 2025-11-15T08:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugx0RwwXySOsH_1xpBR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxnA3KPL8BAnr2wiBV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz_mYik8njTgtUuyRt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzQsa5oatAo2lyABht4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwkQnoPL-kSlvRccfR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzZOMDEuLZF6coSayJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx07qJMUF3G1E1rfAV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxzawyYQ32Lnq7QrWx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz6WHuDAISruKoq9XB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwEGkkK8RySHDzadKh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]