Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think you make a great point with the interpolation vs. extrapolation argument. However, AI was never meant to replace scientists or those who can solve unseen problems; it's meant to replace people who do more repetitive work within an interpolation environment. A simple analogy: what percentage of people could solve completely unseen, unprepared-for problems on whatever exam you're taking? Those are usually the 'hard' problems, and if that question type was completely unseen, your class is strong if 50% of people can do it. AI has the potential to seriously replace the people who ALSO can't extrapolate, and that represents a very significant percentage of white-collar workers; AI will indeed partially replace them. If demand only barely increases while the productivity of these workers rises, say one person does two people's worth of work, then probably 1 in 4 workers gets replaced, depending on productivity replacement and efficiency (not half, because of additional value generation, which does not scale linearly). I see plenty of people in this comment section and elsewhere thinking that the point, the 'damage' AI is supposed to do, is to fully, 100% replace whole workers or be the work engine of research. NO! AI just needs to do 50+% of the work of most white-collar workers; then businesses naturally need fewer people for the required output, leading to a large rise in unemployment and a fall in the participation rate. Right now, AI is only good enough to reliably do entry-level work and cannot yet match the output and value of experienced, highly skilled workers, so the headline unemployment rate may look OK. But zoom in on youth employment (the work AI is actually already good enough to replace) and you quickly see a huge cliff drop-off every year.
AI never needs to reach AGI. Let's say that in 5-10 years it gets decently, but not insanely, better than current top models like Gemini 3.1, Claude 4.6, and GPT 5.2, and then hits a wall (a reasonable take). Think about what level of work that would replace: it would be good enough to replace most average experienced workers in most non-physical fields while never needing to reach expert level in anything. I also agree with the mindset that this "AGI" isn't going to be reached just by scaling up parameters, without a major breakthrough in the underlying system of how AI works. Furthermore, I agree with the interpretation that we are already seeing diminishing returns in model capability. However, the cost and efficiency of token generation can still be hugely improved, to only a fraction of current costs. It's likely we hit a wall in model capability within 3 years and then spend the next decade reducing cost and improving speed and efficiency; that alone makes companies' cost of using AI drop significantly and makes AI far more efficient without the model's actual 'ability' getting any better. After all, even if the 'instant'-grade answers of AI merely reach the quality of the longer thinking modes, that's already a big upgrade without maximum ability being any greater, which again reduces the need for workers even more. Ultimately, I've looked at the arguments on both sides, the super pro-AI view and the pessimistic view, and concluded that both are very biased and leave real potential implications unconsidered. It's as if both sides are averting their eyes from the real benefits and problems AI brings, instead fixating on exaggerated targets: the pro-AI side on AGI by 2028 (bullshit), and the other side on the idea that AI is a pile of trash that can't do anything (a lot of comments on this video).
After all, most of the hallucinations don't even occur in the longer 'thinking' modes; they are generated by the instant-answer mode. Even if AI does nothing else to improve but becomes efficient enough to produce the quality of a 2-5 minute think in 2-5 seconds, that alone is already a big leap, as most casual users of AI don't pay for the premium tiers that have longer thinking modes. There is a massive gap in quality between the instant and paid thinking modes right now, so I don't blame the comment section.
youtube 2026-02-23T18:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxW67yrHatwGz8Z6VJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy1k_eySjY4_XjTaT14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyOhRPxzlLwL4kBbnR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwCIe_TxIQirkVUQ4p4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwgcUE1csbJyq4-Hzd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx8LieE_zGnNLp2heB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxPFda7mJcwqhhWk1J4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyZvn0Zjj-_R4jiicx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxZA3aIC200hLehTBB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwbZU1F0sN0y-8b6o14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
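The raw response is a JSON array with one object per coded comment, each carrying an `id` plus the four coded dimensions. A minimal sketch of inspecting it, assuming Python and the standard `json` module (the variable and field names below come from the record itself; nothing beyond it is assumed):

```python
import json

# Verbatim raw LLM response from the record above.
raw = '[{"id":"ytc_UgxW67yrHatwGz8Z6VJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy1k_eySjY4_XjTaT14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyOhRPxzlLwL4kBbnR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwCIe_TxIQirkVUQ4p4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwgcUE1csbJyq4-Hzd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx8LieE_zGnNLp2heB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxPFda7mJcwqhhWk1J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyZvn0Zjj-_R4jiicx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxZA3aIC200hLehTBB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwbZU1F0sN0y-8b6o14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}]'

records = json.loads(raw)

# Sanity-check: every record carries exactly these five fields.
FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}
for rec in records:
    assert set(rec) == FIELDS, rec

# Index by comment id for quick lookup of any coded comment.
by_id = {rec["id"]: rec for rec in records}

print(len(records))  # number of coded comments in this batch -> 10
```

Note that two records in this batch share the dimension values shown in the Coding Result table (responsibility none, reasoning mixed, emotion approval), so matching a comment to its coding should go through its `id`, not its values.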