Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Have a look at the below videos for a different perspective on the matter.

https://youtu.be/4__gg83s_Do?si=ZrEZRmV1WsxYbpH2 < Yann LeCun (Top researcher on AI/neural networks) says AGI won't be reached by scaling LLMS)

https://youtu.be/7-UzV9AZKeU?si=52drk8Qyv2RGu0aH < Prof Michael Wooldridge (professor of CS at Oxford) concludes that while LLMs are definitely useful, they are not capable of "logical problem solving or reasoning in a deep way"

Also... Not sure what he means by doing its own research. What does that mean? Is it going out and performing experiments, interviewing people or collecting data? How can it add anything novel without humans feeding it actual data. Unless it just did secondary research of course but still, how do we know, or would we even have the capacity, to actually validate all this infinite research?!

Be good to hear what other people think also

Kind regards,
Ethan
youtube AI Jobs 2025-11-18T20:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzMFALZ_2qZ4aowNU94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxpJNJsM34bBRfyRlF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw7ozTKranuO8gKUYJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzuLVchwyMGh8x6U1t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzcqFSaix23N_AgSwd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw7eNnFTG0ekG2z1wN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwRt40bab4WcqSo0vJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgznstEQQ0zs2lv2dNt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx-uysrXzKaTM1yB8V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwGwAjE_Vxt2M8S0Mh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
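A raw response like the one above is only usable if every record carries all four coding dimensions plus an id. A minimal sketch of that check, in Python, assuming the response is a JSON array and that the key set shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) is the full required set (the function name `parse_coded_response` is illustrative, not part of any pipeline here):

```python
import json

# The keys every coded record must contain, as seen in the raw response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coded_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into a list of records.

    Raises ValueError if the payload is not a JSON array or if any
    record is missing one of the expected coding dimensions.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing keys: {sorted(missing)}")
    return records

# Example using the first record from the response above.
raw = ('[{"id":"ytc_UgzMFALZ_2qZ4aowNU94AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
records = parse_coded_response(raw)
print(records[0]["emotion"])  # outrage
```

Validating only the key set (not the label values) keeps the check robust when coders introduce labels like "mixed" or "resignation" that a fixed value list might not anticipate.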