Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You guys are just reading into this wrongly. He meant to say that a two hour class using AI (year round) is giving (some) kids 2 years of "advancement" VS other american kids.

Let's dive into this so you can easily see how this works. Are kids curious? Yes... Do kids tend to ask questions more than your average adult? Yes... Who is providing these kids with the answers... AI. Where is AI getting the answers for the information being asked? From decades of concrete tested data saved and stored on a remote location.

In other words the information we used to search came from the school library if your school had one anyway. If your school did not have one then they needed a parent or guardian to get them to a public library. (So many kids got the answers from what ever their hard working parents knew) which probably translates into something like "stop asking so many questions" I am sure in the 90s we could just search the internet for questions but it required some level of skills in the keyword department. Which meant tons of searching hours.

Its very simple to get answers with AI and AI is writing these answers in simplified form. I myself didn't enjoy school yet I have seen a positive increase in the wanting to learn department. I would say this is an area where AI should definitely be implemented and used daily with supervision.
youtube AI Governance 2025-12-31T01:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyczhmqHEQ3KRfK-qB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgykJqfHOoV61YoRZdp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzYRJiT1iSfqKhEwk94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwpETDpQ_OtCLDGy5t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy0PQezurRtz9hvIm14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwpnwgslF58AETaDoB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzMxM71sIGEFUfUME14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxxOrXotNIesI63WJh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwLw4XyMPxXP99N-dR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz6ubJNS1117cBqw9J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
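A response like this can be checked and summarized with a few lines of Python. The sketch below parses a small subset of the records shown above and validates each dimension against an allowed label set; the `ALLOWED` sets are an assumption inferred only from the values appearing in this response, not a documented codebook.

```python
import json
from collections import Counter

# Subset of the raw LLM response above, kept short for illustration.
raw = """[
  {"id": "ytc_UgyczhmqHEQ3KRfK-qB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy0PQezurRtz9hvIm14AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Assumed label sets, inferred from the values seen in this one response.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "approval", "fear", "outrage", "mixed", "resignation"},
}

# Reject any record whose value falls outside the expected label set.
for rec in records:
    for dim, allowed in ALLOWED.items():
        assert rec[dim] in allowed, (rec["id"], dim, rec[dim])

# Tally one dimension across the parsed records.
emotion_counts = Counter(r["emotion"] for r in records)
print(emotion_counts)
```

Validating before tallying matters here: LLM-coded output can drift outside the label set, and a hard failure on an unexpected value is easier to debug than a silently miscounted category.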