Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
48:11 “Predicting data that humans created often requires you to be smarter than the humans that created it” seems either disingenuous or wrong to me, given that we are asking of an AI that it produces an output that fits a range of responses that we deem acceptable, in such a way that it specifically doesn’t need to understand any component parts or anything at all; it simply has to conjure something that sounds reasonable. Extrapolating theories from a set of data is difficult, but popular word selection from aggregates of other extrapolation is bullshitting, not extrapolating. Creating a prediction in this way doesn’t require that one understands anything at all. I’m reminded of the “Chinese translation box” thought experiment, where someone has a set of symbols and instructions to give certain symbols out of the box when they receive other symbols into it; in this way the box could seem to speak a language even when the thinker in the box knows nothing about it. My point is: it’s not a sign of intelligence that it is “predicting human data,” because that could be done without any of the processes that we would ascribe to intelligence or being smart.
youtube AI Moral Status 2025-11-17T18:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzygqCafbRLsp9Xr194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugywgk6du9hbvl99LO94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyl3QgrWOTtl6hKe3R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzPh9ySYWWVptvVjrF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyyxM9y89cm6W4WC954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy4E7InsIdi_3w7hNB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwyn9yX1AMEJtOc7114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy9JSmCZTyTbp2N4NZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyfrNfhl5S1I770on14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwmRXUeGPtQkWYsN-p4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
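To inspect the model output for a single coded comment, the raw response can be parsed and indexed by comment id. The sketch below is a minimal example, assuming the raw response is exactly the JSON array shown above (truncated here to two entries for brevity); the function name `coded_dimensions` is illustrative, not part of the coding pipeline.

```python
import json

# Raw LLM response as emitted by the coder (first two entries only, for illustration).
raw = """[
  {"id":"ytc_UgzygqCafbRLsp9Xr194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyyxM9y89cm6W4WC954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]"""

# Index the entries by comment id for O(1) lookup.
entries = {e["id"]: e for e in json.loads(raw)}

def coded_dimensions(comment_id):
    """Return the four coded dimensions for one comment, or None if the model skipped it."""
    entry = entries.get(comment_id)
    if entry is None:
        return None
    return {k: entry[k] for k in ("responsibility", "reasoning", "policy", "emotion")}

print(coded_dimensions("ytc_UgyyxM9y89cm6W4WC954AaABAg"))
```

Matching an entry's dimensions against the tabulated coding result above is a quick way to confirm which raw-response object produced a given coded row.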