Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I got recently a degree in IT, specialized in machine learning, so I know (a bit) how those model work and I am honestly so infuriated that most people use "artificial intelligence" in so profoundly wrong way. Sir Roger Penrose keeps talking about Godel's theorem, computability - that the AI can't think, because it's strictly computable, it just produces a good mimicry of the products of mental activity, but without any real thought behind it - no one listens to the old man, even the IT grads who all had courses on the Theory of Computability and should know that. The other side of me always wanted to stray into some writing, art of translation sounded interesting, nope all gone because daft people keep pumping questions into automatons, believing that they think. They don't - it's probabilistics cross-bred with vector analysis, algebra and some fancy optimization algorithms. If we keep mistaking it for thought processes maybe we deserve what is happening.
Source: youtube · 2025-01-28T22:5…
Coding Result
Dimension       Value
--------------  ---------------------------
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugx53hAzlUq6kh22qA54AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyllH87kQ1MJ6-RrtN4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxD2Wgbz6O5DoVKuT94AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugyp05tVZECqXyuzvGV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugzcy49zuZHL8pSto9Z4AaABAg", "responsibility": "government","reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwMzIs0IprAMvK4bkV4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyaMsMbmbJR120O7NV4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgzQCsEF-wllgf0fcWx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgxuwH5PeaYTzLHsGPh4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwjvLwHwiSzizjMDTx4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"}
]
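The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a batch response might be parsed, indexed, and tallied (the two sample records below are copied verbatim from the response above; the field names are taken from it):

```python
import json
from collections import Counter

# Trimmed sample of the raw LLM response (two of the ten records above).
raw = """[
  {"id": "ytc_UgyaMsMbmbJR120O7NV4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx53hAzlUq6kh22qA54AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]"""

records = json.loads(raw)

# Index by comment id so a single comment's coding can be looked up.
by_id = {r["id"]: r for r in records}

# Tally one dimension across the batch.
emotions = Counter(r["emotion"] for r in records)

print(by_id["ytc_UgyaMsMbmbJR120O7NV4AaABAg"]["emotion"])  # outrage
print(dict(emotions))
```

In this batch the displayed coding result (responsibility=user, reasoning=deontological, policy=none, emotion=outrage) corresponds to the record with id `ytc_UgyaMsMbmbJR120O7NV4AaABAg`, which is how the per-comment table above can be recovered from the raw array.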