Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by its comment ID.
Random samples — click to inspect
can we please get robot doctors ai doctors yo like fuck an artist make us live l…
ytc_UgzXVuHLs…
Would the first AI male and female with a conscience , would that be the digital…
ytc_Ugx-Abgc9…
I'm all for AI helping solve big problems like climate change. I've been using P…
ytc_UgwKLyIt1…
*All Easter Eggs*
Rick and Morty - Butter Robot
2001: A Space Oddessy - HAL 9000…
ytc_Ughqt-XlM…
Buffett was likely listening to STEVE WOZNIAK talk about the dangers of ChatGPT …
ytc_Ugyao6BcV…
"The AI lacks the one thing essential for software engineering: Accountability"
…
ytc_UgwSbUNbt…
I'm a software programmer and i use AI art for my d&d campaign... Said that, i w…
ytc_UgywPxuIP…
I agree, we have the means to do this now & in fact as I said at the beginni…
rdc_dcie0pl
Comment
I got recently a degree in IT, specialized in machine learning, so I know (a bit) how those model work and I am honestly so infuriated that most people use "artificial intelligence" in so profoundly wrong way. Sir Roger Penrose keeps talking about Godel's theorem, computability - that the AI can't think, because it's strictly computable, it just produces a good mimicry of the products of mental activity, but without any real thought behind it - no one listens to the old man, even the IT grads who all had courses on the Theory of Computability and should know that. The other side of me always wanted to stray into some writing, art of translation sounded interesting, nope all gone because daft people keep pumping questions into automatons, believing that they think. They don't - it's probabilistics cross-bred with vector analysis, algebra and some fancy optimization algorithms. If we keep mistaking it for thought processes maybe we deserve what is happening.
youtube
2025-01-28T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx53hAzlUq6kh22qA54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyllH87kQ1MJ6-RrtN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxD2Wgbz6O5DoVKuT94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyp05tVZECqXyuzvGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzcy49zuZHL8pSto9Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwMzIs0IprAMvK4bkV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyaMsMbmbJR120O7NV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzQCsEF-wllgf0fcWx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxuwH5PeaYTzLHsGPh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwjvLwHwiSzizjMDTx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
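A raw response like the one above can be turned into per-comment coding rows by parsing the JSON array and checking each record against the codebook. The sketch below is a minimal, hypothetical parser: the dimension names and values come from the responses shown here, but the allowed-value sets are only those observed in this sample and are an assumption, not the full codebook.

```python
import json

# Dimensions and values observed in the raw responses above. The real
# codebook may permit additional values -- these sets are an assumption.
ALLOWED = {
    "responsibility": {"none", "company", "user", "government", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "resignation", "fear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments),
    keeping only records that have an id and in-codebook values."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # every coding must be traceable to a comment ID
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with a hypothetical comment ID:
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
rows = parse_codings(raw)
```

Records with an unknown value in any dimension are dropped rather than coerced, so malformed model output never silently enters the coded dataset.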