Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I don't believe an AI can pull wire and terminate cat cable anytime soon. Hell, …" (`ytc_UgzFPrL2V…`)
- "In general most truck drivers are not the ones looking for progress in our count…" (`ytc_Ugww3NFMA…`)
- "The office of 20 or 30 years from now will be as different as the 1970s or 1980s…" (`ytc_UgwNDBMSo…`)
- "Donald Trump is the result of what you describe AI is doing with unregulated cap…" (`ytc_UgwFed3cs…`)
- "@Nerdonardo-1 it's actually insane. I've actually had multiple AI users harass m…" (`ytr_UgxlnZ4hp…`)
- "Thank you for bringing this to light and using your platform to reiterate this f…" (`ytc_UgyzWnEDL…`)
- "@manVSgold No, it's not what this is. The ones who want to replace artists with …" (`ytr_Ugx078nhT…`)
- "AI is already integrated and being integrated with defense systems. They say its…" (`ytr_UgxXczATO…`)
Comment
I don't think we have a choice with regard to progress. We are neurologically hardwired to grow used to our current situation, no matter how comfortable and luxurious it is. So to get a positive feeling (happiness) from our lives, we have to IMPROVE our situation. In other words: our happiness is proportional to the CHANGE in our lot in life, not to its absolute value. And we are literally addicted to happiness via neurochemicals (endorphins, dopamine, etc.). This is why we blindly seek economic growth; it's an addiction.
So if AI can become a viable way to improve our lives, we will do it. We can't resist. This philosophical debate is interesting in theory, but irrelevant in practice. The more interesting issue is whether or not we can control the AI we create. See this video (and the ones leading up to it):
"Deadly Truth of General AI? - Computerphile"
YouTube · 2017-01-24T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxQw7YfMcOhg6zyCbR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyZKKVQOOweXnuzyGR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgxMvnr5ixkjJGjQTgp4AaABAg","responsibility":"elites","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy1DPqIDMxmnKwbdc94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzQi1gAtvrINJhUugx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzA4QfdzSS2WxK1u6l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz3r5ONYiHca-oWIdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghmHBsOLD4fY3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
{"id":"ytc_UghA24C7Vxvn43gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjnnBXlqmRuLHgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
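The raw response above is a JSON array of per-comment codes along four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how the look-up-by-ID view could parse such a response into a `{comment_id: codes}` mapping; the function name and the `"unclear"` fallback for missing fields are assumptions for illustration, not the tool's actual implementation:

```python
import json

# The four coding dimensions, inferred from the sample output above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_raw_response(raw: str) -> dict:
    """Parse the model's JSON array into {comment_id: {dimension: label}}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Keep only the expected dimensions; a missing field falls back
        # to "unclear" (an assumed convention, mirroring the table above).
        coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

# Usage with one record from the raw response shown above:
raw = ('[{"id":"ytc_UghmHBsOLD4fY3gCoAEC","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"approval"}]')
codes = parse_raw_response(raw)
```

Looking up `codes["ytc_UghmHBsOLD4fY3gCoAEC"]` then yields the same dimension/value pairs that the "Coding Result" table renders for an inspected comment.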