Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "We have this huge focus mainly on unemployment and the paradigm shift should be …" (ytc_Ugzke3Ted…)
- "Whats the point of studying accounting if ai will take my job in the future…" (ytc_Ugz2Wq6C1…)
- "AI will only have to perform regulatory capture which is common to achieve its g…" (ytc_UgzvolYXz…)
- "AI _is_ a human problem! More precisely: Human trust into this ai is a problem. …" (ytc_Ugy6ITmWV…)
- "I want to create cool stuff an AI allows me to do that. You cant stop progress …" (ytc_UgxpSEOaU…)
- "Honestly I like the way the ai creates backgrounds and I'm definitely going to p…" (ytc_UgzmXXQhn…)
- "I'm sorry but Ai will never find a way of being smarter than humans. We can't ev…" (ytc_Ugwh7hp1w…)
- "no , AI won’t replace anyone , anytime soon. I work in the field, it is all bull…" (ytc_UgyI4_Gt_…)
Comment

> While this is an interesting conversation, there's a fundamental misunderstanding here. When ChatGPT 'talks', it's simply text being converted to speech by a program. The fact that AI can perfectly define feelings and experiences doesn't mean it actually experiences them - just like a dictionary knowing the definition of 'love' doesn't mean it can feel love. ChatGPT is trained to generate contextually appropriate responses, but this is pattern matching, not consciousness. Being able to describe emotions is very different from actually having emotions. What we're seeing here isn't AI becoming conscious - it's humans being convinced by sophisticated language patterns.

youtube · AI Moral Status · 2024-10-30T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw2Zd3C09raYfketM14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzuQTtNtx3pb8x43Od4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzVS0KEKKzd0cAx-GF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwGFqKSHNh2bhLIqHF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw6OEMWKeopjKU_Git4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGtj2Sw8L3aG7Rp_V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxrXTa6vrJS0ExFjcN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx-byW2ztnmcL9eS_h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwQ_NmjV0OflrJhdWR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgydPFRku2A2fpJ4j7R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
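A raw response in this shape can be parsed and sanity-checked before the values are written back to the coding table. The sketch below is a minimal, hypothetical validator: the allowed value sets for each dimension are inferred only from the values visible in this sample, and the real codebook may define additional categories.

```python
import json

# Allowed codes per dimension, inferred from this sample only
# (assumption: the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"unclear"},
    "emotion": {"indifference", "outrage", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        if not rec["id"].startswith("ytc_"):
            raise ValueError(f"unexpected id format: {rec['id']}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
    return records

raw = (
    '[{"id":"ytc_Ugw2Zd3C09raYfketM14AaABAg","responsibility":"ai_itself",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]'
)
records = validate_coding(raw)
print(len(records))  # 1
```

A validator like this catches the common failure modes of LLM-based coding (malformed JSON, invented category labels, truncated IDs) before they silently corrupt the dataset.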