Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugy7e05jM…`: Totally agree that physician jobs, engineering and scientific research jobs, ac…
- `ytc_Ugwl4ISxP…`: C'mon. . .AI "learns" like humans do as they grow up. The "problem with AI is g…
- `ytr_UgwUkTAXO…`: The video discusses how the rapid development of AI by major tech companies pose…
- `ytc_Ugw4xq5XI…`: that funny how stupid human are nowaday, if they keep messing up with AI , human…
- `ytc_Ugyi84Rq2…`: Courier services are NOT gonna be replaced by robots that fast. That's idiotic. …
- `ytc_Ugyb-aYFd…`: Europeans are so insignificant when it comes to AI. Still, they want to appear l…
- `ytc_UgzsgYawO…`: That is a very profound and valid question, especially when considering current …
- `ytc_Ugw_kJbkP…`: Sir aapne bilkul sahi , ChatGpt is trained on old data and sometimes produce hal…
Comment
> ChatGPT is apologizing because (a) it's programmed to and (b) it's a polite thing to do that makes people feel better. Moreover, we can interpret its apology as merely an admission of fallibility and error, which is not in any way dishonest. Finally, there need not be any emotion or consciousness behind an apology for it to be honest. You might want to consult with an AI expert on this if you want to understand more. If you can't find one then I (PhD in computer engineering) could make an attempt to stand in for one.

Source: youtube · AI Moral Status · 2024-08-08T14:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzKfivHWSk0Dwdd_1d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5oVq_dnYTOV5GvwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_I89CCuX4lbHiYwl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy1FZyaz01FfMHXs0Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxfMrz-hb4gOV-2xKd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzGp1kL-p4RD6ag_VB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugznxqv7m5QAnbnbj2Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwB7BdoPtrsLRjoXJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwwBc0zawLvxRx60Tp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwrQIzG6DaFPe13nzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
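Before a raw response like the one above is written back to the coding table, it helps to parse and validate it. A minimal sketch: the allowed labels below are inferred only from the values visible on this page (e.g. "consequentialist", "indifference"), so the real codebook may contain additional labels.

```python
import json

# Allowed labels per dimension, inferred from values visible in this
# dashboard; the actual codebook may define more (this is an assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    mapping from comment ID to its coded dimensions, rejecting rows with
    missing keys or unknown labels."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        dims = {k: row[k] for k in ALLOWED}  # KeyError if a dimension is missing
        for dim, value in dims.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unknown {dim} label {value!r}")
        coded[cid] = dims
    return coded

raw = ('[{"id":"ytc_UgzKfivHWSk0Dwdd_1d4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(parse_coding_response(raw))
```

Rejecting rows outright (rather than coercing unknown labels to "unclear") keeps malformed model output visible in this inspection view instead of silently folding it into the coded data.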