Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "It’s hilarious that he thinks Elon has no morals but Sam Altman might. Edit: all…" (ytc_UgzhAmX8u…)
- "Why can't we develop AI which is FOR humans, especially on a moral basis: feed h…" (ytc_UgxN-7k2n…)
- "These people make some bold claims that are largely unprovable. When one of thei…" (ytc_UgwDUGcik…)
- "There is nothing human like with AI. I cannot believe the developers don't under…" (ytc_Ugyvr3tc8…)
- "1000 tools to read emails, 1000 tools to watch videos, to listen to music, to ma…" (ytr_UgzUuqmdt…)
- "So I just want to say that first off, ChatGPT can't read the Bible, it can only …" (ytc_UgwEgl_mp…)
- "Prediction: Post the complete takeover by the AI era and the Quantum computing b…" (ytc_UgxeqtrZN…)
- "AI will be smarter than the smartest human at some point. AI will be able to impr…" (ytc_UgwijC6xp…)
Comment

> AI has access to all information, can combine all knowledge into new insights. The assumption that AI will dominate us implies free will, and self awareness and self preservation. I’m not too sure AI has those needs or will have those, as it doesn’t really think: knowledge isn’t an opinion, it wouldn’t put a value on human vs AI efficiency. The solution is one where we enter Isaac Asimov territory: AI must be equipped with an equivalent of Asimov’s laws of robotics.

youtube · AI Moral Status · 2025-04-26T17:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyTThOyF5177Ay3-Kh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1IIB8GdfbtJ_ZhLl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZH0D3iRt7Y93aVy94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwnQ2C3Kdy1BorxKDx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxBUawbw4n-gMyZZt14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyM0V6OrML9YFIzBnZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw6az3va5_WocKConR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwTGeKub3L3xHvjq2Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJlIL39GINbho3gnx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwwETUOIy6hIhtvi-J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
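Before a coding result like the table above can be populated, the raw response has to be parsed and each row checked against the codebook. A minimal sketch in Python, assuming the four dimensions and only the label values visible in the output above (the actual codebook may define additional categories):

```python
import json

# Allowed labels per dimension, inferred from the raw response shown above.
# This is an assumption: the real codebook may include values not seen here.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "developer", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "fear", "outrage", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any row with an unknown label."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical one-row response for illustration:
sample = ('[{"id":"ytc_X","responsibility":"user","reasoning":"mixed",'
          '"policy":"none","emotion":"fear"}]')
print(len(validate_codings(sample)))  # → 1
```

Rejecting the whole batch on one unknown label is a deliberately strict choice; a gentler variant could instead flag bad rows for manual re-coding.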