Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Am I the only one who really doesnt trust Yampolskiy? I just see a crazed AI man…" (ytc_UgxDEOqx-…)
- "How unfair would it be if a human was very creative and had talent and every tim…" (ytr_Ugwar6iAM…)
- "I was training to be a concept artist....No job lost there for you but for us it…" (ytr_Ugymj_8Qn…)
- "Scientists are trying to duplicate a human mind by providing it with vast amount…" (ytc_UgwBJIYFX…)
- "Only reason I'd ever need a robot is to do all my housework. Or If I'm ever unlu…" (ytc_UgzsXse8E…)
- "As someone who also makes digital art, I will say this. At least with digital ar…" (ytc_UgwCDzYQb…)
- "So now you guys are complaining, but when they put these ATM machines and these …" (ytc_UgyCdGkOY…)
- "Jimmy, last-mile door-delivery is probably a relatively tiny chunk of the overal…" (ytc_UghlK1xDQ…)
Comment
> The capacity to discern between right and wrong, often referred to as morality or a sense of ethics, is a complex human trait. While there isn't a universally agreed-upon definition, it generally involves the ability to distinguish between actions that are considered morally good or bad, and to make choices based on those distinctions.
> AI could have been potentially trained in conscience and ethics, but that's exactly what big companies don't want as it would put limits on ways of profiting and aquiring power.
> In the end, the human race will self-destroy itself due to greed.
> Nothing new or hard to predict here.
Source: youtube · AI Governance · 2025-08-01T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwAM5_vySblEA9uoN54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz0105nb1rYjsbl2mV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwu3JQquZnkUewTqWJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzPWncLG3gohqFX4jt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgythAzPRkhpp8DekX54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdsjMWVPHeQjT9VPB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxthWhW9MV5fuhtRpp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugygb48AA7hIXGogeAJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgytrEaHf8ZZmzbCKvd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzYzRiY8Lrpo2RWkIZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
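A minimal sketch of how a raw batch response like the one above could be parsed and validated before its values are written into the coding table. The allowed value sets below are inferred only from the ten records shown here, not from the project's actual codebook, and the function name is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (assumption — the real codebook may permit additional values).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "indifference", "resignation", "approval", "outrage"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and validate each coded record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgwAM5_vySblEA9uoN54AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = parse_llm_response(raw)
print(coded[0]["emotion"])  # fear
```

Validating against a fixed schema at parse time catches the most common failure mode of LLM coders, namely out-of-vocabulary labels, before they silently reach the results table.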