Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_Ugysh6wy8…` — The prompt-writer is the client, and the AI is the "artist", because telling the…
- `ytc_UgxBtfqpT…` — We can get 5 times as much Healthcare for the same price. I don't think that's h…
- `ytc_UgzEUrzN8…` — lost most of my desgin jobs thanks to ai. which im dumbfounded about copyright l…
- `ytc_UgxPVQOV2…` — This is all false. Production costs will come down but that does not mean consum…
- `ytc_UghBsb6B-…` — A robot that doesn't know it's "bad" be broken will end up broken and useless fa…
- `ytr_UgwaVgevk…` — I think there will be machine for that also.. Its much easier than operation on …
- `ytc_UghJNA0FS…` — Wait so, mile for mile autonomous cars are just as dangerous as non-autonomous c…
- `ytr_UgwKJRxDw…` — They're focused on making AI better at coding so it can be used to design better…
Comment
I love Neil, however, I hope he doesn't make the mistake Jordan Peterson has made: not taking social science seriously and having public opinions about disciplines he ignores. You see, actors, writers, psychologists, directors, artists, musicians, astronomers are interviewed often, and when questions explore social issues, instead of saying: I don't know, most give their uneducated opinion. As a scientist, Neil should just stick to what he knows and avoid talking about social science. The impact of AI is a question for an anthropologist, a sociologist, a political science graduate, not an astronomer so naive, he believes he lives in a democracy. Wrong opinions about astronomy might have little impact in day to day lives. Saying that losing your job to AI is a problem everybody can fix with creativity could really hurt people and prevent the creation of urgent public policies on the matter.
youtube · AI Moral Status · 2025-07-31T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxDwJxsviz873aqH-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzMAIbiee_l3jFVEjZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzU5jflk0VRHvPYeDt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwEZnAwT_ngVx1ahIB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwtFThDM9gSq1FbW8R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxSnfxVBB6Jj3nLBuB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzVJw3dmB5dftqfhj54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxqLpIumeTYlfwoiFh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzIlHya3EIHHQRHJaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6YXAJHzE0jPyb3gB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
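The raw response is a JSON array with one object per comment, carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a payload could be parsed and indexed by comment ID for lookup — the `parse_codings` helper and the two-entry sample payload are illustrative, not part of the actual pipeline:

```python
import json

# Sample payload shaped like the raw LLM response above (trimmed to two entries).
raw = '''
[
  {"id": "ytc_UgxDwJxsviz873aqH-V4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzMAIbiee_l3jFVEjZ4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
'''

# The four coding dimensions every entry is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse a raw LLM response into {comment_id: coding}, checking that
    every entry has all four dimensions before accepting it."""
    codings = {}
    for entry in json.loads(text):
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{entry.get('id')}: missing {missing}")
        codings[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return codings

codings = parse_codings(raw)
print(codings["ytc_UgzMAIbiee_l3jFVEjZ4AaABAg"]["reasoning"])  # prints "deontological"
```

Validating every dimension up front means a malformed model response fails loudly at parse time rather than surfacing later as a blank cell in the Coding Result table.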