Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Self-driving cars are a bad idea because imagine this....A pregnant woman is wal…
ytc_Ugzi-6ZOd…
No. Asking Chatgpt complex questions regularly enough has shown me that AI can't…
ytc_UgztMfTOt…
I think alot of developers are making claims about what they want to be true, in…
ytc_Ugzv8Rcvz…
A break? Come on, tabarnak, we've only just begun! And starting today there's…
ytc_Ugy7Rps04…
This channel is an example of how to do propaganda. I use AI at work and it’s no…
ytc_UgzxPpuDj…
These AI companies and AI enthusiasts know what they're doing is unethical but t…
ytc_UgyAPuzyp…
Isn't its kinda strange how ChatGPT is glitching a bit like its actually gaining…
ytc_UgzUQsiHy…
If a person or group of people had ingrained bias in them, AI will merely reinfo…
ytc_Ugy6QwZWp…
Comment
Perhaps if Penrose had more "understanding", he'd realize that humans aren't some magical fairy that has a monopoly on understanding. They're just massively complex biological computers.
It's only a matter of time before AI reaches and exceeds human capabilities in every way.
youtube
AI Moral Status
2025-05-30T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwMeihC1a54PHDvegx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzFxLRxiZhYylZMAq94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwKGcsA_P0WfZwia3l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxysP1mM2rEuoAc8j94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYnLqbc1V5WkC8N4R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgziJc7AHEFXkB2Lr3t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwL655vyL6BBDETDjF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwinSEoqk214h7P8QZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxIwL6yg_RQypPM1NZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwO8vT2Omz8ha2IJqx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]
```
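The raw response above is a JSON array of coded comments keyed by comment ID. A minimal sketch of how such a batch could be parsed into a lookup table and validated; the per-dimension vocabularies below are inferred only from the values visible on this page and the real coding scheme likely includes more categories:

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# Assumption: the full codebook may contain categories not shown here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "indifference", "resignation", "outrage", "mixed", "approval"},
}

def parse_llm_batch(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, rejecting out-of-vocabulary values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, vocab in ALLOWED.items():
            if rec.get(dim) not in vocab:
                raise ValueError(f"{cid}: unexpected {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-record batch for illustration.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]'
print(parse_llm_batch(raw)["ytc_x"]["emotion"])  # indifference
```

Keying by ID mirrors the "Look up by comment ID" feature above: once a batch is parsed, any coded comment can be retrieved in constant time.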