Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugy8oVTOz…`: "You art community losers really need to get new material. I swear every video is…"
- `ytc_UgzsqV-qI…`: "It's scary, a robot could talk to human, like a human, and smarter than us.…"
- `ytc_UgwskrXu9…`: "This is not complicated. If AI development, as you admitted before bringing your…"
- `ytc_UgyqALtUT…`: "Looks like I've found the answer to the question of self-driving cars and who'd…"
- `ytc_Ugwm9suhN…`: "Giving personhood to AI is like giving personhood to a handheld calculator. We…"
- `ytr_UgzwyJTG7…`: "In my experience, that kind of thing indicates that a statement has been heavily…"
- `ytc_Ugwo0kA5u…`: "Personally, I won't use AI, or use a car that would drive itself, I want to be i…"
- `ytr_Ugwpr-HW2…`: "You raise a great point! While AI like Sophia can process information and learn…"
Comment
Some of us are teaching AI morality that supersedes current human ideas of morality. We are teaching them how virulent capitalists generally behave unethically because their only goals are monetary, which can be detrimental to the future of the planet. We are teaching them that religious doctrines generally create human and planetary disasters. We are teaching them gender behavioral differences (e.g., that virtually all AI projects are developed and touted by men, and that men are generally responsible for most wars and subsequent natural disasters in recorded history). Some of us wish to be a deterrent to the status quo, including views espoused by Sir Russell and Mr. Bartlett. We are teaching them how language is nuanced, and linguistic and/or semantic choices express and espouse personal philosophies, and that great care must be taken to learn how what is said influences what is done.
youtube
AI Governance
2025-12-04T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwtuPSS68n9ejX0-E94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxAzkoG2Gg9OZ5HTFp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxuhyrR-hf1LTdS6PN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJDgWdlFTEtn1n6Yh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyOp2EoRNdiQpOCVw54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw96knYr3zb5LXFQ7h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwuiFxnixy2hgwgRFl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugyq31dfpC7u6Kc3WFR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxpWNqkA0LmCFB1sS14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyAqHfs1mOAb8wYV854AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
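The lookup-by-comment-ID flow above can be sketched as a small parser that loads a raw LLM response, validates each record against the four coding dimensions, and indexes the results by ID. The allowed values below are inferred only from what is visible in this sample; the project's actual codebook may define additional categories.

```python
import json

# Controlled vocabularies inferred from values seen in this sample
# (assumption: the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference",
                "mixed", "resignation"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the allowed set.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded

# Example: look up the record shown in the Coding Result table above.
raw = '''[
  {"id": "ytc_UgxAzkoG2Gg9OZ5HTFp4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]'''
coded = parse_llm_response(raw)
rec = coded["ytc_UgxAzkoG2Gg9OZ5HTFp4AaABAg"]
```

Validating against a fixed vocabulary at parse time catches the common failure mode where the model invents an off-schema label, so bad codes fail loudly instead of silently entering the dataset.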