Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Autopilot, fondly called George in aircraft has been around since 1912. I starte…" (ytc_UgztKIunC…)
- "Regulation is minimal currently so everyone’s experimenting with AI. There’s als…" (rdc_oi1ficu)
- "There are two counterposed goals in the design of chatgpt - one is to sell their…" (ytc_UgxAiZS8P…)
- "That’s what bothered you and not the stuttering? You know people breathe when th…" (ytr_UgzYMkGBr…)
- "You basically forced LLM to generate code following outdated human "best practic…" (ytc_UgzBJNJ4j…)
- "16:43 If someone is wondering what “singularity” means which the robot is consta…" (ytc_UgwLRB1fF…)
- "That's still using AI you should have just drawn the picture better in the first…" (ytr_UgxBqlXah…)
- "There should be law and justice system for AI in case they commit crimes against…" (ytc_Ugxy3MDrG…)
Comment

> The thing I think that's funny about watching these types of videos is the people always act shocked about the information that they're being given when obviously an AI system is going to relay stuff like this because it's what everyone thinks of and it's pulling from that information source so basically AI is the good and bad of all human emotion and thought.. not surprising at all honestly.

Source: youtube | AI Moral Status | 2023-06-04T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxtE6_0rkMb-ry-0JZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFuVBZ5nUVg5bxCoF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz_vRP_FIklZEE2Rqh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyg6N4ZRWeOLkCuJRJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyebxkd2BT4NJKxNAB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugw6jhFLNzCa4e2VNOt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzLjUd7xlWzOaD_s8Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyql6Y84m3dvdEwh094AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgziD5G3yLYj_DZWo0d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVa_VERO5NUN2drwl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]
```
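The raw response is a JSON array of per-comment codes keyed by comment ID. A minimal sketch of how such a batch might be parsed and validated before storage; the allowed values per dimension below are inferred from the labels visible on this page, not from a published codebook, so the sets are illustrative:

```python
import json

# Allowed labels per coding dimension, inferred from the values shown on
# this page; a real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment ID.

    Raises ValueError on a missing ID or an out-of-vocabulary label, so a
    malformed batch is caught before it reaches the database.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example with one record (hypothetical ID):
raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
codes = parse_batch(raw)
print(codes["ytc_x"]["emotion"])  # indifference
```

Indexing by ID is what makes the "look up by comment ID" view above possible; the validation step simply rejects any label the coding scheme does not define.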