Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
These driverless trucks are not sentient beings. They are automated special purpose silicon based life forms with zero actual intelligence or humanity. They don't have personalities. They don't have feelings. They don't care if they run someone over and kill them. They follow a path with software which tells them how to avoid hitting things, violating traffic signs and signals, and to follow a route. They couldn't care less if a terrorist programmed them to run people over or deliver a bomb. Just imagine the damage done by a fleet of trucks taken over by terrorists or criminals seeking ransom or worse. Computerized vehicles can and have made mistakes, and when they do, who do we blame, a nameless, faceless giant corporation or private equity company. When the computers become dedicated sentient beings who actually value life, then we can talk about having autonomous self driving vehicles.
Source: youtube · Topic: AI Jobs · Posted: 2025-05-31T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwmgL5emcvEFBvC0QB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyRe0G8P7yFNkJsBtd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyirvOvTRVPkQQw3_t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzUCrBcrMt_6MUmuip4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyw3JokkxouKXevDsF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx_OFo6ibQnOYdzMhZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzFWyol0Ud9WnWSROF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOSSVOy5XEGKzmBRd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgygyAt2KcXchjRbR7J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz48na8nvmFyTOTzkR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
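The "look up by comment ID" step above can be sketched in a few lines: the raw LLM response is a JSON array of records, each coding one comment on four dimensions (`responsibility`, `reasoning`, `policy`, `emotion`), so a lookup is just a parse followed by a scan for a matching `id`. This is a minimal sketch, not the dashboard's actual implementation; the `lookup` helper name is mine, and the sample record is taken from the response shown above.

```python
import json

# One record excerpted from the raw LLM response above; in practice the
# full batch array would be passed in.
raw_response = """
[
  {"id": "ytc_UgwOSSVOy5XEGKzmBRd4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "mixed",
   "policy": "unclear",
   "emotion": "indifference"}
]
"""

def lookup(raw: str, comment_id: str):
    """Parse a raw LLM batch response and return the coded record
    for the given comment ID, or None if it is not in the batch."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coded = lookup(raw_response, "ytc_UgwOSSVOy5XEGKzmBRd4AaABAg")
print(coded["responsibility"])  # → ai_itself
```

Matching the returned record against the rendered table (Responsibility: ai_itself, Reasoning: mixed, Policy: unclear, Emotion: indifference) is a quick sanity check that the displayed coding came from this exact response.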