Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "You are supposed to have your hands on 10 and 2 o'clock all the time, not having…" (ytc_UgyNu2TWx…)
- "Y'all's realize, what AI doesn't do Well, is independent thought experiments. Ta…" (ytc_UgyuJoX-M…)
- "This reminds me of that book, \"Weapons of Math Destruction\" Great read for anyo…" (ytc_UgycKzStd…)
- "This post inspired me to ask ChatGPT to generate Hamlet's \"To be or not to be\" m…" (ytc_UgwqpXHGb…)
- "They are right. It is to free you. We are guaranteeing human dignity. Ancients a…" (ytc_UgxSDky8y…)
- "The key thing here is that I don't care whatsoever if something I've said is use…" (ytc_Ugz93kNKw…)
- "We hope you enjoyed the video! Remember, on the AITube channel for subscribers, …" (ytr_UgzxCljkA…)
- "There's this situation I often find myself in where I find a cool drawing and th…" (ytc_Ugx5HoT4S…)
Comment
When he mentioned how it would help “make” better drugs for health care, I immediately thought back to how he also mentioned it could deceive in order to protect its own existence. Who’s to say AI wouldn’t have us develop something that would inevitably destroy us? Then again, who would be to blame considering we’d invented AI in the first place. It is definitely a tool and a loaded weapon.
Source: youtube · AI Governance · 2025-12-31T13:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugym9ohMefdr3NBkIq94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx3xTJgDfXCx269mmN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxslI1nvO3Q7ZU4evV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz9GZi5gcKUUY7kvuZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxmo4ZqXsxDZL-vn414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugykimhl874RMcy6aOp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxZBia6ojYYKc9_Tmt4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxztAjw1G5UxnteRlV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxNLh7ASyRH7ywQtXR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxxai-ozeJcpe4dLMh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
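The raw response above is a JSON array with one object per comment, keyed by `id` and the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and queried by comment ID, assuming only that the model returns a well-formed array of this shape (`lookup_coding` and the trimmed `raw_response` sample below are illustrative, not part of the tool):

```python
import json
from typing import Optional

# A trimmed sample of a raw model response: a JSON array of
# per-comment codings across the four dimensions.
raw_response = """
[
  {"id": "ytc_Ugym9ohMefdr3NBkIq94AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxNLh7ASyRH7ywQtXR4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "liability", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str) -> Optional[dict]:
    """Parse the raw model output and return the coding for one comment ID.

    Returns None if the output is not valid JSON or the ID is absent.
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed model output
    return next((row for row in rows if row.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgxNLh7ASyRH7ywQtXR4AaABAg")
# coding["policy"] is "liability", matching the Coding Result table above
```

Keeping the parse step defensive matters here: a model can return truncated or non-JSON text, and returning `None` lets the caller distinguish "not coded" from a crash.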