Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzBqz1x7…: I can’t remember off the top of my head who said it but to paraphrase, “I want A…
- ytc_Ugy87xk7t…: I think my favourite part of this is the implication that chatGPT's army will be…
- ytc_Ugyvqmy1J…: Ai doesn't even exist. There is NO INTELLIGENCE just a program that isn't even a…
- rdc_ls5rhh5: It also raises the question on why is everyone so shatteringly terrified to the …
- ytc_UgyumQES1…: Nah. That’s what an algorithm comes up with based on art, movies, writing etc of…
- ytc_UgzSLepfa…: A simple 1st principle for ai should be: “Do no harm and let no harm be done”…
- rdc_oi285ds: The Alexion Patient Insights Forum is a vital "check and balance" for 2026. Whil…
- ytc_Ugzt16WRm…: I wonder how far the creator of AI thought about all those people who'll lose th…
Comment
He lost all credibility when he started talking about living in a simulation. If he wants to improve AI security he needs to be credible. I’d shave off that beard too.
youtube · AI Governance · 2025-10-04T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwVv86zBUBPj2N__HN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaYV-52c9FtlXHESV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy8KYA4e0d3F4Xh8014AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxnT7yjiBYdtLmXF5V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwuNH22lQYCIWDiDIx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzfN-Aj2OOgzSjKZLd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzJTG49LFD9C0vtA9h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzc0PTN6ChVsp6FFnB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw24im24lmOEMxSMsB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxhhZXti42riS0PZLx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
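The raw response above is a JSON array with one object per coded comment, keyed by the four dimensions shown in the coding-result table. A minimal sketch of how such output might be parsed and validated before it is loaded into the table is below; the field names come from the response itself, but the allowed-value sets are assumptions inferred only from the codes visible on this page, and `parse_codes` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed codes per dimension, inferred from the values visible above.
# A real codebook may define more (or different) categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "user", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "approval", "outrage", "fear", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only rows whose values
    are in the allowed vocabulary for every dimension."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if "id" in row
        and all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: one valid row passes, one out-of-vocabulary row is dropped.
raw = (
    '[{"id":"a","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"outrage"},'
    '{"id":"b","responsibility":"alien","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"}]'
)
print([row["id"] for row in parse_codes(raw)])  # ['a']
```

Validating against a fixed vocabulary like this catches the most common failure mode of LLM coders: a plausible-looking label that is not actually in the codebook.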