Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgwmrJWdA…`: "As long as humans are beneficial or at least commensal, AI will pursue a symbiot…"
- `ytc_UgzsztxId…`: "I don't think people need to worry about AI long term. We already getting bored …"
- `ytc_UgxaANFhG…`: "YES, it's crazy. I tried calling Amazon AWS because MFA wasn't saved - locked ou…"
- `ytc_UgxbapsLN…`: "The governments agenda and purpose for the one world order is to put the chips i…"
- `ytr_UgyvU2_Ih…`: "@johnmadlabs this is the same argument as what people using AI to create art are…"
- `ytc_UgxypNv83…`: "Man, seeing some ai pictures feel like I'm looking at a cognitohazard. Especiall…"
- `ytr_Ugxmdp33p…`: "Same. Who do I trust more Neil the optimist or the actual AI people building it.…"
- `ytc_UgyVJFOXc…`: "The lawsuit in America needs to change. The fam most likely going to win since C…"
Comment
I love your videos but also I think this is a W AI take. Obviously AI has a ton of issues we're still working through but it's not going anywhere despite the pushback. There are tons of cases in recent news of people doing dumb things because of AI but when you read about what they did usually the gut response is "really?". I think we need safeguards but at the end of the day I don't think AI is all that different than say a self help guru or a nutrition TikToker spurting out poorly sourced information. In today's digital age everyone has a duty to themselves and others to be vigilant and be skeptical of the information they consume. But people act like AI is the first of its kind to give people poor information. People were doing that long before AI 😂
youtube · AI Harm Incident · 2025-11-27T15:5… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxAEKpl_fOcnZAmyOZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugw5XHJ-dWqHlBEvRUh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw7UMTb-17CmZRx0854AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyuZpRHxs6PyR5e0AR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz-tNP3vKGmn0KIBVN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwoMuH04IcafBQpuTd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxPCtgCsDCgOWr3oQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwMooJeCdmKO7bVrS54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyIhjJtWWYeZf3z0UB4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzpeOSP2PXzW4GxZUl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}
]
```
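As a minimal sketch, the lookup-by-ID view above can be reproduced from a raw response in a few lines of Python. The two rows embedded here are copied from the raw LLM response in this section; `index_codings` is an illustrative helper, not part of the tool itself.

```python
import json

# Two codings copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id":"ytc_UgxAEKpl_fOcnZAmyOZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugw5XHJ-dWqHlBEvRUh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
"""

def index_codings(payload: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects)
    and index each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(payload)}

codings = index_codings(raw_response)
# Each dimension in the "Coding Result" table is then a key lookup:
print(codings["ytc_UgxAEKpl_fOcnZAmyOZ4AaABAg"]["policy"])  # regulate
```

The same indexing supports the "look up by comment ID" behavior described at the top of this section: one parse, then constant-time retrieval per coded comment.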