Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "You're telling me they're running an automated line with a robotic vision system…" (ytc_UgyW18xJo…)
- "@MS6InvaderCommander AI is progressing faster than even the experts could have e…" (ytr_UgxUfb1EH…)
- "Skill is a facade, it can give you a few centimetres of ground but that's it, if…" (ytc_UgxnxCQ-_…)
- "AI won't take all our jobs…. If you've ever used an AI chatbot on a website or c…" (ytc_UgzSkv-y8…)
- "Self-driving EVs can be densely funneled on existing roads and also park themsel…" (ytc_Ugx9VgX31…)
- "Just randomly scream or go with another different ai voice ( their voice is reco…" (ytc_UgyqDqBy2…)
- "That's exactly what I think. People only look to the edge of the plate and not b…" (ytc_UgwrjdkaU…)
- "Unlike this reporter Elon will change if he is wrong, Elon was wrong about Manuf…" (ytc_Ugx7GpJfr…)
Comment
What we call "AI" today is basically just large statistical models that predict the next word based on context. No secret plans, no consciousness, no "Machiavellian" plotting. It’s kind of like saying a calculator is planning a revolution just because it can multiply numbers. LLMs are powerful at generating text, but it’s still just… predicted text, not intention.
There are no credible reports of an LLM actually blackmailing anyone. What circulates online is usually media hype, misunderstanding, or confusion with actual malware written by humans.
youtube · AI Harm Incident · 2025-09-01T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzTlHVp6Q1BsgGRy-B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwmLWg9YPbGOO7Gh7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz4RLdbZZZvm8RFfvN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxQeA4pGo_PtPElS-V4AaABAg","responsibility":"intellectual","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwaVPCGlxZnuvwdE6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz4X-1N4XIk-JYCSQ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwCYafao9N1i7qyhQ94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugz92bairmfuiRE9NZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz31T1cUq1ePVO9Avh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwcY8__jhFEoOW1x9F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
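The raw response is a JSON array with one record per comment, keyed by the same dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such an output could be parsed, validated, and indexed for lookup by comment ID — the function name `index_codes` is an illustration, not part of the tool, and the inline data is an abridged copy of the first two records above:

```python
import json

# Abridged model output, in the same shape as the raw response above.
raw_response = """
[
  {"id":"ytc_UgzTlHVp6Q1BsgGRy-B4AaABAg","responsibility":"developer",
   "reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwmLWg9YPbGOO7Gh7F4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

# The four coded dimensions plus the comment ID, per the table above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw: str) -> dict:
    """Parse the model output and index each record by its comment ID.

    Raises ValueError if any record is missing an expected dimension,
    which catches truncated or malformed model output early.
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record missing keys: {sorted(missing)}")
        by_id[rec["id"]] = rec
    return by_id

codes = index_codes(raw_response)
print(codes["ytc_UgwmLWg9YPbGOO7Gh7F4AaABAg"]["emotion"])  # -> indifference
```

Indexing by ID is what makes the "Look up by comment ID" view above cheap: one parse of the raw response, then constant-time retrieval of any coded comment.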