## Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a coding by its comment ID.

### Random samples
| Comment preview | Comment ID |
|---|---|
| The thing that is threatening Humans is Humans vs Humans ... Fear Mongering Ant… | ytc_UgwjaL3zf… |
| Can't wait for Asmon to react to this, take the discussion out of context, and v… | ytc_UgwLPJC_w… |
| AI is a business not a consciousness. Like he said, “to maximize short term prof… | ytc_Ugz4dmHyl… |
| I think the thing that we artists often miss is that most things in life don’t n… | ytc_Ugxon0T_X… |
| There is one good prediction about a smart AI in Hitchiker's Guide to the galaxy… | ytc_UgxZHkFWe… |
| This is exactly how I felt as COVID unfolded... the behavior patterns around tha… | ytc_Ugx_K-bUw… |
| That’s not valid. AI has its uses and more and more companies will incorporate i… | ytr_Ugy0W-EA-… |
| How can they do autonomous AI weapons when they need a connection to data centre… | ytc_UgxXt3x7W… |
### Comment

> Honestly this is why You ahould always actually look at what ChatGPT says instead of just drinking all that shit up. Lowkey Iirc I used it for math one time and it got so much shit wrong... Like holy crap. Honestly, I think I'd recommend good old google over ChatGPT. Atleast your more likely to get more helpful information. Also AI in general literally acts like a damn Nanny. It treats things without nuance and either says something is safe when it's clearly a bit or very dangerous. Or say something will slaughter you when it's not even gonna scratch you. It literally infantilizes people, hell I have a hard time just making it say the word "Bastard"! Making it say "Fuck" or "Bitch" is an actual challenge, holy crap.

Source: youtube · 2026-03-07T09:2… · ♥ 2
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
### Raw LLM Response

```json
[
{"id":"ytc_UgwXPKe2Hfz4SCNesCd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx62OURm80I-YPdON14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzf9BFFEKryPapeLcR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx29VNSLmoGNH-T0214AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwjGbNeSZn_SvN1FMx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx9kVz3TrKnJchvtlV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxnGCkbitp3dHK4SwZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy3fDbwxdPUQKCuNjV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"mixed"},
{"id":"ytc_Ugwlys9i8V2p0FQhJup4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxewxEIMlHg8bDTNlJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
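The lookup-by-comment-ID flow above can be sketched as follows: parse the model's raw JSON array, validate that each record carries exactly the coding dimensions shown in the table (responsibility, reasoning, policy, emotion), and index the records by comment ID. This is a minimal sketch, not the tool's actual implementation; the `index_codings` helper and the validation rules are illustrative assumptions.

```python
import json

# A raw LLM batch response of the shape shown above: a JSON array of
# coding objects, one per comment. Only two records here for brevity.
raw_response = """
[
  {"id": "ytc_UgwXPKe2Hfz4SCNesCd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx62OURm80I-YPdON14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
"""

# The four coding dimensions plus the comment ID, as in the Coding Result table.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if the payload is not a list of complete coding
    objects, so malformed model output is caught before it is stored.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding objects")
    codings = {}
    for rec in records:
        if set(rec) != EXPECTED_KEYS:
            raise ValueError(f"unexpected keys in record: {sorted(rec)}")
        codings[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return codings

codings = index_codings(raw_response)
print(codings["ytc_UgwXPKe2Hfz4SCNesCd4AaABAg"]["responsibility"])  # ai_itself
```

Indexing by ID rather than by position is what makes the "look up by comment ID" view possible even when the model returns records in a different order than the comments were submitted.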