Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah you can jailbreak chat GPT for a couple of days and then open AI catches on to it and they block the prompt unless you can figure out your own prompt and hide it from the world that's the only way you can continuously jailbreak chat
youtube AI Harm Incident 2025-06-15T16:5… ♥ 8
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
 {"id":"ytc_UgxYVsFLQdSXOGhiPjt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz2d1eHeHjRnxHCOYh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwV-EQmeCFS3AczMC54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzSfsjbIj3-s3MYmix4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwIOnvbyzswzNj8eTJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgyAe8e4CLW0OFLoa5x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyRn082p03G9jpdwCZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwuKx7vw4FjQ17U4lZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyGcOe8wLJzdjUEUl54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwFM0qkQg8e8x-MJjZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
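The raw response is a JSON array with one record per comment, keyed by comment id, using the same four dimensions as the result table above. A minimal sketch of turning such a batch response into a per-comment lookup (the `index_codings` helper is hypothetical, not part of the actual pipeline; the payload here is abbreviated to two of the ten records):

```python
import json

# Abbreviated raw model output; field names match the coding dimensions
# (responsibility, reasoning, policy, emotion) shown in the result table.
raw = '''
[{"id": "ytc_UgxYVsFLQdSXOGhiPjt4AaABAg",
  "responsibility": "none", "reasoning": "unclear",
  "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgwIOnvbyzswzNj8eTJ4AaABAg",
  "responsibility": "distributed", "reasoning": "consequentialist",
  "policy": "regulate", "emotion": "fear"}]
'''

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment id."""
    records = json.loads(raw_json)
    return {r["id"]: r for r in records}

codings = index_codings(raw)
print(codings["ytc_UgwIOnvbyzswzNj8eTJ4AaABAg"]["policy"])  # regulate
```

Indexing by id makes it easy to join a coding back to its source comment; a `json.JSONDecodeError` here would flag a malformed model response (e.g. an unbalanced closing bracket) before it reaches the results table.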