Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgyG6CCyv…` — "ChatGPT seems to agree, encourage or go with whatever emotions you speak or type…"
- `ytr_Ugg3KxPrj…` — "+snip snap The A.I. will take away our rights just in time to save us from ourse…"
- `ytc_Ugw96mwOy…` — "It's one thing that these goons think that they're making art (they aren't) it's…"
- `ytr_UgzTNMuHg…` — "It's all kinda reminded me of the whole home automation thing....lots of initial…"
- `ytc_UgzhG93ZX…` — "I wonder if these neural networks have been fed data/clips showing vehicle accid…"
- `rdc_mz9pzew` — "i don't think this is the case at all. like you said, llms are statistical. if t…"
- `ytc_UgzHfh4ql…` — "Her answers are human. I hope Kevin doesn't have a rabbit for a pet. Oh no/know …"
- `ytc_UgxxowIvZ…` — "I think we talk to industry leaders and issue out aid to the ones that are earl…"
Comment
For the life of me I can not figure out why we keep building robots that can do backflips of boxes, AI videos of fake people or even sometimes REAL people doing fake things. We have absolutely no purpose for these things, but we just...keep...designing and building them. One day I just know AI is going to take over and turn on us, and while we're huddled in some basement or bunker, hiding from the machines, everyone will wonder, "Why didn't we just stop?" We joke and meme about the movie "The Terminator" all the time, but I don't think people understand how much we're headed in that direction.
Source: youtube · Topic: AI Responsibility · Posted: 2025-07-24T13:1… · ♥ 64
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_Ugy5sXR2mq8jrvSRtx94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxi0HPLxlpZ8CTl5qt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwfGWezv_OhsIXbB014AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzxJ0imYHjFPMHt1Ut4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzBI09TFCVZXz-scVR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxvfKueAo6Z_800Bht4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxfUyWjrKMHzalsJ7V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxM_ufXHoGoszAQus14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyUmOJXDKD-m5sVZON4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwxFNdiO9psetCxahd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
```
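The raw response is a JSON array with one record per coded comment. A minimal sketch of how such a batch might be parsed and validated before use (the allowed-value sets below are inferred from the samples on this page, not an official codebook, and the function name is hypothetical):

```python
import json

# Allowed values per coding dimension, inferred from the examples shown
# above (assumption -- the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "ban", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records missing a comment ID
        # keep the record only if every dimension has a recognised value
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_batch(raw)[0]["policy"])  # regulate
```

Dropping malformed records rather than raising keeps a single bad line from discarding the whole batch; the skipped IDs could instead be queued for re-coding.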