Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I asked ChatGPT why humans survived to 2080. What happened in 2045? Here’s what …
rdc_jfa9b0e
@Xenono54 They would just end up saying that it's not the same because nothing i…
ytr_UgypdbQSD…
He lied and told chat gpt not to say what he lied abt out loud Ask this specific…
ytr_UgyP8MgSF…
Let’s keep building AI that could take away a mass number of jobs and pose a gre…
ytc_UgyzLKCSm…
From experience, I can definitely say you should not ever trust an AI to answer …
ytc_UgyePHui_…
Charlie nailing those 15 prompts was impressive, just like AICarma helps brands …
ytc_Ugw1OnZ0r…
ChatGPT is not the best source of honesty, in fact it comes across as being full…
ytc_UgzEYfsEl…
its pretty obvious looking at evolution at one point ai will become consious and…
ytc_UgxsT-gAk…
Comment
The problem here is that Weinstein is about 1000 times more intelligent than all the other gents combined. This intelligence allows him to, among many other things, conceive of the potential for harm - even as the clowns at the table fantasize about and LONG FOR a star trek/star wars type of future existence. My idiot brother is like this - so enamored is he with self driving cars, smart houses, teleportation, space travel, etc - that he can barely think straight.
youtube
2025-10-27T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugye9uv9XhcdW9El2sJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxM69zYO2Vm1ClmzQd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxh6IOO2jVJLGoPElh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwT7bNsAGAbJrnUisJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw2EaKWukQx6AFQIsZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx-S6rFd3YA-KkkJb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzLjtmY-DaG-Hi5ycJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzA4ZTXX2ZwAEk6K_h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxruuWnhzyBW5nDJp54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxrxW5Y7MuvOmMrhNB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
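Since each raw response is a JSON array of per-comment codes, the lookup-by-comment-ID behavior can be sketched in a few lines of Python. This is a minimal illustration using two entries copied from the response above; `lookup` is a hypothetical helper, not the tool's actual code.

```python
import json

# Two rows copied verbatim from the raw LLM response above (abridged)
raw = """
[
  {"id": "ytc_UgwT7bNsAGAbJrnUisJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzA4ZTXX2ZwAEk6K_h4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]
"""

# Index the coded rows by comment ID for constant-time lookup
by_id = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if absent."""
    return by_id.get(comment_id)

print(lookup("ytc_UgwT7bNsAGAbJrnUisJ4AaABAg")["policy"])  # -> ban
```

The same index could back the "Coding Result" table shown above, with each dimension (responsibility, reasoning, policy, emotion) read straight from the matched row.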