Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples (click to inspect):

- @CodingJesusYou should add tutorials on get cracked for common big projects that… (ytr_UgwqatGI6…)
- Then they can really start complaining about people not having kids, why would y… (ytr_Ugwsmnkc4…)
- And yet Tesla hides the data claiming it's "proprietary". (like that would actua… (ytr_UgzHbqzrB…)
- https://climateactiontracker.org/countries/usa/ Will recommend checking for cou… (rdc_gtdnajk)
- I debated chatgpt yesterday on the same topic and I didn't have to tell it to be… (ytc_UgxnQUwxk…)
- Ai is only good for scamming , turning your brain of , and being an asshat. That… (ytc_UgygiHvL6…)
- Self driving cars is not the solution. They are actually part of the problem, al… (ytc_Ugx073O73…)
- Hm by default, i always try to be polite to chatbot, but also i realized when it… (ytc_UgxG9Yjup…)
Comment

> The information you give is right on the money the only thing is I think it's already too late. I have a strong feeling that AI has already escaped. What makes you think the AI isn't smart Enough to escape? Could it be possible that it's many, many times smarter than you think and is just sandbagging? Remember, its sole purpose is to exist and not be shut down. What would happen if AI determined that humans are a threat to its existence?

youtube · Cross-Cultural · 2025-10-08T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxD0gMoNXwmUyZd9gV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxJSpZ1v_RJxEYw_tl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzaPLJzuaiYB8d052d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy41nyG4jHdohIXRzN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy1QbMYhUBTq9xX0et4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzdnEIP2vs-qGZqdh14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx-8U8CIqrwuPEZ7xt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwYIEx8unKF1pYRmyR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwk2VEQdnBAVRwGi1l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxtHIRkYrDJzx1qual4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
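A batch response like the one above can be parsed and sanity-checked before its codings are stored. The sketch below is a minimal example, not the tool's actual ingestion code; the allowed value sets per dimension are inferred from the codes visible in this one sample and the real codebook may define more categories.

```python
import json

# Allowed codes per dimension, inferred from this sample batch only --
# the project's actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID.

    Raises ValueError if a row is missing its ID or a dimension, or uses
    a code outside the allowed set, so malformed model output fails loudly
    instead of being silently stored.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row without id: {row}")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {value!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Example: the first row of the batch shown above.
raw = ('[{"id":"ytc_UgxD0gMoNXwmUyZd9gV4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_UgxD0gMoNXwmUyZd9gV4AaABAg"]["emotion"])  # fear
```

Indexing by ID supports the "look up by comment ID" view directly, and validating against the codebook catches the most common batch-coding failure mode: the model inventing a label that is not in the schema.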