Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgyVgSuRp…`: "thats probably bullshit since most AI are actually leftist likie Chat gpt, for e…"
- `ytc_Ugx6ly5qk…`: "The massive, pervasive and sophisticated surveillance activity and ability to bl…"
- `ytc_Ugybxt2mm…`: "Did you see that one robot that was eyeing up the camera that was definitely hum…"
- `ytc_UgyUHNOlV…`: "Honestly having test drove multiple tesla's due to the FSD updates. The cars do …"
- `ytc_UgxWfyYy_…`: "Perfect way to explain this, very nice, i knew i could count on this artist to g…"
- `rdc_o1v62dv`: "I’ll be doing the same. I’m exporting my data and backing things up right now. …"
- `ytc_Ugz3VCthC…`: "There's a huge difference between LLMs and "traditional" AIs that have been arou…"
- `rdc_mym4rsu`: "I’m starting to think we will be able to predict with great accuracy which human…"
Comment
This is an AI problem. Because the it's still called AI even tho there's no intelligence in it, but the naming convention keeps fooling people into believing the machine is giving them "thought out" answers, and isn't there just to keep reaffirming their existing biases to keep them interacting with the chat to bump up stats-__-
Chat GPT isn't a technology that does anything new we couldn't do two years ago, and what it does, it does in a worse ways than the tools we already have available. Gen AI cultivates lack of research and curiosy, so yeah, it's at fault in this case.
youtube · AI Harm Incident · 2025-11-29T19:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz0IpmhFdE0b8rrQ-x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxxdPKuQbIng1xl8Ap4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwJ_C7GDMo5e7c60dh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgySW-5rxvSHfLDviTR4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyERgUNBlCQ3of_-1J4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzfYEOnmtv9w4YT1yB4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxy9tGrWXSP8B1HOBx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz9h8fEIlLMddmsAo54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxhKsU9Du2EEBeo6YR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzeWCe3SeOu5rxp8LN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
```