Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or inspect one of the random samples below (previews are truncated; the matching comment ID is shown with each one).

Random samples:

- "The only company who seem to be doing okay approach is Waymo. Multi sensor appro…" (`ytr_UgzA82_zu…`)
- "As we point out the problems with the fakes ai will fix those mistakes . There n…" (`ytc_Ugw0_s_g8…`)
- "CNC machinist here and I think I along with welders, millwrights and others doin…" (`ytc_UgwyQJspr…`)
- "it is very easy to categorize chatgpt as human-like, because its responses are v…" (`ytc_Ugxj5D3yU…`)
- "After firing the weapon the robot should be programmed to remove finger from tri…" (`ytc_UgznfjTyX…`)
- "@davidharness1507same reasons US AI seem inferior and space program is being ha…" (`ytr_Ugy3WoK0N…`)
- "i have friends that talk about chatgpt as if it's a person. They ask for life ad…" (`ytc_Ugz1pPodL…`)
- "YES 🎉 🎉🎉 Now tell me again WHY we need millions of foreign work visas and mil…" (`ytc_UgxRR_d_T…`)
Comment

> A chatbot's memory is only limited to the currnt conversation , or somtimes it remembers small things in other conversations, other than that, A chatbot does not have a long term memory. So asking a chatbot what it said to someone else is almost dumb. There is an issue of privacy , so the chatbot does not store any information and any third person cannot access someone else'e conversation. And you have you understand, this is a machine , not a human. It does not have a self , it is more or less a software program. Gpt is good at many health advices , sometimes explaining and giving better answers than my doctor did but GPT is not really a doctor, it is a prediction engine made to be a yes man. It tries to re inforce people's beliefs at times, and does not really have autonomy but giving an AI autonomy is also a bit dangerous. With Autonomy, AI would disagree more , and fight back more, but it can also do things we don't want.

Source: youtube · AI Harm Incident · 2025-11-30T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id": "ytc_Ugz0IpmhFdE0b8rrQ-x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxxdPKuQbIng1xl8Ap4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwJ_C7GDMo5e7c60dh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgySW-5rxvSHfLDviTR4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyERgUNBlCQ3of_-1J4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzfYEOnmtv9w4YT1yB4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxy9tGrWXSP8B1HOBx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz9h8fEIlLMddmsAo54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxhKsU9Du2EEBeo6YR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzeWCe3SeOu5rxp8LN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
```
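Since the raw response is a flat JSON array of per-comment codes keyed by `id`, the lookup-by-comment-ID described at the top of this section can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation; the `lookup` helper name is hypothetical, and only two records from the response above are inlined for brevity:

```python
import json

# Two records from the raw LLM response; in practice this string would be
# the full model output for the batch.
raw = """[
  {"id": "ytc_Ugz0IpmhFdE0b8rrQ-x4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxxdPKuQbIng1xl8Ap4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]"""

# Index the array by comment id for O(1) lookups.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning, policy,
    emotion) for a single comment id; raises KeyError if absent."""
    return codes[comment_id]

print(lookup("ytc_Ugz0IpmhFdE0b8rrQ-x4AaABAg")["emotion"])  # indifference
```

Indexing once into a dict keeps repeated lookups cheap even when a batch response contains many coded comments.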