Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or browse the random samples below; a minimal lookup sketch follows the list.
- "Is it safe internet search AI shows the truth, it's safe on the first search.. t…" (ytc_UgxHVmBim…)
- "Would you consider that the robot troops depicted in the Star Wars prequels were…" (ytc_Ugxe3i3-I…)
- "I still don’t get it. Should we empathize to lamda because it’s sentient, or sho…" (ytc_Ugwu2OPZh…)
- "I find it quite frightening that people defer to computer programmers on the que…" (rdc_g0ys5vt)
- "Terminater comes to mind, skynet also is very real. This to me at least in 50 ye…" (ytc_Ugw0JVTa4…)
- "This is awesome. One of the scariest things for families is the serious PLUMMET …" (rdc_fnwyicp)
- "I always wonder what AI would feel like in absence of feelings and emotions. Tho…" (ytc_UgyIXoCre…)
- "People believe what they want to believe. AI hype is just automated programmatic…" (ytc_UgwNKxIgG…)
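The lookup itself is plain key-value retrieval. Here is a minimal sketch, assuming the coded records live in a JSON-lines file with one object per comment; the file name `codes.jsonl` and the helper names `load_codes`, `lookup`, and `random_samples` are hypothetical, not part of the tool shown here:

```python
import json
import random

def load_codes(path="codes.jsonl"):
    """Load one coded record per line into an {id: record} map."""
    codes = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            codes[record["id"]] = record
    return codes

def lookup(codes, comment_id):
    """Return the coded record for a comment ID, or None if absent."""
    return codes.get(comment_id)

def random_samples(codes, k=8, seed=None):
    """Draw k random coded comments, like the sample list above."""
    rng = random.Random(seed)
    return [codes[cid] for cid in rng.sample(sorted(codes), k)]
```

For example, `lookup(load_codes(), "rdc_g0ys5vt")` would return that comment's coded dimensions, and `random_samples(codes)` would produce a list like the one above.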
Comment
> People are too quick to blame anyone and anything for their issues and problems. The issue is the patents. This case will be thrown out. A.I. is not the reason why this boy took his life. There are clearly other issues he was dealing with, and the parents have diagnosed the issue a This chat bot. I court it will go like this. Once you knew your child had an issue, why didn't you take the device and give him a non smart device? You can password protect the internet and add parental tools to your child's devices. The attorney would ask these questions. At the end of the day, if you knew this kid had an issue that you got him counseling, why did he still have access to this chat bot? The parents need to accept accountability. The issue is not and was not a chat bot. There was something else going on in this child's life because he said he wanted to leave. He was expressing himself to the chat bot, but hoe he's been feeling. The child had mental issues far deeper than a chat bot. This case is going nowhere in court.
youtube · AI Harm Incident · 2024-11-01T19:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugz160KARdxLDSrg7iB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx4ififT9Ec30Q63CV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzgKhL6olAUoj2TJfZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwbsF8LxDFIAJhuAjF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRSQwgSeFUrHXLkW54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzjieXjK4tooUdhjuN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx5e89-WZL87OgM1SF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw5eXACZKaDOod1O_d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy4AjzPV5NxjLWWdfx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgynqwjUdohYv-q31TR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"})