Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It highlights a very real problem with AI, especially as it is used more and more in professional fields. The problem is compounded when the professional user is not well trained in the core fundamentals of their supposed field of expertise, be it engineering, clinical work, or any other field. AI seems, in most cases, to give the answer it thinks you want. If you don't challenge it and word your questions concisely, you will get false information that appears knowledgeable. Basically: put shit in, get shit out. This is often seen with students who don't understand their subject matter well enough, because they have not studied it and grasped it to the level they should, and who use AI to create their assignments. They are basically putting shit in, getting shit out, and trying to pass it off as work worthy of a high distinction.
Even with a well-worded question, AI can and will try to give you utter rubbish, and if you don't know better, that gets passed off as fact.
youtube
AI Harm Incident
2026-03-20T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzmeQre6h9xa2MQZCt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwJURSNxAuiUNhoNE94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwmKyOi0JffmvQQkzp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxSU8HMa3LRiI3QHVZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3hPS2zM1T1pYwP9t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyEGlL-BY7JaJXLV-J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwWd9s_zh1Y_d40CcF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx67vqJmjeYpyGHgMV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwGE_0NVlfL4opsTKZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxzZWKp4nQMa1wOtOV4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
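The lookup shown above — matching a comment ID against the raw model output to recover its coded dimensions — can be sketched as follows. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON above; the helper function itself is a hypothetical illustration, not the tool's actual implementation.

```python
import json

# Coding dimensions, as they appear in the raw responses above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw_response: str, comment_id: str):
    """Parse a raw LLM response (a JSON array of coded comments) and
    return the coding dict for one comment ID, or None if absent or
    the model emitted malformed JSON."""
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        return None
    for row in rows:
        if row.get("id") == comment_id:
            # Fall back to "unclear" for any dimension the model omitted.
            return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return None

raw = ('[{"id":"ytc_UgwJURSNxAuiUNhoNE94AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
print(lookup_coding(raw, "ytc_UgwJURSNxAuiUNhoNE94AaABAg"))
# → {'responsibility': 'user', 'reasoning': 'consequentialist',
#    'policy': 'liability', 'emotion': 'fear'}
```

Guarding against malformed JSON matters here: the response is free-form model output, so a parse failure should surface as "no coding found" rather than crash the inspector.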