Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
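For programmatic access, the same lookup can be reproduced from an exported coding file. A minimal sketch, assuming the coded results are stored as a JSON array of objects keyed by `id`, shaped like the raw LLM response shown at the bottom of this page (the file name `coded_comments.json` and the helper name are assumptions for illustration, not the tool's API):

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` holds a JSON array of objects like:
    {"id": "ytc_...", "responsibility": "...", "reasoning": "...",
     "policy": "...", "emotion": "..."}
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    # Index by comment ID; cache this dict if doing many lookups.
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)

print(lookup_comment("ytc_Ugy6HE2TbDx8QWWGkdx4AaABAg"))
```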
Random samples — click to inspect:

- "Zamn~ Did we really just get to the point where we are taking inspiration from a…" (ytc_Ugza6cTHa…)
- "There should be a subreddit dedicated to making really bad art and labeling it s…" (ytc_UgwwBgmeb…)
- "This is AI predictive crime monitoring. Out of 100 people only 13 will commit a …" (ytc_Ugx6j6hYh…)
- "How can they state there's less injury to humans with self driving vehicles? Tha…" (ytc_Ugzar0s8I…)
- "The problem with government public school is that so much time is wasted between…" (ytc_Ugwdxhnu-…)
- "The value of chat GPT's responses nearly equates to value of your prompt. This…" (ytc_UgzukbX2U…)
- "1:17:16 1:18 ty 1:17:35 1:17:40 1:17:56 1:18:01 1:18:09 1:18:16 1:18:21 …" (ytc_UgxlfB3eU…)
- "Maybe they should pour a Tesla robot in the driver's seat as a backup. 🤔…" (ytc_UgxEHvDJM…)
Comment

> arguments on grave danger of ASI look as rock solid as they get. They are obvious and unavoidable, especially if we'll consider two things - AI is trained on human data at the beginning, and we all know all too well what humans are. Number two - ASI will be, to call it correctly, an Alien Form of Life which is much more cognitively capable than we are and it's based on totally different foundation compared with biological life. Alignment is completely impossible all by itself, but in addition we need to remember that due to different nature of ASI, it's internal native system of values and priorities will be completely different from the biological life forms, most likely opposite. It's clear what it will lead to I guess.

youtube · 2026-04-17T23:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy6HE2TbDx8QWWGkdx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},{"id":"ytc_Ugyd-BS-5fkIKd-vO894AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgzS-A-3YLunNd1Fb_54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},{"id":"ytc_UgzzO5yKhZJOfQq5V5V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugzu7M8ticLFUXRUWtB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgwPTddvtcxT3V2qZpl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgzVSBTKlvhXdy8IChR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},{"id":"ytc_UgwO97KdPSH2YaXzJLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_UgyK5z0rsH6Jy2wRe454AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgzmMCjRRvZ4Nx_JweZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"})