Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a record by comment ID, or browse the random samples below:
- `ytc_Ugy0xo6Dv…`: "This makes no sense. If the defense brought in evidence from ChatGPT, then they …"
- `ytc_UgzUu_-nu…`: "I liked generating funny stuff like Obama winning a hotdog eating contest not tr…"
- `ytr_UgxhWpHor…`: "@barbielife5154Use AI as a learning tool, using it to do your work will only hur…"
- `ytr_UgxZAgj9B…`: "I appreciate your perspective! In the video, Sophia highlights the balance betwe…"
- `ytc_Ugw84yFp-…`: "Not that this wouldn't have it's own problems... But if we do reach the point of…"
- `ytr_UgzdIXehR…`: "@crazydave214 And it's not just a trend either, artists have been against genera…"
- `ytc_Ugzwi_RhG…`: "For thr last two years ive been making a table top rpg game, for two years ive b…"
- `ytr_UgyyvVYnj…`: "@gypsysoulcreationsshadowsp849 I’m well aware of low income health insurance rea…"
Comment
Yudkowsky is simply a doomsayer, not a reliable expert on this topic. He totally misrepresents what's actually going on with these systems to make them sound like a threat (including these sensational stories about the LLM "breaking into" systems).
LLMs will never be AGI. Anyone who knows how they work knows this is true for a variety of reasons. There are alternative architectures, but they are no closer to AGI than we were 20 years ago.
I highly recommend you look into interviewing Ed Zitron for a more realistic perspective on the AI landscape.
youtube · AI Governance · 2025-10-15T11:0… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwYuhFUceLUp0DLTQl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"ytc_UgzHgKJD8ED47ov2Nld4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgyeX3fsEXHIblcijz54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},{"id":"ytc_UgyYUBC6bjY3u0o51ox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgweJJ5Wqx9AE1Cc4W94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},{"id":"ytc_UgzaWCl__DKhlDGg9Qx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},{"id":"ytc_UgzqjyJAnCWCF6AbzlZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},{"id":"ytc_UgxYL4689kpmBK9Nlyp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugx5R6RPeKobIC3_9et4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},{"id":"ytc_Ugy8M5dKr2wOu7Bq50N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}]