Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- `ytr_UgyR8PEa5…` : "The problem is chat models can do that now. By necessity any language model (or …"
- `ytc_Ugzxd4Tnw…` : "I bet if I put a 🔫 to my head about this, these "AI Gods" would tell me to pull …"
- `ytc_Ugx3lu5QB…` : ""AI" is just data and software, you all need to settle down. It will NOT replace…"
- `ytc_UgzI9lD2u…` : "Hi Where AI Preparation Meets Opportunity, great content you got here. It should…"
- `rdc_dftwchz` : "the actual quote from the article is: > "Google is 5,000 times better than Ub…"
- `ytc_UgzLs3s_r…` : "Roko Basilisk when you are tired of watching monkeys predicting apocalypse come…"
- `ytr_Ugw317Hmq…` : "just for fun? Yeah no. AI Companies don't just do it for just the fun.…"
- `rdc_ks6xnni` : "The real headline is that AI will come for all jobs sooner rather than later, ex…"
Comment
I've watched a bunch of technical videos on current AI capability and a bunch of pants on head fearmongers who got recommended by youtube's intelligent algorithm, but this was by far the best one.
Thanks. I hate it.
(P.S. Yes, ChatGPT is cool and has the potential to be dangerous. No, Transformer machine learning models will never be sentient or even very good at high level, self directed task completion. But who knows what the next innovation will bring?
If it's any consolation, it took 25 years for the linear transformer architecture to go from its first iteration in a paper in 1992 to real world implementations. Most neural network breakthroughs that built today's technologies were described between 1990 and 1997. The advances didn't all happen recently. The hardware just finally caught up.
If the paper that makes AGI possible was written in 2023, we might not see it put to use until 2050 or later, so we should have time to plan our reaction)
youtube · AI Moral Status · 2023-08-20T21:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxB_VBfk9hE0W4dpC14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwaF5PK6DVT76ftJiR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyNt7yQSA05Q33jiMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw6xOWQvHU9u0Dpcwp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyD14-Y1YwgNUpjqjJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx3EpNZzsO5376PYJp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyYvEp8fw-ESOvdyqN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwWfIGIQDkhirPel1V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxf8upu2hZ8wWhs2Nd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzMMz8efRYWn_vJ7GV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
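The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a batch might be parsed into a lookup by comment ID, with a sanity check on the code values (the allowed values below are inferred only from the codes visible on this page; the real codebook may include more categories):

```python
import json

# Allowed values per dimension, inferred from the codes shown above
# (assumption: the actual codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "resignation", "fear", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: {dimension: value}}."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.pop("id")  # remaining keys are the coded dimensions
        for dim, value in rec.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = rec
    return coded

# Hypothetical single-record batch for illustration:
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_batch(raw)["ytc_x"]["emotion"])  # indifference
```

Validating against a fixed value set catches the common failure mode where the model invents an off-codebook label, so bad codes surface at ingest rather than at analysis time.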