Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Robot had that kill switch on, it was gone finish him before the ref jumped in.… (ytc_Ugw_VktRa…)
- Warren Buffett would absolutely use an AI accountant. He hates spending money wh… (ytc_Ugx-6capd…)
- People who genuinely think AI is going to take over artist's jobs either dont dr… (ytc_UgzIbNWGk…)
- @parker73724damn parker what has it to do with newton and physics. Its about de… (ytr_UgyPjH8n4…)
- In the USA - there are 23.5 Million Millionaires - This is 1 out of 15 people in… (ytc_UgwY-tdlA…)
- i was just looking for a comment like this. It really sounds like it's someone c… (ytr_UgxpsH9uW…)
- @sharifastewart7316 No they have not. Tesla uses neural network to train it's c… (ytr_Ugx5VTQmO…)
- I think AI Art only works for either fun or like concept art to get ideas from, … (ytc_UgwzkGNke…)
Comment
48:11 “Predicting data that humans created often requires you to be smarter than the humans that created it” seems either disingenuous or wrong to me, given that we are asking of an AI that it produces an output that fits a range of responses that we seem as acceptable, performed in such a way where it specifically doesn’t need to understand any component parts or anything at all, it simply has to conjure something that sounds reasonable. Extrapolating theories from a set of data is difficult, but popular word selection from aggregates of other extrapolation is bullshitting, not extrapolating. Creating a prediction in this way doesn’t require that one understands anything at all, I’m reminded of the “Chinese translation box” thought experiment, where someone has a set of symbols and instructions to give certain symbols out of the box when they receive other symbols into it- in this way the box could seem to speak a language even when the thinker in the box knows nothing about it.
My point is- it’s not a sign of intelligence that it is “predicting human data” because that could be done without any of the processes that we would ascribe to intelligence or being smart.
youtube
AI Moral Status
2025-11-17T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzygqCafbRLsp9Xr194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugywgk6du9hbvl99LO94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyl3QgrWOTtl6hKe3R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzPh9ySYWWVptvVjrF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyyxM9y89cm6W4WC954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy4E7InsIdi_3w7hNB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwyn9yX1AMEJtOc7114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy9JSmCZTyTbp2N4NZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyfrNfhl5S1I770on14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwmRXUeGPtQkWYsN-p4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
```
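A minimal sketch of how a downstream script might parse and sanity-check a raw response like the one above. The allowed values are inferred from this single sample, not from the project's actual code book, so the vocabularies below are assumptions:

```python
import json

# Controlled vocabularies inferred from the sample response above;
# the real code book may define additional values (assumption).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "mixed", "indifference", "approval"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coded records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop records missing a comment ID
        # keep the record only if every dimension has an allowed value
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
print(len(validate_codes(raw)))  # 1
```

Dropping malformed records silently is one design choice; a stricter pipeline might instead log the comment ID and re-prompt the model for just the failed items.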