Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews with comment IDs):

- `ytr_UgxwMHgi_…` — "Good for you :) at least this whole AI generated images thing can get some peopl…"
- `ytc_UgzEY6aPl…` — "The sad ending would be having poor Ai performance in many fields to lower the c…"
- `ytc_UgxXnmTk3…` — "the pro ai crowd being mad is so funny to me. idk maybe it's because i don't wor…"
- `ytc_UgyeCCcxR…` — "The discussion on AI job losses is indeed alarming, especially with the predicti…"
- `rdc_mdjn2zl` — "You do have to define these fundamental building blocks of existence, since most…"
- `ytc_UgzcUBn0y…` — "You wanna know the irony of it all … every one complains about Ai, data centers,…"
- `ytr_UgxVw7keu…` — "Generative AI isn't a tool. It's like a vending machine. You press some buttons,…"
- `ytc_UgyAFZ4iX…` — "nah worst. there will be made with ai companies and or review by ai but that's i…"
Comment
A.I. may never be conscious like we are... BUT CAN derive rules through observation that it gives an estimate of certain facts being correct. Already AI scans the internet and from that analysis derives a likelihood of some statement being factually correct. Only humans think in terms of true or false because our brains like absolutes yes or no , true or false, green or red... the real world is more complex and the best we can do in reality is a likelihood. e.g. Newtonian mechanics were "absolutely true" ... except they are not. As Einstein's laws of relativity take over in extreme cases, and we could say Eisenstein's laws are absolutely true... except where they come in conflict with quantum theory... and no that isn't resolved... there is no one theory that accounts for / combines both.
Isn't statistical probability better than rules anyway?
We humans derive rules not just for fun but to helps us solve problems and predict events given inputs. I can see AI deriving similar 'rules' though observations. However I believe AI can take it a step further predicting outcomes through detailed statistical probability which is something too complex for the human mind - most human brains can't juggle thousands of comparable facts in dozens of areas... We can write programs to do it, but it is not an innate capability of the limited neurons of our brain.
To put it another silly way we humans accept 2+2 = 4 it is a solid rule. And have difficulty with the idea that is not ALWAYS the case. Whereas AI has less difficulty thinking 2+2=4 99.9% of the time... but say in base 3 2+2 = 11 (1*3 + 1). AI will have a better ability to understand BECAUSE it isn't limited to working in strict rules (although I believe it can use them, AND EVEN DERIVE THEM).
Source: youtube · Video: AI Moral Status · 2025-07-30T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgysCaw2IlAL8GU5iT94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxkaTkbIhgimwwFXHl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzT6fElH3xvnRyUGxF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyrPkMWU5cMqAqQe8N4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxAJEhKtwHzWI4HpvF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwOTJc772mlExwRbvh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxhG8lqQ-EGq3-x8wV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw8s4M98z7e2dcdwe54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwZY6cb-y_jb-EpAo54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwBQp3XCuITb3c9X194AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
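The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a response could be parsed and validated is below; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown, but the sets of allowed values are only inferred from the examples on this page — the real coding scheme may include more categories, and `ytc_example` is a hypothetical ID.

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Assumption: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear"},
    "emotion": {"indifference", "approval", "outrage", "fear"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response into a list of validated coding rows."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} value {row.get(dim)!r}")
    return rows

# Hypothetical single-row response, mirroring the shape above.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
rows = parse_raw_response(raw)
print(rows[0]["reasoning"])  # consequentialist
```

Validating against a closed value set catches the common failure mode where the model invents an off-schema label, which would otherwise silently corrupt the coded table.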