Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples (click any to inspect):
- ytc_Ugym4EvzC… — Wow such an awesome privilege to have your children go there!!! ❤Wish these sch…
- rdc_farteb5 — "Here's an 8 year old from Kenya who plays with rocks in his spare time. Let's l…
- ytc_UgwEPFrud… — Not a fan of AI art, but I disagree on youtubers part. If it's entertaining enou…
- ytc_UgwYIbl3z… — I could tell it was ai simply from the dead eyes and lack of life in them…
- ytc_UgzSzrc0A… — the sacrifice zones topic near 28:18 broke me. i got goosebumps. i cant believe …
- rdc_nm1k4kv — I regularly get LLMs to gather data, then write and format things to save me tim…
- ytc_UgxfMQ_ZH… — "the machine does it for you" sure, yes, and... where did the machine get all IT…
- ytc_UgzLLKTl1… — 20:42 the point that AI is an art tool would only make sense if you made the AI …
Comment
The reason I don't necessarily trust these predictions to be 100% accurate is pretty simple. These predictions suppose that humans will be convinced to allow other humans to be paid well for little to no work, and that humans will allow war to end, among other things. Nothing, not even a hyper-intelligent AI, can use logic to overcome emotionally-based beliefs and ways of thinking.
I still think AI will probably be the end of us, and I think the one thing that will _definitely_ not happen is humans banding together and agreeing to halt all progress. If China is producing one, the US will also. And if the US is, China will also. In fact, a hyper-intelligent AI doesn't even need to be in the works—if the mere possibility exists for one superpower to produce one, the other will build one also. There is, therefore, probably no chance for a mutual halt. Just look at nuclear proliferation. AI is another one of those things. The only major difference being that nukes can't think and reason and convince people that they're actually good for us, or become hyperdominant and unstoppable the way AI can and probably will.
youtube
AI Moral Status
2025-04-28T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwQTphock4pa1zG6RB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz4AmODNjEP62OF-0d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwGX0hD_OX7bST0swR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxuhdHd1QZucNNUcUN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxTbK2BNtoftu7n7g94AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzB0eF6_byPDepJWOp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzTAkxxY-VKk1aQs7B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyWXnr4089gofYWpzJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTMX9lHvk02t-LzRF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgywN301FenxoFjOeWB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
```
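The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" workflow, assuming the response is valid JSON as shown (the two rows below are copied from the batch above; the variable names are illustrative, not from the tool itself):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment,
# with the same four dimensions shown in the Coding Result table.
raw = '''[
  {"id": "ytc_UgwQTphock4pa1zG6RB4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz4AmODNjEP62OF-0d4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
]'''

codings = json.loads(raw)

# Index the batch by comment ID so a single coding can be pulled up directly.
by_id = {row["id"]: row for row in codings}

row = by_id["ytc_Ugz4AmODNjEP62OF-0d4AaABAg"]
print(row["emotion"])  # prints: resignation
```

In practice a malformed model response will raise `json.JSONDecodeError` here, which is one reason to inspect the exact output when a coding looks wrong.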