Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_UgwrB2QgG…`: "I thank god I’m at the age I can retire and was lucky enough to have a house,pen…"
- `ytc_UgxV1EWPd…`: "There's one thing about most comments here, people are clearly not that bright, …"
- `ytc_Ugxh2LXAy…`: "That’s the kind of crap this world does not need because one day they are going …"
- `ytc_UgwdKano-…`: "Well if people dont have job how will they survive? Okay universal basic income …"
- `ytc_Ugx6CzBUL…`: "I don't agree so much with her. AI helped a lot of people to improve their lives…"
- `ytc_UgwbVVhJ1…`: "It’d great to have sources here. (Doco makers - that's what title cards are for!…"
- `ytc_UgznezxMs…`: "I hope AI opens our minds to the illusions of life, such as wealth and money…"
- `ytc_UgxREwDvf…`: "I can never understand why ChatGPT doesn't force the AI to look shit up at the v…"
Comment (youtube, 2025-03-09T17:1…):

> @8:38 "the Gentlemen said "an AI Can do it in 1/10 the time". That isn't True From My Perspective. I run Local LLM's and have tied those LLM's to Various other functions. I can believe the video Processing taking time, but depending on the Model and Hardware It could be Thousands of times Faster Than a Human if slightly less accurate, Providing key preliminary Data with more specialized specifics Coming in on More Thorough Models Would Enable the fastest scenario for obvious key targets. The Speed in relation to Human time really All Depends on How Many Parameters, How Big Are the Weights, What is the Hardware. You can Run an 8Billion Parameter Model on a Cell Phone, and It completes the tasks ok but you put an 8Bmodel in a server farm and Ya it's More like Thousands Depending on The Specific Task at Hand. Go Play with Ollama or huggingface and AnythingLLM for several Hours and you will know I am Right.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyXLA5n1o2I4QGlV2p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxgVEGQO_lqfSby1mZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz--PtGEx_E8bp_vId4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzGHgbQYR6aOeILiLt4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugz9dfz2Mg5pRctBTCx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyoaCeHBkfvo4M4bbd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwVfGnRoFjdnDAVnbZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwf16lFrpLG5KuS52p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgykRESyJNTMaaYGkB94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxGAedXF_CZH7WS_FF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "unclear"}
]
```
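A raw response like the one above is only usable if every record carries labels from the coding scheme's closed vocabularies. Below is a minimal validation sketch in Python. The label sets are inferred solely from the values visible in this page's output (the full scheme may include labels not shown here), and the function name `validate_batch` is illustrative, not part of the tool:

```python
import json

# Label sets inferred from the visible coded records above;
# the actual coding scheme may define additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "ai_itself", "government"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "unclear", "liability"},
    "emotion": {"approval", "fear", "indifference", "resignation",
                "mixed", "outrage", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag any out-of-vocabulary labels."""
    records = json.loads(raw)
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append({"id": rec.get("id"),
                               "dimension": dim,
                               "value": rec.get(dim)})
    return errors

# A well-formed record passes cleanly (hypothetical ID):
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"approval"}]')
print(validate_batch(raw))  # → []
```

Flagging rather than raising keeps the whole batch inspectable: a single malformed record is reported by ID and dimension instead of aborting the run.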