Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Not only did this AI model try and blackmail the people threatening to shut it d…" (`ytc_UgwA37VnA…`)
- "It's impossible to have any system created by man immune to the faults of man. …" (`rdc_j50y73q`)
- "Thank you for this video. It's now my reverse uno card for every AI bro interact…" (`ytc_UgzYIdZqq…`)
- "Thank you for your comment! While the AI in the video is indeed a programmed alg…" (`ytr_UgwwqDSAU…`)
- "If AI reaches the point where it replaces SWE's not the market, everything will …" (`rdc_ktxk6ub`)
- "That ai was 'making a joke' But another ai decided it wanted to kill some peopl…" (`ytc_Ugxgozm49…`)
- "19:00 & onward - Reminds me of an Asimov story in which an amped-up robot brain …" (`ytc_Ugz1qaZhq…`)
- "I can see self driving trucks driving on the interstate - and just need truck dr…" (`ytc_Ugza1K-de…`)
Comment
This is an illustration of how *language* is an imprecise mechanism for describing ideas. English is limited in how it expresses logical expressions. Not to go outright Jordan Peterson here, but the kernel of "truth" in what he bloviates about is that English lacks precision and accuracy in exchange for narrative and variety. Tokenization of words to train an LLM is a lossy compression algorithm subject to entropy, just like reading a textbook. It is very similar to how the cerebral cortex functions. The discovery of multi-modal LLMs that effectively can tokenize all kinds of input, not just text (for example, sound waves in the case of Alex's interlocutor), and produce a natural sounding output is a discovery of the same magnitude as Bernoulli's principle, if not Special Relativity.
Source: youtube · AI Moral Status · 2024-07-26T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzwTcSDfIuhOf5aaf94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJ7IcGzEuTahTlte14AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzGk4877cDq5jHOwed4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwbz1x8plI3NLjajpd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy4HIxGbAOS0FPJqDN4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx6cKwruotD1_KAJBl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxww1BeWCSZBqSE7bN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy3iPRba-zBBKgQYWR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxGmG5SzOlGrAyvm6x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyqV_odsU8OlglTsPF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
```
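The lookup-by-comment-ID view above can be reproduced from a raw response like this one by parsing the JSON and keying each record on its `id` field. Below is a minimal sketch; the function name `index_by_comment_id`, the `EXPECTED_KEYS` check, and the truncation of the sample data to two records are illustrative choices, not part of the tool itself.

```python
import json

# Raw model output in the format captured above (shortened to two records here).
raw_response = """[
  {"id": "ytc_UgzwTcSDfIuhOf5aaf94AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJ7IcGzEuTahTlte14AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]"""

# The four coding dimensions plus the comment ID, matching the table above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID."""
    records = json.loads(raw)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgyJ7IcGzEuTahTlte14AaABAg"]["emotion"])  # fear
```

Keying on `id` makes inspection of any single coded comment an O(1) lookup, and the key check surfaces malformed model output before it reaches the dashboard.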