Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I definitely think that ai can be interesting, just not by itself. well, it is a…" (ytc_UgzQfxq-K…)
- "Crazy part about it all it's not just there, me and my family went to a Wendy's …" (ytc_UgwFZ54HN…)
- "You humanize the AI. It always responds as prompted. If you have expectations, t…" (ytc_UgyHjWgxs…)
- "Ai is beginning to output worse art now due to the large amount of online AI art…" (ytc_Ugx8iPtvU…)
- "@ttensohn If you paste code with vulnerability without noticing, you would likel…" (ytr_UgxQdPkvH…)
- "@Bonsiscott Lensa AI is an exemple. The program pieces together images from arti…" (ytr_UgwiqeNGy…)
- "So many are asking how would this work. Even right now many of us are already ap…" (ytc_UgzyDtDdj…)
- "The key word here is 'administrative' roles. Not tech roles. The accounts payabl…" (ytc_UgwO7fX16…)
Comment
Larger models are mostly empty void solution search space. Therefore mostly a waste of compute. Focused models are more powerful and less expensive and results based. Taking a LLM or FM LLM by itself without a lot of harnessing and or fine tuning is going to give within distribution generic slop. Remember we are building large language models on tiny datasets. The internet is so small it is less than 1% of data and they are not using all of it, they are using a small percentage of that. I say 1% because data is constantly growing, it is not 5% fixed. We produce more each year than decades when it started to grow.
youtube · AI Responsibility · 2026-02-13T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyS-NM7-dras6wOkcx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz5mEQG8Xej7xAbRqN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgxHWErflSBabNRMvBF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwwKRpKDB0VAFM6lPl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwWGJsQCFl2ma5bTF94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzatYnP7Lfiozrpadh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyte8-PQGpfeBV_vyV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugx382hSG92YWzzZIQt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxZBtRnZeZ-1kNUMNp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzE6rU3eg-EFhLpXcN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
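Looking up a coded comment from a raw response like this amounts to parsing the JSON array and indexing it by `id`. A minimal sketch, assuming the response is a JSON array of objects with the field names shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name is illustrative, not from the pipeline:

```python
import json

def index_codes(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM coding response (a JSON array of per-comment
    codes) and index the records by comment ID for O(1) lookup."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Usage: one record from the raw response above.
raw = ('[{"id":"ytc_UgyS-NM7-dras6wOkcx4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
codes = index_codes(raw)
print(codes["ytc_UgyS-NM7-dras6wOkcx4AaABAg"]["emotion"])  # indifference
```

With the full ten-record array, the same dictionary answers any "look up by comment ID" query, e.g. whether a given comment was coded `responsibility: company`.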