Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- What about the sexual robot societies the prostitute robots don't destroy socie… (ytc_Ugyh9sSCw…)
- "Listen, I know this robot robs you of your income but it will enable other peop… (ytr_Ugz29FBl9…)
- @daaaaaaanny I am not defending AI and it's not as if every job in the world is … (ytr_UgzIr7quY…)
- Anyone that wasn't at least 22 in 2015 and is involved in all these new AI start… (ytc_UgxpIkMar…)
- What if youtube channels like this talking about what if scenarios if AI take ov… (ytc_UgyXAweMt…)
- HAHAHA... I asked this exact question my chatGPT and suddenly in the middle of t… (ytc_Ugw63uCuF…)
- "we have two people working to see if ai will become sentient" that is not comfo… (ytc_UgyBo5rUG…)
- I agree 100%. I think Scott thinks retraining is going to solve this problem… an… (ytr_Ugz8Vp7g2…)
Comment
ALSO: people sometimes forget a very important way that context (including context that might relate to Role-setting) gets built up: organically, through chatting. That's why the quality of the conversation improves the longer your chat goes on - at least until it grows so long that the chat starts to get truncated from the context window. So don't just start a new chat each time you have a coding question, especially with no context document and no additional information in the prompt. The LLM will simply lack the context it needs to give you an answer you would find optimal. When you first start a chat, that is the stupidest the AI will be in your discussion (sometimes the AI even seems "bored" at the beginning, until the conversation builds up valuable context). If you're bailing on each chat early to start a new one, you could be leaving a lot of the interaction's potential effectiveness on the table.
youtube
AI Jobs
2025-01-16T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyPDJ1VtdoKNh_3q8l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxIEmmfrqqT6V2D4ol4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwmRA6Baeaq23tUSJl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw2jQmZl1M9j5cm4Qh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwJULyWoKId_kls34N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHTpkDT6Yi2t3DoRp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxA6jgyBeWQ9SRztd94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5ybdZ9Q9TbM3HNNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxrvThebrNbJic0nhJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwO2HPB5yrKty-bahV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
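If you consume these raw responses programmatically, it helps to validate each record before trusting it. Below is a minimal sketch in Python; the field names match the records above, but the allowed-value sets are an assumption inferred from the values seen here, not an authoritative codebook.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred from
# the sample records above, not from an official codebook.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "ban"},
    "emotion": {"approval", "outrage", "mixed", "indifference", "resignation"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and return the records keyed by comment ID.

    Raises ValueError if a record is missing a dimension or uses an unknown
    value, so a malformed coding fails loudly instead of silently.
    """
    by_id = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
        by_id[comment_id] = rec
    return by_id

# Hypothetical one-record batch, used only for illustration.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'
coded = validate_batch(raw)
print(coded["ytc_x"]["emotion"])  # approval
```

Keying by comment ID makes the "Look up by comment ID" flow above a plain dictionary lookup, and the loud `ValueError` surfaces any record where the model drifted outside the coding scheme.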