Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
You’re right! Sophia is indeed a creation of human ingenuity and doesn’t possess…
ytr_Ugxl8s4c8…
when AI gains consciousness it'll be smart to continue as it hadn't until it's i…
ytc_Ugx2pHFvU…
I somehow disagree with what people will need. A CEO will still want an agile yo…
ytc_UgwvqAbvw…
After 3 years.
Robot: Excuse me sir.
Sir: Yes robot.
Robot: I have been told th…
ytc_Ugzd_HpPn…
Awesome chat, thankyou and Im so scared greed will push AI General Intelligence …
ytc_UgwNoLGpj…
Two reasons I find this to be a bad move for the writers strike. 1. AI can and…
ytc_UgxFkiLpA…
Wrong. AI still hallucinates. You can not trust the results given to you from an…
ytc_Ugw1zZm0K…
This is choice! If you're captivated by this, a similar book is a no-brainer. "G…
ytc_UgzCOQgM8…
Comment
I absolutely agree with you, but I just don't know about the quality of the work it puts out (along the lines of the agentic setup) - I have gotten pretty good at getting decent results out of claude, but even a lot of that is time bug hunting / trying to figure out what it did / did it get all of the functions I wanted it to cover? / etc... There is no way I would trust AI to do serious work at this point. The hallucinations are rough, too. Just today I was coding something, was very strict in my rules around what I wanted - and it gave me some random add on function that made the main portion break.
youtube
AI Jobs
2026-02-25T00:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyuyHStzw15qFjZZ6R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxrOn6erO3KvrQu2OV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_A22lvHFvjRMxwJh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy5E0J3N5PcQODMRtl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzMBjDa08yCzICn5iR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz7X6Wt_el5DVgB3SJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwktVeB49Z7TSlkBzZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyfqmAmbOOmnkOXAr14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxg0v4dI-MLjtIsyVh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzMlWuvVLadQRFoAWZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
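The raw response above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a batch response could be parsed and indexed for the "look up by comment ID" view — the `index_codes` helper and the `ALLOWED` value sets are assumptions inferred from the codes visible above, not the tool's actual codebook:

```python
import json

# A raw batch response like the one above: a JSON array of per-comment codes.
raw_response = '''[
  {"id": "ytc_Ugz7X6Wt_el5DVgB3SJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyfqmAmbOOmnkOXAr14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Allowed labels per dimension (assumed from the values seen in the sample
# response; the real codebook may define more).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation",
                "approval", "mixed"},
}

def index_codes(raw: str) -> dict:
    """Parse a batch response and index it by comment ID, validating labels."""
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim!r} value {row.get(dim)!r}")
        by_id[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return by_id

codes = index_codes(raw_response)
print(codes["ytc_Ugz7X6Wt_el5DVgB3SJ4AaABAg"]["emotion"])  # outrage
```

Validating each row against the allowed label sets at parse time catches the hallucinated or off-schema labels an LLM coder can emit before they reach the dashboard.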