Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This AI having feelings and becoming aware crap is bullshit it's the developers …" (ytc_UgzuJUtdT…)
- "That's an interesting point! The conversation around AI and its longevity often …" (ytr_UgxuB_WpF…)
- "Then this video is still on point. The character is claiming the art as his own …" (ytr_UgzY3QDVV…)
- "The answer is very simple and I think as time goes on people are going to see ho…" (ytc_UgwrALjHn…)
- "Robots don't pay taxes, Robots don't need to be forced to buy health insurance f…" (ytc_Ugyx6gESU…)
- "Not cool wtf! Tried to deport me for having weed now it's ok for a robot to shoo…" (ytc_Ugx6jTOrq…)
- "Dumb. Never give AI wifi capabilities. You are legitimately endangering humani…" (ytc_Ugy9HvuCo…)
- "You do realize how offensive that comparison is, right? Comparing a group of ma…" (ytr_UgwmaBYnq…)
Comment
The current Tesla FSD Supervised definitely satisfies the definition of Level 3; even Navigate on Highway does. The real issue is the time allowance for when the system demands you take over. For Tesla it isn't preplanned, it's just immediate. Even using AP, if I wasn't nagged to touch the wheel the car would drive along just fine. Even when it freaks out and asks me to take over, often it will continue to drive correctly, but it just can't auto-recover from such an event. Tesla purposely says it's Level 2 for legal reasons, not because of the capabilities of the system.
With V14 you can clearly see the autonomous robotaxis operating in Austin are essentially running the same version as the consumer version, so the only real difference is that one demands some user input while the other doesn't (and we don't really know if there is just a remote operator in the background, but that is probably only when needed and not constantly making the decisions, based on millions of private vehicles essentially doing the same thing).
youtube
2026-04-01T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy4vjl8I7ePbLtLljB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw3iO-GiFf89cZSv4R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx5y6cfr63gaO-HQaN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwR8848jPWw3SSkHkp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugx9hTHv5jXXfXvNt_J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
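A raw response in this shape can be parsed and indexed by comment ID with a few lines of Python. This is a minimal sketch, not the pipeline's actual code: the field names match the JSON shown above, and the two records are taken from that sample; everything else (variable names, the lookup helper) is illustrative.

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# These two records are copied from the sample response above.
raw = """
[
  {"id": "ytc_Ugy4vjl8I7ePbLtLljB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwR8848jPWw3SSkHkp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"}
]
"""

codings = json.loads(raw)

# Index by comment ID so a single coding can be looked up directly,
# as the "Look up by comment ID" view does.
by_id = {row["id"]: row for row in codings}

row = by_id["ytc_UgwR8848jPWw3SSkHkp4AaABAg"]
print(row["responsibility"], row["emotion"])  # developer approval
```

A lookup on an unknown ID would raise `KeyError`; `by_id.get(comment_id)` returns `None` instead, which is often the friendlier choice for a search box.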