Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The idea doesnt seem appealing and the fact that the engineers by using the AI tools are also causing their own death by retraining the model is very strange. What if x amount of people join the forces either manually or via some different AI tools and they start retraining these models with false information by accepting hallucinatons and declining something thats "proven" to be correct? What then?
Source: youtube · AI Jobs · 2026-02-25T10:2…
Coding Result
Responsibility: developer
Reasoning: consequentialist
Policy: regulate
Emotion: fear
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzf5tkuXrd7kq9Vyj94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwRrjz4RwUBynMMCCF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxCgk2xgxJfcMDhcr94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx58tFhcIe1DQSMwt54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgymOlzqXEApzHqkAm94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyW4F7nwt-I9Phn3kl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxf2sQa9bmzUe-45P14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwIdflZcgU5Mg728P94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzJMuUdzzVcFqfPtgV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxUHmdyyzlcnkguyhJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
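A raw response like the one above can be turned into per-comment coding results by parsing the JSON array and indexing it by comment id. The following is a minimal sketch, not the pipeline's actual code; it assumes the response is valid JSON with the field names shown above, and uses a single entry from the array for illustration.

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# This example contains one entry copied from the response above;
# a real response would hold the full batch.
raw = '''[
  {"id": "ytc_UgyW4F7nwt-I9Phn3kl4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

# Index the coding rows by comment id for lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the four coded dimensions for one comment.
result = codes["ytc_UgyW4F7nwt-I9Phn3kl4AaABAg"]
print(result["emotion"])  # fear
```

A real implementation would also validate each dimension against its allowed labels (e.g. that `emotion` is one of the codebook's values) before writing the result to storage, since LLM output is not guaranteed to stay inside the schema.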