Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
First 20 seconds, those statements weren't made by experts but people ripping off shareholders with the ideas of "agentic coding" and what looks like easy automation, only for these to be the results. Actual academics on the real research side of things predict 2075 with a ~50% likelihood of AGI roughly after that point.
youtube AI Jobs 2026-02-08T02:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw2VMO1ZQ3X5dxzE4B4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzpPbY5kiwK6JJXJEN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzbYR95da-5MkVA66V4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugw9ZJM7bZqsbU7pxP54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzj6iSWPfAkwafAjmN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxh_Y9DEKoFsJsDVJR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwmn6QfsfAuOD0IkvV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyKceelVOlivIfwdxd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwtAkX4NSrM1N4zDo54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw7jsV6XUBHNa0-BbB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
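When inspecting raw responses like the one above, it helps to parse and validate them programmatically rather than by eye. Below is a minimal Python sketch that checks each coded row against the dimension values visible in this output; the allowed-value sets are inferred from the rows shown here and may not cover the full codebook (an assumption), and the function name `validate_codes` is illustrative.

```python
import json

# Allowed values per dimension, inferred from the rows shown above.
# The full codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "industry_self", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose dimension
    values all fall inside the allowed sets and that carry a string id."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        ok = all(row.get(dim) in values for dim, values in ALLOWED.items())
        if ok and isinstance(row.get("id"), str):
            valid.append(row)
    return valid

if __name__ == "__main__":
    raw = ('[{"id":"ytc_example","responsibility":"company",'
           '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
    print(validate_codes(raw))
```

Rows the model codes with an out-of-vocabulary label are silently dropped here; in a production pipeline you would more likely log them for manual review.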