Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Karen Hao is wrong about tech companies not making their models more productive and efficient. Enter the SLM which is by definition, more efficient due to its size. I ONLY use SLMs in my research work. They do offer privacy and they work just fine on today's computer systems provided said system contains at least 16GB RAM. The maximum size I've found that can run on 16GB RAM is around an 8 billion parameter model which is more than enough to accommodate most knowledge work today.
youtube · Cross-Cultural · 2025-07-14T04:5… · ♥ 2
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugz1JqBuDgR6QNn5wIV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyh2qUqIgApfVySXMJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz-rrHwaY7osjA-35x4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxLdvaAfm4NXUpW7gJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzxbvKxWOPRe3gghKx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyZ7sNEDtUatQMC4EJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxiq8-0h8iOXAcrrYZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyGabqu1PhtoClC17p4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyisPrOeQKqcjfSMaZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwDUBNhCE-TQNn2hzB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]