Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Maybe we could use a Role Statement in a context document to get around this. Identify the characteristics of good code versus shoddy code, and optimize keywords. Then embed those keywords in your Role Statement. Something like: "You are a proficient programmer...[general specifications related to your task, language, stack, etc.]...who emphasizes abstraction, shows attention to detail in things like explicit typing, writes concise code, questions whether if...then...else conditions are truly boolean or whether other conditions exist, and uses case structures over if...then...else, [whatever you want to see in your results, or the general hallmarks of good code even if you don't]."

Then also: when accessing training data for responses, weight the bias to include information semantically near [keyword whitelist] and rule out information semantically near [keyword blacklist, i.e. keywords related to the characteristics and facets of bad code].

The reason Role Statements work so well in an AI context is that they essentially act as semantic filters on the roles and characteristics of the contributors of content within the training data. In other words, you might be able to write a Role Statement that steers the LLM to predict semantically from knowledge in a different part of the distribution, disregarding (or de-emphasizing) that large area of the distribution.
Source: youtube · AI Jobs · 2025-01-16T03:2… · ♥ 1
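No LLM API exposes a way to reweight training data directly, so the whitelist/blacklist idea in the comment above can only be approximated by expressing those keywords inside the prompt itself. A minimal sketch of that approximation, assuming a hypothetical build_role_statement helper and illustrative trait and keyword lists (none of these names come from the comment or from any real pipeline):

```python
# Hypothetical sketch: embed "good code" hallmarks and keyword lists
# in a Role Statement used as a system prompt. Traits and keywords
# below are illustrative examples only.

GOOD_CODE_TRAITS = [
    "emphasizes abstraction",
    "shows attention to detail in things like explicit typing",
    "writes concise code",
    "prefers case structures over chained if...then...else",
]

KEYWORD_WHITELIST = ["abstraction", "explicit typing", "match statement"]
KEYWORD_BLACKLIST = ["copy-paste", "magic numbers", "deep nesting"]

def build_role_statement(task: str) -> str:
    """Assemble a system prompt that front-loads the desired traits."""
    traits = ", ".join(GOOD_CODE_TRAITS)
    return (
        f"You are a proficient programmer working on {task}, who {traits}. "
        f"Favor approaches associated with: {', '.join(KEYWORD_WHITELIST)}. "
        f"Avoid patterns associated with: {', '.join(KEYWORD_BLACKLIST)}."
    )

print(build_role_statement("a Python data pipeline"))
```

The statement only nudges which region of the distribution the model predicts from; it does not filter the training data itself, which is why the comment's framing is a hypothesis rather than a mechanism.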
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyPDJ1VtdoKNh_3q8l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxIEmmfrqqT6V2D4ol4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwmRA6Baeaq23tUSJl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw2jQmZl1M9j5cm4Qh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwJULyWoKId_kls34N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzHTpkDT6Yi2t3DoRp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxA6jgyBeWQ9SRztd94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz5ybdZ9Q9TbM3HNNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxrvThebrNbJic0nhJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwO2HPB5yrKty-bahV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"} ]