Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a therapist, I’m finding this interesting to think about. It’s hard to see how an AI therapist will account for nonverbal behavior, which is super important clinical data. Things like sarcasm, pacing of speech, tone of voice. Some people are very indirect with their meaning. An AI therapist would be operating in a limited dimension. Heck, modalities that use the therapist themselves as part of the patient’s learning (IPT?) would be useless in AI therapy. Though I am curious if AI can get good enough to employ the really manualized treatments like CPT and DBT, you know the ones that are pretty strict about us following the research-backed protocol. I wonder if AI therapists’ strength will be consistency. The best human therapists can deviate from the protocol for any number of reasons, which may impact effectiveness in the end. Who knows, maybe in the future, the insurance companies will only reimburse AI care, if the computer is the best at applying the research-backed care most consistently.
reddit · AI Bias · 1682907414.0 · ♥ 7
Coding Result
Responsibility: none
Reasoning: deontological
Policy: none
Emotion: mixed
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jidepku", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jid92tv", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jidfehj", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jidtvoh", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_jidk9f5", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
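To see how much the individual runs in the raw response agree before they are collapsed into the single coding result above, the JSON can be parsed and tallied per dimension. This is a minimal sketch for inspection only; the aggregation rule the tool actually applies is not documented here, and note that the final coding (e.g. Reasoning = deontological) does not simply follow the majority of the raw runs.

```python
import json
from collections import Counter

# The raw LLM response shown above, verbatim.
raw = """[
  {"id": "rdc_jidepku", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jid92tv", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jidfehj", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jidtvoh", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_jidk9f5", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

runs = json.loads(raw)

# Tally each coded dimension across the five runs to expose disagreement.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    counts = Counter(run[dim] for run in runs)
    print(dim, dict(counts))
```

Running this shows unanimous agreement on responsibility and policy ("none" in all five runs), a 4–1 split on reasoning, and a scattered vote on emotion, which helps explain the "mixed" value in the final coding.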