Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hi everyone my opinion and my thoughts or I really reallly are. Worried it wouldn’t surprise me that is already happening I mean AI is building an IA person ect.. is already on the way I’m sorry to say this but once this thing they will definitely try and sugar coat it as I don’t know but I presume medical reasons that might be allowed to help with the patient it could be helpful for like maybe be ale to scan you and get You’re result like on the spot that’s great but I’m afraid because they are much much I mean reallly smarter than us human beings because they are all ways updating and think of it they are aways continually connected to to internet 24/7 365 days a year so what’s coming is really crazy scary and scary and scary stuff so they AI is very dangerous just like the cas they already are in the roads with other people objects ecttb@ so I’ll cut it shirt as i needed to give some context otherwise you wouldn’t understand so as I was saying “ That we are already replaced wether we like it or not but we’re not done yet can imagine or put you’re you’re trust of you’re life in a drverless car is really crazy is my opinion because imagine it has a bloody faster than us all the driving videos like they’re always getting smarter than us but because it has all the driving what if it started driving itself like a race car with no one behind the wheel NO WAY fir me like and enjoy it
youtube AI Governance 2025-11-18T19:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxqHzVlUvyo6ymWNLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxVs4nMFTfeMTk3cCx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzY7MIXm9h2zkWwHap4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw0kI9PKQKHFrEZanF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy6D83PoRolGZT80ZR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyxPkQb6L7GFT8Dcct4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgypmTzgkSktdoBZY3d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwJKhVSADMtJne9HmF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyU12hBDskqxOhkzvt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz6zlhkGYpb20NdT6B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
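A minimal sketch of how a downstream step might parse and validate a raw response like this before accepting the codes. The allowed category values are inferred from the sample response above, not from the actual codebook, so treat `ALLOWED` as an assumption; records with out-of-vocabulary values are dropped rather than silently kept.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The real codebook may define additional categories -- this is an assumption.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference", "mixed"},
}

def parse_codes(raw_json: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only fully valid records."""
    records = json.loads(raw_json)
    valid = []
    for rec in records:
        # A record passes only if every dimension holds a known value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Illustrative input: one valid record and one with an unknown emotion value.
raw = (
    '[{"id":"ytc_UgxqHzVlUvyo6ymWNLt4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"},'
    '{"id":"ytc_bad","responsibility":"user","reasoning":"unclear",'
    '"policy":"unclear","emotion":"???"}]'
)
codes = parse_codes(raw)
print(len(codes))  # 1
```

Dropping invalid records (instead of coercing them to "unclear") keeps the validation step auditable: rejected IDs can be logged and re-queried rather than quietly miscoded.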