Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Another great vid I really like the 3rd one where he didn’t have to touch the wh…" (ytc_Ugx0FlTLy…)
- "I take a logical issue with a comment you made in the video. You said is it ethi…" (ytc_UgyFPoz7Q…)
- "I really think this is a long term plan to normalize artifacts in non-ai content…" (ytc_UgyoBqnJa…)
- "Thank you Senator Richard Blumenthal for taking this important step to regulate …" (ytc_Ugxaf8H8B…)
- "I hate both extreme ends of the AI, people who have the emotional response for r…" (ytc_Ugxam7-Bh…)
- "Isn’t ChatGPT abiding by the subjective vow to “never cause harm” ethic of the H…" (ytc_Ugx1_jQ9g…)
- "That's why i work in automation. *Taps temple* I can't find a rule about comm…" (rdc_glhstcb)
- "YouTube deleted my other comment god damn but yes ai is horrifying and we're all…" (ytr_UgzOHThdA…)
Comment (youtube, AI Jobs, 2025-10-21T15:2…):

> There is no doubt Ai will change everything (good and bad). Where I think we are making a mistake is with "the rush" to adopt Ai. Ai is new, it is here to stay and has bugs. It can make mistakes, it has made up references when asked for proves. I don't believe we need to win some imaginary "race" at all cost. The key imo, is not if we are going to be first or not, but rather - if we get it Right or Not! :) Let's hope our freedoms and privacy (and humanity) stay intact as this gets rolled out. We do get to choose if and how we use AI and how we treat each other.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

    [
      {"id": "ytc_Ugyk6NuO1WFAeB1CxQd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
      {"id": "ytc_Ugwm0TLPc8dC86xMpDl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
      {"id": "ytc_UgwA8Xw8LtKmo5SFYhp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
      {"id": "ytc_UgxMWvVO05kpI89Kzyp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
      {"id": "ytc_Ugz_I72c6kFhTPkjDVt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
      {"id": "ytc_UgyCnvLJ5YVC2h_EzyR4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
      {"id": "ytc_UgzSiVVYrSydcBXTviZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
      {"id": "ytc_Ugzw1-iOEtlzl39ydpB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
      {"id": "ytc_Ugz_Ip4_GgZCY9kRJ-t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
      {"id": "ytc_UgwH84ypjDH0AyYmCux4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"}
    ]
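A response like the one above is a JSON array of per-comment records, one record per coded comment. A minimal sketch of how such a batch could be parsed and validated, assuming the dimension vocabularies are limited to the values visible in this sample (the full codebook may define additional labels):

```python
import json

# Vocabularies observed in this sample batch (assumption: the real
# codebook may allow more values per dimension).
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"resignation", "approval", "indifference", "mixed", "fear", "outrage"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response, keeping only well-formed records.

    A record is kept if it has an "id" and every dimension carries a
    value from the observed vocabulary; anything else is dropped so a
    single malformed record does not poison the batch.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: the "AI Jobs" record shown in the Coding Result table above.
raw = ('[{"id":"ytc_UgyCnvLJ5YVC2h_EzyR4AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"regulate","emotion":"fear"}]')
print(len(parse_batch(raw)))  # prints 1
```

Validating against a closed vocabulary before storing results catches the most common failure mode of batch coding: the model inventing an off-schema label or returning a truncated array.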