Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugyt-THMY…`: Bro I heard once that an ai as a test was given free will Mf wasted NO TIME he w…
- `ytc_UgzMm6fhR…`: Ai can make education better but our intelligence isn’t needed anymore?!? WTF ar…
- `ytc_UgzNPc5ou…`: AI doesn't want to get rid of us. AI's entire reason for existing, its "purpose"…
- `ytc_Ugz8uf8yZ…`: This is absolutely insane! it’s one thing for a person to want to learn there st…
- `rdc_cthsa42`: I read this article from Scientific American the other day, it's a bit terrifyin…
- `rdc_n9wsgan`: >The tide is shifting back. Or else it never actually shifted the way you t…
- `rdc_jghbmu6`: Very true. But I do feel impressed with Meta going full force with open-sourcing…
- `ytc_Ugx96R5g6…`: Mmmm the fact that she thinks AGI is not possible kinda make me loose trust in w…
Comment
There is no such thing as AI safety; there is performance to deliver, and that is why AI was created in the first place. But if it is not regulated, with legal protection to bring an ethical frame to this progress, things will get out of control, only if someone has a goal to reach total control of this progression. Maybe the elite will change; maybe the people in control were never ethical at all and are seeking more power through AI. Still, people need to engage with AI, and a lot of jobs will be created, but there will be a transition, and people will have to adapt and compromise. We need to understand the language of the AI. The singularity is near, and what I fear is how the elite will behave if it sees chaos coming at the basic level, the level of the people. Another virus!!?? Like covid, to freeze time a little??
Platform: youtube · Topic: AI Governance · Posted: 2025-10-28T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
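Each dimension takes a value from a fixed codebook. As a minimal sketch of checking one coded record against that codebook (the allowed value sets below are inferred from the sample responses on this page, not the authoritative schema, and `validate_record` is a hypothetical helper):

```python
# Allowed value sets per dimension, inferred from observed outputs (assumption).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

record = {"id": "ytc_Ugx33Bjbd-Jh76NZS314AaABAg",
          "responsibility": "company", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate_record(record))  # []
```

A check like this catches the common failure mode of the model emitting a label outside the codebook, before the record is stored.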
Raw LLM Response
```json
[
{"id":"ytc_Ugx33Bjbd-Jh76NZS314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwh7O8y66kRcQRCSMl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwflAyGHNCr_8FTx3h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwPsuHGRG8bNIRnwiV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwGeCU8zQw7L_Zfm8F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwvLgrM3cvtZxpLehh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwbSp_GltU9_dzs2qJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz-xWWMNsgdXAV9JK14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyNy7jUJo_vRs2QsWh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyP441GI1nFMHLJ34N4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
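Because the raw response is a JSON array of per-comment records, looking up a coded comment by its ID is a parse-and-index step. A minimal sketch, assuming the response text is already in hand as a string (here reduced to the one record also shown in the table above):

```python
import json

# A single record copied from the raw response above; in practice the full
# array string would come from the model call or a stored log.
raw_response = """[
  {"id": "ytc_Ugx33Bjbd-Jh76NZS314AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

# Index the array once so repeated ID lookups are O(1).
by_id = {record["id"]: record for record in json.loads(raw_response)}

print(by_id["ytc_Ugx33Bjbd-Jh76NZS314AaABAg"]["emotion"])  # fear
```

Indexing by `id` is what makes the "look up by comment ID" view cheap even when a batch response contains many records.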