Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwKpEaEM… — "You are not arrogant. Those AI losers just have no skills, no talent, and are su…"
- ytc_UgxbmMR1L… — "😂 Functional mechanistic understanding of consciousness Theory • Primitive Func…"
- ytc_UgwRDSs3W… — "which A.I. ?? earths manufactured A.I or the true dark earth A.I . Be precise Wh…"
- ytc_UgxYfM-fs… — "Poetry, performance, photography, paintings, and photoshop: all realms of art th…"
- ytc_UgzTvQsXr… — "why not simply move people to learning new things. AI needs training data. AI do…"
- ytc_UgwHoCa2f… — "A.I. is a load of crap. It can only look BACKWARDS bLecause that is all it has a…"
- ytc_UgzPQxzpm… — "Does this mean Sambucha is more AI than AI itself. Since he can't recognise the …"
- ytc_UgyT8jOTx… — "So you are forced to get up, make a drink, put said drink on robot, then go back…"
Comment
The major risk with AI is the same risk with just I, believing every single thing you hear or read. Typical computer nerd. I don’t care what they call it. It’s still artificial no matter how intelligent it is and if humans are done enough to put them in charge of things like when and how to launch nuclear weapons yeah, we may have a problem. Biggest risk I can see, at least for now, is believing you can put all the information necessary into a computer and it’s very censor to allow the computer to operate your vehicle. That’s pretty stupid, but a lot of people who are mixed up in computer. Technology are pretty stupid, or at least pretty naïve
youtube · AI Governance · 2025-08-19T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxAGK2I_zS0Kh_IdAJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzkmF6kmv2Q_Mw3MyF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwDWaT5GF3QB-7apT54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw7C64pSSq6zFEOMPN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgznVokIgrRVt4_ZKt14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgySod1cyrKIl10n9mR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyd0jeUBnZh0cQ1yIZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwAhbAPx2Q0cF9-uw94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzHwC2g-cvmDsMJUn94AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwSUEB7ZH6th5LBs294AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
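A minimal sketch of how the "look up by comment ID" step over such a raw response might work. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON above; the `index_codes` helper and the two-record sample are illustrative assumptions, not the tool's actual code:

```python
import json

# Two records copied from the raw model output above (truncated sample,
# for illustration only; real responses contain one record per comment).
raw_response = """[
  {"id": "ytc_UgzHwC2g-cvmDsMJUn94AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwSUEB7ZH6th5LBs294AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response and index the coded dimensions by comment ID,
    defaulting any missing dimension to "unclear"."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw_response)
print(codes["ytc_UgzHwC2g-cvmDsMJUn94AaABAg"]["policy"])  # → regulate
```

Indexing by ID turns the list the model returns into an O(1) lookup, which is what the inspector needs when a coded comment is clicked.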