Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI only has specific uses and in some cases what I call bullshit jobs. Where it is dangerous is speed. Speed allows complex calculations/ deep fake/ autominous hacking. I think the risks AI pose will be more harmful than the benefits on the whole, what would be useful is data manipulation.. and something I call AnyDataMsnipulation eg how many websites dont print correctly? or you cannot copy and paste, or user has cut off part of a document on a scanner? What about screen shotting and making the image into a spreadsheet?? Also converting upper case text written in error to lower case... these are useful... AI creating bullshit waffle is not.
youtube · Cross-Cultural · 2025-10-01T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzDCKE5lRW4MIsXy0h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyHKbFzbd2PY9R9B054AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzosRe6SLypWXM8JAB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz7iJeXnFHcl_bvW-N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwT-zPjcR2YUXNqmd14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugza2YJEsbE-9xitXrJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwCw3vZkYAO5rhWvTx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw0tIP7fc2fGcQk1MZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx-4M5X4_d8Lacv8DZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzzxjY7zU6sFnC13nt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
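The raw response above is a JSON array, one object per coded comment, keyed by the comment's ID. A minimal sketch of how such a response could be parsed and indexed for per-comment lookup (the field names match the response above; the variable names and the two sample rows, copied from the response, are illustrative):

```python
import json

# Two rows copied from the raw LLM response above, for illustration.
raw_response = '''
[
  {"id": "ytc_UgwCw3vZkYAO5rhWvTx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzDCKE5lRW4MIsXy0h4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
'''

# Index the coded dimensions by comment ID so a single comment's coding
# (responsibility, reasoning, policy, emotion) can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coded = codings["ytc_UgwCw3vZkYAO5rhWvTx4AaABAg"]
print(coded["policy"], coded["emotion"])  # → liability fear
```

This mirrors how the "Coding Result" table above is produced: the row whose `id` matches the displayed comment supplies the four dimension values.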