Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples
- ytr_Ugz9S9BA6…: "@laurentiuvladutmanea but the Ai does create things, things that aren't replicas …"
- rdc_jg00qxx: "So instead of spending more money on training and vetting their cops, or making …"
- ytc_Ugx-U1IEl…: "Remember, Ai learns from our actions , and since 1 billion people use ChatGPT ev…"
- ytc_Ugyj9rp4A…: "They knew what they were doing the whole time. AI needs to be regulated and the …"
- ytc_Ugy00IoEx…: "trusting a self-driving car is just retarded, no system is ever 100% foolproof, …"
- ytc_UgwEZNoEH…: "For me I think ai should be implemented into the human brain for the Betterment …"
- rdc_idddjxv: "I feel like this has got to be more similar to that prison tower design, I can't…"
- ytc_Ugz7ESnEm…: "YES!! I always turn off all the autocorrect on the phone (less in Word because I…"
Comment
AI taking over the majority of the jobs market is inevitable and it always has been. The sooner we embrace it, start working to make our lives easier, and because of it start UBI; the quality of life for everyone will improve dramatically. We just have to make sure we're taking care of the average person with the cost of labor disappearing with the additional profits. If we don't do that then we're all screwed.
youtube · AI Governance · 2025-06-23T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw7zf0saxJ_YKPN8kR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzr9ErANVIXoZv10pR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzLdMOPzaWq75-_N_54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugw3j6gnW1xRJpqy46p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx0kKF9SRv8jyftuDd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwRLmC2O4aHny-Iztd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxU8GKV59ma_TGReB94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwN1TXqJEHWyDSe3jV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx0BtiYI2W28jxeNN94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzk8rk2T5-s7kfpKjd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
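The look-up-by-comment-ID flow above can be sketched in a few lines: parse the model's JSON array and index the codings by `id`. This is a minimal illustration, not the project's actual code; the function and variable names are hypothetical, and the sample uses two rows copied from the response above.

```python
import json

# Hypothetical raw model output: a JSON array of per-comment codings,
# in the same shape as the "Raw LLM Response" dump above.
raw_response = """
[
  {"id": "ytc_UgzLdMOPzaWq75-_N_54AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugx0kKF9SRv8jyftuDd4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_codings(response_text: str) -> dict:
    """Parse the model's JSON array and index each coding row by comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_codings(raw_response)

# Look up one comment's coding by its ID, as the inspector page does.
coding = codings["ytc_UgzLdMOPzaWq75-_N_54AaABAg"]
print(coding["responsibility"])  # -> distributed
```

With the codings indexed this way, rendering the per-comment "Coding Result" table is just a matter of iterating over the dimension keys of the selected row.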