Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@WroughtRocketyou when you don’t understand influence and AI being literally har…
ytr_UgxFalf4Z…
After the description of the end of the world via AI, I got a cheery ad for an A…
ytc_UgxCBjKzQ…
He is worse, Elon knew that technically it is possible, just extremely underesti…
rdc_lpbb7n4
AI can't do physical work like stocking shelves, driving a forklift, flipping bu…
ytc_UgwMvGA51…
ive seen this catchphrase a ton of times in the past few months and i think its …
rdc_n7h48e0
Today, there’s 2 guys watching every self driving car ready to take over if ther…
ytc_UgyOD-h0G…
2:03 when you wake up in the middle of the night and your robot is standing next…
ytc_UgiuIZ0o8…
*Entertainment people be like:* "How dare the AI produce mindless garbage to num…
ytc_UgxZxVIzR…
Comment
Take the 'invention' of nuclear fusion, discovered in the 1920's. If the driving force behind it's use is 'money and power' the result is a nuclear weapon, however if the driving force was the 'betterment of humanity' the result could be free power for everyone. Which way was that technology used...weapons and power. Same with AI, will it be used for good or bad......who decides what's good or bad though, and what is meant by good or bad ? Good for humanity/bad for profit or good for profit/bad for humanity. Unless we infuse the system with and for the WELLNESS AND BETTERMENT OF HUMANITY it will fail and these huge data centres will become the dead pyramids of the future.
youtube
AI Governance
2025-09-05T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwBHoCc-ZmAi0WrlYt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw45vUD8bnPPqzsB-p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzhRqlr7rkzkKtcWuZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwJqQu5OT0phwQc-_x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwc3lrC63EynRm_G814AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwl7JPJUxMN8QMLUR94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyh5e_MBiSQrIADwzF4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyCEsxJjwL1FVR_yEJ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzvMKAl8NAyTiQvw954AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxs83NwUbXaukgZN-14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
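A raw response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal validator, assuming the four dimensions shown in the Coding Result table and vocabularies inferred only from the values visible in this response; the full codebook may contain categories that do not appear here.

```python
import json

# Assumed vocabularies, inferred from the values visible in this raw
# response -- the actual codebook may define additional categories.
VOCAB = {
    "responsibility": {"none", "ai_itself", "government", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"indifference", "approval", "outrage", "fear", "resignation"},
}

def validate_records(raw):
    """Return a list of problems found in a raw LLM coding response."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return ["response is not valid JSON: %s" % e]
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append("record %d: missing 'id'" % i)
        for dim, allowed in VOCAB.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append("record %d: %s=%r not in vocabulary" % (i, dim, value))
    return problems

# An empty list means every record parsed and every code is in-vocabulary.
raw = '[{"id":"ytc_x","responsibility":"distributed",' \
      '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]'
print(validate_records(raw))  # []
```

Running this over each stored raw response makes it easy to spot records where the model drifted outside the codebook or returned malformed JSON.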