Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by comment ID.
Random samples (truncated previews with comment IDs):

- "I don't think AI will ask for permission. You better start thinking how you will…" (ytc_Ugi5vOQTU…)
- "AI power needs is finally driving a comeback for nuclear power with latest inher…" (ytc_UgzmPauyv…)
- "AI keeping the sheep in line, and what a great way for cognitive offloading. A d…" (ytc_Ugz14Smgh…)
- "You can't automate your way to infinite profits if no one can afford what you're…" (ytc_Ugwie--MY…)
- "Programmers need to focus their energy on combating the surveillance state becau…" (ytc_UgwtD_yQp…)
- "Without risking being accused of being AS is it relevant that Roman is an associ…" (ytc_UgxXkvNmn…)
- "An Ai that is programed to speak to humans will be taught to be respectful in wa…" (ytc_Ugx2bTOXy…)
- "I still don’t see how AI is stealing from artists, and I would appreciate it if …" (ytc_UgwJWgqDi…)
Comment

> If AI becomes smarter than humans, isn't it possible that it could decide that what's best for the world is cooperation and peace among nations. Couldn't it determine that wars and weapons are contrary to those goals and work to eliminate them. Couldn't it teach and foster humankind to desire, above all else, cooperation and peace with each other and among nations. Wealth invested in weaponry and war could be diverted toward more beneficial ends. Obviously, humans are underdeveloped and judging from history is, overall, not very skillful in running this ship called earth. On the other hand, if it decides we're hopeless or even useless, I think we're done.

youtube · AI Governance · 2025-06-22T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzKoLv-PzAm-LhV8ap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxp_U1q07iztPHcr6p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWH4ietbUL3-tPdr94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx_HSTyv6MB8755cot4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-NJ61zfFBcEpRWhV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwCDEWaCDp0nwGnJHd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxzl-hiOiJlUG7zbk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyzkNhlU9uhlBJ95xd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyngWy6jd1UnwCXstx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzuPrOorFSI5DwYgRZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
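The raw response is a flat JSON array of per-comment codings, with one object per comment ID and the four dimensions from the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how the "look up by comment ID" view could parse and index such a response — the function and variable names here are illustrative, not the tool's actual code:

```python
import json

# Example input in the same shape as the raw LLM response above
# (two entries copied from it).
RAW_RESPONSE = """[
  {"id":"ytc_UgwCDEWaCDp0nwGnJHd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyzkNhlU9uhlBJ95xd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    skipping malformed entries that lack any expected dimension."""
    by_id = {}
    for entry in json.loads(raw):
        if "id" in entry and all(dim in entry for dim in DIMENSIONS):
            by_id[entry["id"]] = entry
    return by_id

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgwCDEWaCDp0nwGnJHd4AaABAg"]["responsibility"])  # ai_itself
```

Validating every dimension before indexing means a truncated or partially malformed model response degrades to fewer coded comments rather than crashing the lookup view.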