Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.

Random samples:

- "I wonder if asking a series of questions over time, can cause the algorithms to …" (ytc_Ugzp7RT4V…)
- "Americans are already barely making it financially. With AI taking over jobs Ame…" (ytc_UgyBRK3gN…)
- "Robots are tools like anything else. I think its a waste of time honestly to tr…" (ytc_UgxvGyGK8…)
- "I guess they didn't think about cars that purposely get in Aurora trucks way, ev…" (ytc_UgyzZa8Xe…)
- "Dude got dropped so hard he blacked out and turned into a robot before falling d…" (ytc_UgyKeYSlI…)
- "I understand it is an easy dunk on doomers, but I think that is baby out with th…" (ytc_UgwT0TAao…)
- "Or imagine the first supervirus that will subtly manipulate the output or behavi…" (ytc_Ugycpws6F…)
- "Eliminating the proffesions we made A.I to keep instead of eliminating the profe…" (ytc_UgyMaWpb9…)
Comment

> With AI evolving and growing, there is more information available. This availability is good in the sense that we can consume more knowledge in less time and in a style suited to each person's comprehension. However, this pace of information consumption and availability is also one of the major causes of dopamine crashes. Humans think they are learning more while using AI, but in reality, they are consuming so much that they retain only bits of information from each prompt. Basically in some way we will and can get dummer while ai will get smarter and smarter.

Source: youtube | Category: AI Governance | Posted: 2025-09-07T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzALsxq6bQ-rpOlk5h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx_K-bUwfNjcDj2BX94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzOa7PLqmkQ_SGwsyZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwRAEZYj7Ny-gqInoZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxFb3VMiluAPcWtS1R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyH5q7kVAAojJOIoPp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxQtTetiCR3Loa50m94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxnaxgj_Ax7jHkPFQl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyfMB16FHionhQM9Vt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzrah4EHnd5iTaTtyx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
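A raw response in this shape can be looked up by comment ID with a few lines of JSON parsing. The sketch below is illustrative only, assuming a response formatted like the one above; the function and variable names are hypothetical, not part of the coding tool, and the two records are copied from the response for demonstration.

```python
import json

# Two records copied from the raw LLM response above, as a stand-in for
# the full array; the real response is a JSON list of coding objects.
raw_response = """
[
  {"id": "ytc_UgzALsxq6bQ-rpOlk5h4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyfMB16FHionhQM9Vt4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)
print(codings["ytc_UgyfMB16FHionhQM9Vt4AaABAg"]["emotion"])  # fear
```

Indexing by `id` makes each coded dimension (responsibility, reasoning, policy, emotion) retrievable in constant time for any sampled comment.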