Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The thing with AI is that it still can't really learn anything. It is very good at pulling in what is already out there and then using that information to complete a task, but it can't create and it can't go outside the bounds of its own programming. All the creepy things they say are just regurgitated ideas that humans have already put out there. Currently if AI were to go bad it will be because it was told and allowed to do so by the people who control it. The only time we truly become in danger is if an AI is allowed to recompile itself at will. It can then change what it is allowed to do. If humans give the program that ability then we are in trouble. Until then I wouldn't worry too much about it.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2023-07-07T16:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy0Txo1NgcixdnRnnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx--uFdVqPITSEWRf94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzNySYaxuxfV-4LSol4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxTlRELWKsr4cF76014AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx6aMf72BL9UgtZHzN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgynDDTIR55w6WUhZD54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwiOKMZAFfuUyASpTl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxPNkxi4nKCVO-zfBp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwra1GYExM6JFQlSVN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYtD8aJ8Dgr1evxst4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]
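The raw response above is a plain JSON array, one object per comment, with the four coding dimensions as keys. A minimal sketch of how such a response might be parsed and indexed for per-comment lookup (the IDs and field names below are taken from the example response; the `raw` string is an abridged assumption, not the full output):

```python
import json

# Abridged raw LLM response: a JSON array of per-comment codings,
# using two entries copied from the example above.
raw = """[
  {"id":"ytc_Ugy0Txo1NgcixdnRnnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgynDDTIR55w6WUhZD54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]"""

# Index codings by comment ID so any coded comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up one coding by its comment ID.
coding = codings["ytc_UgynDDTIR55w6WUhZD54AaABAg"]
print(coding["policy"])   # liability
print(coding["emotion"])  # fear
```

Indexing by `id` is what makes a lookup-by-comment-ID view cheap: each coding is retrieved in constant time rather than by rescanning the array.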