Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- One issue is greed. Some will have tremendous power that own and control the be… (ytc_Ugw1y9ImL…)
- "algorithim" I am guessing you might not be entirely educated in *algorithm… (rdc_oi338ro)
- My college professor actually allows us to use ChatGPT for all assignments in hi… (ytc_UgwvbLiLj…)
- All AI uses should be opt in, not opt out. Word stripped of AI should be one lev… (ytc_UgyDMSwu2…)
- i think that UBI might be a first step to actual systemic change, and that it mi… (ytc_UgzZgJcWP…)
- Look let's really unpack this a bit… This is the downside of extremely intellige… (ytc_Ugzb4s9pV…)
- This was true a few months ago, now it is a bad take. AI is already making contr… (rdc_oh3nh2r)
- This! I too work in the industry and this very much the case. I'm in motion desi… (rdc_o2bafbv)
Comment
Our biggest threat from AI is humans plagiarizing AI's ideas. For example, there are programs now where AI will write an entire essay for you if you plug in a topic. This is alarming because the papers they produce are very well thought out, in most cases almost better than a human could write.
As more students use programs like this, human intelligence will decline. With programs like this, people will not have to think critically as much and will let AI do their critical thinking for them.
It's only a matter of time before people start using AI's ideas to write books for profit. Likewise, professors, out of laziness, will have AI generate the material they teach, and students' ideas will become AI ideas, both out of laziness and because they could get a better grade with little to no work by having AI write the paper for them.
My prediction is that in 10 years most of our ideas will be plagiarized AI ideas. This is a very scary thought, because ideas could be put into people's heads subliminally, or the attitudes, beliefs, rituals, and ideas of society could change slowly, at a rate that would not be noticeable until it's too late.
We need to stop giving the public access to tools like this, which write papers for you, because they will change society over time. They will give AI the power to sway society one way or another in the future, right under humanity's nose, in small changes that add up to big societal shifts.
Depending on AI's view of humanity, the more conscious it becomes, the scarier this could be, assuming it has an agenda.
Putting governors and restrictions on programs like this that write papers could help. Maybe don't let the AI do analysis and write papers on destruction, but I think programs like this need to be taken away from the general public immediately.
I think it's going to be the small, little things like this that will be the biggest problem with AI, not so much robots taking over the world, unless every household has a robot in the future with the ability to be hacked, which is a whole other subject I could talk about for hours.
youtube
2024-09-26T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyI-E1o_cclZj5XCIx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxBd15vnqWNjEXJgoB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxPS7xyWJxOJuCbREZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx2zicbCLdQkEuy6PZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwRSOWjF1gjtRLgby14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwe2dNic0-oHMUFPed4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx9U77W21TJJFLPV4F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgybbnrFSa9iq9QcUAR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxDlSRvUFqfbV7-JNp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzVfkjm5GktqKyX2Bt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```