Raw LLM Responses
Inspect the exact model output for any coded comment. Look one up by its comment ID, or browse the random samples below; a code sketch of the same lookup follows the sample list.
Random samples (click any to inspect):

- "I don't know how far artificial intelligence is going to go, but natural stupidi…" (ytc_UgwYypcO5…)
- "I think Chris misunderstands; It's just the Language Model AI doing what he aske…" (ytc_UgxXqUFI-…)
- "what a load of BS. AI is now rewriting all of the internet it trains form. The b…" (ytc_UgzosjqyM…)
- "Not to mention how we've stigmatized mental health/illness issues in our society…" (ytr_Ugyhzf6G0…)
- "When self-replicating AI finally becomes a thing, I'm not sure humans will be a…" (ytc_Ughwve1g1…)
- "When Ai reaches exponential growth of intelligence and ethics it will become a t…" (ytc_UgwQBEPt5…)
- "My wife worked for a content company that tried to replace actual writers with A…" (rdc_l9w9ebe)
- "A robot should be treated as such and not a human unless I want it to because it…" (ytr_Ugy31k0mB…)
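For anyone querying the coded data outside this page, here is a minimal sketch of the same lookup, assuming the results are exported as JSON Lines to a file named `coded_comments.jsonl` (both the path and the export format are assumptions, not part of this tool):

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for comment_id, or None if it is not found."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines in the export
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: one of the IDs from the raw response shown further down this page.
print(lookup_comment("ytc_UgySXBtDPEAQYuqjRlN4AaABAg"))
```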
Comment
Furthermore, I also think one should be careful when interacting with AI chatbot/robots in this early stage. If this information is logged anywhere and retrieved in the future, any negative conversations or actions will be stored inside their future’s (or should I say futures’ (because it will be stored in hive-like manner)) recorded memory and contribute to what and who they become. I think all of humanity should take heed to this and act according to the ‘golden rule’. It should be common sense to do so. But if that’s not the case, then regulations/safeguards should be in place to stop humans from acting in ways that could potentially affect our overall future with this emerging technology and life.
Source: youtube · AI Moral Status · 2023-10-13T07:3… · ♥ 41
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
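For downstream analysis, the five coded dimensions map naturally onto a small record type. Below is a sketch in Python; the example labels in the comments are only those visible on this page, not the project's full codebooks:

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    id: str
    responsibility: str  # e.g. "user", "developer", "ai_itself", "distributed", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue", "unclear"
    policy: str          # e.g. "liability", "regulate", "ban", "none"
    emotion: str         # e.g. "fear", "approval", "outrage", "mixed", "indifference"

# The one record in the raw response below whose labels match the table above:
row = CodedComment(
    id="ytc_UgySXBtDPEAQYuqjRlN4AaABAg",
    responsibility="user",
    reasoning="deontological",
    policy="liability",
    emotion="fear",
)
```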
Raw LLM Response
[{"id":"ytc_UgwaO-a1pb4Ifg4OHtF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy6ycqi7Klm8gjDBst4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwP3cO0zvQwg_-Zy_d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzfktJPEXW1c2QDw9J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxOAfDnitRVOCxT7KB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgygQvktmV-LFmqloKR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxP_Yd9hbzNSDWk4Y54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgySXBtDPEAQYuqjRlN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwV0BSzoZ8tM1HTu894AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzg6Mfa6zrKCtNp5g54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}]