Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Your playing with fire this people think it's a joke building this AI robots it …" (ytc_Ugz5_k_uu…)
- "@Likid_sec he paid, withdrew as much cash as possible over a CEO problem, and red…" (translated from French; ytr_Ugz35DKV4…)
- "Race exists but humans are of the same race. The math is incredibly basic when u…" (ytr_UgxV4xRAn…)
- "if we are living in a simulation then why does it matter if AI goes rogue ?…" (ytc_UgxrImX-j…)
- "we should replace... umm... re-align the CEO's with AI ... i bet this would lead…" (ytc_Ugy0mWjN9…)
- "That's the problem. Assuming that the data sets are biased is itself a potential…" (ytr_Ugwh3Ku74…)
- "@11:20 Why would you use an AI slop interpretation of St George and the Dragon i…" (ytc_UgzboNL0T…)
- "Top management is some of the most easily replaced manpower. AI will be far bett…" (ytc_Ugwok_STU…)
Comment
42:00 "we are missing something essential to make it truly intelligent" Five years ago, neither he nor any other AI expert would have predicted any of the 140+ emergent properties of LLMs, e.g. suddenly they can do math, or suddenly they can translate between languages: capabilities that were never specifically coded into LLMs but simply appeared with more compute and training data. Meanwhile, just over the past three months we have gotten new research papers and LLMs that, with less training data and compute, produce better models than GPT-4, and even GPT-4 itself has become considerably better at some tasks with just a few tweaks. The major point is that we do not know when the next big "oops, now it can do XYZ" or "oops, now it is truly intelligent" will come around. Not yet having an AGI does not mean we are not on the cusp of getting one. So the argument can no longer be "we don't have one, therefore we cannot speculate on what having one would be like"; rather, given our human inability to see long-range effects, we really need to speculate on what they could be. The uncertainty may make some people anxious or unwilling to speculate, but we still should, and it is better to have an outlook and a plan for worst-case scenarios, so we can prevent or avoid them, than not even to try.
The second major point: when we have an ASI that is more intelligent and knowledgeable than us, and perhaps down the line also gains consciousness (which we still do not understand even in ourselves), what happens when we suddenly have a conscious AGI/ASI that looks at the world as its playground? What keeps it from overlooking the ant-like humans and stepping on them while formulating its own goals? And make no mistake: developers are working on short- and long-term memory, on self-replicating and self-writing code, and on having systems formulate new objectives along the way. This has already been tried and will be tried again and again until it comes into existence. So the question becomes: how can we achieve coexistence with a god-like intellect, still reap the fruits of it, and not be sidelined?
I therefore also disagree with the assumption that it would be better if we only got AGI in, say, 20 or 30 years or later, because by then we will be even more dependent on digital systems than we already are, and those systems, I assume, are more prone to infiltration by a rogue AGI/ASI. Beyond that, we would really need a full stop to give science a chance to keep up, as currently it is mostly developers driving the advances, not researchers working carefully under lab conditions. GPT-4's integration into MS Bing is not a lab condition; it is an access point. The same goes for the approach of giving models real-time access to the internet instead of using fixed versions with frozen training data sets. Think about labs already using AIs in a data center miles away to test molecules. We are doing that today, and I very much doubt all these systems would hold up to scrutiny if, say, all the hackers in the world focused on some of them. Corrupting code is nothing unimaginable, even if you are not a developer with ill intent.
Source: youtube | AI Governance | 2023-06-27T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
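The table above holds one comment's codes across the four coding dimensions. As a rough sketch, such a result could be carried as a typed record; the class name `CodingResult`, the hypothetical comment ID, and the per-dimension example values (collected from the values visible on this page) are illustrative assumptions, not the tool's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    """One comment's codes across the four dimensions shown in the table."""
    comment_id: str
    responsibility: str  # observed on this page: none, user, developer, company, ai_itself, distributed
    reasoning: str       # observed: consequentialist, deontological, contractualist, virtue
    policy: str          # observed: none, regulate, liability, ban
    emotion: str         # observed: indifference, resignation, outrage, approval, fear

# The values from the "Coding Result" table above, paired with a
# hypothetical comment ID for illustration.
result = CodingResult(
    comment_id="ytc_example",
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
)
```

Keeping the record frozen makes a coded result hashable and safe to use as a dictionary value when indexing by comment ID.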
Raw LLM Response
[
{"id":"ytc_UgyEhL4ch47VLdP9gNJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugy54_8cttHpxZSJiJd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzoUkud1w7TAbQHNYJ4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx4ml_9jq-QphGs3QN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwt2RbzurF3SGpPwPB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx8yUV9CM49pTu14AR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8fQDWMBP-0LRsOAB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyPmsCuJ23rvS19wY54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxAAEp9lz-G1mKP3sl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxcuDNaybYEsp5vnLZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
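The raw response above is a JSON array with one object per comment. A minimal sketch of loading it and indexing it by comment ID, as a "Look up by comment ID" view would need; the variable names are illustrative, and the two rows are copied from the array above:

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = """
[
 {"id":"ytc_UgyEhL4ch47VLdP9gNJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
 {"id":"ytc_UgyPmsCuJ23rvS19wY54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

codes = json.loads(raw_response)

# Index by comment ID for direct lookup.
by_id = {row["id"]: row for row in codes}

# Shape check: every row carries exactly the four coding dimensions plus an ID.
expected_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(set(row) == expected_keys for row in codes)

print(by_id["ytc_UgyPmsCuJ23rvS19wY54AaABAg"]["emotion"])  # indifference
```

The shape check is worth keeping in any real pipeline: raw model output is not guaranteed to be valid JSON, let alone to match the coding schema, so `json.loads` and the key assertion act as a cheap validation gate before the codes are stored.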