I think, given the direction of this thread, it should probably be moved?
That said, given its direction into existential/philosophical/theological questions, I will speak to it as such. Apologies in advance for breaking any forum rules. I think it's simply an interesting and important discussion.
The scary thing about AI is how many people embrace it and trust the results without fully understanding what it can and cannot do.
Particularly how the underlying data that it is trained on, and subsequently used with, can have inconsistencies, missing data, systematic biases, and so on.
I have been analyzing Mental Health and Substance Use disorders for over thirty years as an Epidemiologist trained in Mathematics/Statistics.
I have been using Machine Learning and AI (along with other analytic methods) since the mid-1990s, so I have first-hand knowledge of how ML and AI can lead to erroneous answers if your data is not carefully cleaned and checked for problems.
So, my biggest concern is how ML and AI are often used without proper 'Real Intelligence' guiding their use.
This is what I would be worried about: not machine learning or so-called A.I. itself, but how much anyone willing to cut corners or b.s. their way through something would rely on machine learning without any guidance. But if popular opinion continues its premature fascination with machine learning (which looks a whole lot like idolatry/worship) and pushes it onto a high enough pedestal, then who is to speak against it? I think we'll see it have more and more influence in spheres of life where we would rather not have it. I've even seen Reddit atheists spending quite a lot of time and energy at the keyboard talking about what it would be like to be ruled by an "A.I. god", with such giddiness, as if they are ready and willing to relinquish their will to a blind and voiceless object/machine that we ourselves have made. Taking that into perspective, my hopes for a sustained, healthy system of checks and balances are not very high.
Your statement "If that is 'AI' we are safe, no need to stand at street corners with placards reading 'The end is nigh'..." is naive. There are other applications of AI, such as deep fakes, which already significantly threaten humans. Seeing isn't believing anymore, and neither is hearing.
This echoes my first paragraph, and why ultimately we can't have nice things.
No, because its flaws are down to shoddy programming.
It is human-like only because it was created by programmers who prefer juggling their loblox (as we say on planet Anagramia) over writing code.
Mind you, it will make a great politician, lawyer, advertising executive, and second-hand car salesman.
Thor
Right. If machine learning were confined to consumer/professional goods and marketing, then OK. Or even used as a study tool for other professions. Its use in policy is what scares me the most, though, because I'm convinced most politicians are no longer operating from any practical worldview or practicing any reasonable philosophical/pragmatic restraint (whether Christian or utilitarian, if we're speaking of classic Western civilization). There isn't enough of a system of checks and balances left to keep quite a lot of terrifying things from becoming policy these days.

In the West, it used to be a largely Christian versus atheist/agnostic utilitarian debate (and those things could actually be debated on philosophical grounds), but now policies (and people) are judged in the court of popular opinion, outside of any concrete framework of worldview (or practical science, for that matter). The point made in this thread, that machine learning tends toward what would be most appealing, is the scary part, especially when paired with any significant reliance on machine learning or A.I. in a policymaking context. Designs and goods don't affect everyone in and of themselves, but opinions and policies (and how those things can mandate specific designs or goods) DO affect everyone.
"Deep fakes are made by humans using certain tools. The problems are not the tools."
The problems are caused by the humans who create the tools. A.I. is like a gun, and, like guns, some A.I. will exist primarily for the purpose of killing humans, which makes it more dangerous than humans.
That further supports the points I'm making regarding human use and intent. Tools are tools, though. A gun is a static object, disconnected from daily life unless it is in the physical hands of a person and handled according to the will of the handler. That could involve the will to hunt, protect, or kill. The problem is the person. Even tools meant for good (cars, computers, money, government) can be, and are, abused for personal gain and/or cynical destruction. The same goes for A.I. and machine learning. They are ultimately static, but once they are connected more and more with infrastructure and used for policymaking and the like (as humans relinquish their responsibility and will into the hands of machine learning or A.I.), that's where the problems will start. It still requires the will of the humans who develop it to set it down that path. Fear of A.I. sentience is second to fear of human intent/input into A.I. That's ultimately a matter of worldview and the human heart, though. That's not to say humans are the problem, though someone could easily convince/program so-called A.I. to view humans that way, and therefore use A.I. to target any human who is a "problem". You can see why this is a problem in and of itself: "problem" can be defined in any way the programmer wants to define it, correct?
I think we'll continue this trend as we continue relinquishing our will to other people and objects. We don't care enough anymore about personal responsibility or a transcendent will and moral framework at all, let alone one within which to responsibly utilize things like machine learning and A.I. We certainly don't care enough about debating those things anymore. I think most people are convinced of a particular way of living, even against their own health and flourishing, and the more responsibility they can relinquish, the better. Nevertheless, I do hold out hope that there are enough learned people in those communities who would be willing to draw boundaries and sound the alarms when necessary. Only time will tell.