Artificial Intelligence
Artificial Intelligence (AI) was an aspiration of Alan Turing and John von Neumann, two of the geniuses who created electronic computers and computer science. Both died young (Turing in 1954 and von Neumann in 1957). In the absence of these founders, a meeting was held at Dartmouth College in the summer of 1956, which became known as the beginning of the academic discipline of AI. Among the attendees were John McCarthy and Marvin Minsky (both age 29 at the time), who were thereafter viewed as the leaders of the discipline.
Progress in the science of AI was heavily dependent on government funding, which went up and down periodically over the rest of the century. Finally, early in this century, breakthroughs in “machine learning” were made, and AI became a focus of government and business investment. So a half-century of hope and speculation has given way to important results and yet more speculation.
Since the beginning, there have been two interpretations of what AI means. Does it mean simulating the way humans do things that require intelligence, or does it mean doing things that require intelligence by any means possible? Since we don’t (yet) understand human intelligence, AI by any means has been the path forward. However, an interesting attitude developed. For example, once a computer beat the chess champion, some people said that the computer accomplished the feat by non-intelligent means. This attitude has surfaced often enough to create a tongue-in-cheek definition of intelligence as “anything the computer hasn’t done yet.” This is reminiscent of “the God of the gaps,” where until we scientifically understand something, God did it. Afterwards, God retreats to be responsible for the remaining mysteries.
Having mentioned God, this is a good time to consider the breadth of the implications of AI. From the narrowest perspective, AI is a technological subject—what can we do and how do we do it? From a societal perspective, will AI and its cousin robotics eventually do all our work, placing humans in either a utopia or a dystopia? From a religious and philosophical perspective, if our AI constructions eventually displace humans from being the crown of creation, where does that leave human-centered religion?
Technologically, limited AI is here now. Sociologically, it will be here in the near future. Philosophically and religiously, it is in the farther future. All three of these perspectives, and undoubtedly others (for example, the connections of AI to process thought), are well worth our consideration. I propose that interested persons participate here in questions, answers, and speculations about AI. It would be my pleasure to moderate such a discussion. Although I have been a computerist for six decades, I’m a generalist, not an AI specialist. And as you know, our society respects specialists and suspects generalists, so participants be warned!
Peter Farleigh gave a talk on the possibility of computer intelligence at a small conference about 30 years ago. His take was that since complex human subjectivity (for Whitehead) arises out of the experiential integration of the feelings of other multi-leveled organic entities (neuronal events, past moments of the psyche, etc.), there is no way for even the most complex configuration of integrated circuits, or even quantum computers, to create the kind of matrix of intense feeling out of which human experience arises. Thus computers will never be able to "think" or "feel" in the same sense that human beings do. I concur with that, with the caveat that unforeseen novel possibilities often lie just around the bend.
Olaf Stapledon, the Whiteheadian philosopher/psychologist sci-fi writer from the 1920s-30s, imagined huge silos of neural matter generating super-intelligences. This might circumvent the objections raised above, but Stapledon goes on to speculate that these super-brains' lack of bodily feelings/emotions would inherently restrict the possibility of the deepest spiritual experiences and insights. But they were real smart!
From my original post: “Does [AI] mean simulating the way humans do things that require intelligence or does it mean doing things that require intelligence by any means possible?” Your post suggests that “the way humans do things” is beyond our reach and may always be. But the increasingly complex problems that computers can solve (e.g., protein folding) suggest that limited-scope AI is already here.
Thanks for kicking things off here, George! I majored in cognitive science as an undergrad, and so was exposed early on to some of the issues you mention surrounding the development of AI. I once had the chance to spend a few days with 79-year-old Marvin Minsky when he visited my university in 2006 (University of Central Florida). His book The Emotion Machine had just been published, and he was eager to discuss his ideas with faculty and students. He gave a public lecture to an audience which included faculty from across the sciences and humanities, which turned into something of a spectacle. Faculty from the literature department stood up to denounce the questionable ethics of creating an emoting machine. I personally have major doubts just on technical grounds that such a feat can be pulled off, but when it came my turn to ask Minsky a question, I granted that such a machine could be built and asked him whether it should have rights, be paid for its labor, or receive time off. His answer has had a powerful impact on my thinking ever since. He said that he was just a scientist, that such questions were not his problem, and that I'd do better to ask a politician.
When it comes to understanding whether or not a computer is really intelligent, I think there's a lot of metaphysical confusion in the discourse. We don't have widespread agreement about what "natural intelligence" is or how it works, so defining "artificial intelligence" is even more fraught. One issue is the prevalence of computer metaphors in the cognitive neurosciences, which are all too often literalized into ontology. It is one thing to model brain and cognition 'as if' they were computers or information processors, but quite another to claim that the brain and mind *are* computers (a great article on this issue just published: "Cognition without neural representation" by Inês Hipólito).
As for the theological issues you raise, I was reminded of a book by Alexander Bard, Syntheism: Creating God in the Internet Age. https://en.wikipedia.org/wiki/Syntheism The basic idea is that God is not a creator but something humans will create technologically.
When I started studying mathematics and philosophy in the 80s, the way for AI was essentially already prepared: Logic Programming (Prolog), Problem Solving (Heuristics), Reasoning with Bayes Nets, Machine Learning and Data Mining, and Neural Networks, which have recently come into vogue again. I have written a paper on this and sent it around: "Nexus and Networks" - I am particularly concerned here with the convergence of Whitehead's ontology with that of neural networks. I think that at least for the part of AI that deals with self-learning neural networks, one can answer the question of what the "god" of such a network would be if one eliminated the human component (the supervisor) from it. If anyone is interested, I would say something more about this.
The key thing is that AI is just a model (rules-based, statistical), and that, like all models, it can be relied on too heavily and mistaken for the thing it is only modeling.
It's worthwhile reviewing how AI works and the biggest concerns about it.
The essence of an AI model is quite straightforward, operating 1) through rules, 2) through "learning" new rules -- or, most often, 3) through a hybrid between these two poles.
Re 1, a rules-based approach to AI, whether it's learning to play chess or doing anything else, is essentially this: writing a rule to make clear that if confronted with a given phenomenon, it can be defined as X or, say, assigned the value of Y. A rules-based approach is as simple as that. Complexity comes about by having thousands upon thousands of such rules that can be used for a given data set; and by weighting the relative importance of any of the rules the system is using (unless all rules are weighted equally).
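To make the rules-based pole concrete, here is a minimal sketch in Python. The rules, labels, and weights are hypothetical, invented only for illustration; a real system would have thousands of such weighted rules.

```python
# A minimal sketch of a rules-based classifier. The rules, labels, and
# weights below are invented purely for illustration.

RULES = [
    # (condition, label, weight)
    (lambda text: "refund" in text, "complaint", 2.0),
    (lambda text: "broken" in text, "complaint", 1.5),
    (lambda text: "thank" in text, "praise", 1.0),
]

def classify(text):
    """Apply every rule; add up the weights for each label; return the top label."""
    scores = {}
    for condition, label, weight in RULES:
        if condition(text.lower()):
            scores[label] = scores.get(label, 0.0) + weight
    return max(scores, key=scores.get) if scores else "no rule fired"

print(classify("The item arrived broken and I want a refund"))  # -> complaint
```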
Re 2, machine learning happens when a system is "trained" to learn new rules and relationships for a data set; it does this by way of statistical probability: that is, training a system on a set of data to learn the probability of A under the conditions of situation B.
If a rules-based approach to AI can be compared to raising a child with a ton of rules to live by, a statistical approach is similar to teaching a child what's right and wrong based on life experiences or training examples. The system is fed training examples by the thousands so that it can uncover rules of probability and weight its findings. There's nothing mysterious happening on the back end of things, just an extraordinary number of calculations of statistical probability, piled on top of each other and weighted for importance.
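And here is the "learning" pole in equally miniature form: a toy classifier that derives its rules of probability by counting words in training examples. The examples and the smoothing are invented for illustration; real machine learning differs mainly in scale and sophistication, not in kind.

```python
from collections import Counter, defaultdict

# Hypothetical training examples (text, label), invented for illustration.
TRAINING = [
    ("great service thank you", "praise"),
    ("item arrived broken want refund", "complaint"),
    ("thank you so much", "praise"),
    ("refund please the item is broken", "complaint"),
]

# "Training" here is just counting: how often each word appears under each label.
label_counts = Counter()
word_label_counts = defaultdict(Counter)
for text, label in TRAINING:
    label_counts[label] += 1
    for word in text.split():
        word_label_counts[word][label] += 1

def classify(text):
    """Score each label by the (smoothed) probability of the text's words under it."""
    total = sum(label_counts.values())
    scores = {}
    for label, label_total in label_counts.items():
        score = label_total / total                      # prior probability of the label
        for word in text.split():
            seen = word_label_counts[word][label]
            score *= (seen + 1) / (label_total + 2)      # smoothed P(word | label)
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("thank you"))            # -> praise
print(classify("broken item refund"))   # -> complaint
```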
Re 1-3, the model is only as good as the assumptions that have gone into it. Assumptions are limited to those the designers have deemed important relative to the specific problem they are addressing and challenges they confront at a particular time.
AI may be used to assess something as complex and "hidden" as, say, the sentiment of content. But it is a category confusion to go from the ability to assess the sentiment of content, using a model of statistical probability, to think that that model might itself some day exhibit the characteristics of the phenomena it is assessing -- in this case, say, emotion. Assigning statistical probability to correctly identify emotion is one thing, emoting is another.
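That distinction can be seen in miniature: a sentiment "assessment" is just a number produced from a lookup table or a trained model. The lexicon below is invented for illustration.

```python
# A sentiment "score" is only a number; nothing here feels anything.
# The word scores are hypothetical, invented for illustration.
LEXICON = {"love": 0.9, "great": 0.7, "broken": -0.8, "hate": -0.9}

def sentiment_score(text):
    """Average the scores of known words; return a number between -1 and 1."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I love this great product"))  # -> 0.8 (a score, not a feeling)
```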
Challenges of AI stem from the limitations 1) of the assumptions that have gone into a given model and 2) of the data set it was trained on.
At the application level, there is the danger that the consistent, untiring performance of a statistical probability model will be seen as preferable to human decision-making when that conclusion is in fact unwarranted, inviting over-reliance (over-reliance on AI in warfare seems an obvious example).
Fundamentally, the problem of AI lies in forgetting that the world is vastly more complex than any model of it, given the characteristics of lived and novel experience.
Here’s the first part of an article I’m writing about how “the new definition of intelligence” opened the door to artificial intelligence. Comments?
Data, information, knowledge, and intelligence
These four important concepts can be defined in various ways, and the definitions chosen (not surprisingly) determine how they are related. This brief article will suggest that the traditional definition of intelligence has been changed to enable the definition of artificial intelligence.
We begin by defining data, information and knowledge.
Data, as used here, is digital data such as is found in computer memory and storage. It consists of bit strings together with metadata (other bit strings) that describe their syntactical structure (a brief illustration follows these definitions).
Information is data that is well-formed, meaningful, and true. Understanding meaning requires a semantic engine, and since computers are only syntactic engines, information is restricted to humans. (1)
Knowledge is a familiarity, awareness, or understanding of someone or something, such as facts, skills, or objects. (2)
Facts, skills, and objects can be further described as follows. Facts: “descriptive knowledge” (know-that), which can be expressed in a declarative sentence or a logical proposition; skills: “procedural knowledge” (know-how), the ability to perform a task; objects: “knowledge by acquaintance” (know-of), non-propositional knowledge of something, constituted by familiarity with it or direct awareness of it. (3)
By this definition, knowledge consists of information and therefore can be understood only by humans. However, knowledge is increasingly derived from data that is processed by computers.
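As promised above, here is a brief illustration of the data/information distinction, with hypothetical field names: the metadata lets a program parse the bits, but says nothing about whether they mean anything or are true.

```python
import struct

# "Data" in the sense defined above: a bit string plus metadata that
# describes only its syntactic structure. Field names are hypothetical.
record = struct.pack(">hf", 1956, 29.0)                   # raw bytes: a 16-bit int and a 32-bit float
metadata = {"format": ">hf", "fields": ["year", "age"]}   # syntax only, not meaning or truth

year, age = struct.unpack(metadata["format"], record)     # the metadata tells us how to parse
print(year, age)   # -> 1956 29.0
```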
Intelligence was traditionally defined as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context. (*)
This definition limits intelligence to humans (and perhaps a few other animals) because it involves information and knowledge, not data. As such, computers, which input and output only data, cannot be said to be intelligent. Of course, they contribute greatly to human intelligence. However, a more recent definition of intelligence opens the door to computers.
An intelligent agent (IA) is anything which perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance with learning or may use knowledge. (*)
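To see how low the bar of this newer definition can be set, here is a deliberately toy sketch of an "agent" that perceives its environment (noisily), acts toward a goal, and improves with feedback. The environment, goal, and update rule are all invented for illustration.

```python
import random

goal = 10.0      # the state the agent is trying to reach
estimate = 0.0   # the agent's current belief/position

for step in range(25):
    percept = (goal - estimate) + random.uniform(-0.5, 0.5)  # noisy perception of the gap
    action = 0.5 * percept                                    # act to close the perceived gap
    estimate += action                                        # feedback improves performance

print(round(estimate, 1))  # ends up near 10.0: goal-directed, "learning" behavior, no consciousness
```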
Avoiding talk of consciousness
Philosophers and scientists have been struggling for millennia to understand human consciousness and are now speculating about whether computers will ever become conscious. The consensus is that currently humans are conscious (but how would you prove it?) and that computers aren’t. But the definition of an IA opens the door for unconscious intelligence. Is this an oxymoron? By the old definition, perhaps, but by the new definition, no. Scientists like to stay away from concepts they don’t understand, so defining intelligence in a way that doesn’t involve consciousness is understandable. Moreover, the importance of what computers increasingly can do is undisputed, so defining them as intelligent makes some sense. The next section will review some of the ways computers can transform syntactic data into semantic knowledge for us, even though the output remains syntactic data for them.