
Artificial Intelligence


George Strawn
Topic starter
(@gostrawn)

Artificial Intelligence (AI) was an aspiration of Alan Turing and John von Neumann, two of the geniuses who created electronic computers and computer science. Both died young (Turing in 1954 and von Neumann in 1957). In the absence of these founders, a meeting held at Dartmouth College in the summer of 1956 became known as the beginning of the academic discipline of AI. Among the attendees were John McCarthy and Marvin Minsky (both 29 at the time), who were thereafter viewed as the leaders of the discipline.

Progress in the science of AI was heavily dependent on government funding, which rose and fell periodically over the rest of the century. Finally, early in this century, breakthroughs in “machine learning” were made, and AI became a focus of government and business investment. So a half-century of hope and speculation has given way to important results and yet more speculation.

Since the beginning, there have been two interpretations of what AI means. Does it mean simulating the way humans do things that require intelligence, or does it mean doing things that require intelligence by any means possible? Since we don’t (yet) understand human intelligence, AI by any means has been the path forward. However, an interesting attitude developed. For example, once a computer beat the chess champion, some people said that the computer accomplished the feat by non-intelligent means. This attitude has surfaced often enough to create a tongue-in-cheek definition of intelligence as “anything the computer hasn’t done yet.” This is reminiscent of “the God of the gaps,” where until we scientifically understand something, God did it. Afterwards, God retreats to be responsible for the remaining mysteries.

Having mentioned God, this is a good time to consider the breadth of the implications of AI.  From the narrowest perspective, AI is a technological subject—what can we do and how do we do it?  From a societal perspective, will AI and its cousin robotics eventually do all our work, placing humans in either a utopia or a dystopia?  From a religious and philosophical perspective, if our AI constructions eventually displace humans from being the crown of creation, where does that leave human-centered religion?

Technologically, limited AI is here now. Sociologically, it will be here in the near future. Philosophically and religiously, it is in the farther future. All three of these perspectives, and undoubtedly others (for example, the connections of AI to process thought), are well worth our consideration. I propose that interested persons participate here in questions, answers, and speculations about AI. It would be my pleasure to moderate such a discussion. Although I have been a computerist for six decades, I’m a generalist, not an AI specialist. And as you know, our society respects specialists and suspects generalists, so participants be warned!

2 Replies
John Buchanan
(@john-buchanan)

Peter Farleigh gave a talk on the possibility of computer intelligence at a small conference about 30 years ago. His take was that since complex human subjectivity (for Whitehead) arises out of the experiential integration of the feelings of other multi-leveled organic entities (neuronal events, past moments of the psyche, etc.), there is no way for even the most complex configuration of integrated circuits, or even quantum computers, to create the kind of matrix of intense feeling out of which human experience arises. Thus computers will never be able to "think" or "feel" in the same sense that human beings do. I concur with that, with the caveat that unforeseen novel possibilities often lie just around the bend.

Olaf Stapledon, the Whiteheadian philosopher, psychologist, and sci-fi writer of the 1920s and '30s, imagined huge silos of neural matter generating super-intelligences. This might circumvent the objections raised above, but Stapledon goes on to speculate that these super-brains' lack of bodily feelings/emotions would inherently restrict the possibility of the deepest spiritual experiences and insights. But they were real smart!

Matt Segall, Cobb Institute Science Advisory Committee Chair
(@cobb-institute-science-advisory-committee)

Thanks for kicking things off here, George! I majored in cognitive science as an undergrad, and so was exposed early on to some of the issues you mention surrounding the development of AI. I once had the chance to spend a few days with the 79-year-old Marvin Minsky when he visited my university (University of Central Florida) in 2006. His book The Emotion Machine had just been published, and he was eager to discuss his ideas with faculty and students. He gave a public lecture, attended by faculty from across the sciences and humanities, that turned into something of a spectacle. Faculty from the literature department stood up to denounce the questionable ethics of creating an emoting machine. I personally have major doubts on purely technical grounds that such a feat can be pulled off, but when it came my turn to ask Minsky a question, I granted that such a machine could be built and asked him whether it should have rights, be paid for its labor, or receive time off. His answer has had a powerful impact on my thinking ever since. He said that he was just a scientist, that such questions were not his problem, and that I'd do better to ask a politician.

When it comes to understanding whether or not a computer is really intelligent, I think there's a lot of metaphysical confusion in the discourse. We don't have widespread agreement about what "natural intelligence" is or how it works, so defining "artificial intelligence" is even more fraught. One issue is the prevalence of computer metaphors in the cognitive neurosciences, which are all too often literalized into ontology. It is one thing to model brain and cognition 'as if' they were computers or information processors, but quite another to claim that the brain and mind *are* computers (a great article on this issue was just published: "Cognition without neural representation" by Inês Hipólito).

As for the theological issues you raise, I was reminded of a book by Alexander Bard, Syntheism: Creating God in the Internet Age (https://en.wikipedia.org/wiki/Syntheism). The basic idea is that God is not a creator but something humans will create technologically.
