History of Artificial Intelligence

What Is Artificial Intelligence (AI)?

Artificial intelligence (AI) is a broad branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI is an interdisciplinary science with multiple approaches, but advances in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.

 

Intelligent robots and artificial beings first appeared in the ancient Greek myths of Antiquity. Aristotle’s development of the syllogism and its use of deductive reasoning was a key moment in humankind’s quest to understand its own intelligence. While the roots are long and deep, the history of artificial intelligence as we think of it today spans less than a century. Below is a look at some of the most important events in AI.

History of Artificial Intelligence (AI)

 

1943

  • Warren McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity.” The paper proposes the first mathematical model for building a neural network.

1949

  • In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they are used. Hebbian learning continues to be an important model in AI.
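
For illustration, here is a minimal Python sketch of a plain Hebbian weight update (the function name, activity patterns and learning rate are illustrative choices, not taken from Hebb’s book): a connection strengthens in proportion to how often the two units it joins are active together.

import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    # Hebb's rule in its simplest form: dw = lr * (post outer pre), so each
    # weight grows in proportion to the co-activation of the units it connects.
    return w + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity pattern (illustrative)
post = np.array([1.0, 1.0])       # postsynaptic activity pattern (illustrative)

w = np.zeros((2, 3))
for _ in range(10):               # repeated co-activation...
    w = hebbian_update(w, pre, post)
print(w)                          # ...strengthens only the co-active connections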

1950

  • Alan Turing publishes “Computing Machinery and Intelligence,” proposing what is now known as the Turing Test, a method for determining whether a machine is intelligent.

  • Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.

1952

  • Arthur Samuel develops a self-learning program to play checkers.

1954

  • The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.

1956

  • The phrase “artificial intelligence” is coined at the “Dartmouth Summer Research Project on Artificial Intelligence.” Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered the birth of artificial intelligence as we know it today.

1958

  • John McCarthy develops the AI programming language Lisp and publishes the paper “Programs with Common Sense.” The paper proposes the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.

1959

  • Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.
  • Arthur Samuel coins the term “machine learning” while at IBM.

1963

  • John McCarthy founds the AI Lab at Stanford.

1966

  • The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.

1969

  • The first successful expert systems are developed: DENDRAL, a program for identifying the molecular structure of organic compounds, and MYCIN, designed to diagnose blood infections, are created at Stanford.

1973

  • The “Lighthill Report,” detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects.

1974-1980

  • Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year’s “Lighthill Report,” artificial intelligence funding dries up and research stalls. This period is known as the “First AI Winter.”

1980

  • Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first “AI Winter.”

1982

  • Japan launches the ambitious Fifth Generation Computer Systems (FGCS) project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.

1983

  • In response to Japan’s FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.

1985

  • Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.

1987-1993

  • As computing technology improved, cheaper alternatives emerged and the Lisp machine market collapsed in 1987, ushering in the “Second AI Winter.” During this period, expert systems proved too expensive to maintain and update, eventually falling out of favor.
  • Japan terminates the FGCS project in 1992, citing failure to meet the ambitious goals outlined a decade earlier.

1991

  • U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.

1997

  • IBM’s Deep Blue beats world chess champion Garry Kasparov.

2005

  • STANLEY, a self-driving car, wins the DARPA Grand Challenge.
  • The U.S. military begins investing in autonomous robots like Boston Dynamics’ “Big Dog” and iRobot’s “PackBot.”

2008

  • Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.

2012

  • Andrew Ng, founder of the Google Brain Deep Learning project, feeds a neural network using deep learning algorithms 10 million YouTube videos as a training set. The neural network learns to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.

2014

  • Google makes the first self-driving car to pass a state driving test.

2016

  • Google DeepMind’s AlphaGo beats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.
