Neural Network Design
1 Introduction

Objectives
History
Applications
Biological Inspiration
Further Reading

Objectives

As you read these words you are using a complex biological neural network. You have a highly interconnected set of some 10^11 neurons to facilitate your reading, breathing, motion and thinking. Each of your biological neurons, a rich assembly of tissue and chemistry, has the complexity, if not the speed, of a microprocessor. Some of your neural structure was with you at birth. Other parts have been established by experience.

Scientists have only just begun to understand how biological neural networks operate. It is generally understood that all biological neural functions, including memory, are stored in the neurons and in the connections between them. Learning is viewed as the establishment of new connections between neurons or the modification of existing connections. This leads to the following question: although we have only a rudimentary understanding of biological neural networks, is it possible to construct a small set of simple artificial neurons and perhaps train them to serve a useful function? The answer is yes. This book, then, is about artificial neural networks.

The neurons that we consider here are not biological. They are extremely simple abstractions of biological neurons, realized as elements in a program or perhaps as circuits made of silicon. Networks of these artificial neurons do not have a fraction of the power of the human brain, but they can be trained to
perform useful functions. This book is about such neurons, the networks that contain them and their training.

History

The history of artificial neural networks is filled with colorful, creative individuals from many different fields, many of whom struggled for decades to develop concepts that we now take for granted. This history has been documented by various authors. One particularly interesting book is Neurocomputing: Foundations of Research by John Anderson and Edward Rosenfeld. They have collected and edited a set of some 43 papers of special historical interest. Each paper is preceded by an introduction that puts the paper in historical perspective.

Histories of some of the main neural network contributors are included at the beginning of various chapters throughout this text and will not be repeated here. However, it seems appropriate to give a brief overview, a sample of the major developments.

At least two ingredients are necessary for the advancement of a technology: concept and implementation. First, one must have a concept, a way of thinking about a topic, some view of it that gives a clarity not there before. This may involve a simple idea, or it may be more specific and include a mathematical description. To illustrate this point, consider the history of the heart. It was thought to be, at various times, the center of the soul or a source of heat. In the 17th century medical practitioners finally began to view the heart as a pump, and they designed experiments to study its pumping action. These experiments revolutionized our view of the circulatory system. Without the pump concept, an understanding of the heart was out of grasp.

Concepts and their accompanying mathematics are not sufficient for a technology to mature unless there is some way to implement the system. For instance, the mathematics necessary for the reconstruction of images from computer-aided tomography (CAT) scans was known many years before the availability of high-speed computers and efficient algorithms finally made it practical to implement a useful CAT system.

The history of neural networks has progressed through both conceptual innovations and implementation developments. These advancements, however, seem to have occurred in fits and starts rather than by steady evolution.

Some of the background work for the field of neural networks occurred in the late 19th and early 20th centuries. This consisted primarily of interdisciplinary work in physics, psychology and neurophysiology by such scientists as Hermann von Helmholtz, Ernst Mach and Ivan Pavlov. This early work emphasized general theories of learning, vision, conditioning, etc., and did not include specific mathematical models of neuron operation.

The modern view of neural networks began in the 1940s with the work of Warren McCulloch and Walter Pitts [McPi43], who showed that networks of artificial neurons could, in principle, compute any arithmetic or
logical function. Their work is often acknowledged as the origin of the neural network field.

McCulloch and Pitts were followed by Donald Hebb [Hebb49], who proposed that classical conditioning (as discovered by Pavlov) is present because of the properties of individual neurons. He proposed a mechanism for learning in biological neurons (see Chapter 7).

The first practical application of artificial neural networks came in the late 1950s, with the invention of the perceptron network and associated learning rule by Frank Rosenblatt [Rose58]. Rosenblatt and his colleagues built a perceptron network and demonstrated its ability to perform pattern recognition. This early success generated a great deal of interest in neural network research. Unfortunately, it was later shown that the basic perceptron network could solve only a limited class of problems. (See Chapter 4 for more on Rosenblatt and the perceptron learning rule.)

At about the same time, Bernard Widrow and Ted Hoff [WiHo60] introduced a new learning algorithm and used it to train adaptive linear neural networks, which were similar in structure and capability to Rosenblatt's perceptron. The Widrow-Hoff learning rule is still in use today. (See Chapter 10 for more on Widrow-Hoff learning.)

Unfortunately, both Rosenblatt's and Widrow's networks suffered from the same inherent limitations, which were widely publicized in a book by Marvin Minsky and Seymour Papert [MiPa69]. Rosenblatt and Widrow were aware of these limitations and proposed new networks that would overcome them. However, they were not able to successfully modify their learning algorithms to train the more complex networks.

Many people, influenced by Minsky and Papert, believed that further research on neural networks was a dead end. This, combined with the fact that there were no powerful digital computers on which to experiment, caused many researchers to leave the field. For a decade neural network research was largely suspended.

Some important work, however, did continue during the 1970s. In 1972 Teuvo Kohonen [Koho72] and James Anderson [Ande72] independently and separately developed new neural networks that could act as memories. (See Chapters 13 and 14 for more on Kohonen networks.) Stephen Grossberg [Gros76] was also very active during this period in the investigation of self-organizing networks. (See Chapters 15 and 16.)

Interest in neural networks had faltered during the late 1960s because of the lack of new ideas and powerful computers with which to experiment. During the 1980s both of these impediments were overcome, and research in neural networks increased dramatically. New personal computers and workstations, which rapidly grew in capability, became widely available. In addition, important new concepts were introduced.

Two new concepts were most responsible for the rebirth of neural networks. The first was the use of statistical mechanics to explain the operation of a certain class of recurrent network, which could be used as an associative memory.
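The neuron models mentioned in this history are developed fully in later chapters, but the McCulloch-Pitts claim — that networks of simple threshold units can compute any logical function — is easy to preview. The sketch below is illustrative only (the weights, thresholds and function names are chosen by hand, not taken from the text): a single threshold unit realizes AND, OR and NOT, and composing units into a small network computes a function, XOR, that no single unit can.

```python
# Illustrative sketch (not from the text): a McCulloch-Pitts-style
# binary threshold unit. The neuron fires (outputs 1) when the
# weighted sum of its binary inputs reaches the threshold.

def mcp_neuron(inputs, weights, threshold):
    """Return 1 if sum(w * x) >= threshold, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Basic logic gates as single neurons, with hand-chosen parameters:
def AND(x1, x2): return mcp_neuron([x1, x2], [1, 1], 2)
def OR(x1, x2):  return mcp_neuron([x1, x2], [1, 1], 1)
def NOT(x):      return mcp_neuron([x], [-1], 0)

# Composing neurons into a network computes functions a single unit
# cannot, e.g. XOR(x1, x2) = (x1 OR x2) AND NOT(x1 AND x2):
def XOR(x1, x2): return AND(OR(x1, x2), NOT(AND(x1, x2)))

for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"XOR({x1}, {x2}) = {XOR(x1, x2)}")
```

Since every Boolean function can be written in terms of AND, OR and NOT, networks of such units can, in principle, compute any logical function — which is the sense of McCulloch and Pitts's result sketched here.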