Neural Network Design (2022年神经网络设计.pdf)

1 Introduction

Chapter contents: Objectives, History, Applications, Biological Inspiration, Further Reading

Objectives

As you read these words you are using a complex biological neural network. You have a highly interconnected set of some 10^11 neurons to facilitate your reading, breathing, motion and thinking. Each of your biological neurons, a rich assembly of tissue and chemistry, has the complexity, if not the speed, of a microprocessor. Some of your neural structure was with you at birth. Other parts have been established by experience.

Scientists have only just begun to understand how biological neural networks operate. It is generally understood that all biological neural functions, including memory, are stored in the neurons and in the connections between them. Learning is viewed as the establishment of new connections between neurons or the modification of existing connections. This leads to the following question: Although we have only a rudimentary understanding of biological neural networks, is it possible to construct a small set of simple artificial neurons and perhaps train them to serve a useful function? The answer is yes. This book, then, is about artificial neural networks.

The neurons that we consider here are not biological. They are extremely simple abstractions of biological neurons, realized as elements in a program or perhaps as circuits made of silicon. Networks of these artificial neurons do not have a fraction of the power of the human brain, but they can be trained to perform useful functions. This book is about such neurons, the networks that contain them and their training.

History

The history of artificial neural networks is filled with colorful, creative individuals from many different fields, many of whom struggled for decades to develop concepts that we now take for granted. This history has been documented by various authors. One particularly interesting book is Neurocomputing: Foundations of Research by John Anderson and Edward Rosenfeld. They have collected and edited a set of some 43 papers of special historical interest. Each paper is preceded by an introduction that puts the paper in historical perspective.

Histories of some of the main neural network contributors are included at the beginning of various chapters throughout this text and will not be repeated here. However, it seems appropriate to give a brief overview, a sample of the major developments.

At least two ingredients are necessary for the advancement of a technology: concept and implementation. First, one must have a concept, a way of thinking about a topic, some view of it that gives a clarity not there before. This may involve a simple idea, or it may be more specific and include a mathematical description. To illustrate this point, consider the history of the heart. It was thought to be, at various times, the center of the soul or a source of heat. In the 17th century medical practitioners finally began to view the heart as a pump, and they designed experiments to study its pumping action. These experiments revolutionized our view of the circulatory system. Without the pump concept, an understanding of the heart was out of grasp.

Concepts and their accompanying mathematics are not sufficient for a technology to mature unless there is some way to implement the system. For instance, the mathematics necessary for the reconstruction of images from computer-aided tomography (CAT) scans was known many years before the availability of high-speed computers and efficient algorithms finally made it practical to implement a useful CAT system.

The history of neural networks has progressed through both conceptual innovations and implementation developments. These advancements, however, seem to have occurred in fits and starts rather than by steady evolution.

Some of the background work for the field of neural networks occurred in the late 19th and early 20th centuries. This consisted primarily of interdisciplinary work in physics, psychology and neurophysiology by such scientists as Hermann von Helmholtz, Ernst Mach and Ivan Pavlov. This early work emphasized general theories of learning, vision, conditioning, etc., and did not include specific mathematical models of neuron operation.

The modern view of neural networks began in the 1940s with the work of Warren McCulloch and Walter Pitts [McPi43], who showed that networks of artificial neurons could, in principle, compute any arithmetic or logical function. Their work is often acknowledged as the origin of the neural network field.

McCulloch and Pitts were followed by Donald Hebb [Hebb49], who proposed that classical conditioning (as discovered by Pavlov) is present because of the properties of individual neurons. He proposed a mechanism for learning in biological neurons (see Chapter 7).

The first practical application of artificial neural networks came in the late 1950s, with the invention of the perceptron network and associated learning rule by Frank Rosenblatt [Rose58]. Rosenblatt and his colleagues built a perceptron network and demonstrated its ability to perform pattern recognition. This early success generated a great deal of interest in neural network research. Unfortunately, it was later shown that the basic perceptron network could solve only a limited class of problems. (See Chapter 4 for more on Rosenblatt and the perceptron learning rule.)

At about the same time, Bernard Widrow and Ted Hoff [WiHo60] introduced a new learning algorithm and used it to train adaptive linear neural networks, which were similar in structure and capability to Rosenblatt's perceptron. The Widrow-Hoff learning rule is still in use today. (See Chapter 10 for more on Widrow-Hoff learning.)

Unfortunately, both Rosenblatt's and Widrow's networks suffered from the same inherent limitations, which were widely publicized in a book by Marvin Minsky and Seymour Papert [MiPa69]. Rosenblatt and Widrow were aware of these limitations and proposed new networks that would overcome them. However, they were not able to successfully modify their learning algorithms to train the more complex networks.

Many people, influenced by Minsky and Papert, believed that further research on neural networks was a dead end. This, combined with the fact that there were no powerful digital computers on which to experiment, caused many researchers to leave the field. For a decade neural network research was largely suspended.

Some important work, however, did continue during the 1970s. In 1972 Teuvo Kohonen [Koho72] and James Anderson [Ande72] independently and separately developed new neural networks that could act as memories. (See Chapters 13 and 14 for more on Kohonen networks.) Stephen Grossberg [Gros76] was also very active during this period in the investigation of self-organizing networks. (See Chapters 15 and 16.)

Interest in neural networks had faltered during the late 1960s because of the lack of new ideas and powerful computers with which to experiment. During the 1980s both of these impediments were overcome, and research in neural networks increased dramatically. New personal computers and workstations, which rapidly grew in capability, became widely available. In addition, important new concepts were introduced.

Two new concepts were most responsible for the rebirth of neural networks. The first was the use of statistical mechanics to explain the operation of a certain class of recurrent network, which could be used as an associative memory. This was described in a seminal paper by physicist John Hopfield [Hopf82]. (Chapters 17 and 18 discuss these Hopfield networks.)

The second key development of the 1980s was the backpropagation algorithm for training multilayer perceptron networks, which was discovered independently by several different researchers. The most influential publication of the backpropagation algorithm was by David Rumelhart and James McClelland [RuMc86]. This algorithm was the answer to the criticisms Minsky and Papert had made in the 1960s. (See Chapters 11 and 12 for a development of the backpropagation algorithm.)

These new developments reinvigorated the field of neural networks. In the last ten years, thousands of papers have been written, and neural networks have found many applications. The field is buzzing with new theoretical and practical work. As noted below, it is not clear where all of this will lead us.

The brief historical account given above is not intended to identify all of the major contributors, but is simply to give the reader some feel for how knowledge in the neural network field has progressed. As one might note, the progress has not always been "slow but sure." There have been periods of dramatic progress and periods when relatively little has been accomplished.

Many of the advances in neural networks have had to do with new concepts, such as innovative architectures and training rules. Just as important has been the availability of powerful new computers on which to test these new concepts.

Well, so much for the history of neural networks to this date. The real question is, What will happen in the next ten to twenty years? Will neural networks take a permanent place as a mathematical/engineering tool, or will they fade away as have so many promising technologies? At present, the answer seems to be that neural networks will not only have their day but will have a permanent place, not as a solution to every problem, but as a tool to be used in appropriate situations. In addition, remember that we still know very little about how the brain works. The most important advances in neural networks almost certainly lie in the future.

Although it is difficult to predict the future success of neural networks, the large number and wide variety of applications of this new technology are very encouraging. The next section describes some of these applications.

Applications

A recent newspaper article described the use of neural networks in literature research by Aston University. It stated that the network can be taught to recognize individual writing styles, and the researchers used it to compare works attributed to Shakespeare and his contemporaries. A popular science television program recently documented the use of neural networks by an Italian research institute to test the purity of olive oil. These examples are indicative of the broad range of applications that can be found for neural networks. The applications are expanding because neural networks are good at solving problems, not just in engineering, science and mathematics, but in medicine, business, finance and literature as well. Their application to a wide variety of problems in many fields makes them very attractive. Also, faster computers and faster algorithms have made it possible to use neural networks to solve complex industrial problems that formerly required too much computation.

The following note and Table of Neural Network Applications are reproduced here from the Neural Network Toolbox for MATLAB with the permission of the MathWorks, Inc.

The 1988 DARPA Neural Network Study [DARP88] lists various neural network applications, beginning with the adaptive channel equalizer in about 1984. This device, which is an outstanding commercial success, is a single-neuron network used in long distance telephone systems to stabilize voice signals. The DARPA report goes on to list other commercial applications, including a small word recognizer, a process monitor, a sonar classifier and a risk analysis system.

Neural networks have been applied in many fields since the DARPA report was written. A list of some applications mentioned in the literature follows.

Aerospace
High performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors

Automotive
Automobile automatic guidance systems, warranty activity analyzers

Banking
Check and other document readers, credit application evaluators

Defense
Weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification

Electronics
Code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling

Entertainment
Animation, special effects, market forecasting

Financial
Real estate appraisal, loan advisor, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading program, corporate financial analysis, currency price prediction

Insurance
Policy application evaluation, product optimization

Manufacturing
Manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling of chemical process systems

Medical
Breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of transplant times, hospital expense reduction, hospital quality improvement, emergency room test advisement

Oil and Gas
Exploration

Robotics
Trajectory control, forklift robot, manipulator controllers, vision systems

Speech
Speech recognition, speech compression, vowel classification, text to speech synthesis

Securities
Market analysis, automatic bond rating, stock trading advisory systems

Telecommunications
Image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems

Transportation
Truck brake diagnosis systems, vehicle scheduling, routing systems

Conclusion

The number of neural network applications, the money that has been invested in neural network software and hardware, and the depth and breadth of interest in these devices have been growing rapidly.

Biological Inspiration

The artificial neural networks discussed in this text are only remotely related to their biological counterparts. In this section we will briefly describe those characteristics of brain function that have inspired the development of artificial neural networks.

The brain consists of a large number (approximately 10^11) of highly connected elements (approximately 10^4 connections per element) called neurons. For our purposes these neurons have three principal components: the dendrites, the cell body and the axon. The dendrites are tree-like receptive networks of nerve fibers that carry electrical signals into the cell body. The cell body effectively sums and thresholds these incoming signals. The axon is a single long fiber that carries the signal from the cell body out to other neurons.
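As a rough illustration of the sum-and-threshold behaviour just described, the short Python sketch below implements a single artificial neuron in that spirit: it forms a weighted sum of its inputs and applies a hard threshold. The function name, weights and bias values are illustrative assumptions for this sketch, not notation taken from the book.

# Minimal sketch of a single sum-and-threshold artificial neuron.
# Names and values here are illustrative assumptions, not the book's notation.
def threshold_neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals (the role of the dendrites and cell body).
    s = sum(w * x for w, x in zip(weights, inputs))
    # Hard threshold on the summed signal (the output carried along the axon).
    return 1 if s + bias >= 0 else 0

# Example: with these (assumed) weights and bias the neuron acts as a logical AND.
print(threshold_neuron([1, 1], [1.0, 1.0], -1.5))  # prints 1
print(threshold_neuron([1, 0], [1.0, 1.0], -1.5))  # prints 0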
