并行程序设计导论第一章.ppt (An Introduction to Parallel Programming, Chapter 1)

Slide 1: Title
• An Introduction to Parallel Programming, by Peter Pacheco.
• Chapter 1: Why Parallel Computing?
• Copyright 2010, Elsevier Inc. All rights reserved.

Slide 2: Roadmap
• Why we need ever-increasing performance.
• Why we're building parallel systems.
• Why we need to write parallel programs.
• How do we write parallel programs?
• What we'll be doing.
• Concurrent, parallel, distributed!

Slide 3: Changing times
• From 1986 to 2002, microprocessors were speeding like a rocket, increasing in performance an average of 50% per year.
• Since then, it's dropped to about a 20% increase per year.

Slide 4: An intelligent solution
• Instead of designing and building faster microprocessors, put multiple processors on a single integrated circuit.

Slide 5: Now it's up to the programmers
• Adding more processors doesn't help much if programmers aren't aware of them, or don't know how to use them.
• Serial programs don't benefit from this approach (in most cases).

Slide 6: Why we need ever-increasing performance
• Computational power is increasing, but so are our computation problems and needs.
• Problems we never dreamed of have been solved because of past increases, such as decoding the human genome.
• More complex problems are still waiting to be solved.

Slides 7-11: Application examples (image slides): climate modeling, protein folding, drug discovery, energy research, data analysis.

Slide 12: Why we're building parallel systems
• Up to now, performance increases have been attributable to increasing density of transistors.
• But there are inherent problems.

Slide 13: A little physics lesson
• Smaller transistors = faster processors.
• Faster processors = increased power consumption.
• Increased power consumption = increased heat.
• Increased heat = unreliable processors.

Slide 14: Solution
• Move away from single-core systems to multicore processors.
• "core" = central processing unit (CPU).
• Introducing parallelism!

Slide 15: Why we need to write parallel programs
• Running multiple instances of a serial program often isn't very useful.
• Think of running multiple instances of your favorite game.
• What you really want is for it to run faster.

Slide 16: Approaches to the serial problem
• Rewrite serial programs so that they're parallel.
• Write translation programs that automatically convert serial programs into parallel programs.
  • This is very difficult to do.
  • Success has been limited.

Slide 17: More problems
• Some coding constructs can be recognized by an automatic program generator and converted to a parallel construct.
• However, it's likely that the result will be a very inefficient program.
• Sometimes the best parallel solution is to step back and devise an entirely new algorithm.

Slide 18: Example
• Compute n values and add them together.
• Serial solution: (the code slide is reconstructed in the sketch after slide 19 below).

Slide 19: Example (cont.)
• We have p cores, p much smaller than n.
• Each core performs a partial sum of approximately n/p values.
• Each core uses its own private variables and executes this block of code independently of the other cores.
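The code on slides 18-19 was embedded as images and did not survive extraction. Below is a minimal C sketch of what the two slides describe: the serial sum, and the block each core runs on its own slice of the data. Compute_next_value is named on slide 20, but its signature and body here are assumptions, as are the my_first_i/my_last_i bounds and the values of p and my_rank; they exist only to make the sketch compile.

    #include <stdio.h>

    /* Slide 20 names Compute_next_value; this dummy body is an
       assumption, present only so the sketch is runnable. */
    static double Compute_next_value(int i) {
        return (double)(i % 10);   /* placeholder values */
    }

    int main(void) {
        int n = 24;

        /* Serial solution (slide 18): compute n values and add them. */
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += Compute_next_value(i);

        /* The block each core executes (slide 19), shown for one core:
           a core with rank my_rank out of p cores sums its ~n/p values
           using only private variables (my_sum, my_i, ...). */
        int p = 8, my_rank = 3;            /* illustrative values */
        int my_n = n / p;                  /* assumes p divides n evenly */
        int my_first_i = my_rank * my_n;
        int my_last_i  = my_first_i + my_n;
        double my_sum = 0.0;
        for (int my_i = my_first_i; my_i < my_last_i; my_i++)
            my_sum += Compute_next_value(my_i);

        printf("serial sum = %g, core %d partial sum = %g\n",
               sum, my_rank, my_sum);
        return 0;
    }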
Slide 20: Example (cont.)
• After each core completes execution of the code, its private variable my_sum contains the sum of the values computed by its calls to Compute_next_value.
• Example: 8 cores, n = 24; the calls to Compute_next_value return (three values per core):
  1,4,3 | 9,2,8 | 5,1,1 | 6,2,7 | 2,5,0 | 4,1,8 | 6,5,1 | 2,3,9

Slide 21: Example (cont.)
• Once all the cores are done computing their private my_sum, they form a global sum by sending results to a designated "master" core, which adds up the final result.

Slide 22: Example (cont.) (image slide showing the global-sum code).

Slide 23: Example (cont.)

  Core:    0   1   2   3   4   5   6   7
  my_sum:  8  19   7  15   7  13  12  14

  Global sum: 8 + 19 + 7 + 15 + 7 + 13 + 12 + 14 = 95

  After the master adds up the partial sums:

  Core:    0   1   2   3   4   5   6   7
  my_sum: 95  19   7  15   7  13  12  14

Slide 24: But wait! There's a much better way to compute the global sum.

Slide 25: Better parallel algorithm
• Don't make the master core do all the work. Share it among the other cores.
• Pair the cores so that core 0 adds its result to core 1's result, core 2 adds its result to core 3's result, etc. Work with odd- and even-numbered pairs of cores.

Slide 26: Better parallel algorithm (cont.)
• Repeat the process now with only the evenly ranked cores: core 0 adds the result from core 2, core 4 adds the result from core 6, etc.
• Now cores divisible by 4 repeat the process, and so forth, until core 0 has the final result.

Slide 27: Multiple cores forming a global sum (image slide showing the tree-structured sum; see the sketch after slide 29 below).

Slide 28: Analysis
• In the first example, the master core performs 7 receives and 7 additions.
• In the second example, the master core performs 3 receives and 3 additions.
• The improvement is more than a factor of 2!

Slide 29: Analysis (cont.)
• The difference is more dramatic with a larger number of cores. If we have 1000 cores:
  • The first example would require the master to perform 999 receives and 999 additions.
  • The second example would only require 10 receives and 10 additions (since ceil(log2(1000)) = 10).
• That's an improvement of almost a factor of 100!
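The figure on slide 27 was an image. The following C sketch (not the book's code) simulates the tree-structured sum serially, using the eight partial sums from slide 23's table; the pairing logic follows slides 25-26 and assumes the number of cores p is a power of two. For p = 8 it takes 3 rounds, matching slide 28's count of 3 receives and 3 additions on core 0, and it prints the global sum 95.

    #include <stdio.h>

    int main(void) {
        /* Partial sums from slide 23's table, one slot per "core". */
        double my_sum[8] = {8, 19, 7, 15, 7, 13, 12, 14};
        int p = 8;   /* number of cores; pairing assumes p is a power of 2 */

        /* Tree-structured global sum (slides 25-27), simulated serially.
           Round 1 (half = 1): ranks 0,2,4,6 add the value from rank+1.
           Round 2 (half = 2): ranks 0,4 add the value from rank+2.
           Round 3 (half = 4): rank 0 adds the value from rank+4. */
        for (int half = 1; half < p; half *= 2)
            for (int rank = 0; rank + half < p; rank += 2 * half)
                my_sum[rank] += my_sum[rank + half];  /* "receive" from rank+half */

        printf("global sum on core 0 = %g\n", my_sum[0]);  /* prints 95 */
        return 0;
    }

By contrast, the naive scheme of slides 21-23 is a single loop in which the master performs p - 1 receives and p - 1 additions.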
Slide 30: How do we write parallel programs?
• Task parallelism: partition the various tasks carried out in solving the problem among the cores.
• Data parallelism: partition the data used in solving the problem among the cores; each core carries out similar operations on its part of the data.

Slide 31: Professor P (image slide: 15 questions, 300 exams).

Slide 32: Professor P's grading assistants (image slide: TA#1, TA#2, TA#3).

Slide 33: Division of work, data parallelism
• TA#1: 100 exams; TA#2: 100 exams; TA#3: 100 exams.

Slide 34: Division of work, task parallelism
• TA#1: questions 1-5; TA#2: questions 6-10; TA#3: questions 11-15.

Slide 35: Division of work, data parallelism (image slide).

Slide 36: Division of work, task parallelism
• Tasks: 1) receiving, 2) addition.

Slide 37: Coordination
• Cores usually need to coordinate their work.
• Communication: one or more cores send their current partial sums to another core.
• Load balancing: share the work evenly among the cores so that no one core is heavily loaded.
• Synchronization: because each core works at its own pace, make sure cores do not get too far ahead of the rest.

Slide 38: What we'll be doing
• Learning to write programs that are explicitly parallel.
• Using the C language.
• Using three different extensions to C: Message-Passing Interface (MPI), POSIX Threads (Pthreads), and OpenMP.

Slide 39: Types of parallel systems
• Shared-memory: the cores can share access to the computer's memory; coordinate the cores by having them examine and update shared memory locations.
• Distributed-memory: each core has its own, private memory; the cores must communicate explicitly by sending messages across a network.

Slide 40: Types of parallel systems (image slide: shared-memory vs. distributed-memory diagrams).

Slide 41: Terminology
• Concurrent computing: a program is one in which multiple tasks can be in progress at any instant.
• Parallel computing: a program is one in which multiple tasks cooperate closely to solve a problem.
• Distributed computing: a program may need to cooperate with other programs to solve a problem.

Slide 42: Concluding Remarks (1)
• The laws of physics have brought us to the doorstep of multicore technology.
• Serial programs typically don't benefit from multiple cores.
• Automatic parallel program generation from serial program code isn't the most efficient approach for getting high performance from multicore computers.

Slide 43: Concluding Remarks (2)
• Learning to write parallel programs involves learning how to coordinate the cores.
• Parallel programs are usually very complex and therefore require sound programming techniques and development.
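Slide 38 says the course uses C with MPI, Pthreads, and OpenMP. As a closing taste of what explicitly parallel C looks like (this example is not from the deck), here is a minimal OpenMP version of the chapter's global-sum example: each thread privately accumulates a partial sum of its share of the iterations, and the standard reduction clause combines the partials, much as the cores do on slides 19-27. Compute_next_value's body is again a placeholder assumption.

    #include <stdio.h>
    #include <omp.h>

    /* Placeholder stand-in for the deck's Compute_next_value. */
    static double Compute_next_value(int i) {
        return (double)(i % 10);
    }

    int main(void) {
        int n = 24;
        double sum = 0.0;

        /* Each thread keeps a private partial sum of its iterations;
           reduction(+:sum) combines the partials into sum at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += Compute_next_value(i);

        printf("global sum = %g\n", sum);  /* same value as the serial loop */
        return 0;
    }

Build with an OpenMP-capable compiler, e.g. gcc -fopenmp sum.c -o sum.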
