Chapter 5-6: In-Memory Computing - Spark
Data Processing System Architecture
- Overall layers: data application system, data processing system, data storing system.
- Inside the data processing system: computing algorithm, computing model, computing platform & engine.
- Computing platforms provide the various development kits and operating environments.
- Computing models target different types of data, such as:
  1. the batch processing model for massive data (e.g. MapReduce);
  2. the stream computing model for dynamic data streams;
  3. the massively parallel processing (MPP) model for structured data;
  4. the in-memory computing model for large-scale physical memory;
  5. the data flow graph model.
- Computing engines: Hadoop, Spark, Storm, etc.

Spark
- Spark was initially started by Matei Zaharia at UC Berkeley's AMP Lab in 2009 and open-sourced in 2010. In 2013 it was donated to the Apache Software Foundation; it is now a Top-Level Apache Project and one of the most active open-source big data projects.
- Spark is a parallel processing framework based on the in-memory computing model. It can be built on the Hadoop platform and use the HDFS file system to store data, but it layers a Resilient Distributed Dataset (RDD) abstraction on top of the file system to support efficient distributed in-memory computing.

What is Spark

RDD (Resilient Distributed Dataset)

Spark Driver and Executor
Spark runs a Driver (on the Master node; there is also a mode in which it runs on a Worker node) and Executors (on the Worker nodes):
- The Driver converts the computing tasks of the application into a directed acyclic graph (DAG).
- The Executor performs the computation and stores data on its Worker node.
- On each Worker, the Executor generates a task thread for each data partition distributed to it, so the partitions are computed in parallel (a small sketch of this model is given at the end of the section).

Features of Spark
- Coarse-grained operations: transformations act on the whole dataset, not on individual elements.
- Persistence: the result of an RDD evaluation can be saved, and intermediate results can be stored for further use.
- Partitioning: RDDs are huge collections of data items that cannot fit on a single node and must be partitioned across many nodes.
- Immutability: created data can be retrieved at any time, but its value cannot be changed.
- Fault tolerance: RDDs track data lineage information to reconstruct lost data automatically.
- Lazy evaluation: Spark does not compute a result immediately; when a transformation is called on an RDD it does not execute, and execution does not start until an action is triggered.
- In-memory computing: computed results are stored in distributed memory (RAM) instead of stable storage (disk).
(A short code sketch illustrating these features also follows at the end of the section.)

Spark Components

Spark Advantages
- Fast processing
- Flexibility
- In-memory computing
- Real-time processing
- Better analytics

Spark Ecosystem

Questions?
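To make the Driver/Executor description concrete, here is a minimal PySpark sketch of how partitions map to parallel tasks. It assumes a local pyspark installation and runs in local mode (so the Executor's task threads live in the same process); the application name and variable names are illustrative, not taken from the slides.

```python
# A minimal PySpark sketch of the Driver / Executor model described above.
# Assumption: pyspark is installed and the job runs in local mode.
from pyspark import SparkConf, SparkContext

# The program that creates the SparkContext acts as the Driver.
conf = SparkConf().setAppName("driver-executor-demo").setMaster("local[4]")
sc = SparkContext(conf=conf)

# Spread the data over 4 partitions; each partition becomes one task,
# and the Executor runs one task thread per partition in parallel.
rdd = sc.parallelize(range(1_000_000), numSlices=4)
print(rdd.getNumPartitions())   # -> 4

# Transformations only extend the DAG that the Driver is building ...
squares = rdd.map(lambda x: x * x)

# ... an action makes the Driver ship the DAG's tasks to the Executors,
# which compute their partitions and return the results.
print(squares.sum())

sc.stop()
```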
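A second short sketch, under the same local-mode assumption, illustrates the RDD features listed earlier: immutability, lazy evaluation, lineage tracking, and caching of results in memory.

```python
# A minimal PySpark sketch of the RDD features listed above: immutability,
# lazy evaluation, lineage, and caching. Names are illustrative.
from pyspark import SparkContext

sc = SparkContext("local[2]", "rdd-features-demo")
data = sc.parallelize([1, 2, 3, 4, 5])

# Transformations are lazy: nothing is computed here, Spark only records
# the lineage (filter -> map) needed to rebuild the data if it is lost.
evens_doubled = data.filter(lambda x: x % 2 == 0).map(lambda x: x * 2)

# RDDs are immutable: `data` is unchanged; transformations return new RDDs.
assert data.collect() == [1, 2, 3, 4, 5]

# cache() keeps the computed partitions in distributed memory (RAM), so
# later actions reuse them instead of recomputing from stable storage.
evens_doubled.cache()

# Actions trigger the actual computation.
print(evens_doubled.count())    # -> 2
print(evens_doubled.collect())  # -> [4, 8]

sc.stop()
```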