Spark编程基础及项目实践 (Spark Programming Fundamentals and Project Practice): Chapter Exercises for Project 6, with Answers
Answers to the Project 6 Exercises

1. Multiple choice

(1) Which of the following cannot serve as an input data stream for Spark Streaming? ( D )
A. Kafka    B. Twitter    C. TCP socket    D. OpenStack

(2) Which of the following is not a Spark programming language? ( C )
A. Java    B. Scala    C. Ruby    D. Python

(3) Among the DStream transformation operations, the one that aggregates the elements of each RDD in the stream is ( B ).
A. flatMap()    B. reduce()    C. count()    D. union()

(4) Which of the following is not an important parameter of window operations? ( D )
A. batch interval    B. window interval    C. slide interval    D. input stream interval

(5) Which of the following is not a Spark Streaming output operation? ( B )
A. saveAsTextFiles    B. saveAsStreamingFiles    C. saveAsHadoopFiles    D. saveAsObjectFiles

2. Operation exercises

Log in to the Linux system as user hadoop, start spark-shell, and complete the following operations:

(1) Import the Spark Streaming packages and create a StreamingContext object with a streaming batch interval of 5 seconds.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Create the SparkConf
    val spark_conf = new SparkConf().setAppName("WordCount").setMaster("local[3]")
    // Create the StreamingContext with a 5-second batch interval
    val spark_context = new StreamingContext(spark_conf, Seconds(5))
  }
}

(2) In spark-shell, read the local file "/home/hadoop/test.txt" and count the number of English words (the file consists of English text only). The answer below maintains a cumulative word count over a TCP socket stream with mapWithState; a sketch that reads the local file directly is given after exercise (3).

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, State, StateSpec, StreamingContext}

object CumulativeWord {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
    val ssc = new StreamingContext(conf, Seconds(1))
    // Stateful operations such as mapWithState require a checkpoint directory
    ssc.checkpoint(".")
    val initialRDD = ssc.sparkContext.parallelize(List(("hello", 0)))
    val lines = ssc.socketTextStream("localhost", 9999)
    val words = lines.flatMap(_.split(" "))
    val pairs = words.map(word => (word, 1))
    // Add each batch's count for a word to the running total kept in State
    val mappingFunc = (word: String, one: Option[Int], state: State[Int]) => {
      val sum = one.getOrElse(0) + state.getOption.getOrElse(0)
      val output = (word, sum)
      state.update(sum)
      output
    }
    val stateDstream = pairs.mapWithState(StateSpec.function(mappingFunc).initialState(initialRDD))
    stateDstream.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

(3) In spark-shell, listen on a TCP socket on local port 9999 and count words using window operations, with a 5-second batch interval and window and slide intervals of your choice.

import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.SparkConf

object WindowsWordNum {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WindowsWordNum").setMaster("local[2]")
    // In Scala, create a StreamingContext with a 5-second batch interval
    val ssc = new StreamingContext(conf, Seconds(5))
    // Listen on TCP port 9999 and map each word to a (word, 1) pair
    val lines = ssc.socketTextStream("localhost", 9999)
    val searchWordPairDStream = lines.flatMap(_.split(" ")).map(word => (word, 1))
    // Reduce over a 60-second window that slides every 10 seconds
    // (both are multiples of the 5-second batch interval)
    val searchWordCountsDStream = searchWordPairDStream.reduceByKeyAndWindow(
      (v1: Int, v2: Int) => v1 + v2, Seconds(60), Seconds(10))
    val finalDStream = searchWordCountsDStream.transform { searchWordCountsRDD =>
      // Swap to (count, word) so sortByKey orders by count, descending
      val countSearchWordsRDD = searchWordCountsRDD.map(tuple => (tuple._2, tuple._1))
      val sortedCountSearchWordsRDD = countSearchWordsRDD.sortByKey(false)
      // Swap back to (word, count)
      val sortedSearchWordCountsRDD = sortedCountSearchWordsRDD.map(tuple => (tuple._2, tuple._1))
      // Print the three most frequent words in the current window
      val top3SearchWordCounts = sortedSearchWordCountsRDD.take(3)
      for (tuple <- top3SearchWordCounts) println("result : " + tuple)
      searchWordCountsRDD
    }
    finalDStream.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
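The socket-based answer to exercise (2) never opens "/home/hadoop/test.txt" itself. A minimal sketch that addresses the prompt as stated, assuming the SparkContext sc that spark-shell creates automatically, could look like this:

// Read the local file (note the file:// scheme) and count whitespace-separated words
val lines = sc.textFile("file:///home/hadoop/test.txt")
val wordCount = lines.flatMap(_.split("\\s+")).filter(_.nonEmpty).count()
println("total words: " + wordCount)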
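To try either socket-based answer, first open a data source on port 9999 with netcat (nc -lk 9999) in a separate terminal and type words into it; each batch of the stream then picks up whatever was typed during that interval.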