
[Spark 42] Logical Execution Plans of RDD Operators, Part 2

 

1. distinct

2. cogroup


1. distinct

1.1 Example Code

package spark.examples

import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.SparkContext._

object SparkRDDDistinct {

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("SparkRDDDistinct").setMaster("local")
    val sc = new SparkContext(conf)
    // Source RDD with duplicate elements, split into 3 partitions
    val rdd1 = sc.parallelize(List(1, 8, 2, 1, 4, 2, 7, 6, 2, 3, 1), 3)
    // Remove the duplicates
    val distinctRDD = rdd1.distinct()

    distinctRDD.saveAsTextFile("file:///D:/distinct" + System.currentTimeMillis())

    println(distinctRDD.toDebugString)
  }

}

1.2 RDD Lineage (toDebugString output)

(3) MappedRDD[3] at distinct at SparkRDDDisctinct.scala:14 []
 |  ShuffledRDD[2] at distinct at SparkRDDDisctinct.scala:14 []
 +-(3) MappedRDD[1] at distinct at SparkRDDDisctinct.scala:14 []
    |  ParallelCollectionRDD[0] at parallelize at SparkRDDDisctinct.scala:13 []

1.3 Job Result

part-00000:   6 3

part-00001:   4 1 7

part-00002:   8 2

 

Note that the result is not sorted; distinct only removes duplicates, it does not order the output.
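The way the values are spread across the three output files is not random either: distinct shuffles with Spark's default HashPartitioner, so each element lands in the partition given by its hash code modulo the partition count. A minimal sketch (not part of the original post; the object and variable names are mine) that reproduces the layout above:

import org.apache.spark.HashPartitioner

// Illustration only: where each distinct value ends up when the shuffle
// uses a HashPartitioner with 3 partitions, as in the example job.
object DistinctPartitionLayout {
  def main(args: Array[String]): Unit = {
    val partitioner = new HashPartitioner(3)
    // For Int keys, hashCode equals the value itself,
    // so 6 -> 0, 3 -> 0, 4 -> 1, 1 -> 1, 7 -> 1, 8 -> 2, 2 -> 2,
    // which matches part-00000/00001/00002 above.
    Seq(6, 3, 4, 1, 7, 8, 2).foreach { k =>
      println(s"$k goes to part-0000${partitioner.getPartition(k)}")
    }
  }
}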

 

1.4 Source Code of distinct

def distinct(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] =
    map(x => (x, null)).reduceByKey((x, y) => x, numPartitions).map(_._1) // the final map keeps only the first element of each (x, null) pair
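So distinct is nothing more than map + reduceByKey + map, which is exactly the MappedRDD -> ShuffledRDD -> MappedRDD chain shown by toDebugString above; the zero-argument distinct() used in the example simply calls this overload with the RDD's current number of partitions. A minimal sketch (my own variable name), written against the example's rdd1, showing that the manual expansion yields the same elements:

    // Hand-expanded version of rdd1.distinct(), mirroring the source above
    val manualDistinct = rdd1.map(x => (x, null)).reduceByKey((x, y) => x, 3).map(_._1)
    // Should print true: same set of elements as rdd1.distinct()
    println(manualDistinct.collect().toSet == rdd1.distinct().collect().toSet)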


1.5 RDD Dependency Graph



 

2. cogroup

2.1 Example Code

package spark.examples

import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.SparkContext._

object SparkRDDCogroup {

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("SparkRDDCogroup").setMaster("local")
    val sc = new SparkContext(conf)

    // The first argument is the collection, the second is the number of partitions
    val rdd1 = sc.parallelize(List((1, 2), (2, 3), (3, 4), (2, 10), (4, 5), (5, 6)), 3)
    val rdd2 = sc.parallelize(List((3, 6), (2, 8), (9, 11)), 2)

    // The RDDs passed to cogroup must have key/value (pair) elements
    val pairs = rdd1.cogroup(rdd2)
    pairs.saveAsTextFile("file:///D:/cogroup" + System.currentTimeMillis())

    println(pairs.toDebugString)
  }

}

 

2.2 RDD Lineage

 

(3) MappedValuesRDD[3] at cogroup at SparkRDDCogroup.scala:17 []
 |  CoGroupedRDD[2] at cogroup at SparkRDDCogroup.scala:17 []
 +-(3) ParallelCollectionRDD[0] at parallelize at SparkRDDCogroup.scala:13 []
 +-(2) ParallelCollectionRDD[1] at parallelize at SparkRDDCogroup.scala:14 []
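The lineage matches the implementation: cogroup builds a CoGroupedRDD over the two parent RDDs and then applies mapValues, which is what produces the MappedValuesRDD at the top. For reference, the two-RDD overload in PairRDDFunctions looks essentially like this (quoted from memory of the same class as the four-RDD version shown later; details may differ slightly between Spark versions):

  def cogroup[W](other: RDD[(K, W)], partitioner: Partitioner)
      : RDD[(K, (Iterable[V], Iterable[W]))] = {
    if (partitioner.isInstanceOf[HashPartitioner] && keyClass.isArray) {
      throw new SparkException("Default partitioner cannot partition array keys.")
    }
    // One CoGroupedRDD over both parents, then unpack the per-RDD value buffers
    val cg = new CoGroupedRDD[K](Seq(self, other), partitioner)
    cg.mapValues { case Array(vs, w1s) =>
      (vs.asInstanceOf[Iterable[V]], w1s.asInstanceOf[Iterable[W]])
    }
  }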

 

2.3 Execution Result

part-00000: (3,(CompactBuffer(4),CompactBuffer(6))) (9,(CompactBuffer(),CompactBuffer(11)))

part-00001: (4,(CompactBuffer(5),CompactBuffer())) (1,(CompactBuffer(2),CompactBuffer()))

part-00002: (5,(CompactBuffer(6),CompactBuffer())) (2,(CompactBuffer(3, 10),CompactBuffer(8)))

 

From the results we can see:

cogroup groups by every key, regardless of which RDD the key comes from. For example, key 9 appears only in rdd2, yet it still shows up in the result set (with an empty CompactBuffer on the rdd1 side).

If one RDD contains two elements with the same key, such as (2,3) and (2,10) in rdd1, what do we get after cogrouping them with (2,8) from rdd2? The answer is (2,(CompactBuffer(3, 10),CompactBuffer(8))): all values for a key within one RDD are collected into a single CompactBuffer.

As with distinct, the keys are spread across the output files by hash: 3 and 9 hash to partition 0, 4 and 1 to partition 1, 5 and 2 to partition 2, which matches part-00000 through part-00002 above.

 

2.4 RDD Dependency Graph




 

2.5 Source Code of cogroup

 

 /**
   * For each key k in `this` or `other1` or `other2` or `other3`,
   * return a resulting RDD that contains a tuple with the list of values
   * for that key in `this`, `other1`, `other2` and `other3`.
   */
  def cogroup[W1, W2, W3](other1: RDD[(K, W1)],
      other2: RDD[(K, W2)],
      other3: RDD[(K, W3)],
      partitioner: Partitioner)
      : RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3]))] = {
    if (partitioner.isInstanceOf[HashPartitioner] && keyClass.isArray) {
      throw new SparkException("Default partitioner cannot partition array keys.")
    }
    val cg = new CoGroupedRDD[K](Seq(self, other1, other2, other3), partitioner)
    cg.mapValues { case Array(vs, w1s, w2s, w3s) =>
       (vs.asInstanceOf[Iterable[V]],
         w1s.asInstanceOf[Iterable[W1]],
         w2s.asInstanceOf[Iterable[W2]],
         w3s.asInstanceOf[Iterable[W3]])
    }
  }

As the signature shows, cogroup can combine up to four RDDs at once (this RDD plus three others). For every key that appears in any of the input RDDs, the result pairs that key with a tuple of Iterables, one per input RDD, each holding the values that RDD has for the key.
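Because it keeps every key together with its per-RDD value buffers, cogroup is the building block for relational-style operators such as join. A minimal sketch (not part of the original post), using the same rdd1 and rdd2 as in the example above, of an inner join expressed through cogroup; this is essentially how PairRDDFunctions.join is implemented in Spark:

    // Keys missing from either side get an empty buffer, so the
    // for-comprehension yields nothing for them and they drop out of the join.
    val joined = rdd1.cogroup(rdd2).flatMapValues { case (vs, ws) =>
      for (v <- vs; w <- ws) yield (v, w)
    }
    // Yields (2,(3,8)), (2,(10,8)) and (3,(4,6)) in some order;
    // keys 1, 4, 5 and 9 are dropped because they exist in only one RDD.
    println(joined.collect().mkString(", "))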

 
