
Spark 1.4 Source Code Walkthrough Notes: Pattern Matching


Pattern matching in the RDD source. The first example matches on a pair of Booleans while zipping two partition iterators, so a length mismatch between the partitions is caught immediately:

def hasNext: Boolean = (thisIter.hasNext, otherIter.hasNext) match {
  case (true, true) => true
  case (false, false) => false
  case _ => throw new SparkException("Can only zip RDDs with " +
    "same number of elements in each partition")
}
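
The idiom is easy to reproduce outside Spark. Below is a minimal sketch, not Spark's actual ZippedPartitionsRDD code: StrictZipIterator is a made-up class that zips two ordinary Scala iterators and uses the same three-way tuple match to fail fast when the lengths differ.

// Sketch only (not Spark code): same tuple pattern match over two iterators.
class StrictZipIterator[A, B](thisIter: Iterator[A], otherIter: Iterator[B])
  extends Iterator[(A, B)] {

  // The pair of Booleans covers the three interesting cases:
  // both have elements, both are exhausted, or the lengths disagree.
  def hasNext: Boolean = (thisIter.hasNext, otherIter.hasNext) match {
    case (true, true)   => true
    case (false, false) => false
    case _ => throw new IllegalArgumentException(
      "Can only zip iterators with the same number of elements")
  }

  def next(): (A, B) = (thisIter.next(), otherIter.next())
}

// Usage: new StrictZipIterator(Iterator(1, 2), Iterator("a", "b")).toList
//        => List((1, "a"), (2, "b"))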


jobResult = jobResult match {
  case Some(value) => Some(f(value, taskResult.get))
  case None => taskResult
}
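
This one merges per-task results into a driver-side Option. A hedged standalone sketch of the same Some/None match; the mergeResults name, the Int element type, and the combine function f are all chosen here just for illustration, not taken from the Spark API.

// Sketch only: fold a sequence of per-task results into one Option
// with the same Some/None match as above.
def mergeResults(results: Seq[Option[Int]], f: (Int, Int) => Int): Option[Int] = {
  var jobResult: Option[Int] = None
  for (taskResult <- results if taskResult.isDefined) {
    jobResult = jobResult match {
      case Some(value) => Some(f(value, taskResult.get)) // combine with what we have so far
      case None        => taskResult                     // first non-empty result wins
    }
  }
  jobResult
}

// mergeResults(Seq(Some(1), None, Some(2)), _ + _)  // => Some(3)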


take(1) match {
  case Array(t) => t
  case _ => throw new UnsupportedOperationException("empty collection")
}
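
The Array pattern here is how first() is built on top of take(1). A self-contained sketch over a plain Scala Seq; the firstOrFail name and the ClassTag bound are additions for this example, not part of the Spark code.

import scala.reflect.ClassTag

// Sketch: match on the Array returned by take(1), as in RDD.first().
def firstOrFail[T: ClassTag](xs: Seq[T]): T = xs.take(1).toArray match {
  case Array(t) => t
  case _ => throw new UnsupportedOperationException("empty collection")
}

// firstOrFail(Seq(10, 20, 30))  // => 10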

The next one, which walks rdd.dependencies to build a debug string, is easier to follow:

val len = rdd.dependencies.length
len match {
  case 0 => Seq.empty
  case 1 =>
    val d = rdd.dependencies.head
    debugString(d.rdd, prefix, d.isInstanceOf[ShuffleDependency[_, _, _]], true)
  case _ => // everything else ends up here
    val frontDeps = rdd.dependencies.take(len - 1)
    val frontDepStrings = frontDeps.flatMap(
      d => debugString(d.rdd, prefix, d.isInstanceOf[ShuffleDependency[_, _, _]]))

    val lastDep = rdd.dependencies.last
    val lastDepStrings =
      debugString(lastDep.rdd, prefix, lastDep.isInstanceOf[ShuffleDependency[_, _, _]], true)

    (frontDepStrings ++ lastDepStrings)
}
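
The same 0 / 1 / catch-all structure applies to any tree rendering, not just RDD lineage. The sketch below, over an invented Node case class, keeps the shape of the match: no children, exactly one child, or a front group plus a specially formatted last child.

// Sketch only: same 0 / 1 / catch-all match over a node's children,
// mirroring how debugString walks rdd.dependencies above.
case class Node(name: String, children: Seq[Node])

def render(node: Node, prefix: String = ""): Seq[String] = {
  val line = prefix + node.name
  val len = node.children.length
  val childLines = len match {
    case 0 => Seq.empty[String]
    case 1 =>
      render(node.children.head, prefix + "  ")
    case _ => // several children: render all but the last, then the last one
      val frontLines = node.children.take(len - 1).flatMap(c => render(c, prefix + "| "))
      val lastLines  = render(node.children.last, prefix + "  ")
      frontLines ++ lastLines
  }
  line +: childLines
}

// render(Node("a", Seq(Node("b", Nil), Node("c", Nil)))).foreach(println)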

