Adds a (Java friendly) listener to be executed on task completion. This will be called in all situations - success, failure, or cancellation. Adding a listener to an already completed task will result in that listener being called immediately.
An example use is for HadoopRDD to register a callback to close the input stream.
Exceptions thrown by the listener will result in failure of the task.
Adds a listener in the form of a Scala closure to be executed on task completion. This will be called in all situations - success, failure, or cancellation. Adding a listener to an already completed task will result in that listener being called immediately.
An example use is for HadoopRDD to register a callback to close the input stream.
Exceptions thrown by the listener will result in failure of the task.
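A minimal sketch of the closure form, run on a local SparkContext (the app name and data here are illustrative only). Mirroring the HadoopRDD use case above, each partition opens a resource and registers a listener to close it whether the task succeeds, fails, or is cancelled:

```scala
import org.apache.spark.{SparkConf, SparkContext, TaskContext}

val sc = new SparkContext(
  new SparkConf().setAppName("completion-listener-example").setMaster("local[2]"))

val upper = sc.parallelize(Seq("a", "b", "c", "d"), 2).mapPartitions { iter =>
  val ctx = TaskContext.get()
  // A per-partition resource, closed on task completion in all situations.
  val reader = new java.io.StringReader("per-partition resource")
  ctx.addTaskCompletionListener[Unit] { _ => reader.close() }
  iter.map(_.toUpperCase)
}.collect()

sc.stop()
```

Because the listener is registered inside the task body, it runs once per task attempt, not once per record.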
Adds a listener to be executed on task failure. Adding a listener to an already failed task will result in that listener being called immediately.
Adds a listener to be executed on task failure. Adding a listener to an already failed task will result in that listener being called immediately.
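A minimal sketch, assuming a local SparkContext (app name and error message are illustrative): one partition throws, the failure listener observes the error, and the job surfaces it as a SparkException. In local mode the task is not retried, so collect() fails on the first attempt:

```scala
import org.apache.spark.{SparkConf, SparkContext, TaskContext}

val sc = new SparkContext(
  new SparkConf().setAppName("failure-listener-example").setMaster("local[2]"))

val outcome =
  try {
    sc.parallelize(1 to 4, 2).mapPartitions { iter =>
      val ctx = TaskContext.get()
      ctx.addTaskFailureListener { (_, error) =>
        // Runs only if this task fails; `error` is the cause.
        System.err.println(s"task failed: ${error.getMessage}")
      }
      if (ctx.partitionId() == 0) throw new IllegalStateException("boom")
      iter
    }.collect()
    "succeeded"
  } catch {
    case _: org.apache.spark.SparkException => "failed as expected"
  }

sc.stop()
```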
How many times this task has been attempted. The first task attempt will be assigned attemptNumber = 0, and subsequent attempts will have increasing attempt numbers.
:: Experimental :: Sets a global barrier and waits until all tasks in this stage hit this barrier. Similar to the MPI_Barrier function in MPI, the barrier() call blocks until all tasks in the same stage have reached this routine.
CAUTION! In a barrier stage, each task must make the same number of barrier() calls, in all possible code branches. Otherwise, the job may hang or fail with a SparkException after a timeout. Some examples of misuse are listed below: 1. Calling barrier() in only a subset of the tasks in the same barrier stage, which will lead to a timeout of the function call.
rdd.barrier().mapPartitions { iter =>
  val context = BarrierTaskContext.get()
  if (context.partitionId() == 0) {
    // Do nothing.
  } else {
    context.barrier()
  }
  iter
}
2. Placing a barrier() call inside a try-catch code block, which may lead to a timeout of the second barrier() call.
rdd.barrier().mapPartitions { iter =>
  val context = BarrierTaskContext.get()
  try {
    // Do something that might throw an Exception.
    doSomething()
    context.barrier()
  } catch {
    case e: Exception => logWarning("...", e)
  }
  context.barrier()
  iter
}
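By contrast, a correct barrier stage reaches exactly one barrier() call on every code path, outside any try-catch. A minimal runnable sketch (app name and data are illustrative; barrier scheduling requires enough free slots to launch all tasks in the stage at once, so local[2] is used for 2 partitions):

```scala
import org.apache.spark.{BarrierTaskContext, SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("barrier-example").setMaster("local[2]"))

val sums = sc.parallelize(1 to 4, 2).barrier().mapPartitions { iter =>
  val context = BarrierTaskContext.get()
  val localSum = iter.sum
  context.barrier() // every task blocks here until all tasks arrive
  Iterator(localSum)
}.collect()

sc.stop()
```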
Get a local property set upstream in the driver, or null if it is missing. See also org.apache.spark.SparkContext.setLocalProperty.
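A minimal sketch of the round trip (the key "job.owner" is illustrative only): a property set on the driver thread is visible to tasks submitted from that thread, and an unset key comes back as null:

```scala
import org.apache.spark.{SparkConf, SparkContext, TaskContext}

val sc = new SparkContext(
  new SparkConf().setAppName("local-property-example").setMaster("local[2]"))

// Set on the driver, before submitting the job from this thread.
sc.setLocalProperty("job.owner", "alice")

val seen = sc.parallelize(1 to 2, 2).map { _ =>
  val ctx = TaskContext.get()
  (ctx.getLocalProperty("job.owner"), ctx.getLocalProperty("no.such.key"))
}.collect()

sc.stop()
```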
::DeveloperApi:: Returns all metrics sources with the given name which are associated with the instance which runs the task. For more information see org.apache.spark.metrics.MetricsSystem.
:: Experimental :: Returns BarrierTaskInfo for all tasks in this barrier stage, ordered by partition ID.
Returns true if the task has completed.
Returns true if the task has been killed.
Returns true if the task is running locally in the driver program. Local execution was removed, so this always returns false.
The ID of the RDD partition that is computed by this task.
How many times the stage that this task belongs to has been attempted. The first stage attempt will be assigned stageAttemptNumber = 0, and subsequent attempts will have increasing attempt numbers.
The ID of the stage that this task belongs to.
An ID that is unique to this task attempt (within the same SparkContext, no two task attempts will share the same attempt ID). This is roughly equivalent to Hadoop's TaskAttemptID.
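A minimal sketch tying the identity fields together (app name is illustrative): each task builds one log-style line from its partitionId, stageId, stageAttemptNumber, attemptNumber, and globally unique taskAttemptId:

```scala
import org.apache.spark.{SparkConf, SparkContext, TaskContext}

val sc = new SparkContext(
  new SparkConf().setAppName("task-identity-example").setMaster("local[2]"))

val ids = sc.parallelize(1 to 4, 2).mapPartitions { iter =>
  val ctx = TaskContext.get()
  Iterator(
    s"stage=${ctx.stageId()}.${ctx.stageAttemptNumber()} " +
      s"partition=${ctx.partitionId()} attempt=${ctx.attemptNumber()} " +
      s"taskAttemptId=${ctx.taskAttemptId()}")
}.collect()

sc.stop()
```

With no failures, every first attempt has attemptNumber = 0, while taskAttemptId still differs per task.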
:: Experimental :: A TaskContext with extra contextual info and tooling for tasks in a barrier stage. Use BarrierTaskContext#get to obtain the barrier context for a running barrier task.