Class ParallelFlowable<T>

    • Constructor Detail

      • ParallelFlowable

        public ParallelFlowable()
    • Method Detail

      • subscribe

        @BackpressureSupport(SPECIAL)
        @SchedulerSupport("none")
        public abstract void subscribe(@NonNull org.reactivestreams.Subscriber<? super T>[] subscribers)
        Subscribes an array of Subscribers to this ParallelFlowable and triggers the execution chain for all 'rails'.
        Backpressure:
        The backpressure behavior/expectation is determined by the supplied Subscriber.
        Scheduler:
        subscribe does not operate by default on a particular Scheduler.
        Parameters:
        subscribers - the array of Subscribers to run in parallel; its length must equal the parallelism level of this ParallelFlowable
        Throws:
        java.lang.NullPointerException - if subscribers is null
        See Also:
        parallelism()
      • parallelism

        @CheckReturnValue
        public abstract int parallelism()
        Returns the number of expected parallel Subscribers.
        Returns:
        the number of expected parallel Subscribers
      • validate

        protected final boolean validate(@NonNull org.reactivestreams.Subscriber<?>[] subscribers)
        Validates the number of subscribers and returns true if their number matches the parallelism level of this ParallelFlowable.
        Parameters:
        subscribers - the array of Subscribers
        Returns:
        true if the number of subscribers equals the parallelism level
        Throws:
        java.lang.NullPointerException - if subscribers is null
        java.lang.IllegalArgumentException - if subscribers.length is different from parallelism()
      • from

        @CheckReturnValue
        @NonNull
        @SchedulerSupport("none")
        @BackpressureSupport(FULL)
        public static <T> @NonNull ParallelFlowable<T> from(@NonNull org.reactivestreams.Publisher<? extends T> source)
        Take a Publisher and prepare to consume it on multiple 'rails' (one per available CPU) in a round-robin fashion.
        Backpressure:
        The operator honors the backpressure of the parallel rails and requests Flowable.bufferSize() amount from the upstream, followed by 75% of that amount requested after every 75% received.
        Scheduler:
        from does not operate by default on a particular Scheduler.
        Type Parameters:
        T - the value type
        Parameters:
        source - the source Publisher
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if source is null
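A minimal sketch of `from`, assuming RxJava 3 (`io.reactivex.rxjava3`) is on the classpath; the class name `FromExample` is illustrative:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowable;

public class FromExample {
    static int railCount() {
        // from() prepares one rail per available CPU by default
        ParallelFlowable<Integer> rails = ParallelFlowable.from(Flowable.range(1, 100));
        return rails.parallelism();
    }

    public static void main(String[] args) {
        System.out.println("rails: " + railCount());
    }
}
```

Note that `from` only prepares the rails; nothing flows until `subscribe` (directly or via `sequential()` and a downstream subscription) is called.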
      • from

        @CheckReturnValue
        @NonNull
        @SchedulerSupport("none")
        @BackpressureSupport(FULL)
        public static <T> @NonNull ParallelFlowable<T> from(@NonNull org.reactivestreams.Publisher<? extends T> source, int parallelism)
        Take a Publisher and prepare to consume it on parallelism number of 'rails' in a round-robin fashion.
        Backpressure:
        The operator honors the backpressure of the parallel rails and requests Flowable.bufferSize() amount from the upstream, followed by 75% of that amount requested after every 75% received.
        Scheduler:
        from does not operate by default on a particular Scheduler.
        Type Parameters:
        T - the value type
        Parameters:
        source - the source Publisher
        parallelism - the number of parallel rails
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if source is null
        java.lang.IllegalArgumentException - if parallelism is non-positive
      • from

        @CheckReturnValue
        @NonNull
        @SchedulerSupport("none")
        @BackpressureSupport(FULL)
        public static <T> @NonNull ParallelFlowable<T> from(@NonNull org.reactivestreams.Publisher<? extends T> source, int parallelism, int prefetch)
        Take a Publisher and prepare to consume it on parallelism number of 'rails', possibly in an ordered round-robin fashion, using a custom prefetch amount and queue for dealing with the source Publisher's values.
        Backpressure:
        The operator honors the backpressure of the parallel rails and requests the prefetch amount from the upstream, followed by 75% of that amount requested after every 75% received.
        Scheduler:
        from does not operate by default on a particular Scheduler.
        Type Parameters:
        T - the value type
        Parameters:
        source - the source Publisher
        parallelism - the number of parallel rails
        prefetch - the number of values to prefetch from the source until there is a rail ready to process it
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if source is null
        java.lang.IllegalArgumentException - if parallelism or prefetch is non-positive
      • map

        @CheckReturnValue
        @NonNull
        @SchedulerSupport("none")
        @BackpressureSupport(PASS_THROUGH)
        public final <R> @NonNull ParallelFlowable<R> map(@NonNull Function<? super T, ? extends R> mapper)
        Maps the source values on each 'rail' to another value.

        Note that the same mapper function may be called from multiple threads concurrently.

        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        map does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the output value type
        Parameters:
        mapper - the mapper function turning Ts into Rs.
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
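A minimal sketch of `map` on rails, assuming RxJava 3 on the classpath (the class name `MapExample` is illustrative); the order in which rails are merged is unspecified, so the result is sorted for a stable output:

```java
import java.util.List;
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowable;

public class MapExample {
    static List<Integer> squares() {
        return ParallelFlowable.from(Flowable.range(1, 5))
                .map(v -> v * v)          // runs per rail; may be concurrent once runOn is added
                .sequential()             // merge the rails back into one Flowable
                .sorted(Integer::compare) // rail merge order is unspecified, so sort for stability
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(squares());
    }
}
```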
      • map

        @CheckReturnValue
        @NonNull
        @SchedulerSupport("none")
        @BackpressureSupport(PASS_THROUGH)
        public final <R> @NonNull ParallelFlowable<R> map(@NonNull Function<? super T, ? extends R> mapper, @NonNull BiFunction<? super java.lang.Long, ? super java.lang.Throwable, ParallelFailureHandling> errorHandler)
        Maps the source values on each 'rail' to another value and handles errors based on the value returned by the handler function.

        Note that the same mapper function may be called from multiple threads concurrently.

        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        map does not operate by default on a particular Scheduler.

        History: 2.0.8 - experimental

        Type Parameters:
        R - the output value type
        Parameters:
        mapper - the mapper function turning Ts into Rs.
        errorHandler - the function called with the current repeat count and failure Throwable; it should return one of the ParallelFailureHandling enumeration values to indicate how to proceed
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper or errorHandler is null
        Since:
        2.2
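A minimal sketch of the error-handling overload, assuming RxJava 3 on the classpath (the class name `MapErrorExample` is illustrative); `ParallelFailureHandling.SKIP` drops the failing item and continues:

```java
import java.util.List;
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFailureHandling;
import io.reactivex.rxjava3.parallel.ParallelFlowable;

public class MapErrorExample {
    static List<Integer> safeDivide() {
        return ParallelFlowable.from(Flowable.just(10, 0, 5))
                .map(v -> 100 / v,
                     (retries, err) -> ParallelFailureHandling.SKIP) // drop the item that threw
                .sequential()
                .sorted(Integer::compare)
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(safeDivide()); // 100/0 is skipped rather than terminating the rail
    }
}
```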
      • filter

        @CheckReturnValue
        @NonNull
        @SchedulerSupport("none")
        @BackpressureSupport(PASS_THROUGH)
        public final @NonNull ParallelFlowable<T> filter(@NonNull Predicate<? super T> predicate)
        Filters the source values on each 'rail'.

        Note that the same predicate may be called from multiple threads concurrently.

        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        filter does not operate by default on a particular Scheduler.
        Parameters:
        predicate - the function returning true to keep a value or false to drop a value
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if predicate is null
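A minimal sketch of `filter` on rails, assuming RxJava 3 on the classpath (the class name `FilterExample` is illustrative):

```java
import java.util.List;
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowable;

public class FilterExample {
    static List<Integer> evens() {
        return ParallelFlowable.from(Flowable.range(1, 10))
                .filter(v -> v % 2 == 0)  // the predicate runs independently on each rail
                .sequential()
                .sorted(Integer::compare) // sort because rail merge order is unspecified
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(evens());
    }
}
```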
      • filter

        @CheckReturnValue
        @NonNull
        @SchedulerSupport("none")
        @BackpressureSupport(PASS_THROUGH)
        public final @NonNull ParallelFlowable<T> filter(@NonNull Predicate<? super T> predicate, @NonNull ParallelFailureHandling errorHandler)
        Filters the source values on each 'rail' and handles errors based on the given ParallelFailureHandling enumeration value.

        Note that the same predicate may be called from multiple threads concurrently.

        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        filter does not operate by default on a particular Scheduler.

        History: 2.0.8 - experimental

        Parameters:
        predicate - the function returning true to keep a value or false to drop a value
        errorHandler - the enumeration that defines how to handle errors thrown from the predicate
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if predicate or errorHandler is null
        Since:
        2.2
      • filter

        @CheckReturnValue
        @NonNull
        @SchedulerSupport("none")
        @BackpressureSupport(PASS_THROUGH)
        public final @NonNull ParallelFlowable<T> filter(@NonNull Predicate<? super T> predicate, @NonNull BiFunction<? super java.lang.Long, ? super java.lang.Throwable, ParallelFailureHandling> errorHandler)
        Filters the source values on each 'rail' and handles errors based on the value returned by the handler function.

        Note that the same predicate may be called from multiple threads concurrently.

        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        filter does not operate by default on a particular Scheduler.

        History: 2.0.8 - experimental

        Parameters:
        predicate - the function returning true to keep a value or false to drop a value
        errorHandler - the function called with the current repeat count and failure Throwable; it should return one of the ParallelFailureHandling enumeration values to indicate how to proceed
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if predicate or errorHandler is null
        Since:
        2.2
      • runOn

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(FULL)
        @SchedulerSupport("custom")
        public final @NonNull ParallelFlowable<T> runOn(@NonNull Scheduler scheduler)
        Specifies where each 'rail' will observe its incoming values, specified via a Scheduler, with no work-stealing and default prefetch amount.

        This operator uses the default prefetch size returned by Flowable.bufferSize().

        The operator will call Scheduler.createWorker() as many times as this ParallelFlowable's parallelism level.

        No assumptions are made about the Scheduler's parallelism level; if the Scheduler's parallelism level is lower than the ParallelFlowable's, some rails may end up on the same thread/worker.

        This operator doesn't require the Scheduler to be trampolining as it does its own built-in trampolining logic.

        Backpressure:
        The operator honors the backpressure of the parallel rails and requests Flowable.bufferSize() amount from the upstream, followed by 75% of that amount requested after every 75% received.
        Scheduler:
        runOn drains the upstream rails on the specified Scheduler's Workers.
        Parameters:
        scheduler - the scheduler to use
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if scheduler is null
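A minimal sketch of `runOn`, assuming RxJava 3 on the classpath (the class name `RunOnExample` is illustrative); summing makes the result deterministic even though the work is spread across computation-scheduler workers:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

public class RunOnExample {
    static int sum() {
        return ParallelFlowable.from(Flowable.range(1, 1000))
                .runOn(Schedulers.computation()) // one worker per rail
                .map(v -> v)                     // runs on the computation threads
                .sequential()
                .reduce(0, Integer::sum)         // order-independent, so safe in parallel
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(sum());
    }
}
```

Operators placed before `runOn` execute on the subscribing thread; operators after it execute on the scheduler's workers.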
      • runOn

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(FULL)
        @SchedulerSupport("custom")
        public final @NonNull ParallelFlowable<T> runOn(@NonNull Scheduler scheduler, int prefetch)
        Specifies where each 'rail' will observe its incoming values, specified via a Scheduler, possibly with work-stealing and a given prefetch amount.

        This operator uses the given prefetch size instead of the default returned by Flowable.bufferSize().

        The operator will call Scheduler.createWorker() as many times as this ParallelFlowable's parallelism level.

        No assumptions are made about the Scheduler's parallelism level; if the Scheduler's parallelism level is lower than the ParallelFlowable's, some rails may end up on the same thread/worker.

        This operator doesn't require the Scheduler to be trampolining as it does its own built-in trampolining logic.

        Backpressure:
        The operator honors the backpressure of the parallel rails and requests the prefetch amount from the upstream, followed by 75% of that amount requested after every 75% received.
        Scheduler:
        runOn drains the upstream rails on the specified Scheduler's Workers.
        Parameters:
        scheduler - the scheduler to use
        prefetch - the number of values to request on each 'rail' from the source
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if scheduler is null
        java.lang.IllegalArgumentException - if prefetch is non-positive
      • reduce

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(UNBOUNDED_IN)
        @SchedulerSupport("none")
        public final @NonNull Flowable<T> reduce(@NonNull BiFunction<T, T, T> reducer)
        Reduces all values within a 'rail' and across 'rails' with a reducer function into one Flowable sequence.

        Note that the same reducer function may be called from multiple threads concurrently.

        Backpressure:
        The operator honors backpressure from the downstream and consumes the upstream rails in an unbounded manner (requesting Long.MAX_VALUE).
        Scheduler:
        reduce does not operate by default on a particular Scheduler.
        Parameters:
        reducer - the function to reduce two values into one.
        Returns:
        the new Flowable instance emitting the reduced value or empty if the current ParallelFlowable is empty
        Throws:
        java.lang.NullPointerException - if reducer is null
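A minimal sketch of the cross-rail `reduce`, assuming RxJava 3 on the classpath (the class name `ReduceExample` is illustrative); because the reducer may run concurrently, it should be associative and side-effect free:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowable;

public class ReduceExample {
    static int total() {
        return ParallelFlowable.from(Flowable.range(1, 100))
                .reduce(Integer::sum) // reduces within each rail, then across rails
                .blockingSingle();    // the result is a Flowable with at most one value
    }

    public static void main(String[] args) {
        System.out.println(total());
    }
}
```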
      • reduce

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(UNBOUNDED_IN)
        @SchedulerSupport("none")
        public final <R> @NonNull ParallelFlowable<R> reduce(@NonNull Supplier<R> initialSupplier, @NonNull BiFunction<R, ? super T, R> reducer)
        Reduces all values within a 'rail' to a single value (with a possibly different type) via a reducer function that is initialized on each rail from an initialSupplier value.

        Note that the same reducer function may be called from multiple threads concurrently.

        Backpressure:
        The operator honors backpressure from the downstream rails and consumes the upstream rails in an unbounded manner (requesting Long.MAX_VALUE).
        Scheduler:
        reduce does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the reduced output type
        Parameters:
        initialSupplier - the supplier for the initial value
        reducer - the function to reduce a previous output of reduce (or the initial value supplied) with a current source value.
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if initialSupplier or reducer is null
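A minimal sketch of the per-rail `reduce` with a seed, assuming RxJava 3 on the classpath (the class name `ReduceSeedExample` is illustrative); each rail gets its own initial value from `initialSupplier`, so the per-rail results must be combined afterwards:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowable;

public class ReduceSeedExample {
    static int countAll() {
        return ParallelFlowable.from(Flowable.range(1, 100))
                .reduce(() -> 0, (count, v) -> count + 1) // per-rail element count, seeded per rail
                .sequential()
                .reduce(0, Integer::sum)                  // add the per-rail counts together
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(countAll());
    }
}
```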
      • sequential

        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        @CheckReturnValue
        @NonNull
        public final @NonNull Flowable<T> sequential​(int prefetch)
        Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Flowable sequence, running with a given prefetch value for the rails.
        Backpressure:
        The operator honors backpressure from the downstream and requests the prefetch amount from each rail, then requests from each rail 75% of this amount after 75% received.
        Scheduler:
        sequential does not operate by default on a particular Scheduler.
        Parameters:
        prefetch - the prefetch amount to use for each rail
        Returns:
        the new Flowable instance
        Throws:
        java.lang.IllegalArgumentException - if prefetch is non-positive
        See Also:
        sequential(), sequentialDelayError(int)
      • sequentialDelayError

        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        @CheckReturnValue
        @NonNull
        public final @NonNull Flowable<T> sequentialDelayError()
        Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Flowable sequence, running with a default prefetch value for the rails and delaying errors from all rails till all terminate.

        This operator uses the default prefetch size returned by Flowable.bufferSize().

        Backpressure:
        The operator honors backpressure from the downstream and requests Flowable.bufferSize() amount from each rail, then requests from each rail 75% of this amount after 75% received.
        Scheduler:
        sequentialDelayError does not operate by default on a particular Scheduler.

        History: 2.0.7 - experimental

        Returns:
        the new Flowable instance
        Since:
        2.2
        See Also:
        sequentialDelayError(int), sequential()
      • sequentialDelayError

        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        @CheckReturnValue
        @NonNull
        public final @NonNull Flowable<T> sequentialDelayError​(int prefetch)
        Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Flowable sequence, running with a given prefetch value for the rails and delaying errors from all rails till all terminate.
        Backpressure:
        The operator honors backpressure from the downstream and requests the prefetch amount from each rail, then requests from each rail 75% of this amount after 75% received.
        Scheduler:
        sequentialDelayError does not operate by default on a particular Scheduler.

        History: 2.0.7 - experimental

        Parameters:
        prefetch - the prefetch amount to use for each rail
        Returns:
        the new Flowable instance
        Throws:
        java.lang.IllegalArgumentException - if prefetch is non-positive
        Since:
        2.2
        See Also:
        sequential(), sequentialDelayError()
      • sorted

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(UNBOUNDED_IN)
        @SchedulerSupport("none")
        public final @NonNull Flowable<T> sorted(@NonNull java.util.Comparator<? super T> comparator)
        Sorts the 'rails' of this ParallelFlowable and returns a Flowable that sequentially picks the smallest next value from the rails.

        This operator requires a finite source ParallelFlowable.

        Backpressure:
        The operator honors backpressure from the downstream and consumes the upstream rails in an unbounded manner (requesting Long.MAX_VALUE).
        Scheduler:
        sorted does not operate by default on a particular Scheduler.
        Parameters:
        comparator - the comparator to use
        Returns:
        the new Flowable instance
        Throws:
        java.lang.NullPointerException - if comparator is null
      • sorted

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(UNBOUNDED_IN)
        @SchedulerSupport("none")
        public final @NonNull Flowable<T> sorted(@NonNull java.util.Comparator<? super T> comparator, int capacityHint)
        Sorts the 'rails' of this ParallelFlowable and returns a Flowable that sequentially picks the smallest next value from the rails.

        This operator requires a finite source ParallelFlowable.

        Backpressure:
        The operator honors backpressure from the downstream and consumes the upstream rails in an unbounded manner (requesting Long.MAX_VALUE).
        Scheduler:
        sorted does not operate by default on a particular Scheduler.
        Parameters:
        comparator - the comparator to use
        capacityHint - the expected number of total elements
        Returns:
        the new Flowable instance
        Throws:
        java.lang.NullPointerException - if comparator is null
        java.lang.IllegalArgumentException - if capacityHint is non-positive
      • toSortedList

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(UNBOUNDED_IN)
        @SchedulerSupport("none")
        public final @NonNull Flowable<java.util.List<T>> toSortedList(@NonNull java.util.Comparator<? super T> comparator)
        Sorts the 'rails' according to the comparator and returns a full sorted List as a Flowable.

        This operator requires a finite source ParallelFlowable.

        Backpressure:
        The operator honors backpressure from the downstream and consumes the upstream rails in an unbounded manner (requesting Long.MAX_VALUE).
        Scheduler:
        toSortedList does not operate by default on a particular Scheduler.
        Parameters:
        comparator - the comparator to compare elements
        Returns:
        the new Flowable instance
        Throws:
        java.lang.NullPointerException - if comparator is null
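A minimal sketch of `toSortedList`, assuming RxJava 3 on the classpath (the class name `ToSortedListExample` is illustrative); the source must be finite, and the resulting Flowable emits a single sorted List:

```java
import java.util.Comparator;
import java.util.List;
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowable;

public class ToSortedListExample {
    static List<Integer> sortedDesc() {
        return ParallelFlowable.from(Flowable.just(3, 1, 4, 1, 5, 9, 2, 6))
                .toSortedList(Comparator.reverseOrder())
                .blockingFirst(); // one fully sorted List is emitted
    }

    public static void main(String[] args) {
        System.out.println(sortedDesc());
    }
}
```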
      • toSortedList

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(UNBOUNDED_IN)
        @SchedulerSupport("none")
        public final @NonNull Flowable<java.util.List<T>> toSortedList(@NonNull java.util.Comparator<? super T> comparator, int capacityHint)
        Sorts the 'rails' according to the comparator and returns a full sorted List as a Flowable.

        This operator requires a finite source ParallelFlowable.

        Backpressure:
        The operator honors backpressure from the downstream and consumes the upstream rails in an unbounded manner (requesting Long.MAX_VALUE).
        Scheduler:
        toSortedList does not operate by default on a particular Scheduler.
        Parameters:
        comparator - the comparator to compare elements
        capacityHint - the expected number of total elements
        Returns:
        the new Flowable instance
        Throws:
        java.lang.NullPointerException - if comparator is null
        java.lang.IllegalArgumentException - if capacityHint is non-positive
      • doOnNext

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(PASS_THROUGH)
        @SchedulerSupport("none")
        public final @NonNull ParallelFlowable<T> doOnNext(@NonNull Consumer<? super T> onNext, @NonNull BiFunction<? super java.lang.Long, ? super java.lang.Throwable, ParallelFailureHandling> errorHandler)
        Call the specified consumer with the current element passing through any 'rail' and handles errors based on the value returned by the handler function.
        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        doOnNext does not operate by default on a particular Scheduler.

        History: 2.0.8 - experimental

        Parameters:
        onNext - the callback
        errorHandler - the function called with the current repeat count and failure Throwable; it should return one of the ParallelFailureHandling enumeration values to indicate how to proceed
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if onNext or errorHandler is null
        Since:
        2.2
      • doAfterNext

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(PASS_THROUGH)
        @SchedulerSupport("none")
        public final @NonNull ParallelFlowable<T> doAfterNext(@NonNull Consumer<? super T> onAfterNext)
        Call the specified consumer with the current element passing through any 'rail' after it has been delivered to downstream within the rail.
        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        doAfterNext does not operate by default on a particular Scheduler.
        Parameters:
        onAfterNext - the callback
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if onAfterNext is null
      • doOnSubscribe

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(PASS_THROUGH)
        @SchedulerSupport("none")
        public final @NonNull ParallelFlowable<T> doOnSubscribe(@NonNull Consumer<? super org.reactivestreams.Subscription> onSubscribe)
        Call the specified callback when a 'rail' receives a Subscription from its upstream.
        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        doOnSubscribe does not operate by default on a particular Scheduler.
        Parameters:
        onSubscribe - the callback
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if onSubscribe is null
      • collect

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(UNBOUNDED_IN)
        @SchedulerSupport("none")
        public final <C> @NonNull ParallelFlowable<C> collect(@NonNull Supplier<? extends C> collectionSupplier, @NonNull BiConsumer<? super C, ? super T> collector)
        Collect the elements in each rail into a collection supplied via a collectionSupplier and collected into with a collector action, emitting the collection at the end.
        Backpressure:
        The operator honors backpressure from the downstream rails and consumes the upstream rails in an unbounded manner (requesting Long.MAX_VALUE).
        Scheduler:
        collect does not operate by default on a particular Scheduler.
        Type Parameters:
        C - the collection type
        Parameters:
        collectionSupplier - the supplier of the collection in each rail
        collector - the collector, taking the per-rail collection and the current item
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if collectionSupplier or collector is null
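A minimal sketch of `collect`, assuming RxJava 3 on the classpath (the class name `CollectExample` is illustrative); each rail fills its own collection, so the stream below emits one list per rail:

```java
import java.util.ArrayList;
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowable;

public class CollectExample {
    static int totalCollected() {
        return ParallelFlowable.from(Flowable.range(1, 50))
                .collect(() -> new ArrayList<Integer>(),      // one list per rail
                         (list, v) -> list.add(v))
                .sequential()
                .reduce(0, (sum, list) -> sum + list.size())  // combine the per-rail lists
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(totalCollected());
    }
}
```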
      • fromArray

        @CheckReturnValue
        @NonNull
        @SafeVarargs
        @BackpressureSupport(PASS_THROUGH)
        @SchedulerSupport("none")
        public static <T> @NonNull ParallelFlowable<T> fromArray(@NonNull org.reactivestreams.Publisher<T>... publishers)
        Wraps multiple Publishers into a ParallelFlowable which runs them in parallel and unordered.
        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        fromArray does not operate by default on a particular Scheduler.
        Type Parameters:
        T - the value type
        Parameters:
        publishers - the array of publishers
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if publishers is null
        java.lang.IllegalArgumentException - if publishers is an empty array
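A minimal sketch of `fromArray`, assuming RxJava 3 on the classpath (the class name `FromArrayExample` is illustrative); each supplied Publisher becomes one rail:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowable;

public class FromArrayExample {
    static int sum() {
        return ParallelFlowable
                .fromArray(Flowable.just(1, 2), Flowable.just(3, 4)) // two rails
                .sequential()
                .reduce(0, Integer::sum)
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(sum());
    }
}
```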
      • to

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(PASS_THROUGH)
        @SchedulerSupport("none")
        public final <R> R to(@NonNull ParallelFlowableConverter<T, R> converter)
        Calls the specified converter function during assembly time and returns its resulting value.

        This allows fluent conversion to any other type.

        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by how the converter function composes over the upstream source.
        Scheduler:
        to does not operate by default on a particular Scheduler.

        History: 2.1.7 - experimental

        Type Parameters:
        R - the resulting object type
        Parameters:
        converter - the function that receives the current ParallelFlowable instance and returns a value
        Returns:
        the converted value
        Throws:
        java.lang.NullPointerException - if converter is null
        Since:
        2.2
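        A sketch of a converter in use (assuming the RxJava 3 artifact; RAIL_COUNT is an illustrative name). The converter receives the assembled ParallelFlowable and may return any value, enabling fluent conversion without breaking the chain:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowableConverter;

public class Main {
    // A converter that extracts the parallelism level; `to` simply applies it
    // to `this` at assembly time.
    static final ParallelFlowableConverter<Integer, Integer> RAIL_COUNT =
            pf -> pf.parallelism();

    static int railCount() {
        return Flowable.range(1, 100).parallel(4).to(RAIL_COUNT);
    }

    public static void main(String[] args) {
        System.out.println(railCount()); // 4
    }
}
```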
      • compose

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(PASS_THROUGH)
        @SchedulerSupport("none")
        public final <@NonNull U> @NonNull ParallelFlowable<U> compose​(@NonNull
                                                                       @NonNull ParallelTransformer<@NonNull T,​@NonNull U> composer)
        Allows composing operators, in assembly time, on top of this ParallelFlowable and returns another ParallelFlowable with composed features.
        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by how the converter function composes over the upstream source.
        Scheduler:
        compose does not operate by default on a particular Scheduler.
        Type Parameters:
        U - the output value type
        Parameters:
        composer - the composer function from ParallelFlowable (this) to another ParallelFlowable
        Returns:
        the ParallelFlowable returned by the function
        Throws:
        java.lang.NullPointerException - if composer is null
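        A sketch of a reusable transformer (assuming the RxJava 3 artifact; the transformer name is illustrative). The composer runs once at assembly time, so a chain of operators can be packaged and applied with a single compose call:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelTransformer;
import java.util.List;

public class Main {
    // A reusable operator chain: double each value, then keep only those above 4.
    static final ParallelTransformer<Integer, Integer> DOUBLE_THEN_FILTER =
            pf -> pf.map(v -> v * 2).filter(v -> v > 4);

    static List<Integer> apply() {
        return Flowable.range(1, 5)
                .parallel(2)
                .compose(DOUBLE_THEN_FILTER)
                .sequential()
                .sorted(Integer::compare) // rails are unordered; sort for a stable result
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(apply()); // [6, 8, 10]
    }
}
```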
      • flatMap

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        public final <@NonNull R> @NonNull ParallelFlowable<R> flatMap​(@NonNull
                                                                       @NonNull Function<? super @NonNull T,​? extends org.reactivestreams.Publisher<? extends @NonNull R>> mapper)
        Generates and flattens Publishers on each 'rail'.

        Errors are not delayed and the operator uses unbounded concurrency along with the default inner prefetch amount.

        Backpressure:
        The operator honors backpressure from the downstream rails and requests Flowable.bufferSize() amount from each rail upfront and keeps requesting as many items per rail as many inner sources on that rail completed. The inner sources are requested Flowable.bufferSize() amount upfront, then 75% of this amount requested after 75% received.
        Scheduler:
        flatMap does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the result type
        Parameters:
        mapper - the function to map each rail's value into a Publisher
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
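        A sketch of the basic overload (assuming the RxJava 3 artifact). Each value on each rail is expanded into an inner Publisher whose items are merged back onto that rail:

```java
import io.reactivex.rxjava3.core.Flowable;
import java.util.List;

public class Main {
    // Each value on each rail is expanded into a two-element inner Publisher.
    static List<Integer> expanded() {
        return Flowable.range(1, 3)
                .parallel(2)
                .flatMap(v -> Flowable.just(v, v * 10))
                .sequential()
                .sorted(Integer::compare) // rails are unordered; sort for determinism
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(expanded()); // [1, 2, 3, 10, 20, 30]
    }
}
```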
      • flatMap

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        public final <@NonNull R> @NonNull ParallelFlowable<R> flatMap​(@NonNull
                                                                       @NonNull Function<? super @NonNull T,​? extends org.reactivestreams.Publisher<? extends @NonNull R>> mapper,
                                                                       boolean delayError)
        Generates and flattens Publishers on each 'rail', optionally delaying errors.

        It uses unbounded concurrency along with default inner prefetch.

        Backpressure:
        The operator honors backpressure from the downstream rails and requests Flowable.bufferSize() amount from each rail upfront and keeps requesting as many items per rail as many inner sources on that rail completed. The inner sources are requested Flowable.bufferSize() amount upfront, then 75% of this amount requested after 75% received.
        Scheduler:
        flatMap does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the result type
        Parameters:
        mapper - the function to map each rail's value into a Publisher
        delayError - should the errors from the main and the inner sources be delayed until all of them terminate?
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
      • flatMap

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        public final <@NonNull R> @NonNull ParallelFlowable<R> flatMap​(@NonNull
                                                                       @NonNull Function<? super @NonNull T,​? extends org.reactivestreams.Publisher<? extends @NonNull R>> mapper,
                                                                       boolean delayError,
                                                                       int maxConcurrency)
        Generates and flattens Publishers on each 'rail', optionally delaying errors and having a total number of simultaneous subscriptions to the inner Publishers.

        It uses a default inner prefetch.

        Backpressure:
        The operator honors backpressure from the downstream rails and requests maxConcurrency amount from each rail upfront and keeps requesting as many items per rail as many inner sources on that rail completed. The inner sources are requested Flowable.bufferSize() amount upfront, then 75% of this amount requested after 75% received.
        Scheduler:
        flatMap does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the result type
        Parameters:
        mapper - the function to map each rail's value into a Publisher
        delayError - should the errors from the main and the inner sources be delayed until all of them terminate?
        maxConcurrency - the maximum number of simultaneous subscriptions to the generated inner Publishers
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
        java.lang.IllegalArgumentException - if maxConcurrency is non-positive
      • flatMap

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        public final <@NonNull R> @NonNull ParallelFlowable<R> flatMap​(@NonNull
                                                                       @NonNull Function<? super @NonNull T,​? extends org.reactivestreams.Publisher<? extends @NonNull R>> mapper,
                                                                       boolean delayError,
                                                                       int maxConcurrency,
                                                                       int prefetch)
        Generates and flattens Publishers on each 'rail', optionally delaying errors, having a total number of simultaneous subscriptions to the inner Publishers and using the given prefetch amount for the inner Publishers.
        Backpressure:
        The operator honors backpressure from the downstream rails and requests maxConcurrency amount from each rail upfront and keeps requesting as many items per rail as many inner sources on that rail completed. The inner sources are requested the prefetch amount upfront, then 75% of this amount requested after 75% received.
        Scheduler:
        flatMap does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the result type
        Parameters:
        mapper - the function to map each rail's value into a Publisher
        delayError - should the errors from the main and the inner sources be delayed until all of them terminate?
        maxConcurrency - the maximum number of simultaneous subscriptions to the generated inner Publishers
        prefetch - the number of items to prefetch from each inner Publisher
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
        java.lang.IllegalArgumentException - if maxConcurrency or prefetch is non-positive
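        A sketch of the full overload (assuming the RxJava 3 artifact). With maxConcurrency = 1 each rail subscribes to one inner Publisher at a time, effectively concatenating per rail:

```java
import io.reactivex.rxjava3.core.Flowable;
import java.util.List;

public class Main {
    // maxConcurrency = 1 subscribes to one inner Publisher at a time per rail;
    // delayError = true defers any error until all sources terminate;
    // prefetch = 8 is the request amount for each inner Publisher.
    static List<Integer> limited() {
        return Flowable.range(1, 4)
                .parallel(2)
                .flatMap(v -> Flowable.just(v * 100), true, 1, 8)
                .sequential()
                .sorted(Integer::compare)
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(limited()); // [100, 200, 300, 400]
    }
}
```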
      • concatMap

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        public final <@NonNull R> @NonNull ParallelFlowable<R> concatMap​(@NonNull
                                                                         @NonNull Function<? super @NonNull T,​? extends org.reactivestreams.Publisher<? extends @NonNull R>> mapper)
        Generates and concatenates Publishers on each 'rail', signalling errors immediately and generating 2 publishers upfront.
        Backpressure:
        The operator honors backpressure from the downstream rails and requests 2 from each rail upfront and keeps requesting 1 when an inner source completes. Requests for the inner sources are determined by the downstream rails' backpressure behavior.
        Scheduler:
        concatMap does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the result type
        Parameters:
        mapper - the function to map each rail's value into a Publisher
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
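        A sketch (assuming the RxJava 3 artifact). Unlike flatMap, concatMap preserves the order of inner Publishers on each rail; a single rail makes that ordering directly observable:

```java
import io.reactivex.rxjava3.core.Flowable;
import java.util.List;

public class Main {
    // concatMap preserves the order of inner Publishers on each rail.
    static List<Integer> perRailOrder() {
        return Flowable.just(1, 2, 3)
                .parallel(1) // a single rail makes the concatenation order observable
                .concatMap(v -> Flowable.just(v, v + 10))
                .sequential()
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(perRailOrder()); // [1, 11, 2, 12, 3, 13]
    }
}
```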
      • concatMap

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        public final <@NonNull R> @NonNull ParallelFlowable<R> concatMap​(@NonNull
                                                                         @NonNull Function<? super @NonNull T,​? extends org.reactivestreams.Publisher<? extends @NonNull R>> mapper,
                                                                         int prefetch)
        Generates and concatenates Publishers on each 'rail', signalling errors immediately and using the given prefetch amount for generating Publishers upfront.
        Backpressure:
        The operator honors backpressure from the downstream rails and requests the prefetch amount from each rail upfront and keeps requesting 75% of this amount after 75% received and the inner sources completed. Requests for the inner sources are determined by the downstream rails' backpressure behavior.
        Scheduler:
        concatMap does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the result type
        Parameters:
        mapper - the function to map each rail's value into a Publisher
        prefetch - the number of items to prefetch from each rail upfront
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
        java.lang.IllegalArgumentException - if prefetch is non-positive
      • concatMapDelayError

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        public final <@NonNull R> @NonNull ParallelFlowable<R> concatMapDelayError​(@NonNull
                                                                                   @NonNull Function<? super @NonNull T,​? extends org.reactivestreams.Publisher<? extends @NonNull R>> mapper,
                                                                                   boolean tillTheEnd)
        Generates and concatenates Publishers on each 'rail', optionally delaying errors and generating 2 publishers upfront.
        Backpressure:
        The operator honors backpressure from the downstream rails and requests 2 from each rail upfront and keeps requesting 1 when an inner source completes. Requests for the inner sources are determined by the downstream rails' backpressure behavior.
        Scheduler:
        concatMapDelayError does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the result type
        Parameters:
        mapper - the function to map each rail's value into a Publisher
        tillTheEnd - if true, all errors from the upstream and inner Publishers are delayed till all of them terminate, if false, the error is emitted when an inner Publisher terminates.
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
      • concatMapDelayError

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        public final <@NonNull R> @NonNull ParallelFlowable<R> concatMapDelayError​(@NonNull
                                                                                   @NonNull Function<? super @NonNull T,​? extends org.reactivestreams.Publisher<? extends @NonNull R>> mapper,
                                                                                   int prefetch,
                                                                                   boolean tillTheEnd)
        Generates and concatenates Publishers on each 'rail', optionally delaying errors and using the given prefetch amount for generating Publishers upfront.
        Backpressure:
        The operator honors backpressure from the downstream rails and requests the prefetch amount from each rail upfront and keeps requesting 75% of this amount after 75% received and the inner sources completed. Requests for the inner sources are determined by the downstream rails' backpressure behavior.
        Scheduler:
        concatMapDelayError does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the result type
        Parameters:
        mapper - the function to map each rail's value into a Publisher
        prefetch - the number of items to prefetch from each inner Publisher
        tillTheEnd - if true, all errors from the upstream and inner Publishers are delayed till all of them terminate, if false, the error is emitted when an inner Publisher terminates.
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
        java.lang.IllegalArgumentException - if prefetch is non-positive
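        A sketch of the error-delaying behavior (assuming the RxJava 3 artifact). With tillTheEnd = true, the failure from the second inner source is held back until the remaining inner sources have emitted their items:

```java
import io.reactivex.rxjava3.core.Flowable;
import java.util.List;

public class Main {
    static List<Integer> survivors() {
        return Flowable.just(1, 2, 3)
                .parallel(1)
                .concatMapDelayError(
                        v -> v == 2
                                ? Flowable.<Integer>error(new IllegalStateException("boom"))
                                : Flowable.just(v),
                        2,     // prefetch
                        true)  // tillTheEnd: delay the error past the remaining sources
                .sequential()
                // swallow the delayed error at the end so the emitted items survive
                .onErrorResumeNext(e -> Flowable.empty())
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(survivors()); // [1, 3]
    }
}
```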
      • flatMapIterable

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        public final <@NonNull U> @NonNull ParallelFlowable<U> flatMapIterable​(@NonNull
                                                                               @NonNull Function<? super @NonNull T,​? extends java.lang.Iterable<? extends @NonNull U>> mapper,
                                                                               int bufferSize)
        Returns a ParallelFlowable that merges each item emitted by the source ParallelFlowable with the values in an Iterable corresponding to that item that is generated by a selector.

        Backpressure:
        The operator honors backpressure from each downstream rail. The source ParallelFlowable is expected to honor backpressure as well. If the source ParallelFlowable violates the rule, the operator will signal a MissingBackpressureException.
        Scheduler:
        flatMapIterable does not operate by default on a particular Scheduler.
        Type Parameters:
        U - the type of item emitted by the resulting Iterable
        Parameters:
        mapper - a function that returns an Iterable sequence of values when given an item emitted by the source ParallelFlowable
        bufferSize - the number of elements to prefetch from each upstream rail
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
        java.lang.IllegalArgumentException - if bufferSize is non-positive
        Since:
        3.0.0
        See Also:
        ReactiveX operators documentation: FlatMap, flatMapStream(Function, int)
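        A sketch (assuming the RxJava 3 artifact). Each upstream value is replaced by the elements of the Iterable the selector returns:

```java
import io.reactivex.rxjava3.core.Flowable;
import java.util.Arrays;
import java.util.List;

public class Main {
    // Each value is replaced by the elements of the Iterable the selector returns;
    // 16 is an illustrative per-rail prefetch amount.
    static List<Integer> flattened() {
        return Flowable.range(1, 3)
                .parallel(2)
                .flatMapIterable(v -> Arrays.asList(v, v + 100), 16)
                .sequential()
                .sorted(Integer::compare) // rails are unordered; sort for determinism
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(flattened()); // [1, 2, 3, 101, 102, 103]
    }
}
```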
      • mapOptional

        @CheckReturnValue
        @NonNull
        @SchedulerSupport("none")
        @BackpressureSupport(PASS_THROUGH)
        public final <@NonNull R> @NonNull ParallelFlowable<R> mapOptional​(@NonNull
                                                                           @NonNull Function<? super @NonNull T,​@NonNull java.util.Optional<? extends @NonNull R>> mapper)
        Maps the source values on each 'rail' to an optional and emits its value if any.

        Note that the same mapper function may be called from multiple threads concurrently.

        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        mapOptional does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the output value type
        Parameters:
        mapper - the mapper function turning each T into an Optional of R.
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
        Since:
        3.0.0
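        A sketch (assuming the RxJava 3 artifact and Java 9+). mapOptional combines map and filter in one step: empty Optionals are dropped from the rail:

```java
import io.reactivex.rxjava3.core.Flowable;
import java.util.List;
import java.util.Optional;

public class Main {
    // Keep even values (doubled); odd values map to Optional.empty() and are dropped.
    static List<Integer> evensDoubled() {
        return Flowable.range(1, 6)
                .parallel(2)
                .mapOptional(v -> v % 2 == 0
                        ? Optional.of(v * 2)
                        : Optional.<Integer>empty())
                .sequential()
                .sorted(Integer::compare)
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(evensDoubled()); // [4, 8, 12]
    }
}
```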
      • mapOptional

        @CheckReturnValue
        @NonNull
        @SchedulerSupport("none")
        @BackpressureSupport(PASS_THROUGH)
        public final <@NonNull R> @NonNull ParallelFlowable<R> mapOptional​(@NonNull
                                                                           @NonNull Function<? super @NonNull T,​@NonNull java.util.Optional<? extends @NonNull R>> mapper,
                                                                           @NonNull
                                                                           @NonNull ParallelFailureHandling errorHandler)
        Maps the source values on each 'rail' to an optional and emits its value if any and handles errors based on the given ParallelFailureHandling enumeration value.

        Note that the same mapper function may be called from multiple threads concurrently.

        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        mapOptional does not operate by default on a particular Scheduler.

        History: 2.0.8 - experimental

        Type Parameters:
        R - the output value type
        Parameters:
        mapper - the mapper function turning each T into an Optional of R.
        errorHandler - the enumeration that defines how to handle errors thrown from the mapper function
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper or errorHandler is null
        Since:
        3.0.0
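        A sketch of the failure-handling overload (assuming the RxJava 3 artifact). ParallelFailureHandling.SKIP drops the item whose mapper threw and keeps the rail alive:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFailureHandling;
import java.util.List;
import java.util.Optional;

public class Main {
    // The mapper throws for v == 3; SKIP drops that item instead of failing the rail.
    static List<Integer> skipFailures() {
        return Flowable.range(1, 4)
                .parallel(2)
                .mapOptional(v -> {
                    if (v == 3) {
                        throw new IllegalStateException("rejecting " + v);
                    }
                    return Optional.of(v);
                }, ParallelFailureHandling.SKIP)
                .sequential()
                .sorted(Integer::compare)
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(skipFailures()); // [1, 2, 4]
    }
}
```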
      • mapOptional

        @CheckReturnValue
        @NonNull
        @SchedulerSupport("none")
        @BackpressureSupport(PASS_THROUGH)
        public final <@NonNull R> @NonNull ParallelFlowable<R> mapOptional​(@NonNull
                                                                           @NonNull Function<? super @NonNull T,​@NonNull java.util.Optional<? extends @NonNull R>> mapper,
                                                                           @NonNull
                                                                           @NonNull BiFunction<? super java.lang.Long,​? super java.lang.Throwable,​ParallelFailureHandling> errorHandler)
        Maps the source values on each 'rail' to an optional and emits its value if any and handles errors based on the returned value by the handler function.

        Note that the same mapper function may be called from multiple threads concurrently.

        Backpressure:
        The operator is a pass-through for backpressure and the behavior is determined by the upstream and downstream rail behaviors.
        Scheduler:
        mapOptional does not operate by default on a particular Scheduler.

        History: 2.0.8 - experimental

        Type Parameters:
        R - the output value type
        Parameters:
        mapper - the mapper function turning each T into an Optional of R.
        errorHandler - the function called with the current repeat count and the failure Throwable; it should return one of the ParallelFailureHandling enumeration values to indicate how to proceed.
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper or errorHandler is null
        Since:
        3.0.0
      • flatMapStream

        @CheckReturnValue
        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        @NonNull
        public final <@NonNull R> @NonNull ParallelFlowable<R> flatMapStream​(@NonNull
                                                                             @NonNull Function<? super @NonNull T,​? extends java.util.stream.Stream<? extends @NonNull R>> mapper)
        Maps each upstream item on each rail into a Stream and emits the Stream's items to the downstream in a sequential fashion.

        Due to the blocking and sequential nature of Java Streams, the streams are mapped and consumed in a sequential fashion without interleaving (unlike a more general flatMap(Function)). Therefore, flatMapStream and concatMapStream are identical operators and are provided as aliases.

        The operator closes the Stream upon cancellation and when it terminates. The exceptions raised when closing a Stream are routed to the global error handler (RxJavaPlugins.onError(Throwable)). If a Stream should not be closed, turn it into an Iterable and use flatMapIterable(Function):

        
         source.flatMapIterable(v -> createStream(v)::iterator);
         

        Note that Streams can be consumed only once; any subsequent attempt to consume a Stream will result in an IllegalStateException.

        Primitive streams are not supported and items have to be boxed manually (e.g., via IntStream.boxed()):

        
         source.flatMapStream(v -> IntStream.rangeClosed(v + 1, v + 10).boxed());
         

        Stream does not support concurrent usage so creating and/or consuming the same instance multiple times from multiple threads can lead to undefined behavior.

        Backpressure:
        The operator honors the downstream backpressure and consumes the inner stream only on demand. The operator prefetches Flowable.bufferSize() items of the upstream (then 75% of it after the 75% received) and caches them until they are ready to be mapped into Streams after the current Stream has been consumed.
        Scheduler:
        flatMapStream does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the element type of the Streams and the result
        Parameters:
        mapper - the function that receives an upstream item and should return a Stream whose elements will be emitted to the downstream
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
        Since:
        3.0.0
        See Also:
        flatMap(Function), flatMapIterable(Function), flatMapStream(Function, int)
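        A sketch (assuming the RxJava 3 artifact). Each upstream value becomes a fresh Stream, which the operator consumes sequentially and closes before moving on:

```java
import io.reactivex.rxjava3.core.Flowable;
import java.util.List;
import java.util.stream.Stream;

public class Main {
    // Each upstream value is mapped to a fresh, single-use Stream; the operator
    // drains and closes one Stream before taking the next upstream item.
    static List<Integer> streamed() {
        return Flowable.range(1, 3)
                .parallel(2)
                .flatMapStream(v -> Stream.of(v, v * 10))
                .sequential()
                .sorted(Integer::compare) // rails are unordered; sort for determinism
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(streamed()); // [1, 2, 3, 10, 20, 30]
    }
}
```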
      • flatMapStream

        @CheckReturnValue
        @BackpressureSupport(FULL)
        @SchedulerSupport("none")
        @NonNull
        public final <@NonNull R> @NonNull ParallelFlowable<R> flatMapStream​(@NonNull
                                                                             @NonNull Function<? super @NonNull T,​? extends java.util.stream.Stream<? extends @NonNull R>> mapper,
                                                                             int prefetch)
        Maps each upstream item of each rail into a Stream and emits the Stream's items to the downstream in a sequential fashion.

        Due to the blocking and sequential nature of Java Streams, the streams are mapped and consumed in a sequential fashion without interleaving (unlike a more general flatMap(Function)). Therefore, flatMapStream and concatMapStream are identical operators and are provided as aliases.

        The operator closes the Stream upon cancellation and when it terminates. The exceptions raised when closing a Stream are routed to the global error handler (RxJavaPlugins.onError(Throwable)). If a Stream should not be closed, turn it into an Iterable and use flatMapIterable(Function, int):

        
         source.flatMapIterable(v -> createStream(v)::iterator, 32);
         

        Note that Streams can be consumed only once; any subsequent attempt to consume a Stream will result in an IllegalStateException.

        Primitive streams are not supported and items have to be boxed manually (e.g., via IntStream.boxed()):

        
         source.flatMapStream(v -> IntStream.rangeClosed(v + 1, v + 10).boxed(), 32);
         

        Stream does not support concurrent usage so creating and/or consuming the same instance multiple times from multiple threads can lead to undefined behavior.

        Backpressure:
        The operator honors the downstream backpressure and consumes the inner stream only on demand. The operator prefetches the given amount of upstream items and caches them until they are ready to be mapped into Streams after the current Stream has been consumed.
        Scheduler:
        flatMapStream does not operate by default on a particular Scheduler.
        Type Parameters:
        R - the element type of the Streams and the result
        Parameters:
        mapper - the function that receives an upstream item and should return a Stream whose elements will be emitted to the downstream
        prefetch - the number of upstream items to request upfront, then 75% of this amount after each 75% upstream items received
        Returns:
        the new ParallelFlowable instance
        Throws:
        java.lang.NullPointerException - if mapper is null
        java.lang.IllegalArgumentException - if prefetch is non-positive
        Since:
        3.0.0
        See Also:
        flatMap(Function, boolean, int), flatMapIterable(Function, int)
      • collect

        @CheckReturnValue
        @NonNull
        @BackpressureSupport(UNBOUNDED_IN)
        @SchedulerSupport("none")
        public final <@NonNull A,​@NonNull R> @NonNull Flowable<R> collect​(@NonNull
                                                                                @NonNull java.util.stream.Collector<@NonNull T,​@NonNull A,​@NonNull R> collector)
        Reduces all values within a 'rail' and across 'rails' with the callbacks of the given Collector into one Flowable containing a single value.

        Each parallel rail receives its own Collector.accumulator() and Collector.combiner().

        Backpressure:
        The operator honors backpressure from the downstream and consumes the upstream rails in an unbounded manner (requesting Long.MAX_VALUE).
        Scheduler:
        collect does not operate by default on a particular Scheduler.
        Type Parameters:
        A - the accumulator type
        R - the output value type
        Parameters:
        collector - the Collector instance
        Returns:
        the new Flowable instance emitting the collected value.
        Throws:
        java.lang.NullPointerException - if collector is null
        Since:
        3.0.0
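        A sketch (assuming the RxJava 3 artifact). Each rail accumulates with its own accumulator and the Collector's combiner() merges the per-rail partial results into the single emitted value; the sum is deterministic even though the rails run concurrently:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;
import java.util.stream.Collectors;

public class Main {
    // Four rails each sum their share of 1..100 on the computation scheduler;
    // combiner() merges the four partial sums into the single emitted total.
    static int parallelSum() {
        return Flowable.range(1, 100)
                .parallel(4)
                .runOn(Schedulers.computation())
                .collect(Collectors.summingInt((Integer v) -> v))
                .blockingFirst();
    }

    public static void main(String[] args) {
        System.out.println(parallelSum()); // 5050
    }
}
```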