Is Java 8 Stream Laziness Useless in Practice?

I have read a lot about Java 8 Streams lately, and several articles about lazy loading with Java 8 streams specifically: here and over here. I can't seem to get over the fact that lazy loading is COMPLETELY useless (or at best, a minor syntactic convenience offering zero performance value).
Let's take this code as an example:
int[] myInts = new int[]{1, 2, 3, 5, 8, 13, 21};
IntStream myIntStream = IntStream.of(myInts);
int[] myChangedArray = myIntStream
    .peek(n -> System.out.println("About to square: " + n))
    .map(n -> (int) Math.pow(n, 2))
    .peek(n -> System.out.println("Done squaring, result: " + n))
    .toArray();
This will log to the console because the terminal operation, in this case toArray(), is called; the stream is lazy and executes only when the terminal operation is invoked. Of course, I can also do this:
IntStream myChangedInts = myIntStream
    .peek(n -> System.out.println("About to square: " + n))
    .map(n -> (int) Math.pow(n, 2))
    .peek(n -> System.out.println("Done squaring, result: " + n));
And nothing will be printed, because the map isn't happening, because I don't need the data. Until I call this:
int[] myChangedArray = myChangedInts.toArray();
And voilà, I get my mapped data and my console logs. Except I see zero benefit to it whatsoever. I realize I can define the mapping code long before I call toArray(), and I can pass this "not-really-mapped" stream around, but so what? Is this the only benefit?
The articles seem to imply there is a performance gain associated with laziness, for example:
In the Java 8 Streams API, the intermediate operations are lazy and their internal processing model is optimized to make it being capable of processing the large amount of data with high performance.
and
Java 8 Streams API optimizes stream processing with the help of short circuiting operations. Short Circuit methods ends the stream processing as soon as their conditions are satisfied. In normal words short circuit operations, once the condition is satisfied just breaks all of the intermediate operations, lying before in the pipeline. Some of the intermediate as well as terminal operations have this behavior.
That sounds literally like breaking out of a loop, and not associated with laziness at all.
Finally, there is this perplexing line in the second article:
Lazy operations achieve efficiency. It is a way not to work on stale data. Lazy operations might be useful in the situations where input data is consumed gradually rather than having whole complete set of elements beforehand. For example consider the situations where an infinite stream has been created using Stream#generate(Supplier) and the provided Supplier function is gradually receiving data from a remote server. In those kind of the situations server call will only be made at a terminal operation when it's needed.
Not working on stale data? What? How does lazy loading keep someone from working on stale data?
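For what it's worth, the Stream#generate scenario from that quote is easy to make concrete. In this toy sketch (the class and method names are mine, and the "server call" is just a counter standing in for a remote fetch), the supplier runs only when, and exactly as often as, the terminal operation demands:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LazySupplierDemo {
    // Counts how often the stand-in "remote" supplier is invoked.
    static final AtomicInteger calls = new AtomicInteger();

    static String fetchFromServer() {
        return "payload-" + calls.incrementAndGet();
    }

    public static void main(String[] args) {
        // Conceptually infinite stream; nothing is fetched yet.
        Stream<String> lazy = Stream.generate(LazySupplierDemo::fetchFromServer);
        System.out.println("Calls before terminal op: " + calls.get()); // 0

        // Only the 3 needed elements trigger supplier calls.
        List<String> first3 = lazy.limit(3).collect(Collectors.toList());
        System.out.println(first3);
        System.out.println("Calls after terminal op: " + calls.get()); // 3
    }
}
```

Nothing happens at the Stream.generate call itself; swap the counter for a real remote fetch and you get the gradual consumption the article describes.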
tl;dr: Is there any benefit to lazy loading besides being able to run the filter/map/reduce/whatever operation at a later time (which offers zero performance benefit)?
If so, what's a real-world use case?
java java-stream lazy-loading
"Lazy" here just means "only do work when necessary". Doing unnecessary work is less performant than only doing necessary work. I feel like you're stuck on the fact that everything Stream can do you can also do in loops. But that's irrelevant. Nothing in the quoted documentation states that Streams are better than loops, only that the former is coded to be as efficient as possible.
– Slaw
5 hours ago
asked 6 hours ago by VSO, edited 6 hours ago
5 Answers
Your terminal operation, toArray(), perhaps supports your argument given that it requires all elements of the stream.
Some terminal operations don't. And for these, it would be a waste if streams weren't lazily executed. Two examples:
//example 1: print first element of 1000 after transformations
IntStream.range(0, 1000)
.peek(System.out::println)
.mapToObj(String::valueOf)
.peek(System.out::println)
.findFirst()
.ifPresent(System.out::println);
//example 2: check if any value has an even key
// (assuming records is a Collection<Record>)
boolean valid = records.stream()
    .map(this::heavyConversion)
    .filter(this::checkWithWebService)
    .mapToInt(Record::getKey)
    .anyMatch(i -> i % 2 == 0);
The first stream will print:
0
0
0
That is, the intermediate operations run on just one element. This is an important optimization. If the stream weren't lazy, all the peek() calls would have to run on all elements (completely unnecessary, since you're interested in just one element). Intermediate operations can be expensive (as in the second example).
Short-circuiting terminal operations (of which toArray() isn't one) make this optimization possible.

answered 5 hours ago by ernest_k
Please see my comment on David Ehrmann's answer.
– VSO
5 hours ago

@VSO It's not about lazy loading, it's about lazy execution. Look at it as "executing only after an overall plan is created", rather than "executing until one element causes the exit condition to be met" (a loop will run on each element until one causes the exit condition to be met, but streams will run on only the one element, skipping all the other, wasteful, executions). I'll edit the answer.
– ernest_k
5 hours ago

I appreciate the help. Given "but streams will only run on the one element, skipping all other - wasteful - executions", how can the program know which executions are wasteful if it needs to first check a condition to determine that? Sorry, I know I sound nitpicky, but I am not following.
– VSO
5 hours ago
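On the "how can the program know which executions are wasteful" point: it doesn't predict anything. The terminal operation simply pulls elements one at a time through the pipeline and stops pulling once it is satisfied. A counting sketch makes that visible (class and method names here are mine, not from the answer): the same map step runs 1000 times under toArray() but exactly once under findFirst().

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class ShortCircuitDemo {
    // Runs a squaring pipeline to completion; returns how many elements were mapped.
    static int mappedCountToArray() {
        AtomicInteger count = new AtomicInteger();
        IntStream.rangeClosed(1, 1000)
                 .map(n -> { count.incrementAndGet(); return n * n; })
                 .toArray();                    // needs every element
        return count.get();
    }

    // Same pipeline, but the terminal op stops pulling after one element.
    static int mappedCountFindFirst() {
        AtomicInteger count = new AtomicInteger();
        IntStream.rangeClosed(1, 1000)
                 .map(n -> { count.incrementAndGet(); return n * n; })
                 .findFirst();                  // short-circuits after one
        return count.get();
    }

    public static void main(String[] args) {
        System.out.println("toArray mapped:   " + mappedCountToArray());   // 1000
        System.out.println("findFirst mapped: " + mappedCountFindFirst()); // 1
    }
}
```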
I have a real example from our code base; I'm going to simplify it, so I'm not entirely sure you'll like it or fully grasp it...
We have a service that needs a List<CustomService>, and I am supposed to call it. In order to call it, I go to a DB (much simpler than in reality) and obtain a List<DBObject>; to obtain a List<CustomService> from that, some heavy transformations need to be done.
And here are my choices. First: transform in place and pass the list. Simple, yet probably not that optimal. Second: refactor the service to accept a List<DBObject> and a Function<DBObject, CustomService>. This sounds trivial, but it enables laziness (among other things). That service might sometimes need only a few elements from that List, or sometimes a max by some property, etc. So there is no need for me to do the heavy transformation for all elements; this is where the Stream API's pull-based laziness is a winner.
Before Streams existed, we used Guava; it has Lists.transform(list, function), which was lazy too.
It's not a fundamental feature of streams as such; it could have been done even without Guava, but it's a lot simpler that way. The example provided here with findFirst is great and the simplest to understand; this is the entire point of laziness: elements are pulled only when needed. They are not passed from one intermediate operation to another in chunks; they pass from one stage to another one at a time.
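A minimal sketch of the refactoring described above, with hypothetical DBObject/CustomService stand-ins: the service receives the raw rows plus the mapping function, and the heavy conversion runs only for the elements the service actually consumes (here, just the first one).

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class LazyServiceDemo {
    static class DBObject { final int id; DBObject(int id) { this.id = id; } }
    static class CustomService { final int id; CustomService(int id) { this.id = id; } }

    // Counts how often the stand-in "heavy transformation" actually runs.
    static final AtomicInteger conversions = new AtomicInteger();

    static CustomService heavyConvert(DBObject o) {
        conversions.incrementAndGet();
        return new CustomService(o.id);
    }

    // The refactored service: it takes raw rows plus a mapping function,
    // and converts only the elements it needs.
    static Optional<CustomService> firstOf(List<DBObject> rows,
                                           Function<DBObject, CustomService> toService) {
        return rows.stream().map(toService).findFirst();
    }

    public static void main(String[] args) {
        List<DBObject> rows = Arrays.asList(new DBObject(1), new DBObject(2), new DBObject(3));
        firstOf(rows, LazyServiceDemo::heavyConvert);
        System.out.println("Conversions performed: " + conversions.get()); // 1, not 3
    }
}
```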
Laziness can be very useful for the users of your API, especially when the result of the Stream pipeline evaluation might be very large!
The simplest example is the Files.lines method in the Java Stream API itself. If you don't want to load the whole file into memory and you only need to read the first N lines, just write:
Stream<String> content = Files.lines(path); // lazy reading
List<String> res = content.limit(N).collect(Collectors.toList());
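Here is a runnable version of the same idea (the helper name firstLines is mine); wrapping the stream in try-with-resources also ensures the underlying file handle is closed:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FirstLinesDemo {
    // Reads at most n lines; the lazy stream never touches the rest of the file.
    static List<String> firstLines(Path path, int n) throws IOException {
        try (Stream<String> lines = Files.lines(path)) {
            return lines.limit(n).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, Arrays.asList("a", "b", "c", "d", "e"));
        System.out.println(firstLines(tmp, 2)); // [a, b]
        Files.delete(tmp);
    }
}
```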
You're right that there won't be a benefit from map().reduce() or map().collect(), but there's a pretty obvious benefit with findAny(), findFirst(), anyMatch(), allMatch(), etc. Basically, any operation that can be short-circuited.
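Short-circuiting operations even work over an infinite source, where eager evaluation could never finish; a small sketch (the method name is mine):

```java
import java.util.stream.IntStream;

public class AnyMatchDemo {
    // anyMatch on an infinite stream: it terminates only because the
    // operation short-circuits once the predicate is satisfied.
    static boolean hasMultipleOf97() {
        return IntStream.iterate(1, i -> i + 1)   // 1, 2, 3, ... (infinite)
                        .anyMatch(i -> i % 97 == 0);
    }

    public static void main(String[] args) {
        System.out.println(hasMultipleOf97()); // true, after examining only 97 elements
    }
}
```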
I addressed this in the question. I don't understand what this has to do with lazy loading. The same thing can be done in a loop by breaking out of it once a condition is met. How is lazy loading a factor in terminating execution? Items in a collection are always processed one-by-one, with or without lazy loading.
– VSO
5 hours ago

It was never claimed that laziness gives you an advantage that you couldn't obtain with conventional control structures. It's still an advantage over a (hypothetical) eager stream.
– Ole V.V.
5 hours ago

@VSO let's think differently: you have an eager system; how would source data get passed through the different intermediate stages of a stream pipeline?
– Eugene
4 hours ago
Good question.
Assuming you write textbook-perfect code, the difference in performance between a properly optimized for loop and a stream is not noticeable (streams tend to be slightly better class-loading-wise, but the difference should not be noticeable in most cases).
Consider the following example.
// some lengthy computation
private static int doStuff(int i) {
    try { Thread.sleep(1000); } catch (InterruptedException e) { }
    return i;
}

// assumed: the array from the question (consistent with the outputs below)
private static final int[] MY_INTS = {1, 2, 3, 5, 8, 13, 21};

public static OptionalInt findFirstGreaterThanStream(int value) {
    return IntStream
            .of(MY_INTS)
            .map(Main::doStuff)
            .filter(x -> x > value)
            .findFirst();
}

public static OptionalInt findFirstGreaterThanFor(int value) {
    for (int i = 0; i < MY_INTS.length; i++) {
        int mapped = Main.doStuff(MY_INTS[i]);
        if (mapped > value) {
            return OptionalInt.of(mapped);
        }
    }
    return OptionalInt.empty();
}
Given the above methods, the next test should show they execute in about the same time.
public static void main(String[] args) {
    long begin;
    long end;

    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanStream(5));
    end = System.currentTimeMillis();
    System.out.println(end - begin);

    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanFor(5));
    end = System.currentTimeMillis();
    System.out.println(end - begin);
}
OptionalInt[8]
5119
OptionalInt[8]
5001
Anyway, we spend most of the time in the doStuff method. Let's say we want to add more threads to the mix.
Adjusting the stream method is trivial (assuming your operations meet the preconditions of parallel streams).
public static OptionalInt findFirstGreaterThanParallelStream(int value) {
    return IntStream
            .of(MY_INTS)
            .parallel()
            .map(Main::doStuff)
            .filter(x -> x > value)
            .findFirst();
}
Achieving the same behavior without streams can be tricky.
public static OptionalInt findFirstGreaterThanParallelFor(int value, Executor executor) {
    AtomicInteger counter = new AtomicInteger(0);
    // Busy-waits until the elements have been checked without a match.
    CompletableFuture<OptionalInt> cf = CompletableFuture.supplyAsync(() -> {
        while (counter.get() != MY_INTS.length - 1);
        return OptionalInt.empty();
    });
    for (int i = 0; i < MY_INTS.length; i++) {
        final int current = MY_INTS[i];
        executor.execute(() -> {
            int mapped = Main.doStuff(current);
            if (mapped > value) {
                cf.complete(OptionalInt.of(mapped));
            } else {
                counter.incrementAndGet();
            }
        });
    }
    try {
        return cf.get();
    } catch (InterruptedException | ExecutionException e) {
        throw new RuntimeException(e);
    }
}
The tests execute in about the same time again.
public static void main(String[] args) throws InterruptedException {
    long begin;
    long end;

    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanParallelStream(5));
    end = System.currentTimeMillis();
    System.out.println(end - begin);

    ExecutorService executor = Executors.newFixedThreadPool(10);
    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanParallelFor(5, executor));
    end = System.currentTimeMillis();
    System.out.println(end - begin);

    executor.shutdown();
    executor.awaitTermination(10, TimeUnit.SECONDS);
    executor.shutdownNow();
}
OptionalInt[8]
1004
OptionalInt[8]
1004
In conclusion, although we don't squeeze a big performance benefit out of streams (assuming you write excellent multi-threaded code in your for-loop alternative), the code itself tends to be more maintainable.
A (slightly off-topic) final note:
As with programming languages, higher-level abstractions (streams relative to for loops) make things easier to develop at the cost of performance. We did not move from assembly to procedural languages to object-oriented languages because the latter offered greater performance. We moved because they made us more productive (develop the same thing at a lower cost). If you are able to get the same performance out of a stream as you would with a for loop and properly written multi-threaded code, I would say it's already a win.
I have no idea how this answers the question; you literally talk about "other" things here, not what the OP asked.
– Eugene
2 hours ago

And what is "streams tend to be slightly better class loading wise" supposed to mean?
– Eugene
2 hours ago

@Eugene From OP's question: "If so, what's a real-world use case?". My example outlines a use case where you gain something out of using streams with lazy loading (the findFirst method in my example) versus mimicking the same behavior via for loops.
– alexrolea
2 hours ago

That is exactly what the answer above already shows; but the fundamental issue is still there: the OP asks why this is better than a plain loop. Also, care to explain the classloading thing?
– Eugene
2 hours ago

@Eugene could you please shed some light on how the answer above shows that a stream is better than a for loop (as opposed to plainly stating that)?
– alexrolea
1 hour ago
 |Â
show 1 more comment
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
4
down vote
Your terminal operation, toArray(), perhaps supports your argument given that it requires all elements of the stream.
Some terminal operations don't. And for these, it would be a waste if streams weren't lazily executed. Two examples:
//example 1: print first element of 1000 after transformations
IntStream.range(0, 1000)
.peek(System.out::println)
.mapToObj(String::valueOf)
.peek(System.out::println)
.findFirst()
.ifPresent(System.out::println);
//example 2: check if any value has an even key
boolean valid = records.
.map(this::heavyConversion)
.filter(this::checkWithWebService)
.mapToInt(Record::getKey)
.anyMatch(i -> i % 2 == 0)
The first stream will print:
0
0
0
That is, intermediate operations will be run just on one element. This is an important optimization. If it weren't lazy, then all the peek() calls would have to run on all elements (absolutely unnecessary as you're interested in just one element). Intermediate operations can be expensive (such as in the second example)
Short-circuiting terminal operation (of which toArray isn't) make this optimization possible.
Please see my comment on David Ehrmann's answer.
â VSO
5 hours ago
1
@VSO It's not about lazy loading, it's about lazy execution. Look at it as "executing only after an overall plan is created", rather than "executing until one element causes the exit condition to be met" (a loop will run on each element until this one causes the condition exit condition to be met, but streams will only run on the one element, skipping all other - wasteful - executions). I'll edit the answer.
â ernest_k
5 hours ago
1
I appreciate the help. Given "but streams will only run on the one element, skipping all other - wasteful - executions", how can the program now which executions are wasteful if it needs to first check a condition to determine that? Sorry, I know I sound nitpicky, but I am not following.
â VSO
5 hours ago
add a comment |Â
up vote
4
down vote
Your terminal operation, toArray(), perhaps supports your argument given that it requires all elements of the stream.
Some terminal operations don't. And for these, it would be a waste if streams weren't lazily executed. Two examples:
//example 1: print first element of 1000 after transformations
IntStream.range(0, 1000)
.peek(System.out::println)
.mapToObj(String::valueOf)
.peek(System.out::println)
.findFirst()
.ifPresent(System.out::println);
//example 2: check if any value has an even key
boolean valid = records.
.map(this::heavyConversion)
.filter(this::checkWithWebService)
.mapToInt(Record::getKey)
.anyMatch(i -> i % 2 == 0)
The first stream will print:
0
0
0
That is, intermediate operations will be run just on one element. This is an important optimization. If it weren't lazy, then all the peek() calls would have to run on all elements (absolutely unnecessary as you're interested in just one element). Intermediate operations can be expensive (such as in the second example)
Short-circuiting terminal operation (of which toArray isn't) make this optimization possible.
Please see my comment on David Ehrmann's answer.
â VSO
5 hours ago
1
@VSO It's not about lazy loading, it's about lazy execution. Look at it as "executing only after an overall plan is created", rather than "executing until one element causes the exit condition to be met" (a loop will run on each element until this one causes the condition exit condition to be met, but streams will only run on the one element, skipping all other - wasteful - executions). I'll edit the answer.
â ernest_k
5 hours ago
1
I appreciate the help. Given "but streams will only run on the one element, skipping all other - wasteful - executions", how can the program now which executions are wasteful if it needs to first check a condition to determine that? Sorry, I know I sound nitpicky, but I am not following.
â VSO
5 hours ago
add a comment |Â
up vote
4
down vote
up vote
4
down vote
Your terminal operation, toArray(), perhaps supports your argument given that it requires all elements of the stream.
Some terminal operations don't. And for these, it would be a waste if streams weren't lazily executed. Two examples:
//example 1: print first element of 1000 after transformations
IntStream.range(0, 1000)
.peek(System.out::println)
.mapToObj(String::valueOf)
.peek(System.out::println)
.findFirst()
.ifPresent(System.out::println);
//example 2: check if any value has an even key
boolean valid = records.
.map(this::heavyConversion)
.filter(this::checkWithWebService)
.mapToInt(Record::getKey)
.anyMatch(i -> i % 2 == 0)
The first stream will print:
0
0
0
That is, intermediate operations will be run just on one element. This is an important optimization. If it weren't lazy, then all the peek() calls would have to run on all elements (absolutely unnecessary as you're interested in just one element). Intermediate operations can be expensive (such as in the second example)
Short-circuiting terminal operation (of which toArray isn't) make this optimization possible.
Your terminal operation, toArray(), perhaps supports your argument given that it requires all elements of the stream.
Some terminal operations don't. And for these, it would be a waste if streams weren't lazily executed. Two examples:
//example 1: print first element of 1000 after transformations
IntStream.range(0, 1000)
.peek(System.out::println)
.mapToObj(String::valueOf)
.peek(System.out::println)
.findFirst()
.ifPresent(System.out::println);
//example 2: check if any value has an even key
boolean valid = records.
.map(this::heavyConversion)
.filter(this::checkWithWebService)
.mapToInt(Record::getKey)
.anyMatch(i -> i % 2 == 0)
The first stream will print:
0
0
0
That is, intermediate operations will be run just on one element. This is an important optimization. If it weren't lazy, then all the peek() calls would have to run on all elements (absolutely unnecessary as you're interested in just one element). Intermediate operations can be expensive (such as in the second example)
Short-circuiting terminal operation (of which toArray isn't) make this optimization possible.
answered 5 hours ago
ernest_k
14.9k41429
14.9k41429
Please see my comment on David Ehrmann's answer.
â VSO
5 hours ago
1
@VSO It's not about lazy loading, it's about lazy execution. Look at it as "executing only after an overall plan is created", rather than "executing until one element causes the exit condition to be met" (a loop will run on each element until this one causes the condition exit condition to be met, but streams will only run on the one element, skipping all other - wasteful - executions). I'll edit the answer.
â ernest_k
5 hours ago
1
I appreciate the help. Given "but streams will only run on the one element, skipping all other - wasteful - executions", how can the program now which executions are wasteful if it needs to first check a condition to determine that? Sorry, I know I sound nitpicky, but I am not following.
â VSO
5 hours ago
up vote
1
down vote
I have a real example from our code base. Since I'm going to simplify it, I'm not entirely sure you'll like it or fully grasp it...
We have a service that needs a List<CustomService>, and I am supposed to call it. In order to call it, I go to a DB (much simpler than reality) and obtain a List<DBObject>; to get a List<CustomService> from that, some heavy transformations need to be done.
And here are my choices: transform in place and pass the list. Simple, yet probably not optimal. Second option: refactor the service to accept a List<DBObject> and a Function<DBObject, CustomService>. That sounds trivial, but it enables laziness (among other things). The service might sometimes need only a few elements from that List, or sometimes a max by some property, etc., so there is no need for me to do the heavy transformation for all elements; this is where the Stream API's pull-based laziness is a winner.
Before Streams existed, we used Guava; it has Lists.transform(list, function), which is lazy too.
It's not a fundamental feature of streams as such (it could have been done even without Guava), but it's a lot simpler that way. The example provided here with findFirst is great and the simplest to understand; this is the entire point of laziness: elements are pulled only when needed. They are not passed from one intermediate operation to another in chunks, but move from one stage to the next one at a time.
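The refactor described above can be sketched roughly like this (DBObject, CustomService, heavyConversion, and findFirstEven are hypothetical stand-ins, not the actual code base): the service receives the raw rows plus a converter function, so the expensive conversion runs only for the elements it actually pulls.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

public class LazyServiceDemo {
    // Hypothetical stand-ins for the DBObject/CustomService types in the answer
    record DBObject(int key) { }
    record CustomService(int key) { }

    static int conversions = 0;

    static CustomService heavyConversion(DBObject o) {
        conversions++;                    // count how often the expensive step runs
        return new CustomService(o.key());
    }

    // The refactored service: takes raw rows plus a converter, converts lazily
    static Optional<CustomService> findFirstEven(List<DBObject> rows,
                                                 Function<DBObject, CustomService> convert) {
        return rows.stream()
                .map(convert)             // applied one element at a time, on demand
                .filter(s -> s.key() % 2 == 0)
                .findFirst();
    }

    public static void main(String[] args) {
        List<DBObject> rows = List.of(new DBObject(1), new DBObject(2), new DBObject(3));
        findFirstEven(rows, LazyServiceDemo::heavyConversion);
        System.out.println(conversions);  // prints 2: the third row was never converted
    }
}
```

Passing the pre-converted List<CustomService> instead would force all three conversions up front, whether the service needed them or not.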
answered 1 hour ago
Eugene
62.4k986144
up vote
1
down vote
Laziness can be very useful for the users of your API, especially when the result of the Stream pipeline evaluation might be very large!
The simplest example is the Files.lines method in the Java Stream API itself. If you don't want to load the whole file into memory and only need to read the first N lines, just write:
Stream<String> content = Files.lines(path); // lazy reading
List<String> res = content.limit(N).collect(Collectors.toList());
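A runnable sketch of the same pattern (the temp-file setup and the firstLines helper are mine, not from the answer); wrapping Files.lines in try-with-resources also closes the underlying file handle, which the snippet above omits:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FirstLinesDemo {
    // Reads at most n lines without loading the whole file into memory
    static List<String> firstLines(Path path, int n) throws IOException {
        try (Stream<String> lines = Files.lines(path)) {  // lazy: lines are pulled on demand
            return lines.limit(n).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, List.of("a", "b", "c", "d"));
        System.out.println(firstLines(tmp, 2));  // prints [a, b]
        Files.delete(tmp);
    }
}
```

Because limit(n) is short-circuiting, lines past the first n are never read from disk, which is the whole point for a large file.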
edited 52 mins ago
answered 1 hour ago
Oleksandr
7,11533366
up vote
0
down vote
You're right that there won't be a benefit from map().reduce() or map().collect(), but there's a pretty obvious benefit with findAny(), findFirst(), anyMatch(), allMatch(), etc. Basically, any operation that can be short-circuited.
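To make the contrast concrete, here is a small hypothetical sketch (names are mine) comparing a non-short-circuiting terminal operation (collect) with a short-circuiting one (anyMatch), counting how many elements each pipeline actually processes:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

public class ShortCircuitDemo {
    // Returns {elements seen by collect, elements seen by anyMatch}
    static int[] counts() {
        List<Integer> data = List.of(1, 2, 3, 4, 5);

        AtomicInteger eager = new AtomicInteger();
        data.stream()
            .map(n -> { eager.incrementAndGet(); return n * 2; })
            .collect(Collectors.toList());   // collect must consume every element

        AtomicInteger lazy = new AtomicInteger();
        data.stream()
            .map(n -> { lazy.incrementAndGet(); return n * 2; })
            .anyMatch(x -> x >= 4);          // stops as soon as 2 * 2 >= 4

        return new int[]{eager.get(), lazy.get()};
    }

    public static void main(String[] args) {
        int[] c = counts();
        System.out.println(c[0] + " " + c[1]);  // prints 5 2
    }
}
```

Both pipelines are lazily built; only the terminal operation decides how much of the source is actually pulled through map.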
I addressed this in the question. I don't understand what this has to do with lazy loading; the same thing can be done in a loop by breaking out of it once a condition is met. How is lazy loading a factor in terminating execution? Items in a collection are always processed one by one, with or without lazy loading.
– VSO
5 hours ago
It was never claimed that laziness gives you an advantage that you couldn't obtain with conventional control structures. It's still an advantage over a (hypothetical) eager stream.
– Ole V.V.
5 hours ago
@VSO Let's think about it differently: in an eager system, how would source data get passed through the different intermediate stages of a stream pipeline?
– Eugene
4 hours ago
answered 5 hours ago
David Ehrmann
5,46611827
up vote
0
down vote
Good question.
Assuming you write textbook-perfect code, the difference in performance between a properly optimized for loop and a stream is not noticeable (streams tend to be slightly better class-loading-wise, but the difference should not be noticeable in most cases).
Consider the following example.
// some lengthy computation
private static int doStuff(int i) {
    try { Thread.sleep(1000); } catch (InterruptedException e) { }
    return i;
}

public static OptionalInt findFirstGreaterThanStream(int value) {
    return IntStream
        .of(MY_INTS)
        .map(Main::doStuff)
        .filter(x -> x > value)
        .findFirst();
}

public static OptionalInt findFirstGreaterThanFor(int value) {
    for (int i = 0; i < MY_INTS.length; i++) {
        int mapped = Main.doStuff(MY_INTS[i]);
        if (mapped > value) {
            return OptionalInt.of(mapped);
        }
    }
    return OptionalInt.empty();
}
Given the above methods, the next test should show they execute in about the same time.
public static void main(String[] args) {
    long begin;
    long end;

    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanStream(5));
    end = System.currentTimeMillis();
    System.out.println(end - begin);

    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanFor(5));
    end = System.currentTimeMillis();
    System.out.println(end - begin);
}
OptionalInt[8]
5119
OptionalInt[8]
5001
Anyway, we spend most of the time in the doStuff method. Let's say we want to add more threads to the mix.
Adjusting the stream method is trivial (assuming your operations meet the preconditions of parallel streams).
public static OptionalInt findFirstGreaterThanParallelStream(int value) {
    return IntStream
        .of(MY_INTS)
        .parallel()
        .map(Main::doStuff)
        .filter(x -> x > value)
        .findFirst();
}
Achieving the same behavior without streams can be tricky.
public static OptionalInt findFirstGreaterThanParallelFor(int value, Executor executor) {
    AtomicInteger counter = new AtomicInteger(0);
    CompletableFuture<OptionalInt> cf = CompletableFuture.supplyAsync(() -> {
        while (counter.get() != MY_INTS.length - 1);  // spin until all tasks have reported a miss
        return OptionalInt.empty();
    });
    for (int i = 0; i < MY_INTS.length; i++) {
        final int current = MY_INTS[i];
        executor.execute(() -> {
            int mapped = Main.doStuff(current);
            if (mapped > value) {
                cf.complete(OptionalInt.of(mapped));
            } else {
                counter.incrementAndGet();
            }
        });
    }
    try {
        return cf.get();
    } catch (InterruptedException | ExecutionException e) {
        throw new RuntimeException(e);
    }
}
The tests execute in about the same time again.
public static void main(String[] args) throws InterruptedException {
    long begin;
    long end;

    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanParallelStream(5));
    end = System.currentTimeMillis();
    System.out.println(end - begin);

    ExecutorService executor = Executors.newFixedThreadPool(10);
    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanParallelFor(5678, executor));
    end = System.currentTimeMillis();
    System.out.println(end - begin);

    executor.shutdown();
    executor.awaitTermination(10, TimeUnit.SECONDS);
    executor.shutdownNow();
}
OptionalInt[8]
1004
OptionalInt[8]
1004
In conclusion, although we don't squeeze a big performance benefit out of streams (considering you write excellent multi-threaded code in your for alternative), the code itself tends to be more maintainable.
A (slightly off-topic) final note:
As with programming languages, higher-level abstractions (streams relative to for loops) make things easier to develop at the cost of performance. We did not move from assembly to procedural languages to object-oriented languages because the latter offered greater performance. We moved because it made us more productive (develop the same thing at a lower cost). If you are able to get the same performance out of a stream as you would with a for loop and properly written multi-threaded code, I would say it's already a win.
I have no idea how this answers the question; you literally talk about "other" things here, not what the OP has asked.
– Eugene
2 hours ago
And what is "streams tend to be slightly better class loading wise" supposed to mean?
– Eugene
2 hours ago
@Eugene From OP's question: "If so, what's a real-world use case?". My example outlines a use case where you gain something out of using streams with lazy loading (the findFirst method in my example) versus mimicking the same behavior via for loops.
– alexrolea
2 hours ago
That is exactly what the answer above already shows; but the fundamental issue is still there: the OP asks why this is better than a plain loop. Also, care to explain the classloading thing?
– Eugene
2 hours ago
@Eugene Could you please shed some light on how the answer above shows that a stream is better than a for loop (as opposed to plainly stating that)?
– alexrolea
1 hour ago
edited 4 hours ago
answered 4 hours ago
alexrolea
1,416120
I have no idea how this answers the question, you literally talk about "other" things here, not what the OP has asked
– Eugene
2 hours ago
and what is "streams tend to be slightly better class loading wise" supposed to mean?
– Eugene
2 hours ago
@Eugene From OP's question: "If so, what's a real-world use case?". My example outlines a use case where you gain something out of using the streams with lazy loading (the findFirst method in my example) versus mimicking the same behavior via for loops.
– alexrolea
2 hours ago
that is exactly what the answer above already shows; but the fundamental issue is still there, the OP asks why is this better than a plain loop. Also care to explain the classloading thing?
– Eugene
2 hours ago
@Eugene could you please shed some light on how the answers above show how a stream is better than a for (as opposed to plainly stating that)?
– alexrolea
1 hour ago
"Lazy" here just means "only do work when necessary". Doing unnecessary work is less performant than only doing necessary work. I feel like you're stuck on the fact that everything
Streamcan do you can also do in loops. But that's irrelevant. Nothing in the quoted documentation states thatStreams are better than loops, only that the former is coded to be as efficient as possible.â Slaw
5 hours ago