Kafka Streams - Integral versus Separable handler for flatMapValues
I would like help, from those more experienced with Kafka Streams in Java, deciding between two paths I could follow. I have two working Java apps that take an inbound stream of integers, perform various calculations and tasks, and produce four resultant outbound streams to different topics. The actual calculations and tasks are not important; I am concerned with the two possible methods I could use to define the handler that performs the math, and with any risks associated with my favorite.
Method 1 uses a separately defined function whose return type is Iterable (the handler builds and returns a List).
Method 2 uses the more common integral approach that places the function inline within the KStream declaration.
I am very new to Kafka Streams and do not want to head down the wrong path. I like Method 1 because the code is very readable, easy to follow, and the handlers can be tested offline without needing to drive traffic through the streams.
Method 2 seems more common, but as the complexity grows, the code in main() gets polluted. Additionally, I am boxed in to testing the algorithms with stream traffic, which slows development.
Method 1: Separable handlers (partial):
// Take the inbound stream from math-input, perform transformations A-D, then write out to 4 topics.
KStream<String, String> source = src_builder.stream("math-input");

source.flatMapValues(value -> transformInput_A(Arrays.asList(value.split("\\W+")))).to("math-output-A");
source.flatMapValues(value -> transformInput_B(Arrays.asList(value.split("\\W+")))).to("math-output-B");
source.flatMapValues(value -> transformInput_C(Arrays.asList(value.split("\\W+")))).to("math-output-C");
source.flatMapValues(value -> transformInput_D(Arrays.asList(value.split("\\W+")))).to("math-output-D");
// More code here, removed for brevity.

// Transformation handlers A, B, C, and D.
// ******************************************************************
// Perform data transformation using method A
public static Iterable<String> transformInput_A(List<String> str_array) {
    List<String> math_results = new ArrayList<>();
    // Imagine some very complex math here using the integer
    // values. This could be 50+ lines of code.
    for (int i = 0; i < str_array.size(); i++) {
        // grab values and perform ops
    }
    // Return results in string format
    return math_results;
}
// End of Transformation Method A
// ******************************************************************
// Imagine similar handlers for methods B, C, and D below.
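As a sketch of the offline-testing point (the class name MathTransforms and the JUnit 4 imports are just placeholders, not code from my actual apps), the handler is an ordinary static method, so it can be exercised in a plain unit test with no broker, no topology, and no stream traffic:

import java.util.Arrays;
import java.util.List;

import org.junit.Test;
import static org.junit.Assert.assertNotNull;

public class TransformInputATest {

    @Test
    public void handlerCanBeCalledDirectly() {
        // Same tokenisation the topology performs before calling the handler.
        List<String> tokens = Arrays.asList("1 2 3 4".split("\\W+"));

        // MathTransforms is a hypothetical home for transformInput_A.
        Iterable<String> results = MathTransforms.transformInput_A(tokens);

        // Real assertions would check the actual math; this only shows that
        // the handler is testable in isolation, offline.
        assertNotNull(results);
    }
}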
Method 2: Handlers internal to KStream declaration (partial):
// Take the inbound stream from math-input, perform transformations A-D, then write out to 4 topics.
KStream<String, String> inputStream = src_builder.stream("math-input");

KStream<String, String> outputStream_A = inputStream.mapValues(new ValueMapper<String, String>() {
    @Override
    public String apply(String s) {
        // Tokenise the inbound value, as in Method 1.
        String[] str_array = s.split("\\W+");
        String math_results = "";
        // Imagine some very complex math here using the integer
        // values. This could be 50+ lines of code.
        for (int i = 0; i < str_array.length; i++) {
            // grab values and perform ops
        }
        // Return results in string format
        return math_results;
    }
});

// Send the data to the outbound topic A.
outputStream_A.to("math-output-A");
KStream<String, String> outputStream_B ....
// Use ValueMapper in the KStream declaration just like above. 50+ lines of code
outputStream_B.to("math-output-B");
KStream<String, String> outputStream_C ....
// Use ValueMapper in the KStream declaration just like above. 50+ lines of code
outputStream_C.to("math-output-C");
KStream<String, String> outputStream_D ....
// Use ValueMapper in the KStream declaration just like above. 50+ lines of code
outputStream_D.to("math-output-D");
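A possible middle ground I could also consider (sketched here with a hypothetical named class, not code from my apps) is to give each mapper a named ValueMapper that delegates to the separable handler, so main() keeps one short line per topic while the 50+ lines of math live elsewhere:

// Hypothetical named mapper; the heavy math stays in transformInput_A.
public static class TransformA implements ValueMapper<String, Iterable<String>> {
    @Override
    public Iterable<String> apply(String value) {
        return transformInput_A(Arrays.asList(value.split("\\W+")));
    }
}

// main() then keeps one short line per outbound topic:
source.flatMapValues(new TransformA()).to("math-output-A");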
Other than my desire to keep main() neat and push the complexity out of view, am I heading in the wrong direction with Method 1?
apache-kafka apache-kafka-streams
asked Nov 15 '18 at 14:22 – user3155285
Not sure if I understand the question. Also, why do you use flatMapValues() in the first example, but mapValues() in the second example? – Matthias J. Sax, Nov 16 '18 at 1:43
1) Why two different calls: I was following documented examples. Those without an "apply()" method used flatMapValues; those that invoked "apply()" used mapValues with a new ValueMapper declaration. 2) Genesis of the question: am I incurring a risk of memory overflow, stack issues, et cetera with a separate handler? – user3155285, Nov 16 '18 at 13:34
Both mapValues() and flatMapValues() take a ValueMapper, and both will call ValueMapper#apply(). So I still cannot follow what you are saying. The difference is that mapValues() is a 1:1 operator, meaning each input record produces exactly one output record, while flatMapValues() is a 1:n operator, so each input record can result in 0, 1, or more output records. They are semantically two different things. Also, in both examples you have the same number of handlers. If you call value -> transformInput_A(), this is just syntactic sugar for new ValueMapper() .... – Matthias J. Sax, Nov 16 '18 at 16:54
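(To make the 1:1 versus 1:n distinction concrete, a small sketch, assuming a KStream<String, String> named source like the one in the question:)

// mapValues(): 1:1 - each input value yields exactly one output record.
KStream<String, String> upper = source.mapValues(value -> value.toUpperCase());

// flatMapValues(): 1:n - each input value yields one output record per
// element of the returned Iterable (possibly zero).
KStream<String, String> tokens = source.flatMapValues(value -> Arrays.asList(value.split("\\W+")));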