Spring Batch Remote Partitioning
I would like to understand the integration between DeployerPartitionHandler and DeployerStepExecutionHandler during remote partitioning.

How are the start time, end time, and execution status of the parent task execution updated when there are multiple workers?

What happens if one of the worker processes becomes unresponsive for some external reason? Is there a way to handle this situation programmatically, i.e., to kill the unresponsive process and fail the step?

Thanks in advance for any input!
  • DeployerPartitionHandler is an API from the Spring Cloud Task project. I suggest you tag this question with spring-cloud-task so that someone from SCT team can help you.
    – Mahmoud Ben Hassine
    Nov 13 '18 at 7:36
  • Thank you! I just added the spring-cloud-task tag
    – Sabari
    Nov 13 '18 at 13:00
java spring-boot spring-batch spring-cloud-task
asked Nov 13 '18 at 2:59 by Sabari (edited Nov 13 '18 at 12:59)
1 Answer
You have a number of questions here so let me answer them one at a time.



How are the start time, end time, and execution status of the parent task execution updated when there are multiple workers?



All components within this architecture are tasks. The parent is a task, the workers are each tasks, so they all update the task repository independently. The parent application will mark the start time at the beginning of the task (before any CommandLineRunner or ApplicationRunner implementations are called). It will update the end time and results once all the workers are done (since the remote partitioned step won't complete until all the workers have completed or timed out).
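The leader/worker wiring described above can be sketched as follows. This is a minimal, illustrative configuration (not code from the question): the DeployerPartitionHandler constructor and setters follow the Spring Cloud Task API, but the worker resource property, the step name "workerStep", and the worker count are placeholder assumptions.

```java
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.partition.PartitionHandler;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.deployer.spi.task.TaskLauncher;
import org.springframework.cloud.task.batch.partition.DeployerPartitionHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;

@Configuration
public class LeaderConfiguration {

    // Launches each partition as an independent task on the platform and
    // polls the job repository until every worker completes or fails.
    @Bean
    public PartitionHandler partitionHandler(TaskLauncher taskLauncher,
                                             JobExplorer jobExplorer,
                                             @Value("${worker.resource}") Resource workerResource) {
        DeployerPartitionHandler handler =
                new DeployerPartitionHandler(taskLauncher, jobExplorer, workerResource, "workerStep");
        handler.setMaxWorkers(2);                          // cap on concurrent worker tasks
        handler.setApplicationName("partitioned-worker");  // task name used for the workers
        return handler;
    }
}
```

Each launched worker is itself a task, which is why the leader and the workers each write their own rows to the task repository.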



What happens if one of the worker processes becomes unresponsive for some external reason?



The deployers used by the DeployerPartitionHandler depend on a platform (Cloud Foundry, Kubernetes, etc.) for production use. Each of these platforms handles hung processes in its own way, so the answer to this question is really platform-specific. In most cases, if a process is identified as unhealthy (by whatever definition the platform uses), it will be shut down.
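If the platform does not reap the hung process for you, the leader can at least stop waiting on it. The fragment below is a sketch that assumes the pollInterval and timeout properties on DeployerPartitionHandler (present in recent Spring Cloud Task releases; verify against your version); with a timeout set, the leader stops polling and fails the step instead of waiting indefinitely.

```java
// Fragment for the leader's partition-handler bean definition (a sketch;
// pollInterval/timeout are assumed to exist in your Spring Cloud Task version).
DeployerPartitionHandler handler =
        new DeployerPartitionHandler(taskLauncher, jobExplorer, workerResource, "workerStep");
handler.setPollInterval(10_000L);        // check worker step executions every 10 s
handler.setTimeout(30L * 60L * 1000L);   // give up after 30 minutes and fail the step
```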



Is there a way to handle this situation programmatically, i.e., to kill the unresponsive process and fail the step?



If a partition fails during execution, the parent will also be marked as failed and can be restarted. On a restart, only the failed partitions will be re-run by default; any partitions that are already complete will not be re-executed.
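On the worker side of this restart story, each relaunched worker uses DeployerStepExecutionHandler to look up the exact StepExecution the leader assigned to it (passed via environment variables) and run only that partition. A minimal worker configuration might look like this; the profile name and wiring are illustrative, following the usual Spring Cloud Task pattern rather than the poster's actual code:

```java
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.task.batch.partition.DeployerStepExecutionHandler;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
@Profile("worker")  // only active when the app is launched as a worker
public class WorkerConfiguration {

    @Autowired
    private ApplicationContext context;

    // Runs as a CommandLineRunner: reads the job/step execution ids the
    // leader placed in the environment, then executes just that partition.
    @Bean
    public DeployerStepExecutionHandler stepExecutionHandler(JobExplorer jobExplorer,
                                                             JobRepository jobRepository) {
        return new DeployerStepExecutionHandler(this.context, jobExplorer, jobRepository);
    }
}
```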
  • Thank you for the details! It clarifies my doubts and really helps me understand the process.
    – Sabari
    Nov 14 '18 at 18:55
answered Nov 13 '18 at 13:48 by Michael Minella