Hadoop Scheduling | 2 extremes | Availability and Scarcity of Resources
I have the following 6 DataNodes (dn):



  • dn1 - 6 cores 6GB - 3 Map Slots and 3 Reduce Slots

  • dn2 - 6 cores 6GB - 3 Map Slots and 3 Reduce Slots

  • dn3 - 6 cores 6GB - 3 Map Slots and 3 Reduce Slots

  • dn4 - 6 cores 6GB - 3 Map Slots and 3 Reduce Slots

  • dn5 - 6 cores 6GB - 3 Map Slots and 3 Reduce Slots

  • dn6 - 6 cores 6GB - 3 Map Slots and 3 Reduce Slots

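(For concreteness: a per-node slot count like the one listed above is, in the classic MRv1 slot model, a per-TaskTracker setting. A minimal sketch of the usual properties, assuming MRv1 rather than YARN, with values mirroring the list:)

<!-- mapred-site.xml on each slave node (MRv1 slot model; sketch) -->
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>3</value>    <!-- 3 map slots per node -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>3</value>    <!-- 3 reduce slots per node -->
  </property>
</configuration>
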
Case 1 (Availability)



State of the system
===================
dn1 has 1 mapper running from another job Y; so 2 mapper slots are free
dn2 has 1 mapper running from another job Y; so 2 mapper slots are free
dn3 has 1 mapper running from another job Y; so 2 mapper slots are free
dn4 has 1 mapper running from another job Y; so 2 mapper slots are free
dn5 has 0 mappers running; so 3 mapper slots are free
dn6 has 0 mappers running; so 3 mapper slots are free
State of my input file
======================
I have a file that is distributed across 3 blocks of 64 MB each, with a replication factor (RF) of 3, in the following way:
R1(dn1,dn2,dn3)
R2(dn2,dn3,dn4)
R3(dn3,dn4,dn5)


When I run a job X on this file, 3 mappers need to be created corresponding to the 3 data blocks.
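
(For reference, the block size and replication factor behind a placement like R1/R2/R3 above are HDFS settings. A minimal sketch using the Hadoop 1.x property names, which is an assumption about the version in use:)

<!-- hdfs-site.xml (sketch): 64 MB blocks, replication factor 3 -->
<configuration>
  <property>
    <name>dfs.block.size</name>
    <value>67108864</value>    <!-- 64 MB; the Hadoop 2.x name is dfs.blocksize -->
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>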



Questions



In FIFO: Is job X still put on the queue to wait for Y to finish, given that it is a FIFO scheduler and another job is already running, even though there are free mapper slots on the same machines? Or does the FIFO logic kick in only when no more resources are available in the system, so that jobs have to be queued only then?
In Capacity Scheduler: What would the behavior be?
In Fair Share Scheduler: What would the behavior be?
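
(Which of the three policies is in effect is itself a JobTracker configuration choice in MRv1; a minimal sketch with the standard property and class names, assuming MRv1. On YARN the analogous knob is yarn.resourcemanager.scheduler.class.)

<!-- mapred-site.xml on the JobTracker (MRv1 sketch): selecting the task scheduler -->
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <!-- default is org.apache.hadoop.mapred.JobQueueTaskScheduler (FIFO);
       alternatives: org.apache.hadoop.mapred.CapacityTaskScheduler,
                     org.apache.hadoop.mapred.FairScheduler -->
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>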


Case 2 (Scarcity)



State of the system
===================
dn1 has 3 mappers running from another job Y; so 0 mapper slots are free
dn2 has 3 mappers running from another job Y; so 0 mapper slots are free
dn3 has 3 mappers running from another job Y; so 0 mapper slots are free
dn4 has 3 mappers running from another job Y; so 0 mapper slots are free
dn5 has 3 mappers running from another job Y; so 0 mapper slots are free
dn6 has 0 mappers running; so 3 mapper slots are free


I have a file that is distributed across 3 blocks of 64 MB each, with a replication factor (RF) of 3, in the following way:



R1(dn1,dn2,dn3) 
R2(dn2,dn3,dn4)
R3(dn3,dn4,dn5)


When I run a job X on this file, 3 mappers need to be created corresponding to the 3 data blocks.



Questions



What happens now:



- Are the 3 mapper tasks created on dn6 (which does not yet hold any of the input file's data blocks), with the corresponding data blocks transferred over the network from, say, dn1 to dn6?
- If yes, is this behaviour the same for all three schedulers: FIFO, Capacity, and Fair Share?
- If no, can you elaborate on the behaviour for this use case under:
  - the FIFO Scheduler
  - the Capacity Scheduler
  - the Fair Share Scheduler
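
(Whether a scheduler launches these map tasks off-node on dn6 right away, or holds them back briefly in the hope that a data-local slot frees up, is governed by delay-scheduling settings. A minimal sketch of the usual knobs on YARN, which is an assumption here since the question is phrased in MRv1 slot terms; the values are illustrative, not recommendations:)

<!-- capacity-scheduler.xml (YARN Capacity Scheduler sketch) -->
<property>
  <name>yarn.scheduler.capacity.node-locality-delay</name>
  <value>40</value>    <!-- missed scheduling opportunities to tolerate before settling for rack-local -->
</property>

<!-- yarn-site.xml (YARN Fair Scheduler sketch) -->
<property>
  <name>yarn.scheduler.fair.locality.threshold.node</name>
  <value>0.5</value>   <!-- fraction of the cluster to pass up before accepting a non-node-local container -->
</property>
<property>
  <name>yarn.scheduler.fair.locality.threshold.rack</name>
  <value>0.8</value>   <!-- likewise, before accepting an off-rack container -->
</property>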

hadoop mapreduce hadoop2

edited Nov 11 at 9:35
asked Nov 10 at 15:28 by Sheel Pancholi

  • Datanodes don't control job placement. The Nodemanagers do. 6G on the machines also doesn't mean you can allocate 6G worth of MapReduce containers on them
    – cricket_007
    Nov 10 at 20:34
  • @cricket_007 I completely understand; I was using the term dn loosely for the slave machines. And yes, I agree 6G on the machines does not really translate to 6 containers...as you have other daemons running as well like DN, NM, AM etc....what I meant here was "6G 6 cores available for MR". If we abstract out the 6G and 6 core part, and just say that each machine has 3 Mapper and 3 Reducer Slots, then how do the schedulers behave in the 2 cases above?
    – Sheel Pancholi
    Nov 11 at 5:19
  • Well, FIFO is the easiest - it waits for jobs to complete before continuing. Capacity will perform bin-packing, and Fair will preempt/pause/kill other jobs, I believe
    – cricket_007
    Nov 11 at 7:49
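
(A sketch of the preemption behaviour mentioned above, for the YARN Fair Scheduler; the queue name is hypothetical and the numbers are illustrative:)

<!-- yarn-site.xml (sketch): enable Fair Scheduler preemption -->
<property>
  <name>yarn.scheduler.fair.preemption</name>
  <value>true</value>
</property>

<!-- fair-scheduler.xml allocation file (sketch; "etl" is a hypothetical queue) -->
<allocations>
  <queue name="etl">
    <minResources>6144 mb,6 vcores</minResources>                <!-- guaranteed minimum share -->
    <minSharePreemptionTimeout>60</minSharePreemptionTimeout>    <!-- seconds below min share before preempting other queues -->
  </queue>
</allocations>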
  • Thanks @cricket_007. There are 2 cases. In the first case, there are enough mapper slots free in dn1 to dn4 (read slave 1 to slave 4) despite job Y running; are you saying FIFO still does not schedule the new job X? In the second case, irrespective of the scheduling type, there are no mapper slots available in dn1 to dn5. dn6 is free. But dn6 does not have the data blocks required for job X. Can't the job be scheduled to run on this free slave dn6 now and have data blocks/input splits from the relevant slave nodes be transferred over the network into the memory of the mapper tasks in dn6?
    – Sheel Pancholi
    Nov 11 at 9:29
  • Are we assuming that all "slots" require the same amount of memory, and that the input data is also consistently sized? I think FIFO packs jobs in until no resources are left, and then the others wait. Capacity and Fair, on the other hand, let individual queues set a percentage of memory aside. Bursting workload queues are allowed to take resources away from the others, so it's not based on single "slots" of any one node, but rather on all available slots of the whole cluster, for a given queue
    – cricket_007
    Nov 11 at 17:39
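
(A minimal sketch of the per-queue capacity idea described in the comment above, for the YARN Capacity Scheduler; the queue names and percentages are hypothetical. Setting maximum-capacity above capacity is what lets a bursting queue borrow idle resources from the rest of the cluster:)

<!-- capacity-scheduler.xml (sketch): two queues sharing the cluster -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,analytics</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>70</value>    <!-- guaranteed 70% of cluster resources -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.analytics.capacity</name>
  <value>30</value>    <!-- guaranteed 30% -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.analytics.maximum-capacity</name>
  <value>60</value>    <!-- may burst to 60% when the cluster is otherwise idle -->
</property>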