Slow Dask performance compared to native sklearn










I'm new to using Dask but have experienced painfully slow performance when attempting to rewrite native sklearn functions in Dask. I've simplified the use case as much as possible in the hope of getting some help.



Using standard sklearn/numpy/pandas etc., I have the following:



import pandas as pd
from sklearn import linear_model

df = pd.read_csv(location, index_col=False)  # a ~75MB CSV
# Build the feature DataFrame (features) and dependent variable (dependent); code omitted as irrelevant

model = linear_model.Lasso(alpha=0.1, normalize=False, max_iter=100, tol=Tol)
model.fit(features.values, dependent)
print(model.coef_)
print(model.intercept_)


This takes a few seconds to compute. I then have the following in Dask:



# Read in the CSV and prepare the parameters as before, but using Dask dataframes/arrays instead
import joblib
from dask_glm.estimators import LinearRegression

with joblib.parallel_backend('dask'):
    # Coerce data
    X = self.features.to_dask_array(lengths=True)
    y = self.dependents

    # Build regression
    lr = LinearRegression(fit_intercept=True, solver='admm', tol=self.tolerance,
                          regularizer='l1', max_iter=100, lamduh=0.1)
    lr.fit(X, y)

print(lr.coef_)
print(lr.intercept_)


This takes ages to compute (about 30 minutes). I only have one Dask worker in my development cluster, but it has 16 GB of RAM and unbounded CPU.



Does anyone have any idea why this is so slow?



Hopefully my code omissions aren't significant!



NB: This is the simplest use case; before people ask why even use Dask, this was a proof-of-concept exercise to check that things would function as expected.










python scikit-learn dask

asked Nov 15 '18 at 13:32, edited Nov 15 '18 at 14:04 by Sykomaniac
  • You are comparing two completely different algorithms (hint: coordinate descent, a first-order method, vs. Newton-type second-order methods that use the Hessian). – sascha, Nov 15 '18 at 13:52

  • @sascha Sorry, that was supposed to read admm - although what you said may still be true! It was left over from me trying to figure out the speed. – Sykomaniac, Nov 15 '18 at 14:03

  • In addition to the above (different algorithms), are you getting burned on IPC overhead? – shadowtalker, Nov 15 '18 at 14:09















1 Answer

A quote from the documentation you may want to consider:




For large arguments that are used by multiple tasks, it may be more efficient to pre-scatter the data to every worker, rather than serializing it once for every task. This can be done using the scatter keyword argument, which takes an iterable of objects to send to each worker.
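In this setup, that suggestion would look roughly like the sketch below. It is only an illustration, not code from the question: the scheduler address, X_train/y_train (plain in-memory NumPy arrays) and the grid search are placeholders standing in for a workload that actually fans out many joblib tasks.

import joblib
from dask.distributed import Client
from sklearn import linear_model
from sklearn.model_selection import GridSearchCV

client = Client("tcp://scheduler-address:8786")  # placeholder address for the existing cluster

# A joblib-parallelised workload (many small fits) is where the Dask backend can help
search = GridSearchCV(linear_model.Lasso(max_iter=100),
                      param_grid={"alpha": [0.01, 0.1, 1.0]},
                      n_jobs=-1)

# scatter= ships the large, shared arrays to the workers once up front,
# instead of serialising them again for each of the search's fit tasks
with joblib.parallel_backend("dask", scatter=[X_train, y_train]):
    search.fit(X_train, y_train)

Note that a single Lasso.fit, as in the question, does not fan out joblib tasks internally, so the joblib backend (and scattering) has little to parallelise there.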




But in general, Dask has a lot of diagnostics available to you, especially the scheduler's dashboard, to help figure out what your workers are doing and how time is being spent - you would do well to investigate it. Other system-wide factors are also very important, as with any computation: how close are you coming to your memory capacity, for instance?
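Concretely, getting at those diagnostics could look like this. It is a minimal sketch: the scheduler address is a placeholder, and lr, X and y are the objects from the question.

from dask.distributed import Client

client = Client("tcp://scheduler-address:8786")  # placeholder address for the development cluster
print(client.dashboard_link)  # open in a browser: task stream, worker memory, transfer times

lr.fit(X, y)  # watch the dashboard while this runs to see where the time goes

# Afterwards, a quick per-worker summary (memory limit, CPU count) from the scheduler
print(client.scheduler_info()["workers"])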



In general, though, Dask is not magic, and when the data fits comfortably into memory anyway, there will certainly be cases where Dask adds significant overhead. Read the documentation carefully on the intended use of the method you are considering: is it supposed to speed things up, or merely allow you to process more data than would normally fit on your system?
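As a point of reference for that distinction, the out-of-core style of use - not something taken from this answer, just a common pattern - looks roughly like the sketch below; the file glob, feature_columns and the "target" column name are placeholders.

import dask.dataframe as dd

# Lazily read CSV data that may be larger than RAM, in chunks
ddf = dd.read_csv("data-*.csv")

# Keep the chunks in (distributed) memory so repeated passes, e.g. ADMM iterations,
# do not re-read and re-parse the files every time
ddf = ddf.persist()

X = ddf[feature_columns].to_dask_array(lengths=True)  # feature_columns: placeholder list of names
y = ddf["target"].to_dask_array(lengths=True)         # "target": placeholder column name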






answered Nov 18 '18 at 17:35 by mdurant








