Aggregate data for the last seven days for each date

I have a dataset:



 app_id  geo  date        count
 90      NO   2018-09-04  27
 66      HK   2018-09-03  2
 66      HK   2018-09-02  4
 80      QA   2018-04-22  5
 85      MA   2018-04-20  1
 80      BR   2018-04-19  68


I am trying to generate a field that, for each date, aggregates the counts over the last seven days. My dataset should look like this:



 app_id  geo  date        count  count_last_7_days
 90      NO   2018-09-04  27     33
 66      HK   2018-09-03  2      6
 66      HK   2018-09-02  4      4
 80      QA   2018-04-22  5      74
 85      MA   2018-04-20  1      69
 80      BR   2018-04-19  68     68
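For example, for 2018-09-04 the expected value 33 is 27 + 2 + 4, i.e. the sum of the counts from 2018-09-02 through 2018-09-04; likewise 74 for 2018-04-22 is 5 + 1 + 68.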


I am trying this code:



 df['date'] = pd.to_datetime(df['date']) - pd.to_timedelta(7, unit='d')
 df = (df.groupby(['geo', 'app_id', pd.Grouper(key='date', freq='W')])['count']
         .sum().reset_index().sort_values('date'))


But even though I use Grouper with weekly frequency (freq='W'), it anchors the weeks on Sunday, so I don't get a seven-day lag for entries that don't fall on a Sunday.
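For example, a minimal check of the binning (an illustration only, separate from my actual data):

    import pandas as pd

    # freq='W' bins by calendar week anchored on Sunday, so 2018-09-02 (a Sunday)
    # lands in a different bin than 2018-09-03, even though they are one day apart
    s = pd.Series(1, index=pd.to_datetime(['2018-09-02', '2018-09-03', '2018-09-04']))
    print(s.groupby(pd.Grouper(freq='W')).sum())
    # 2018-09-02    1
    # 2018-09-09    2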



Please suggest how I can calculate that field.

python pandas date grouping

asked Nov 14 '18 at 17:04
Liza Che

What if you change it to df = df.groupby(['geo','app_id', pd.Grouper(key='date', freq='D')])? – pygo, Nov 14 '18 at 17:17

1 Answer

A dirty one-liner would be:



import numpy as np
df['count_last_7_days'] = [np.sum(df['count'][np.logical_and(df['date'][i] - df['date'] < pd.to_timedelta(7,unit='d'),df['date'][i] - df['date'] >= pd.to_timedelta(0,unit='d'))]) for i in range(df.shape[0])]


Note that I converted the time column to datetime using pd.to_datetime() first.



What this does: for each day, it finds all rows that fall within the desired one-week window (flagging them with a boolean mask) and then sums their counts.
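If the frame gets large, a time-based rolling window should give the same result without the Python loop. This is only a sketch, and it assumes (as in the expected output above) that the sum runs over the whole frame rather than per app_id/geo group:

    import pandas as pd

    # assumes df['date'] is already datetime; sort so the DatetimeIndex is
    # monotonic, which rolling('7D') requires
    df = df.sort_values('date')

    # each window covers (date - 7 days, date], i.e. the current day plus the six days before it
    rolled = df.set_index('date')['count'].rolling('7D').sum()
    df['count_last_7_days'] = rolled.to_numpy()

On the sample data this should reproduce the expected count_last_7_days column.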

answered Nov 15 '18 at 8:54
Lukas Thaler