Spark: how to reduce by a column whose data type is date

I am working on a DataFrame that looks as follows:

-------------------------------
| time                | value |
-------------------------------
| 2014-12-01 02:54:00 | 2     |
| 2014-12-01 03:54:00 | 3     |
| 2014-12-01 04:54:00 | 4     |
| 2014-12-01 05:54:00 | 5     |
| 2014-12-02 02:54:00 | 6     |
| 2014-12-02 02:54:00 | 7     |
| 2014-12-03 02:54:00 | 8     |
-------------------------------

The number of samples on each day is fairly random.

I want to keep just one sample for each day, such as:

-------------------------------
| time                | value |
-------------------------------
| 2014-12-01 02:54:00 | 2     |
| 2014-12-02 02:54:00 | 6     |
| 2014-12-03 02:54:00 | 8     |
-------------------------------

I don't care which sample I get from a given day, but I want to make sure I get exactly one, so there are no duplicate days in the "time" column.

Tags: apache-spark apache-zeppelin

asked Nov 13 '18 at 18:41 by Bingchen Liu; edited Nov 13 '18 at 18:44 by Psidom

  • Which programming language are you using? – Psidom, Nov 13 '18 at 18:49

2 Answers

You can create a date column first and then dropDuplicates based on the date column. Here is an example with PySpark; the syntax should be similar if you are using Scala or Java:



import pyspark.sql.functions as f

df.withColumn('date', f.to_date('time', 'yyyy-MM-dd HH:mm:ss')) \
  .dropDuplicates(['date']).drop('date').show()

+-------------------+-----+
|               time|value|
+-------------------+-----+
|2014-12-02 02:54:00|    6|
|2014-12-03 02:54:00|    8|
|2014-12-01 02:54:00|    2|
+-------------------+-----+
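
Since the answer notes the Scala syntax is similar, here is a minimal Scala sketch of the same approach, assuming df holds the "time" column as a string in the format above:

import org.apache.spark.sql.functions.{col, to_date}

df.withColumn("date", to_date(col("time"), "yyyy-MM-dd HH:mm:ss")) // derive a day-level helper column
  .dropDuplicates("date")                                          // keep one arbitrary row per day
  .drop("date")                                                    // remove the helper column
  .show()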

answered Nov 13 '18 at 18:58 by Psidom

You can use window functions: generate a row_number by partitioning on the date value, then filter on row_number = 1.

Check this out:



val df = Seq(("2014-12-01 02:54:00","2"),("2014-12-01 03:54:00","3"),("2014-12-01 04:54:00","4"),
             ("2014-12-01 05:54:00","5"),("2014-12-02 02:54:00","6"),("2014-12-02 02:54:00","7"),
             ("2014-12-03 02:54:00","8"))
  .toDF("time","value")
  .withColumn("time", 'time.cast("timestamp"))  // cast the string column to timestamp
  .withColumn("value", 'value.cast("int"))      // cast the string column to int
df.createOrReplaceTempView("timetab")
spark.sql(
  """ with order_ts as ( select time, value,
        row_number() over(partition by date_format(time,"yyyyMMdd") order by value) as rn
      from timetab )
      select time, value from order_ts where rn = 1
  """).show(false)


Output:

+-------------------+-----+
|time               |value|
+-------------------+-----+
|2014-12-02 02:54:00|6    |
|2014-12-01 02:54:00|2    |
|2014-12-03 02:54:00|8    |
+-------------------+-----+
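
The same window logic can also be expressed with the DataFrame API instead of SQL. A minimal sketch using the same df as above (the helper column name "rn" is just illustrative):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, date_format, row_number}

// Partition by the calendar day, order within the day by value,
// then keep only the first row of each partition
val w = Window.partitionBy(date_format(col("time"), "yyyyMMdd")).orderBy(col("value"))
df.withColumn("rn", row_number().over(w))
  .filter(col("rn") === 1)
  .drop("rn")
  .show(false)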

answered Nov 14 '18 at 3:01 by stack0114106