Spark: how to reduce by a column of date type
I am working on a DataFrame that looks as follows:
-------------------------------
| time                | value |
-------------------------------
| 2014-12-01 02:54:00 | 2     |
| 2014-12-01 03:54:00 | 3     |
| 2014-12-01 04:54:00 | 4     |
| 2014-12-01 05:54:00 | 5     |
| 2014-12-02 02:54:00 | 6     |
| 2014-12-02 02:54:00 | 7     |
| 2014-12-03 02:54:00 | 8     |
-------------------------------
The number of samples on each day is fairly random.
I want to keep just one sample per day, such as:
-------------------------------
| time                | value |
-------------------------------
| 2014-12-01 02:54:00 | 2     |
| 2014-12-02 02:54:00 | 6     |
| 2014-12-03 02:54:00 | 8     |
-------------------------------
I don't care which sample I get from each day, but I want to make sure I get exactly one, so there are no duplicate days in the "time" column.
apache-spark apache-zeppelin
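Independent of Spark, the "one sample per day" semantics asked for here can be sketched in plain Python over the sample rows from the table above (a minimal illustration, not a Spark solution):

```python
rows = [("2014-12-01 02:54:00", 2), ("2014-12-01 03:54:00", 3),
        ("2014-12-01 04:54:00", 4), ("2014-12-01 05:54:00", 5),
        ("2014-12-02 02:54:00", 6), ("2014-12-02 02:54:00", 7),
        ("2014-12-03 02:54:00", 8)]

one_per_day = {}
for time, value in rows:
    day = time[:10]                             # "yyyy-MM-dd" prefix of the timestamp
    one_per_day.setdefault(day, (time, value))  # keep the first sample seen for each day

print(sorted(one_per_day.values()))
# -> [('2014-12-01 02:54:00', 2), ('2014-12-02 02:54:00', 6), ('2014-12-03 02:54:00', 8)]
```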
Which programming language are you using? – Psidom, Nov 13 '18 at 18:49
edited Nov 13 '18 at 18:44 by Psidom
asked Nov 13 '18 at 18:41 by Bingchen Liu
2 Answers
You can create a date column first and then dropDuplicates based on the date column. Example with pyspark; the syntax should be similar if you are using Scala or Java:

import pyspark.sql.functions as f

df.withColumn('date', f.to_date('time', 'yyyy-MM-dd HH:mm:ss')) \
  .dropDuplicates(['date']).drop('date').show()

+-------------------+-----+
|               time|value|
+-------------------+-----+
|2014-12-02 02:54:00|    6|
|2014-12-03 02:54:00|    8|
|2014-12-01 02:54:00|    2|
+-------------------+-----+

answered Nov 13 '18 at 18:58 by Psidom
You can use window functions: generate a row_number by partitioning on the date part of the timestamp, then filter on row_number = 1. Check this out:

val df = Seq(("2014-12-01 02:54:00","2"),("2014-12-01 03:54:00","3"),("2014-12-01 04:54:00","4"),("2014-12-01 05:54:00","5"),("2014-12-02 02:54:00","6"),("2014-12-02 02:54:00","7"),("2014-12-03 02:54:00","8"))
  .toDF("time","value")
  .withColumn("time", 'time.cast("timestamp"))  // chain the casts: DataFrames are immutable,
  .withColumn("value", 'value.cast("int"))      // so the result must be kept, not discarded
df.createOrReplaceTempView("timetab")
spark.sql(
  """ with order_ts as ( select time, value, row_number() over(partition by date_format(time,"yyyyMMdd") order by value) as rn from timetab)
      select time, value from order_ts where rn=1
  """).show(false)

Output:
+-------------------+-----+
|time               |value|
+-------------------+-----+
|2014-12-02 02:54:00|6    |
|2014-12-01 02:54:00|2    |
|2014-12-03 02:54:00|8    |
+-------------------+-----+

answered Nov 14 '18 at 3:01 by stack0114106
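The row_number-over-partition pattern above is standard SQL, so it can be tried outside Spark as well. A minimal sketch using Python's built-in sqlite3 module (assumes SQLite ≥ 3.25 for window-function support; `date(time)` stands in for `date_format(time, "yyyyMMdd")`):

```python
import sqlite3

rows = [("2014-12-01 02:54:00", 2), ("2014-12-01 03:54:00", 3),
        ("2014-12-01 04:54:00", 4), ("2014-12-01 05:54:00", 5),
        ("2014-12-02 02:54:00", 6), ("2014-12-02 02:54:00", 7),
        ("2014-12-03 02:54:00", 8)]

con = sqlite3.connect(":memory:")
con.execute("create table timetab (time text, value int)")
con.executemany("insert into timetab values (?, ?)", rows)

# Same shape as the Spark SQL answer: number rows within each day, keep rn = 1
result = con.execute("""
    with order_ts as (
        select time, value,
               row_number() over (partition by date(time) order by value) as rn
        from timetab)
    select time, value from order_ts where rn = 1
    order by time
""").fetchall()
print(result)
```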