DataFrame from Hive table: iterate through each element, perform some operation, and write the result to a DataFrame, RDD, or list

I have a DF with input data as below:

+----+--------+
|col1|    col2|
+----+--------+
| abc|2E2J2K2F|
| bcd|    2K3D|
+----+--------+

My expected output is:

+----+------+
|col1|  col2|
+----+------+
| abc|    2E|
| abc|    2J|
| abc|    2K|
| abc|    2F|
| bcd|    2K|
| bcd|    3D|
+----+------+

Tags: scala, list, apache-spark, dataframe, rdd

asked Nov 14 '18 at 22:21 by Arif Rizwan; edited Nov 15 '18 at 0:28 by bcperth

2 Answers

Use a udf() to split the string and then explode the resulting array. Check this out:



scala> import org.apache.spark.sql.functions.{udf, explode}
import org.apache.spark.sql.functions.{udf, explode}

scala> val df = Seq(("abc","2E2J2K2F"),("bcd","2K3D")).toDF("col1","col2")
df: org.apache.spark.sql.DataFrame = [col1: string, col2: string]

scala> def split2(x:String):Array[String] = x.sliding(2,2).toArray
split2: (x: String)Array[String]

scala> val myudf_split2 = udf ( split2(_:String):Array[String] )
myudf_split2: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,ArrayType(StringType,true),Some(List(StringType)))

scala> df.withColumn("newcol",explode(myudf_split2('col2))).select("col1","newcol").show
+----+------+
|col1|newcol|
+----+------+
| abc| 2E|
| abc| 2J|
| abc| 2K|
| abc| 2F|
| bcd| 2K|
| bcd| 3D|
+----+------+


scala>


Update:



split2() simply splits the string into chunks of two characters each and returns them as an array.
explode() then duplicates the row once per element of the array, pairing each element with the row's other columns.



scala> def split2(x:String):Array[String] = x.sliding(2,2).toArray
split2: (x: String)Array[String]

scala> split2("12345678")
res168: Array[String] = Array(12, 34, 56, 78)

scala> "12345678".sliding(4,4).toArray
res171: Array[String] = Array(1234, 5678)
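
Since the title also asks for the result as an RDD or a list, here is a minimal sketch of pulling the exploded result out in those shapes (the val names result, asRdd and asList are mine, assuming the spark-shell session and the df/myudf_split2 defined above):

// `result` is a hypothetical name for the exploded DataFrame built above.
val result = df.withColumn("newcol", explode(myudf_split2('col2)))
               .select("col1", "newcol")

// As an RDD of Rows:
val asRdd = result.rdd

// As a local Scala List of (col1, newcol) pairs; collect() pulls
// everything to the driver, so only do this for small results.
val asList = result.collect()
                   .map(r => (r.getString(0), r.getString(1)))
                   .toList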

answered Nov 15 '18 at 10:16 by stack0114106, edited Nov 15 '18 at 15:19

  • error: type mismatch; found: Symbol, required: org.apache.spark.sql.Column, at df.withColumn("newcol",explode(myudf_split2('col2))).select("col1","newcol").show. One error found. I have Scala 2.12.x and Spark 1.6.x (working environment)

    – Arif Rizwan
    Nov 15 '18 at 14:32

  • What is your df.schema?

    – stack0114106
    Nov 15 '18 at 14:35

  • Spark 1.6 is old.. I don't think it supports udf functions.. please consider using a Spark 2.x version

    – stack0114106
    Nov 15 '18 at 14:38

  • Sorry, I'm not able to do that, as we have the CDH 5.14 parcel; if there is any alternative, please suggest

    – Arif Rizwan
    Nov 15 '18 at 14:44

  • This Cloudera version might have Spark 2.x.. you might be launching "spark-shell"; please try launching with "spark2-shell" instead. Also, at which step are you getting the error? Is it udf()?

    – stack0114106
    Nov 15 '18 at 14:48
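
For the Spark 1.6 constraint discussed above, one udf-free route is to drop to the RDD and flatMap. A minimal sketch, assuming a Spark 1.6-style spark-shell where sqlContext is predefined, and using the sample data from the question:

import sqlContext.implicits._

val df = Seq(("abc","2E2J2K2F"),("bcd","2K3D")).toDF("col1","col2")

// Emit one (col1, chunk) pair per 2-character slice of col2,
// then rebuild a DataFrame from the tuples.
val exploded = df.rdd
  .flatMap { row =>
    val key = row.getString(0)
    row.getString(1).sliding(2, 2).map(chunk => (key, chunk))
  }
  .toDF("col1", "col2")

exploded.show()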

This resolved the type-mismatch error above: pass the column as df.col("col2") instead of the 'col2 Symbol syntax.

scala> val df = Seq(("abc","2E2J2K2F"),("bcd","2K3D")).toDF("col1","col2")
df: org.apache.spark.sql.DataFrame = [col1: string, col2: string]

scala> def split2(x:String):Array[String] = x.sliding(2,2).toArray
split2: (x: String)Array[String]

scala> val myudf_split2 = udf ( split2(_:String):Array[String] )
myudf_split2: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,ArrayType(StringType,true),Some(List(StringType)))

scala> df.withColumn("newcol",explode(myudf_split2(df.col("col2")))).select("col1","newcol").show



+----+------+
|col1|newcol|
+----+------+
| abc| 2E|
| abc| 2J|
| abc| 2K|
| abc| 2F|
| bcd| 2K|
| bcd| 3D|
+----+------+

answered Nov 15 '18 at 15:16 by Arif Rizwan
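
For completeness, the same result can probably be had with no UDF at all: Spark's built-in split() takes a Java regex, and the lookbehind "(?<=\\G..)" cuts a string after every two characters. A minimal sketch (my suggestion, not from the original answers; noUdf is a hypothetical name):

import org.apache.spark.sql.functions.{col, explode, split}

// "(?<=\\G..)" matches the empty boundary after each pair of characters,
// so split() yields Array("2E", "2J", "2K", "2F") for "2E2J2K2F".
val noUdf = df.withColumn("col2", explode(split(col("col2"), "(?<=\\G..)")))

noUdf.show()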