Databricks CSV cannot find local file










In a program I have a CSV extracted from Excel. I need to upload the CSV to HDFS and save it in Parquet format. Any Python or Spark version is fine, but no Scala, please.



Almost all the discussions I came across use the Databricks CSV reader, but it cannot find the file. Here are the code and the error:



df = sqlContext.read.format("com.databricks.spark.csv") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .option("delimiter", ",") \
    .load("file:///home/rxie/csv_out/wamp.csv")


Error:




java.io.FileNotFoundException: File file:/home/rxie/csv_out/wamp.csv
does not exist




The file path:



ls -la /home/rxie/csv_out/wamp.csv
-rw-r--r-- 1 rxie linuxusers 2896878 Nov 12 14:59 /home/rxie/csv_out/wamp.csv


Thank you.




































csv databricks

asked Nov 13 '18 at 20:58 by mdivk
2 Answers


















I found the issue!

The "file not found" error is actually correct: I was creating the Spark context with setMaster("yarn-cluster"), which means every worker node looks for the CSV file. Of course, all worker nodes except the one that launched the program (where the CSV resides) do not have the file, hence the error. What I should do instead is use setMaster("local").

FIX:

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = SparkConf().setAppName('test').setMaster("local")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
csv = "file:///home/rxie/csv_out/wamp.csv"
df = sqlContext.read.format("com.databricks.spark.csv") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .option("delimiter", ",") \
    .load(csv)

answered Nov 14 '18 at 2:18 by mdivk





Yes, you are right: the file has to be present on all worker nodes. However, you can still read a local file in YARN cluster mode; you just need to distribute the file with addFile:

spark.sparkContext.addFile("file:///your local file path ")

Spark copies the file to every node where an executor is created, so the file can be processed in cluster mode as well. I am using Spark 2.3, so you may need to adjust your Spark context accordingly, but the addFile method stays the same.

Try this with YARN (cluster mode) and let me know if it works for you.

answered Dec 12 '18 at 4:38 by vikrant rana






• Thank you for your input, Vikrant. – mdivk, Dec 12 '18 at 14:12

• You're welcome. Let me know if it works for you. – vikrant rana, Dec 12 '18 at 14:28

• @mdivk, did you check the addFile method? Is it working for you? – vikrant rana, Dec 17 '18 at 16:53