Spark multi-tenant files normalization into the common schema

I have an S3 bucket where files in different formats from different clients are stored, and new files continuously arrive.



Files from different clients are stored under a CLIENT_ID subfolder. Within each subfolder the files share the same format, but the format may differ from folder to folder. For example, folder CLIENT_1 contains CSV files separated by ",", CLIENT_2 contains CSV files separated by "|", CLIENT_N contains JSON files, and so on.
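To make the per-folder format idea concrete, here is a minimal sketch of a registry that maps each CLIENT_ID folder to the reader options its files would need. The client names and option keys are hypothetical, not taken from any real configuration:

```python
# Hypothetical registry: CLIENT_ID folder -> reader options for its files.
READER_CONFIGS = {
    "CLIENT_1": {"fileFormat": "csv", "sep": ","},
    "CLIENT_2": {"fileFormat": "csv", "sep": "|"},
    "CLIENT_N": {"fileFormat": "json"},
}

def reader_options(client_id: str) -> dict:
    """Look up the reader options registered for a client folder."""
    try:
        return READER_CONFIGS[client_id]
    except KeyError:
        raise ValueError(f"No reader configuration registered for {client_id}")

print(reader_options("CLIENT_2"))  # → {'fileFormat': 'csv', 'sep': '|'}
```

In practice this registry would live in a configuration store rather than a hard-coded dict, and each entry would be passed as `.option(...)` calls to the corresponding reader.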



I may have thousands of such folders, and I need to monitor/ETL all of them (process the existing files and continuously process newly arrived files in these folders). After the ETL, I want the normalized information in my common format, stored in a common table in a database.



Please advise how to properly implement this architecture with AWS and Apache Spark.



I guess I can try to implement it with Spark Structured Streaming and the Databricks S3-SQS connector (https://docs.databricks.com/spark/latest/structured-streaming/sqs.html), but I don't understand where the transformation logic should be placed when using this connector.
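As far as I understand, in Structured Streaming the transformation logic would sit between the `spark.readStream...load()` call and the `writeStream` sink, as ordinary DataFrame operations. The normalization mapping itself can be sketched in plain Python (all field names and per-client aliases here are made up for illustration):

```python
# Common target schema (hypothetical field names).
COMMON_FIELDS = ("client_id", "event_time", "amount")

def normalize(record: dict, client_id: str) -> dict:
    """Map one client-specific record onto the common schema.
    The per-client field aliases below are hypothetical."""
    aliases = {
        "CLIENT_1": {"event_time": "ts", "amount": "value"},
        "CLIENT_2": {"event_time": "timestamp", "amount": "amt"},
    }
    mapping = aliases.get(client_id, {})
    out = {"client_id": client_id}
    for field in COMMON_FIELDS[1:]:
        # Fall back to the common field name when no alias is registered.
        out[field] = record.get(mapping.get(field, field))
    return out

print(normalize({"ts": "2018-11-15", "value": 10}, "CLIENT_1"))
```

In the streaming job the same mapping would be expressed as `select`/`withColumnRenamed` operations on the streaming DataFrame before writing to the common table.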



It is also not clear whether I can monitor different S3 folders with the Databricks S3-SQS connector and provide different spark.readStream configurations, so that I can load files with different schemas and file formats.



Also, is it a good idea to have thousands of spark.readStream instances that monitor thousands of AWS S3 folders independently, like:



spark.readStream
  .format("s3-sqs")
  .option("fileFormat", "json")
  .option("queueUrl", ...)
  .schema(...)
  .load()
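One idea I am considering (my own assumption, not something the connector documents) is to group clients that share the same file format and schema, so each group needs only one queue and one readStream instead of one stream per folder. A plain-Python sketch of the grouping:

```python
from collections import defaultdict

# Hypothetical per-client (format, separator) pairs; in practice this
# would come from a configuration store, not a hard-coded dict.
CLIENT_FORMATS = {
    "CLIENT_1": ("csv", ","),
    "CLIENT_2": ("csv", "|"),
    "CLIENT_3": ("csv", ","),
    "CLIENT_N": ("json", None),
}

def group_clients(formats: dict) -> dict:
    """Group CLIENT_IDs by (format, separator) so clients sharing a
    format/schema could be served by a single stream."""
    groups = defaultdict(list)
    for client, fmt in formats.items():
        groups[fmt].append(client)
    return dict(groups)

print(group_clients(CLIENT_FORMATS))
```

With such grouping, the number of concurrent streams would be bounded by the number of distinct format/schema combinations rather than by the number of clients.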


Please advise. I'd highly appreciate any help with this. Thanks!
      apache-spark amazon-s3 spark-streaming amazon-sqs
      asked Nov 15 '18 at 11:31
alexanoid