HDR images through Core Image?

Is it possible to process (filter) HDR images through Core Image? I couldn't find much documentation on this, so I was wondering if someone had an answer. I do know that it is possible to do the working-space computations in RGBAh when you initialize a CIContext, so I figured that if the computations can be done in a floating-point image format, this should be possible.



If it is not possible, what are the alternatives for producing HDR effects on iOS?



EDIT: I'll try to be a bit more concise. It is my understanding that HDR images can be saved as .jpg, .png, and other image formats by clamping the pixel values. However, I'm more interested in doing tone mapping through Core Image on an HDR image that has not been converted yet. The issue is creating a CIImage from an HDR image, presumably one with the .hdr extension.



EDIT2: Maybe it would be useful to use CGImageCreate along with CGDataProviderCreateWithFilename?
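For illustration, here is a minimal sketch of the EDIT2 idea: build a float-format CGImage from a raw pixel dump via CGDataProviderCreateWithFilename/CGImageCreate, then hand it to Core Image with a half-float working format. The file name, dimensions, and pixel layout are all assumptions; a real loader would parse the .hdr/.tiff header instead of hard-coding them.

```swift
import CoreGraphics
import CoreImage

// Sketch only: "radiance.raw" is a hypothetical headerless dump of
// 32-bit float RGBA scanlines; width/height must be known out of band.
let width = 512, height = 512
let bytesPerRow = width * 4 * MemoryLayout<Float>.size  // RGBA floats

// CGDataProviderCreateWithFilename, as imported into Swift.
guard let provider = CGDataProvider(filename: "radiance.raw") else {
    fatalError("cannot open pixel file")
}

// Float components with alpha last: a 128-bit-per-pixel float image.
let bitmapInfo = CGBitmapInfo(rawValue:
    CGBitmapInfo.floatComponents.rawValue |
    CGImageAlphaInfo.premultipliedLast.rawValue)

// CGImageCreate, as imported into Swift.
guard let cgImage = CGImage(width: width,
                            height: height,
                            bitsPerComponent: 32,
                            bitsPerPixel: 128,
                            bytesPerRow: bytesPerRow,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: bitmapInfo,
                            provider: provider,
                            decode: nil,
                            shouldInterpolate: false,
                            intent: .defaultIntent) else {
    fatalError("cannot build CGImage")
}

// Wrap it for Core Image, and keep the context in half-float working
// format (kCIContextWorkingFormat) so values above 1.0 survive filtering.
let hdrInput = CIImage(cgImage: cgImage)
let context = CIContext(options: [.workingFormat: CIFormat.RGBAh])
```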
Tags: ios, core-graphics, core-image

asked Aug 16 '16 at 6:23 by DaveNine
edited Aug 19 '16 at 2:15

3 Answers

Answer 1 (4 votes, +25 bounty)

I hope you have a basic understanding of how HDR works. An HDR file is generated by capturing two or more images at different exposures and combining them. So even if there were something like an .HDR file, it would be a container format holding more than one JPEG. Technically, you cannot give two image files at once as input to a generic CIFilter.



And on iOS, as I remember, it's not possible to access the original set of photos behind an HDR shot, only the processed final output. Even if you could, you'd have to do the HDR merge manually and generate a single HDR PNG/JPEG anyway before feeding it to a CIFilter.
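To make that last point concrete, here is a hedged sketch of the kind of manual merge being described: a naive 50/50 fusion of two bracketed shots using stock Core Image filters. This is not a real HDR algorithm (no camera-response recovery, no per-pixel weighting), and the two input images are assumed to come from your own bracketed capture.

```swift
import CoreImage

// Naive fusion sketch: average two bracketed exposures with stock
// CIFilters. A real merge would weight each sample by how well exposed
// it is and invert the camera response first.
func naiveExposureFusion(_ darker: CIImage, _ brighter: CIImage) -> CIImage {
    // Scale an image's RGB by 0.5 via CIColorMatrix.
    func halved(_ image: CIImage) -> CIImage {
        image.applyingFilter("CIColorMatrix", parameters: [
            "inputRVector": CIVector(x: 0.5, y: 0, z: 0, w: 0),
            "inputGVector": CIVector(x: 0, y: 0.5, z: 0, w: 0),
            "inputBVector": CIVector(x: 0, y: 0, z: 0.5, w: 0)
        ])
    }
    // Sum the two half-weighted exposures.
    return halved(darker).applyingFilter("CIAdditionCompositing", parameters: [
        kCIInputBackgroundImageKey: halved(brighter)
    ])
}
```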

answered Aug 19 '16 at 4:06 by sleepwalkerfx

Comments:

• Once combined, though, the resulting file can't be used in any way to supply pixel information? Suppose we have such a file externally (hence the .hdr extension I've seen used in sample MATLAB code for tone mapping); can't one simply supply this file and encode the resulting image into a CIImage? Maybe tone mapping HDR images isn't the purpose of Core Image then, even though it is technically possible with custom filters. – DaveNine, Aug 19 '16 at 6:17
• @DavidSacco The first problem is that on iOS there's no way to access the originally captured array of differently exposed images. And once the final HDR is processed, it doesn't hold the pixel information of the source files, because it's technically impossible to store such data in three RGB channels. On iOS, images labeled "HDR" are nothing but JPEGs that have already been produced from multiple photos at different exposures. – sleepwalkerfx, Aug 23 '16 at 11:57
• Thanks, yeah, from my own google-fu that's also what I've learned. I'm wondering if any of the new iOS 10 features for RAW images might help. I also learned that HDR images can be saved in a floating-point .TIFF file, where high intensities are preserved; the user would just have to supply such an image to use tone mapping. You can further ensure that the context computes in floating-point precision with the kCIContextWorkingFormat option. – DaveNine, Aug 24 '16 at 2:35

Answer 2 (1 vote)

Since there are people asking for a Core Image HDR algorithm, I decided to share my code on GitHub:

https://github.com/schulz0r/CoreImage-HDR

It implements the Robertson HDR algorithm, so you cannot use RAW images. Please see the unit tests if you want to know how to get the camera response and obtain the HDR image. Core Image clamps (saturates) pixel values outside [0.0 ... 1.0], so the HDR result is scaled into that interval. Coding in Metal always leads to messy code for me, so I decided to use MetalKitPlus, which you have to include in your project. You can find it here:

https://github.com/LRH539/MetalKitPlus

I think you have to check out the dev/v2.0.0 branch; I will merge it into master in the future.

edit: Just clone the master branch of MetalKitPlus. Also, I added a more detailed description to my CI-HDR project.
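For readers who only want the gist of the algorithm the repository implements, below is a rough CPU sketch of a Robertson-style merge (a paraphrase, not code from the repo). The inverse camera response and the hat weighting are assumptions; the actual project recovers the response from the image stack and runs the merge in Metal.

```swift
import Foundation

// Rough sketch of a Robertson-style merge: per pixel, the radiance
// estimate is E = sum(w * t * f_inv(z)) / sum(w * t * t) over all
// exposures, where z is the sample, t the exposure time, f_inv the
// inverse camera response, and w a mid-tone weighting.
func robertsonMerge(exposures: [[UInt8]],        // one gray plane per shot
                    times: [Double],             // exposure time per shot
                    inverseResponse: [Double]) -> [Double] {
    // Hat weight: trust mid-tones, distrust clipped shadows/highlights.
    func weight(_ z: UInt8) -> Double {
        let v = Double(z) / 255.0
        return max(0.0, 1.0 - pow(2.0 * v - 1.0, 2))
    }
    var radiance = [Double](repeating: 0, count: exposures[0].count)
    for i in radiance.indices {
        var num = 0.0
        var den = 0.0
        for (shot, t) in zip(exposures, times) {
            let z = shot[i]
            let w = weight(z)
            num += w * t * inverseResponse[Int(z)]
            den += w * t * t
        }
        radiance[i] = den > 0 ? num / den : 0
    }
    return radiance
}
```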

answered Feb 27 '18 at 14:03 by Philli, edited Jun 21 '18 at 14:15

Answer 3 (1 vote)

You can now (iOS 10+) capture RAW images (coded on 12 bits) and then filter them however you like using CIFilter. You might not get a dynamic range as wide as the one from bracketed captures; nevertheless, it is still wider than what 8-bit captures give you.

Check Apple's documentation on capturing and processing RAW images.

I also recommend the WWDC 2016 session video by Apple (skip to the RAW processing part).
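As a hedged illustration of what that looks like in code: the file path and adjustment values below are made up, and this assumes the RAW-development route via CIFilter(imageURL:options:) that the WWDC 2016 session demonstrated.

```swift
import CoreImage

// Sketch of Core Image RAW processing (iOS 10+). "photo.dng" is a
// placeholder for a RAW file you captured yourself, e.g. with
// AVCapturePhotoOutput's RAW support.
let rawURL = URL(fileURLWithPath: "photo.dng")

// CIFilter(imageURL:options:) returns the RAW-development filter, whose
// keys control the demosaic/development stage.
guard let rawFilter = CIFilter(imageURL: rawURL, options: nil) else {
    fatalError("not a readable RAW file")
}

// Exposure is applied in scene-linear space during development, so
// pulling it down can recover highlight detail an 8-bit JPEG would clip.
rawFilter.setValue(-1.0, forKey: kCIInputEVKey)
rawFilter.setValue(0.0, forKey: kCIInputNoiseReductionAmountKey)

guard let developed = rawFilter.outputImage else {
    fatalError("RAW development failed")
}

// From here, chain any ordinary CIFilter.
let styled = developed.applyingFilter("CIPhotoEffectChrome")
```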

answered Nov 15 '18 at 11:57 by Rebecca Abi Raad, edited Nov 15 '18 at 12:18 by Billal Begueradj