Curve-fitting over an integral containing both a data array and a function in Python

























I have a data set described by an integral with unknown constants that I am attempting to determine using Python's curve_fit. However, the integrand contains a function multiplied by a data set:



def integrand(tm, Pm, args):
    dt, alpha1, alpha2 = args
    return Pm * (1 - np.exp(-alpha1 * (tm + dt))) * np.exp(-alpha2 * (tm + dt))


Here Pm is a 1-D array of collected impulse-response data. (Image: impulse data and integral curve.)





The orange curve represents the impulse data, and the blue curve is what the integral should evaluate to.



tm is the variable of integration; dt, alpha1, and alpha2 are unknown constants; and the bounds of integration run from 0 to tm.
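Restating the model explicitly (my notation, not part of the original post; the constant dt is written as \Delta t to avoid clashing with the differential), the curve being fitted is

I(t_m) = \int_0^{t_m} P_m(t)\,\bigl(1 - e^{-\alpha_1 (t + \Delta t)}\bigr)\, e^{-\alpha_2 (t + \Delta t)}\, dt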



What would be the best way to perform a curve fit on an integral of this kind, or possibly other ways to solve for the unknown constants?
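One possible approach (a minimal sketch of my own, not from the original post) is to evaluate the running integral numerically on the sample grid with scipy.integrate.cumulative_trapezoid and hand that model directly to scipy.optimize.curve_fit. The array target below is an assumed name for the blue output curve resampled onto the tm grid, and the p0 values are placeholders only.

import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import curve_fit

def model(tm, dt, alpha1, alpha2):
    # integrand sampled on the tm grid; Pm is taken from the enclosing scope
    # and is assumed to be sampled on the same grid as tm
    g = Pm * (1 - np.exp(-alpha1 * (tm + dt))) * np.exp(-alpha2 * (tm + dt))
    # running integral from 0 up to each tm (initial=0 keeps the output the same length as tm)
    return cumulative_trapezoid(g, tm, initial=0.0)

# popt, pcov = curve_fit(model, tm, target, p0=(10.0, 1.0, 1.0))

If the output curve is sampled on a different time axis, something like np.interp(tm, tout, output) could be used first to resample it onto the tm grid.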



The data sets are linked here.










python signal-processing curve-fitting

      edited Nov 14 '18 at 20:43







      J.zendejas

















      asked Nov 14 '18 at 18:29









J.zendejas























          1 Answer
































From the length of the data sets, it seems that the intent is to fit integrand(t) to output(t+dt). There are several functions in the scipy.optimize module that can be used to do this. As a simple example, here is an implementation using scipy.optimize.leastsq(). For further details, see the scipy.optimize tutorial.



The basic scheme is to write a function that evaluates the model over the independent coordinate and returns a numpy array of residuals, i.e. the differences between the model and the observations at each point. leastsq() then finds the parameter values that minimize the sum of the squares of the residuals.



Note, as a caveat, that the fit can be sensitive to the initial guess. Simulated annealing is often used to locate a likely global minimum and provide a rough estimate of the fit parameters before refining the fit. The initial-guess values used here are for illustration only.



from scipy.optimize import leastsq
import numpy as np

# Read the data files
Pm = np.array([float(v) for v in open("impulse_data.txt", "r").readlines()])
print(type(Pm), Pm.shape)

tm = np.array([float(v) for v in open("Impulse_time_axis.txt", "r").readlines()])
print(type(tm), tm.shape)

output = np.array([float(v) for v in open("Output_data.txt", "r").readlines()])
print(type(output), output.shape)

tout = np.array([float(v) for v in open("Output_time_axis.txt", "r").readlines()])
print(type(tout), tout.shape)

# Define the function that calculates the residuals
def residuals(coeffs, output, tm):
    dt, alpha1, alpha2 = coeffs
    res = np.zeros(tm.shape)
    for n, t in enumerate(tm):
        # integrate the sampled integrand from 0 up to "t"
        value = np.sum(Pm[:n] * (1 - np.exp(-alpha1 * (tm[:n] + dt))) * np.exp(-alpha2 * (tm[:n] + dt)))
        # retrieve the output sample closest to t + dt
        n1 = (np.abs(tout - (t + dt))).argmin()
        # construct the residual
        res[n] = output[n1] - value
    return res

# Initial guess for the parameters
x0 = (10., 1., 1.)

# Run the least squares routine
coeffs, flag = leastsq(residuals, x0, args=(output, tm))

# Report the results
print(coeffs)
print(flag)
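As a rough sketch of the annealing-then-refine idea mentioned above (my addition, not part of the original answer), scipy.optimize.dual_annealing could be used to search for a starting point before the leastsq refinement. The parameter bounds below are placeholders and would need to be chosen from knowledge of the data.

from scipy.optimize import dual_annealing

def sum_of_squares(coeffs):
    # scalar objective for the annealer: sum of squared residuals
    r = residuals(coeffs, output, tm)
    return np.dot(r, r)

# placeholder bounds for (dt, alpha1, alpha2)
bounds = [(0.0, 100.0), (1e-3, 10.0), (1e-3, 10.0)]
rough = dual_annealing(sum_of_squares, bounds, maxiter=200)

# refine the annealing result with least squares
coeffs, flag = leastsq(residuals, rough.x, args=(output, tm))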





edited Nov 15 '18 at 13:15
answered Nov 14 '18 at 19:30
DrM

























• Attempting this method led to the error: "The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()". On your suggestion I've included a link to the relevant data sets.

            – J.zendejas
            Nov 14 '18 at 20:45












• Okay, this was meant to be a schematic of how to go about it. I'll try to find some time to work up a specific example using your data.

            – DrM
            Nov 14 '18 at 23:10











• @J.zendejas. That's it, let's see if that does it for you. You will need a reasonable guess for the parameters. It's a lengthy fit because of the integral, in essence a loop over t at each t.

            – DrM
            Nov 15 '18 at 3:18











• I haven't been able to get it to work completely yet, but it has helped me make progress. I'll keep working with it. Thank you for your help!

            – J.zendejas
            Nov 28 '18 at 9:54











          • It might be worthwhile to try simulated annealing as a preliminary round, and then refine the result with least squares. But, be aware that some annealers do a refinement on each candidate configuration of the parameters (I recall that the annealer in the optimize module does it this way). That's often unnecessary, and might make it too expensive (i.e., slow) for your problem. You most likely want to save the least squares until after you select the configuration that corresponds to the best local minimum. I will try to find some time to look into this a bit and perhaps edit the answer.

            – DrM
            Nov 29 '18 at 13:58









