Numpy Overflow in calculations disrupting code

I am trying to train a neural network in Python 3.7. For this, I am using NumPy to perform the calculations and matrix multiplications. I get this warning



RuntimeWarning: overflow encountered in multiply (when I am multiplying matrices)



This, in turn, produces nan values, which raise further warnings like



RuntimeWarning: invalid value encountered in multiply



RuntimeWarning: invalid value encountered in sign



Now, I have seen many answers related to this question, all explaining why it happens, but I want to know how to solve it. I have tried using the standard math module instead, but that doesn't work either and raises errors like



TypeError: only size-1 arrays can be converted to Python scalars
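The math module accepts only scalars, which is why it raises this TypeError when handed a NumPy array; the NumPy ufuncs are the elementwise equivalents. A minimal illustration of the difference (not from the original post):

```python
import math
import numpy as np

a = np.array([1.0, 2.0, 3.0])

# math.exp expects a single Python float, so an array cannot be converted
try:
    math.exp(a)
except TypeError as err:
    print(err)  # only size-1 arrays can be converted to Python scalars

# np.exp is the elementwise (vectorized) counterpart and works directly
print(np.exp(a))
```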



I know I can use for loops to do the multiplications, but that is computationally very expensive, and it also lengthens and complicates the code a lot. Is there any solution to this problem, perhaps something within NumPy (I am aware that there are ways to handle the warnings, but not to solve them)? If not, is there an alternative to NumPy that doesn't require me to change my code much?



I don't really mind if the precision of my data is compromised a bit. (If it helps, the dtype of the matrices is float64.)
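For context on those limits: float64 tops out around 1.8e308, and anything beyond becomes inf along with the overflow warning above. `np.finfo` reports the exact bounds; a quick sketch:

```python
import numpy as np

info = np.finfo(np.float64)
print(info.max)  # largest finite float64, about 1.8e308

x = np.array([1e200])
with np.errstate(over="warn"):  # the default behaviour for this warning
    y = x * x  # 1e400 exceeds the float64 range, so this becomes inf
print(y)  # [inf]
```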



EDIT:
Here is a dummy version of my code:



import numpy as np

# the layers have different lengths, so the container must be an object array
network = np.array([np.ones(10), np.ones(5)], dtype=object)
for i in range(100000):
    for lindex, layer in enumerate(network):
        network[lindex] *= abs(np.random.random(len(layer))) * 200000


I think the overflow error occurs when I am adding large values to the network.
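One way to keep a loop like the dummy one above from overflowing, if bounded values are acceptable, is to clip after each pass; `np.errstate(over="raise")` also turns the silent warning into an exception, so a traceback points at the exact multiply that blew up. A sketch along those lines (the 1e300 bound and the plain-list container are illustrative choices, not from the question):

```python
import numpy as np

np.random.seed(0)
layers = [np.ones(10), np.ones(5)]  # ragged layers kept in a plain list

# "raise" turns the overflow RuntimeWarning into a FloatingPointError
with np.errstate(over="raise"):
    for _ in range(1000):
        for lindex, layer in enumerate(layers):
            layers[lindex] = layer * np.abs(np.random.random(len(layer))) * 200000
        # clipping after each pass keeps magnitudes well below the
        # float64 ceiling (~1.8e308), so the next multiply cannot overflow
        layers = [np.clip(layer, 0.0, 1e300) for layer in layers]

print(max(layer.max() for layer in layers))  # bounded, no warning raised
```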

  • what's the size of your matrices?
    – Troy
    Nov 7 at 15:14

  • sometimes 2, sometimes 3 dimensional
    – ѕняєє ѕιиgнι
    Nov 7 at 16:20

  • @brainfuck4d I think the details I have provided are sufficient, basically, overflow error occurs and I do not know how to get my result, and overcome the error, and my code is quite big, so it would confuse many people
    – ѕняєє ѕιиgнι
    Nov 7 at 17:56

  • @brainfuck4d But, I don't feel comfortable, giving away my hard-worked code, open on the internet
    – ѕняєє ѕιиgнι
    Nov 7 at 20:53

  • If you actually want people to help you, just posting a bounty isn't going to cut it. You've already had two other people tell you the same thing in the comments: there just aren't enough details in your question (regardless of how you may feel about it). You're going to have to share at least a little bit of your code. If that makes you uncomfortable, you can always write up a simpler dummy version of your problem that hides all of the details (see this doc for suggestions on how to do that). It just has to reproduce the same error.
    – tel
    Nov 10 at 15:31

python python-3.x numpy runtime-error

edited Nov 11 at 6:48

asked Nov 7 at 14:18

ѕняєє ѕιиgнι

1 Answer
This is a problem I have also faced with my neural network when using the ReLU activation, because of its unbounded range on the positive side. There are two solutions:



A) Use another activation function with a limited range, such as atan, tanh, or sigmoid.



However, if you do not find those suitable:



B) Dampen the ReLU activations. This can be done by scaling down all values of the ReLU and ReLU-derivative functions. Here's the difference in code:



##Normal Code
def ReLu(x, derivative=False):
    if derivative:
        return 0 if x < 0 else 1
    return 0 if x < 0 else x

##Adjusted Code
def ReLu(x, derivative=False):
    scaling_factor = 0.001
    if derivative:
        return 0 if x < 0 else scaling_factor
    return 0 if x < 0 else scaling_factor * x


Since you are willing to compromise a bit on precision, this is a good fit for you. At the end, you can multiply by the inverse of scaling_factor to recover an approximate result (approximate because of rounding discrepancies).
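The functions above work on one scalar at a time; a vectorized variant of the same idea (a sketch, using `np.where`) applies the scaling to whole NumPy arrays in a single call:

```python
import numpy as np

def relu(x, derivative=False, scaling_factor=0.001):
    """Scaled ReLU that operates elementwise on NumPy arrays."""
    x = np.asarray(x, dtype=np.float64)
    if derivative:
        return np.where(x < 0, 0.0, scaling_factor)
    return np.where(x < 0, 0.0, scaling_factor * x)

a = np.array([-2.0, 0.5, 3.0])
print(relu(a))                   # 0 for negatives, 0.001*x otherwise
print(relu(a, derivative=True))  # 0 for negatives, 0.001 otherwise
```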

answered Nov 10 at 16:53 (accepted, +50 bounty)
        Vikhyat Agarwal