How can a computer run an infinite loop? (score 4)
I understand the concept behind for and do/while loops, but I am trying to understand what is happening at the hardware level that allows a loop to run infinitely. Technically wouldn't it have to stop at some point because there are only a couple billion transistors in a microprocessor? Maybe my logic is off.
Tags: loops, control-flow
Comments:

A mechanical counter can only memorize a few digits, but it can loop forever. After 9999, we restart from 0000. Your argument only proves that in an infinite computation, the processor will eventually visit the same state twice, entering a loop. – chi, Nov 9 at 17:33 (score 3)

See this comment. – philipxy, Nov 9 at 19:58 (score 2)

"Only a couple billion transistors": the same transistors get used over and over while the processor is running. – Barmar, Nov 9 at 20:23 (score 5)

A running track has a finite size, but you can run around it as many times as you like. – Barmar, Nov 9 at 20:24 (score 9)

A Cray can finish an infinite loop in 3 seconds. – kelalaka, 6 hours ago
asked Nov 9 at 9:21 by Cody Rutscher; edited yesterday by Bergi
5 Answers
Answer (score 24, accepted) – ratchet freak, Nov 9 at 10:19
step 1: take a calculator
step 2: input a number
step 3: add 1 to the number
step 4: subtract 1 from the number
step 5: goto step 3
If you never got tired or bored, you would keep switching between the two results forever. Computers don't get tired or bored.
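The five steps above can be sketched in Python. This is a toy illustration, not real calculator hardware: the only liberty taken is capping the iterations so the demo terminates, where the answer's loop would run forever.

```python
# A bounded sketch of the calculator loop above: the same two values
# alternate forever; we cap the iterations only so the demo terminates.
def calculator_loop(start, steps):
    seen = []
    n = start            # step 2: input a number
    for _ in range(steps):
        n += 1           # step 3: add 1
        seen.append(n)
        n -= 1           # step 4: subtract 1
        seen.append(n)   # back to the start value: the state repeats
    return seen          # step 5 ("goto step 3") is the for-loop jumping back

print(calculator_loop(7, 3))  # [8, 7, 8, 7, 8, 7]
```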
Good answer, but it leaves out an important word that appears in some of the others: "state". Your answer illustrates that even though a real computer can only have a finite number of possible states, nothing prevents it from cycling through some subset of those states, over and over again, forever. – Solomon Slow, yesterday (score 2)

I'd propose to remove state altogether, as it's not mandatory. – phresnel, 14 hours ago (score 1)
Answer (score 28) – Klorax, Nov 9 at 11:13 (edited 2 days ago)
In a contemporary processor there is, among many other things, a register (a digital electronic component that holds some bits) called the Program Counter (PC). It holds the memory address of the current machine instruction.
During normal program flow, the PC is updated to address the next instruction. However, any loop is implemented with so-called branch or jump instructions, which make the PC address some other instruction rather than the next one.
The simplest infinite loop is one in which the PC is updated to point back at the same instruction.
An assembly (MIPS) code example (beq stands for Branch on Equal):
LABEL: beq $0, $0, LABEL   # If 0 == 0, then goto LABEL.
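The PC mechanism can be sketched as a toy interpreter. This is an illustrative model, not any real instruction set: a "jump" sets the PC to a target address, anything else advances it, and a jump targeting its own address reproduces the one-instruction infinite loop from the MIPS example. Execution is bounded by max_steps only so the demo halts.

```python
# Toy machine with a program counter (PC): each instruction either
# advances the PC or jumps it elsewhere.
def run(program, max_steps):
    pc = 0
    trace = []
    for _ in range(max_steps):
        op, arg = program[pc]
        trace.append(pc)
        if op == "jump":
            pc = arg      # branch: PC set to the target address
        else:
            pc += 1       # normal flow: PC advances to the next instruction
    return trace

# Instruction 2 jumps to itself: the PC never moves past address 2.
program = [("nop", None), ("nop", None), ("jump", 2)]
print(run(program, 6))  # [0, 1, 2, 2, 2, 2]
```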
Answer (score 5) – hobbs, Nov 9 at 17:26
At the most basic level, an "infinite loop" only requires two transistors — an astable multivibrator will alternate between two different digital states indefinitely as long as it has power. In some sense, every other loop is just a bigger version of this. You don't need more and more transistors to keep operating for a longer period of time.
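The multivibrator's behavior can be sketched as a trivial simulation: one bit flips on every tick, and as long as there are ticks (power), it alternates indefinitely. This is a toy model of the circuit, not an electrical simulation.

```python
# A two-state "astable multivibrator" as a toy simulation: Q flips on
# every tick; given power (ticks), it alternates indefinitely.
def oscillate(ticks):
    q = False
    history = []
    for _ in range(ticks):
        q = not q          # the only "computation": toggle the one state bit
        history.append(q)
    return history

print(oscillate(4))  # [True, False, True, False]
```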
But notice that the astable multivibrator has only two states: Q is on, then Q is off, then Q is on again. It can only do two things before it repeats; there is nowhere else for it to go. This is a general limitation: a system with finite state can only pass through a finite number of states before it repeats.

The more transistors you have, the more "interesting stuff" you can do inside your infinite loop. With "only a couple billion" transistors there are on the order of 2 raised to the power of a couple billion possible combinations, an astronomically large number. And that's not even counting the RAM, which adds billions (or even a trillion) more bits of state, making the number of possible states the system can be in truly massive. As such, it's possible (even fairly easy) to write a program that, e.g., generates prime numbers and keeps running for longer than the probable lifetime of the hardware (or even the solar system) without ever revisiting a combination of on/off transistors.
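The prime-number example can be sketched as an unbounded generator. The loop itself has no exit condition; only the caller's islice() bounds it, mirroring how an "infinite" program keeps moving through ever-new states without halting. Trial division against previously found primes is just the simplest method, chosen for brevity.

```python
from itertools import count, islice

def primes():
    """Yield primes forever: the for-loop over count(2) never terminates."""
    found = []
    for n in count(2):                    # 2, 3, 4, ... with no upper bound
        if all(n % p for p in found):     # not divisible by any earlier prime
            found.append(n)
            yield n

# Only the consumer decides when to stop asking for more.
print(list(islice(primes(), 8)))  # [2, 3, 5, 7, 11, 13, 17, 19]
```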
Answer (score 4) – Graham, 2 days ago (edited 8 hours ago)
You only need two transistors for that, as demonstrated by the old joke.
How do you keep a (insert ethnicity here) busy forever?
Write "Please turn over" on both sides of a sheet of paper.
It's as true for transistors as for humans with a sheet of paper. In this case, two transistors make an astable multivibrator, and they'll toggle between true and false for as long as there's power.
Answer (score 1) – Artelius, 2 days ago
One thing other answers have missed is that CPUs have a clock source: an external device (usually a quartz crystal oscillator) that switches from 0 to 1 and back a fixed number of times per second. This clock signal is what moves the CPU from one instruction to the next. If the clock is too fast, the transistors that "perform" the instruction haven't finished charging up, and the results are garbled.
Modern CPUs do very complicated things for performance reasons but this is the basic idea.
Maybe that is enough to understand how the CPU moves from one thing to another and then back to the beginning, but I will go into a bit more detail.
Inside the CPU are memory cells, built from a fixed number of transistors, whose contents can be overwritten with new values. There is also a fixed number of transistors that perform various computations: addition, comparison, and so on. The clock opens a gate that lets data "flow" out of the memory cells, through the "compute" portion, and then allows the result back into the memory cells.
So if your instructions are as follows:
1. x = 5
2. y = x + 1
3. Go to step 1
the processor will end up in exactly the same state when it starts each loop. (Well, in theory.)
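The three-step program above can be sketched directly. This is a toy rendering, not a processor model: the for-loop plays the role of "go to step 1", and recording (x, y) after each pass shows the machine's entire state is identical every time around, which is why the loop can run forever.

```python
# Sketch of the three-step loop: after each pass the whole state (x, y)
# is identical, so nothing ever forces the loop to stop.
def run_loop(iterations):
    states = []
    for _ in range(iterations):   # step 3: "go to step 1" is this jump back
        x = 5                     # step 1
        y = x + 1                 # step 2
        states.append((x, y))
    return states

print(run_loop(3))  # [(5, 6), (5, 6), (5, 6)]
```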