memory consumed by a thread in python
Here I can get the time taken by a thread to complete. How can I get the memory consumed by the thread?
import threading
import time

class mythread(threading.Thread):
    def __init__(self, i, to):
        threading.Thread.__init__(self)
        self.h = i
        self.t = to
        self.st = 0
        self.end = 0

    def run(self):
        self.st = time.time()
        ls = []
        for i in range(self.t):
            ls.append(i)
            time.sleep(0.002)
        self.end = time.time()
        print("total time taken by {} is {}".format(self.h, self.end - self.st))

thread1 = mythread("thread1", 10)
thread2 = mythread("thread2", 20)
thread1.start()
thread2.start()
thread1.join()
thread2.join()
Tags: python, multithreading
I want the memory used while running only. – Shubham Kumar, Nov 10 at 19:04
You could use cprofiler and pass it to kcachegrind. If you don't know what I'm talking about, read up on debugging processes and memory analysis. – Torxed, Nov 10 at 19:21
I wonder if you know what you measure using time.time(). It is not the CPU time of the threads, since time.time() returns the wall-clock time, i.e. the actual time of the system. Since both threads are started more or less in parallel, they will execute at the same time, and you will basically measure their combined time (together with anything else that happens to execute on the CPU at the time). Perhaps you want to look into time.thread_time() instead? – JohanL, Nov 11 at 7:16
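As the comment suggests, time.thread_time() (available since Python 3.7) reports the CPU time of the calling thread only, excluding time spent sleeping or waiting. A minimal sketch contrasting it with time.time():

```python
import threading
import time

def worker(n):
    wall_start = time.time()          # wall-clock time, shared by all threads
    cpu_start = time.thread_time()    # CPU time of *this* thread (Python 3.7+)
    total = 0
    for i in range(n):
        total += i * i
        time.sleep(0.001)             # sleeping advances wall time, not CPU time
    print("wall: {:.3f}s  cpu: {:.3f}s".format(
        time.time() - wall_start, time.thread_time() - cpu_start))

t = threading.Thread(target=worker, args=(100,))
t.start()
t.join()
```

Because the loop mostly sleeps, the reported CPU time will be far smaller than the wall time, which is exactly the distinction JohanL is pointing at.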
asked Nov 10 at 18:59 by Shubham Kumar
1 Answer
(This is a bit of a non-answer I'm afraid, but I'd argue that's due to the nature of the subject matter...)
The notion of thread memory usage is not a well defined one. Threads share their memory. The only truly thread-local memory is its call stack, and unless you do something seriously recursive, that's not the interesting part.
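That call-stack memory is also the one piece you can control per thread: threading.stack_size() lets you inspect, and on most platforms set, the stack size used for threads started afterwards. A sketch (the minimum accepted value is platform-dependent, commonly 32 KiB):

```python
import threading

# With no argument, returns the current setting; 0 means "platform default".
print("current stack size hint:", threading.stack_size())

# Request a 1 MiB stack for threads created from now on.
# Some platforms may reject values below their minimum.
threading.stack_size(1024 * 1024)

t = threading.Thread(target=lambda: None)
t.start()
t.join()

threading.stack_size(0)  # restore the platform default
```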
The ownership of "normal" memory isn't that simple. Consider this code:
import json
import threading
import time

data_dump = {}

class MyThread(threading.Thread):
    def __init__(self, name, limit):
        threading.Thread.__init__(self)
        self.name = name
        self.limit = limit
        data_dump[name] = []

    def run(self):
        start = time.monotonic()
        for i in range(self.limit):
            data_dump[self.name].append(str(i))
            time.sleep(0.1)
        end = time.monotonic()
        print("thread wall time: {:.2f} s".format(end - start))

t1 = MyThread(name="one", limit=10)
t2 = MyThread(name="two", limit=12)
t1.start()
t2.start()
t1.join()
t2.join()
del t1
del t2

print(json.dumps(data_dump, indent=4))
The output of data_dump will show you all the strings appended (and thus, allocated) by the threads. However, at the time of the output (the final print), who owns the memory? Both threads have gone out of existence, yet it is still accessible and thus not a leak. Threads don't own memory (beyond their call stack); processes do.
Depending on what you want to do with these memory consumption numbers, it might help to use cprofiler as recommended by @Torxed.
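If an approximate figure is good enough, the standard-library tracemalloc module can report how much memory was allocated while a thread ran. Note that it traces the whole process, so allocations from any concurrently running threads are counted too, which is exactly the ownership problem described above. A sketch:

```python
import threading
import tracemalloc

def worker(out, n):
    # These allocations happen "in" the thread, but belong to the process.
    out.extend(str(i) for i in range(n))

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()  # (current, peak) in bytes

result = []
t = threading.Thread(target=worker, args=(result, 100_000))
t.start()
t.join()

after, _ = tracemalloc.get_traced_memory()
print("approx. bytes allocated while the thread ran:", after - before)
tracemalloc.stop()
```

Run the thread alone, as here, and the before/after delta is a reasonable estimate; run other threads concurrently and the numbers blur together.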
answered Nov 11 at 9:18 by digitalarbeiter