Raspberry Pi YOLO custom









I am running a custom YOLO model, based on the .cfg below, on a Raspberry Pi.
I do not know whether the problem is caused by low memory, but I do not think so (a memory-check sketch follows the output at the end).



[net]
# Testing
batch=64
subdivisions=32
# Training
batch=64
subdivisions=32
height=416
width=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 80200
policy=steps
steps=40000,60000
scales=.1,.1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky


#######

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[route]
layers=-9

[convolutional]
batch_normalize=1
size=1
stride=1
pad=1
filters=64
activation=leaky

[reorg]
stride=2

[route]
layers=-1,-4

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=30
activation=linear


[region]
anchors = 1.3221, 1.73145, 3.19275, 4.00944, 5.05587, 8.09892, 9.47112, 4.84053, 11.2364, 10.0071
bias_match=1
classes=1
coords=4
num=5
softmax=1
jitter=.3
rescore=1

object_scale=5
noobject_scale=1
class_scale=1
coord_scale=1

absolute=1
thresh = .6
random=1
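
As a side note (not related to the crash itself, but worth checking whenever the class count changes): for a YOLOv2 [region] head, the last [convolutional] layer must have filters = num * (coords + classes + 1). A quick check with the values from the cfg above confirms that filters=30 is consistent:

# filters required by the [region] layer, given num=5, coords=4, classes=1
num, coords, classes = 5, 4, 1
print(num * (coords + classes + 1))   # -> 30, matching filters=30 in the last [convolutional] layer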


When I run it on darkflow-master, the TensorFlow-based implementation of YOLO, I get the following output. The error occurs while setting up the network from the options:



option = {
    'model': 'custom-2/yolo-obj.cfg',
    'load': 'custom-2/yolo-obj_2200.weights',
    'threshold': 0.30,
    # 'gpu': 1.0
}

tfnet = TFNet(option)  # <----- crash happens at this line (line 23 of run_img.py)
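
For context, here is a minimal inference script along these lines; the rest of run_img.py is not shown in the question, so the import, the image path, and the return_predict call below are assumptions based on darkflow's usual usage, not the exact file:

# minimal darkflow inference sketch (assumed shape of run_img.py)
import cv2
from darkflow.net.build import TFNet

option = {
    'model': 'custom-2/yolo-obj.cfg',
    'load': 'custom-2/yolo-obj_2200.weights',
    'threshold': 0.30,
    # 'gpu': 1.0   # no usable GPU on the Pi, so this stays commented out
}

tfnet = TFNet(option)                        # the segfault happens inside this call
img = cv2.imread('sample_img/sample.jpg')    # hypothetical test image
result = tfnet.return_predict(img)           # list of dicts: label, confidence, topleft, bottomright
print(result)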


The output:



>>> %Run run_img.py
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412
return f(*args, **kwds)
Using TensorFlow backend.
/home/pi/Desktop/darkflow-master/darkflow/dark/darknet.py:54: UserWarning: ./cfg/yolo-obj_2200.cfg not found, use custom-2/yolo-obj.cfg instead
cfg_path, FLAGS.model))
Parsing custom-2/yolo-obj.cfg
Loading custom-2/yolo-obj_2200.weights ...
Successfully identified 202314760 bytes
Finished in 0.17995047569274902s

Building net ...
Source | Train? | Layer description | Output size
-------+--------+----------------------------------+---------------
| | input | (?, 416, 416, 3)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 416, 416, 32)
Load | Yep! | maxp 2x2p0_2 | (?, 208, 208, 32)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 208, 208, 64)
Load | Yep! | maxp 2x2p0_2 | (?, 104, 104, 64)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 104, 104, 128)
Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 104, 104, 64)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 104, 104, 128)
Load | Yep! | maxp 2x2p0_2 | (?, 52, 52, 128)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 52, 52, 256)
Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 52, 52, 128)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 52, 52, 256)
Load | Yep! | maxp 2x2p0_2 | (?, 26, 26, 256)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 26, 26, 512)
Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 26, 26, 256)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 26, 26, 512)
Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 26, 26, 256)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 26, 26, 512)
Load | Yep! | maxp 2x2p0_2 | (?, 13, 13, 512)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 13, 13, 512)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 13, 13, 512)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Load | Yep! | concat [16] | (?, 26, 26, 512)
Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 26, 26, 64)
Load | Yep! | local flatten 2x2 | (?, 13, 13, 256)
Load | Yep! | concat [27, 24] | (?, 13, 13, 1280)
Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Load | Yep! | conv 1x1p0_1 linear | (?, 13, 13, 30)
-------+--------+----------------------------------+---------------
Running entirely on CPU
Backend terminated (returncode: -11)
Fatal Python error: Segmentation fault

Thread 0x620ff470 (most recent call first):
File "/usr/lib/python3.5/threading.py", line 293 in wait
File "/usr/lib/python3.5/queue.py", line 164 in get
File "/usr/lib/python3.5/multiprocessing/pool.py", line 429 in _handle_results
File "/usr/lib/python3.5/threading.py", line 862 in run
File "/usr/lib/python3.5/threading.py", line 914 in _bootstrap_inner
File "/usr/lib/python3.5/threading.py", line 882 in _bootstrap

Thread 0x62aff470 (most recent call first):
File "/usr/lib/python3.5/threading.py", line 293 in wait
File "/usr/lib/python3.5/queue.py", line 164 in get
File "/usr/lib/python3.5/multiprocessing/pool.py", line 376 in _handle_tasks
File "/usr/lib/python3.5/threading.py", line 862 in run
File "/usr/lib/python3.5/threading.py", line 914 in _bootstrap_inner
File "/usr/lib/python3.5/threading.py", line 882 in _bootstrap

Thread 0x632ff470 (most recent call first):
File "/usr/lib/python3.5/multiprocessing/pool.py", line 367 in _handle_workers
File "/usr/lib/python3.5/threading.py", line 862 in run
File "/usr/lib/python3.5/threading.py", line 914 in _bootstrap_inner
File "/usr/lib/python3.5/threading.py", line 882 in _bootstrap

Thread 0x63cff470 (most recent call first):
File "/usr/lib/python3.5/threading.py", line 293 in wait
File "/usr/lib/python3.5/queue.py", line 164 in get
File "/usr/lib/python3.5/multiprocessing/pool.py", line 108 in worker
File "/usr/lib/python3.5/threading.py", line 862 in run
File "/usr/lib/python3.5/threading.py", line 914 in _bootstrap_inner
File "/usr/lib/python3.5/threading.py", line 882 in _bootstrap

Thread 0x644ff470 (most recent call first):
File "/usr/lib/python3.5/threading.py", line 293 in wait
File "/usr/lib/python3.5/queue.py", line 164 in get
File "/usr/lib/python3.5/multiprocessing/pool.py", line 108 in worker
File "/usr/lib/python3.5/threading.py", line 862 in run
File "/usr/lib/python3.5/threading.py", line 914 in _bootstrap_inner
File "/usr/lib/python3.5/threading.py", line 882 in _bootstrap

Thread 0x64eff470 (most recent call first):
File "/usr/lib/python3.5/threading.py", line 293 in wait
File "/usr/lib/python3.5/queue.py", line 164 in get
File "/usr/lib/python3.5/multiprocessing/pool.py", line 108 in worker
File "/usr/lib/python3.5/threading.py", line 862 in run
File "/usr/lib/python3.5/threading.py", line 914 in _bootstrap_inner
File "/usr/lib/python3.5/threading.py", line 882 in _bootstrap

Thread 0x658dc470 (most recent call first):
File "/usr/lib/python3.5/threading.py", line 293 in wait
File "/usr/lib/python3.5/queue.py", line 164 in get
File "/usr/lib/python3.5/multiprocessing/pool.py", line 108 in worker
File "/usr/lib/python3.5/threading.py", line 862 in run
File "/usr/lib/python3.5/threading.py", line 914 in _bootstrap_inner
File "/usr/lib/python3.5/threading.py", line 882 in _bootstrap

Current thread 0x76f6a010 (most recent call first):
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1367 in _call_tf_sessionrun
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1277 in _run_fn
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1292 in _do_call
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1286 in _do_run
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1110 in _run
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 887 in run
File "/home/pi/Desktop/darkflow-master/darkflow/net/build.py", line 146 in setup_meta_ops
File "/home/pi/Desktop/darkflow-master/darkflow/net/build.py", line 76 in __init__
File "/home/pi/Desktop/darkflow-master/run_img.py", line 23 in <module>
File "/usr/lib/python3/dist-packages/thonny/shared/thonny/backend.py", line 588 in execute_source
File "/usr/lib/python3/dist-packages/thonny/shared/thonny/backend.py", line 427 in _execute_source_ex
File "/usr/lib/python3/dist-packages/thonny/shared/thonny/backend.py", line 374 in _execute_file
File "/usr/lib/python3/dist-packages/thonny/shared/thonny/backend.py", line 155 in _cmd_Run
File "/usr/lib/python3/dist-packages/thonny/shared/thonny/backend.py", line 119 in handle_command
File "/usr/lib/python3/dist-packages/thonny/shared/thonny/backend.py", line 97 in mainloop
File "/usr/lib/python3/dist-packages/thonny/shared/backend_launcher.py", line 41 in <module>
Resetting ...
>>>
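
To rule low memory in or out, one option is to log free RAM and swap immediately before and after the TFNet(option) call and watch whether building the graph exhausts them. A small diagnostic sketch, assuming psutil is installed (pip3 install psutil; it is not part of darkflow):

# memory diagnostic sketch (psutil is an assumption, not used by darkflow itself)
import psutil

def log_mem(tag):
    vm = psutil.virtual_memory()
    sw = psutil.swap_memory()
    print('[%s] RAM available: %d MB, swap used: %d MB'
          % (tag, vm.available // 2**20, sw.used // 2**20))

log_mem('before TFNet')
# tfnet = TFNet(option)   # build the net here
log_mem('after TFNet')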









Tags: tensorflow, raspberry-pi, yolo, darkflow






asked Nov 11 at 12:52 by Knl_Kolhe