Can't get out of this hole: cannot use the output of a pre-trained model

Problem description

I use OpenCV to do object detection on a Raspberry Pi 4. I downloaded this tutorial from https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb and tried to convert it to OpenCV so it runs locally and takes images from the webcam.

I set the webcam to a 640x480 resolution, then apply some transforms to adapt the image to 300x300x3, because this should be the right input to feed the model.

#crop the image to a square
image = image[0:480,84:564]
#now the image is 480x480
#scales the image to 300x300
image = cv2.resize(image, (300,300), interpolation = cv2.INTER_AREA)

After that I call the function show_inference(detection_model, converted_image).

def run_inference_for_single_image(model, image):
  image = np.asarray(image)
  # The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
  input_tensor = tf.convert_to_tensor(image)
  # The model expects a batch of images, so add an axis with `tf.newaxis`.
  input_tensor = input_tensor[tf.newaxis,...]

  # Run inference
  output_dict = model(input_tensor)

  print('\noutputdict:\n',output_dict,'\n')
  # All outputs are batch tensors.
  # Convert to numpy arrays, and take index [0] to remove the batch dimension.
  # We're only interested in the first num_detections.
  num_detections = int(output_dict.pop('num_detections'))
  print('\nnum_detections:\n',num_detections,'\n')
  output_dict = {key:value[0, :num_detections].numpy()
                 for key,value in output_dict.items()}
  output_dict['num_detections'] = num_detections

  # detection_classes should be ints.
  output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)

"""
  # Handle models with masks:
  if 'detection_masks' in output_dict:
    # Reframe the the bbox mask to the image size.
    detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
              output_dict['detection_masks'], output_dict['detection_boxes'],
           image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
                                   tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
"""
  return output_dict

"""Run it on each test image and show the results:"""

def show_inference(model, image):
  # the array based representation of the image will be used later in order to prepare the
  # result image with boxes and labels on it.
  image_np = np.array(image)
  # Actual detection.
  output_dict = run_inference_for_single_image(model, image_np)
  # Visualization of the results of a detection.
  vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks_reframed', None),
      use_normalized_coordinates=True,
      line_thickness=8)

  display(Image.fromarray(image_np))

At this line (in run_inference_for_single_image(model, image)):

num_detections = int(output_dict.pop('num_detections'))

I get this error:

Traceback (most recent call last):
  File "object_detection_webcam_opencv.py", line 223, in <module>
show_inference(detection_model, converted_image)
  File "object_detection_webcam_opencv.py", line 145, in show_inference
output_dict = run_inference_for_single_image(model, image_np)
  File "object_detection_webcam_opencv.py", line 116, in run_inference_for_single_image
  num_detections = int(output_dict.pop('num_detections'))
TypeError: int() argument must be a string, a bytes-like object or a number, not 'Tensor'

I've been stuck on this for 3 days! Is it a problem with my Raspberry Pi?

The input the model expects:

[<tf.Tensor 'image_tensor:0' shape=(?, ?, ?, 3) dtype=uint8>]
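For reference, this signature accepts a batch of uint8 images of arbitrary size. Below is a minimal sketch (not from the original script) of how a single OpenCV frame could be packed into such a batch; the helper name frame_to_batch is made up here, and the BGR-to-RGB swap is included because OpenCV delivers BGR frames while the COCO-trained models expect RGB:

import cv2
import numpy as np
import tensorflow as tf

def frame_to_batch(frame):
    # OpenCV frames are BGR uint8 arrays; the detection model expects RGB.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Add a leading batch axis so the shape becomes (1, H, W, 3).
    return tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)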

Expected outputs:

 {'detection_classes': TensorShape([Dimension(None), Dimension(100)]), 'num_detections': TensorShape([Dimension(None)]), 'detection_boxes': TensorShape([Dimension(None), Dimension(100), Dimension(4)]), 'detection_scores': TensorShape([Dimension(None), Dimension(100)])}

This is what I get instead (output_dict):

 {'detection_classes': <tf.Tensor 'StatefulPartitionedCall:1' shape=(?, 100) dtype=float32>, 'num_detections': <tf.Tensor 'StatefulPartitionedCall:3' shape=(?,) dtype=float32>, 'detection_boxes': <tf.Tensor 'StatefulPartitionedCall:0' shape=(?, 100, 4) dtype=float32>, 'detection_scores': <tf.Tensor 'StatefulPartitionedCall:2' shape=(?, 100) dtype=float32>}
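Note that these are symbolic graph tensors (names like 'StatefulPartitionedCall:0', shapes with '?'), not eager tensors holding actual values, which is exactly why int(output_dict.pop('num_detections')) fails. A rough sanity-check sketch is to confirm the TensorFlow version and that eager execution is on (the tutorial's .numpy() and int() calls assume it is):

import tensorflow as tf

print(tf.__version__)          # the tutorial targets TF 2.x
print(tf.executing_eagerly())  # should print True for the .numpy()/int() calls to work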

This is the entire script.py:

# -*- coding: utf-8 -*-

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile

from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
import pathlib
import cv2

"""Import the object detection module."""

from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

"""Patches:"""

# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1

# Patch the location of gfile
tf.gfile = tf.io.gfile



"""# Model preparation

## Variables

Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path.

By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.

## Loader
"""

def load_model(model_name):
  #for 'coco_ssd_mobilenet_v1_1.0_quant_2018_06_29'
  #base_url = 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/'
  #model_file = model_name + '.zip'

  #for 'ssd_mobilenet_v1_coco_2017_11_17' and 'ssd_mobilenet_v1_coco_2018_01_28'
  base_url = 'http://download.tensorflow.org/models/object_detection/'
  model_file = model_name + '.tar.gz'

  model_dir = tf.compat.v1.keras.utils.get_file(
    fname=model_name,
    origin=base_url + model_file,
    untar=True)

  model_dir = pathlib.Path(model_dir)/"saved_model"

  model = tf.compat.v1.keras.models.load_model(str(model_dir))
  model = model.signatures['serving_default']

  return model

"""## Loading label map
Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`.  Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine
"""

# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = '/home/pi/venv/models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
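# For illustration only (ids assumed from the standard COCO label map):
# looking up category_index[5] would give something like {'id': 5, 'name': 'airplane'}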

"""For the sake of simplicity we will test on 2 images:"""

# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS

"""# Detection

Load an object detection model:
"""
#model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
model_name = 'ssd_mobilenet_v1_coco_2018_01_28'
#model_name = 'coco_ssd_mobilenet_v1_1.0_quant_2018_06_29'
detection_model = load_model(model_name)

"""Check the model's input signature, it expects a batch of 3-color images of type uint8:"""

print('\nInput:\n',detection_model.inputs,'\n')

"""And retuns several outputs:"""

detection_model.output_dtypes

print('\nOutput:\n',detection_model.output_shapes,'\n')

"""Add a wrapper function to call the model, and cleanup the outputs:"""

def run_inference_for_single_image(model, image):
  image = np.asarray(image)
  # The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
  input_tensor = tf.convert_to_tensor(image)
  # The model expects a batch of images, so add an axis with `tf.newaxis`.
  input_tensor = input_tensor[tf.newaxis,...]

  # Run inference
  output_dict = model(input_tensor)

  print('\noutputdict:\n',output_dict,'\n')
  # All outputs are batch tensors.
  # Convert to numpy arrays, and take index [0] to remove the batch dimension.
  # We're only interested in the first num_detections.
  num_detections = int(output_dict.pop('num_detections'))
  print('\nnum_detections:\n',num_detections,'\n')
  output_dict = {key:value[0, :num_detections].numpy()
             for key,value in output_dict.items()}
  output_dict['num_detections'] = num_detections

  # detection_classes should be ints.
  output_dict['detection_classes'] =  output_dict['detection_classes'].astype(np.int64)

  """
  # Handle models with masks:
  if 'detection_masks' in output_dict:
    # Reframe the the bbox mask to the image size.
    detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
          output_dict['detection_masks'], output_dict['detection_boxes'],
           image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
                                   tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
  """
  return output_dict

"""Run it on each test image and show the results:"""

def show_inference(model, image):
  # the array based representation of the image will be used later in order to prepare the
  # result image with boxes and labels on it.
  image_np = np.array(image)
  # Actual detection.
  output_dict = run_inference_for_single_image(model, image_np)
  # Visualization of the results of a detection.
  vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks_reframed', None),
      use_normalized_coordinates=True,
      line_thickness=8)

  display(Image.fromarray(image_np))


#open the webcam
cap = cv2.VideoCapture(0)

#set a sufficiently low framerate
cap.set(5,5)

#set the stream width and then the height
cap.set(3,640)
cap.set(4,480)

def convert_Image(image):

  #crop the image to a 1:1 aspect ratio without deforming it
  image = image[0:480,84:564]

  #scale the image to 300x300
  image = cv2.resize(image, (300,300), interpolation = cv2.INTER_AREA)
  print('\nScaled resolution is',image.shape,'\n')


  return image

#3) Create an image object
if cap.isOpened():
  check, image = cap.read()
  print('\nResolution is',image.shape,'\n')
else:
    check = False

while check:
  #print('Original: ',image)
  #print('Shape: ',image.shape)
  check, image = cap.read()
  converted_image = convert_Image(image)

  #show the image
  cv2.imshow('Object detection', image)
  cv2.imshow("Converted", converted_image)

  show_inference(detection_model, converted_image)

  #5) Press a key to stop the stream
  key = cv2.waitKey(20)

  if key == 27: #press ESC to quit
    cv2.destroyAllWindows()
    cap.release()
    break

  #break the loop after one run, for troubleshooting purposes only
  check = False

Answer

Can you try converting to a numpy array before you call show_inference, or add this line at the end of the convert_Image function, before the return:

def convert_Image(image):
    image = np.asarray(image)
    return image

If this does not work, try resizing the image first and then converting it to a numpy array. The model requires the image in numpy array format.
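A minimal sketch of what that could look like, reusing the crop and 300x300 resize from the question (the exact preprocessing here is an assumption, not part of the original answer):

import cv2
import numpy as np

def convert_Image(image):
    # crop the 640x480 frame to a square, resize to the model's 300x300 input,
    # then return a plain numpy array for show_inference
    image = image[0:480, 84:564]
    image = cv2.resize(image, (300, 300), interpolation=cv2.INTER_AREA)
    return np.asarray(image)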

The code below works for me with tf 2.0 and cv2:

#!/usr/bin/env python
# coding: utf-8
"""
Object detection with live camera using cv2 and tf2.0
"""
import pathlib
import cv2
import numpy as np
import tensorflow as tf
import sys
import time
# Import the object detection module.
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1

# Patch the location of gfile
tf.gfile = tf.io.gfile

def load_model(model_name):
    """Loading the model from the url"""
    base_url = 'http://download.tensorflow.org/models/object_detection/'
    model_file = model_name + '.tar.gz'
    model_dir = tf.keras.utils.get_file(
      fname=model_name,
      origin=base_url + model_file,
      untar=True)

    model_dir = pathlib.Path(model_dir)/"saved_model"

    model = tf.saved_model.load(str(model_dir))
    model = model.signatures['serving_default']

    return model

def run_inference_for_single_image(model, image):
    """ Add a wrapper function to call the model, and cleanup the outputs:"""
    image = np.asarray(image)
    # The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
    input_tensor = tf.convert_to_tensor(image)
    # The model expects a batch of images, so add an axis with `tf.newaxis`.
    input_tensor = input_tensor[tf.newaxis,...]

    # Run inference
    output_dict = model(input_tensor)

    # We're only interested in the first num_detections.
    num_detections = int(output_dict.pop('num_detections'))
    output_dict = {key:value[0, :num_detections].numpy()
                   for key,value in output_dict.items()}
    output_dict['num_detections'] = num_detections

    # detection_classes should be ints.
    output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)

    # Handle models with masks:
    if 'detection_masks' in output_dict:
      # Reframe the bbox mask to the image size.
      detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
              output_dict['detection_masks'], output_dict['detection_boxes'],
               image.shape[0], image.shape[1])
      detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
                                       tf.uint8)
      output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()

    return output_dict


def show_inference(model, image):
    """# Run it on each test image and show the results:
    # the array based representation of the image will be used later in order to prepare the
    # result image with boxes and labels on it.
    """
    image_np = np.array(image)
    # Actual detection.
    output_dict = run_inference_for_single_image(model, image_np)
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks_reframed', None),
      use_normalized_coordinates=True,
      line_thickness=8)

    return image_np


def main():
    """
    load the model and run the logic
    """
    #  Detection Load an object detection model:
    model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
    detection_model = load_model(model_name)

    try:
        cap = cv2.VideoCapture(0)  # video capture source camera (Here webcam of laptop)
        start = end = time.time()
        while (True):
            ret, frame = cap.read()  # return a single frame in variable `frame`
            image = np.asarray(frame)
            image_inf = show_inference(detection_model, image)
            end = time.time()
            cv2.imshow('Live web camera', image_inf)
            if cv2.waitKey(1) == ord('q'):
                cv2.destroyAllWindows()
                break
        cap.release()
    except Exception:
        print("Could not open video source, exiting the program !!")
        cap.release()
        sys.exit(1)


# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = '/home/sumanh/github/tf_models/models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
if __name__ == '__main__':
    main()

