TL;DR
I get these errors when defining the input shape:

ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (4000, 20, 20)


ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5

The long, explicit version:
I am trying classification on my own dataset with different Keras NNs.
So far I have succeeded with my ANN, but I am having difficulty with my CNN.
Dataset
Complete Code
The dataset consists of matrices of a given size, filled with 0s, which may contain a submatrix of a given size filled with 1s. The submatrix is optional, and the goal is to train the neural network to predict whether a matrix contains a submatrix or not. To make detection harder, I add various kinds of noise to the matrices.
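A minimal sketch of how such a sample could be generated (the matrix size, submatrix size, and noise rate here are assumptions, not the question's actual generator):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sample(size=20, sub=5, with_submatrix=True, noise=0.1):
    """Build one matrix: zeros, an optional sub x sub block of 1s, plus bit-flip noise."""
    m = np.zeros((size, size), dtype=int)
    if with_submatrix:
        # Place the submatrix at a random position that fits inside the matrix.
        r, c = rng.integers(0, size - sub, size=2)
        m[r:r + sub, c:c + sub] = 1
    # Flip a random fraction of entries to simulate noise.
    flips = rng.random((size, size)) < noise
    m[flips] ^= 1
    return m

sample = make_sample()
print(sample.shape)  # (20, 20)
```

The label for each sample would then be `with_submatrix` (1 or 0), matching the binary classification target described above.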
Here is a picture of what a single matrix looks like; the black part is the 0s and the white part is the 1s. There is a 1:1 correspondence between the pixels of the image and the entries of the matrix.
I save and load them as text with numpy savetxt and loadtxt. A file then looks like this:
#________________Array__Info:__(4000, 20, 20)__________
#________________Entry__Number__1________
0 0 1 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 1
0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 1 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 1 0 0 1
0 0 1 1 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 1 1 1 0
0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1
0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 1 1 0 0 1 0 0 0 1 1 1
0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
0 0 1 1 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 0
0 1 0 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1
1 0 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0
#________________Entry__Number__2________
0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0
1 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1
1 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0
0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0
0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 1 1 0
1 0 1 0 0 1 0 1 0 1 0 0 0 0 1 1 1 0 0 1
0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
1 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0
0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1
0 0 0 0 0 1 1 0 0 0 0 1 0 1 0 0 0 0 0 0
0 0 1 1 0 0 0 0 0 0 0 1 1 1 1 1 0 1 0 0
0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 0 1
0 1 0 0 0 0. . . . . . (and so on)

Complete Dataset
CNN code
Github
The code (imports omitted):
# data

inputData = dsg.loadDataset("test_input.txt")
outputData = dsg.loadDataset("test_output.txt")
print("the size of the dataset is: ", inputData.shape, " of type: ", type(inputData))


# parameters

# CNN

cnn = Sequential()

cnn.add(Conv2D(32, (3, 3), input_shape = inputData.shape, activation = 'relu'))

cnn.add(MaxPooling2D(pool_size = (2, 2)))

cnn.add(Flatten())

cnn.add(Dense(units=64, activation='relu'))

cnn.add(Dense(units=1, activation='sigmoid'))

cnn.compile(optimizer = "adam", loss = 'binary_crossentropy', metrics = ['accuracy'])

cnn.summary()

cnn.fit(inputData,
        outputData,
        epochs=100,
        validation_split=0.2)

The problem:
I get this error output:
Using TensorFlow backend.
the size of the dataset is:  (4000, 20, 20)  of type:  <class 'numpy.ndarray'>
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 3998, 18, 32)      5792
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 1999, 9, 32)       0
_________________________________________________________________
flatten_1 (Flatten)          (None, 575712)            0
_________________________________________________________________
dense_1 (Dense)              (None, 64)                36845632
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 65
=================================================================
Total params: 36,851,489
Trainable params: 36,851,489
Non-trainable params: 0
_________________________________________________________________
Traceback (most recent call last):
  File "D:\GOOGLE DRIVE\School\sem-2-2018\BSP2\BiCS-BSP-2\CNN\matrixCNN.py", line 47, in <module>
    validation_split=0.2)
  File "C:\Code\Python\lib\site-packages\keras\models.py", line 963, in fit
    validation_steps=validation_steps)
  File "C:\Code\Python\lib\site-packages\keras\engine\training.py", line 1637, in fit
    batch_size=batch_size)
  File "C:\Code\Python\lib\site-packages\keras\engine\training.py", line 1483, in _standardize_user_data
    exception_prefix='input')
  File "C:\Code\Python\lib\site-packages\keras\engine\training.py", line 113, in _standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (4000, 20, 20)

I really don't know how to solve this. I looked at the documentation of Conv2D, which says to give the input shape in the form (batch, height, width, channels).
In my case, I therefore assumed:
input_shape=(4000, 20, 20, 1)

since I have 4000 20*20 matrices containing only 1s and 0s.
But then I got this error message:
Using TensorFlow backend.
the size of the dataset is:  (4000, 20, 20)  of type:  <class 'numpy.ndarray'>
Traceback (most recent call last):
  File "D:\GOOGLE DRIVE\School\sem-2-2018\BSP2\BiCS-BSP-2\CNN\matrixCNN.py", line 30, in <module>
    cnn.add(Conv2D(32, (3, 3), input_shape = (4000, 12, 12, 1), activation = 'relu'))
  File "C:\Code\Python\lib\site-packages\keras\models.py", line 467, in add
    layer(x)
  File "C:\Code\Python\lib\site-packages\keras\engine\topology.py", line 573, in __call__
    self.assert_input_compatibility(inputs)
  File "C:\Code\Python\lib\site-packages\keras\engine\topology.py", line 472, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5

In what form should I pass the data to the CNN?
All files are available here.
Thank you for your time.

Best Answer

Your CNN expects input of shape (num_samples, 20, 20, 1), while your data is in the format (num_samples, 20, 20).
Since you only have a single channel, you can reshape the data to (4000, 20, 20, 1):

inputData = inputData.reshape(-1, 20, 20, 1)
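The reshape can be verified with plain numpy before touching the model (the array below is a zero-filled stand-in for the loaded dataset, using the shape reported in the question):

```python
import numpy as np

# Stand-in for the dataset loaded in the question, shape (4000, 20, 20).
inputData = np.zeros((4000, 20, 20))

# Conv2D with channels_last expects (samples, height, width, channels);
# -1 lets numpy infer the sample count, the trailing 1 is the single channel.
inputData = inputData.reshape(-1, 20, 20, 1)

print(inputData.shape)  # (4000, 20, 20, 1)
```

Note that the `input_shape` passed to the first Conv2D layer should then be `(20, 20, 1)`, without the sample count: Keras adds the batch dimension itself, which is why `input_shape=(4000, 20, 20, 1)` produced the ndim=5 error.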

If you want to do the reshaping inside the model instead, just add a Reshape layer as the first layer:
model.add(Reshape(input_shape = (20, 20), target_shape=(20, 20, 1)))

A similar question about python - figuring out how to define input_shape in Keras' Conv2D layer for your own dataset can be found on Stack Overflow: https://stackoverflow.com/questions/49843113/
