Image segmentation using K-means with texture analysis and (x, y) coordinates

Problem description

I am trying to implement color/image segmentation with K-means in Python, based on this MathWorks example:

https://nl.mathworks.com/help/images/ref/imsegkmeans.html

Using the (R, G, B) values as the feature set, I get the following result:

(result image: K-means segmentation using RGB features)

However, the result can be improved by adding texture information (via Gabor filters) and pixel position information (x, y) to the feature set.

Result:

(result image: K-means segmentation using texture and position features)

For this result I do not use the (R, G, B) values, because the dog's color is roughly the same as that of the tiles. I use the grayscale image with 24 Gabor filters, extended with the pixel coordinates.

Unfortunately, the result is not as good as the one from MathWorks:

https://nl.mathworks.com/help/examples/images/win64/ImproveKMeansSegmentationExample_05.png

The goal is to separate the background from the object using color/texture segmentation.

Do you have any idea how to improve this? Many thanks!

# Based on https://www.mathworks.com/help/images/ref/imsegkmeans.html

import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt

from sklearn.cluster import KMeans
from sklearn import preprocessing

# Build a bank of Gabor kernels to filter the image
orientations = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
wavelengths = [3, 6, 12, 24, 48, 96]

def build_gabor_kernels():
    filters = []
    ksize = 40
    for rotation in orientations:
        for wavelength in wavelengths:
            kernel = cv.getGaborKernel((ksize, ksize), 4.25, rotation, wavelength, 0.5, 0, ktype=cv.CV_32F)
            filters.append(kernel)

    return filters

image = cv.imread('./kobi.png')
rows, cols, channels = image.shape

# Resizing the image.
# The full image takes too much time to process
image = cv.resize(image, (int(cols * 0.5), int(rows * 0.5)))
rows, cols, channels = image.shape

gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)

gaborKernels = build_gabor_kernels()

gaborFilters = []

for (i, kernel) in enumerate(gaborKernels):
    filteredImage = cv.filter2D(gray, cv.CV_8UC1, kernel)

    # Smooth the filter response; the Gaussian kernel size grows with the wavelength
    ksize = int(3 * 0.5 * wavelengths[i % len(wavelengths)])

    # The kernel size passed to GaussianBlur must be odd
    if ksize % 2 == 0:
        ksize = ksize + 1

    blurredImage = cv.GaussianBlur(filteredImage, (ksize, ksize), 0)
    gaborFilters.append(blurredImage)


# numberOfFeatures = 1 (gray color) + number of gabor filters + 2 (x and y)
numberOfFeatures = 1  + len(gaborKernels) + 2

# Empty array that will contain all feature vectors
featureVectors = []

for i in range(rows):
    for j in range(cols):
        vector = [gray[i][j]]

        for k in range(0, len(gaborKernels)):
            vector.append(gaborFilters[k][i][j])

        # Append the (1-based) pixel coordinates as the last two features
        vector.extend([i+1, j+1])

        featureVectors.append(vector)

# Some example results:
# featureVectors[0] = [164, 3, 10, 255, 249, 253, 249, 2, 43, 255, 249, 253, 249, 3, 10, 255, 249, 253, 249, 2, 43, 255, 249, 253, 249, 1, 1]
# featureVectors[1] = [163, 3, 17, 255, 249, 253, 249, 2, 43, 255, 249, 253, 249, 3, 17, 255, 249, 253, 249, 2, 43, 255, 249, 253, 249, 1, 2]

# Standardizing the feature vectors (zero mean, unit variance per feature)
scaler = preprocessing.StandardScaler()

scaler.fit(featureVectors)
featureVectors = scaler.transform(featureVectors)

# Cluster the feature vectors into two groups (object vs. background)
kmeans = KMeans(n_clusters=2, random_state=170)
kmeans.fit(featureVectors)

centers = kmeans.cluster_centers_
labels = kmeans.labels_

# Replace each pixel's feature vector with its cluster center
result = centers[labels]

# Only keep first 3 columns to make it easy to plot as an RGB image
result = np.delete(result, range(3, numberOfFeatures), 1)

# Rescale the remaining values to [0, 1] so the float image can be saved without clipping
result = (result - result.min()) / (result.max() - result.min())

plt.imsave('test.jpg', result.reshape(rows, cols, 3))
Tags: python, matlab, opencv, scikit-learn, k-means
1 Answer

The approach on the MathWorks page contains some ad-hoc steps, and there are better algorithms for this problem. First of all, Gabor filter responses alone do not give features that are consistent within the same texture region. A better approach adds another step: compute local histograms of the filter responses. Use multiple filters and concatenate the histograms. With properly chosen filters, such features can distinguish different texture appearances while staying consistent within the same texture region. This is called a spectral histogram.
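As a rough sketch of this idea (an illustration only, not the implementation referred to above): for every pixel, take a local window in each filter response, compute a histogram of the response values inside that window, and concatenate the histograms of all filters into one feature vector. The window half-size `win` and the bin count `bins` below are assumed values chosen for illustration.

# Illustrative sketch: local spectral histograms from a bank of filter responses.
# `responses` could be the list of blurred Gabor responses built in the question;
# `win` and `bins` are assumptions, not values prescribed by the answer.
import numpy as np

def spectral_histograms(responses, win=15, bins=8):
    rows, cols = responses[0].shape
    features = np.zeros((rows, cols, len(responses) * bins), dtype=np.float32)
    for k, resp in enumerate(responses):
        lo, hi = float(resp.min()), float(resp.max()) + 1e-6
        for i in range(rows):
            for j in range(cols):
                # Local window around (i, j), clipped at the image border
                window = resp[max(i - win, 0):i + win + 1,
                              max(j - win, 0):j + win + 1]
                hist, _ = np.histogram(window, bins=bins, range=(lo, hi))
                # Normalize by the window area so border windows stay comparable
                features[i, j, k * bins:(k + 1) * bins] = hist / window.size
    return features.reshape(rows * cols, -1)

These per-pixel histogram vectors can then be clustered in place of the raw filter responses; in practice the explicit double loop would be replaced by box filtering per-bin indicator images (integral images), which computes the same histograms far faster.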

To capture meaningful texture appearance, the local histograms need to be computed over relatively large local windows. However, when a local window crosses a region boundary, the feature becomes unreliable, and applying a simple grouping method such as k-means to such features near region boundaries gives poor results. This approach offers a simple and effective solution: it uses local spectral histograms as features and uses matrix factorization to obtain segment labels that localize boundaries well. It relies mainly on matrix operations, so it is fast. Both MATLAB and Python code are available.
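As a generic sketch of the factorization idea (using scikit-learn's NMF as a stand-in, not the exact algorithm of the linked approach): approximate the pixel-by-feature matrix Y, for example the spectral histograms from the sketch above, as W @ H, where each row of H acts as a representative feature of one segment and each row of W holds per-pixel mixing weights; labeling each pixel by its largest weight gives a segmentation.

# Generic sketch of factorization-based segmentation with scikit-learn's NMF.
# This is only a stand-in for the method the answer refers to, not that method itself.
import numpy as np
from sklearn.decomposition import NMF

def factorization_labels(Y, rows, cols, n_segments=2):
    # Y: (rows * cols, n_features) non-negative feature matrix,
    # e.g. concatenated local spectral histograms
    model = NMF(n_components=n_segments, init='nndsvda', max_iter=500)
    W = model.fit_transform(Y)       # per-pixel weights, shape (rows * cols, n_segments)
    labels = W.argmax(axis=1)        # assign each pixel to its dominant component
    return labels.reshape(rows, cols)

Near a boundary the local window covers two textures, so its histogram is approximately a mixture of the two representative histograms; a factorization model represents this mixing explicitly, which is why such methods localize boundaries better than directly clustering the windowed features.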
