How does CIMedianFilter work? (algorithm)

Question · 0 votes · 2 answers

How does CIMedianFilter work? I mean its algorithm. I want it to remove noise, and I tried to do that with this code:

                // -------------------------- W O R K I N G   O N   R E D -----------------
                // red pixels
                NSNumber *red1 = [NSNumber numberWithInt:rgbaPixel1[3]];
                NSNumber *red2 = [NSNumber numberWithInt:rgbaPixel2[3]];
                NSNumber *red3 = [NSNumber numberWithInt:rgbaPixel3[3]];
                NSNumber *red4 = [NSNumber numberWithInt:rgbaPixel4[3]];
                NSNumber *red5 = [NSNumber numberWithInt:rgbaPixel5[3]];

                // red array
                NSMutableArray *redArray = [NSMutableArray arrayWithObjects:red1, red2, red3, red4, red5, nil];
                // sorting
                NSSortDescriptor *lowToHigh = [NSSortDescriptor sortDescriptorWithKey:@"self" ascending:YES];
                [redArray sortUsingDescriptors:[NSArray arrayWithObject:lowToHigh]];
                // getting median
                int redMedian = [[redArray objectAtIndex:2] intValue];
                // setting the pixels red value to the median
                rgbaPixel1[3] = redMedian;
                 ///////////////////////////////// testing if sorting and median is true
                 //            NSLog(@"Sir, here's a test (%@, %@, %@, %@, %@) and the median is %i", [redArray objectAtIndex:0],
                 //                                                                                    [redArray objectAtIndex:1],
                 //                                                                                    [redArray objectAtIndex:2],
                 //                                                                                    [redArray objectAtIndex:3], [redArray objectAtIndex:4], redMedian);
                // ---------------------------- E N D   O F   R E D ------------------------

                // ----------------------------- W O R K I N G   O N   G R E E N ---------------
                // getting green pixels first
                NSNumber *green1 = [NSNumber numberWithInteger:rgbaPixel1[2]];
                NSNumber *green2 = [NSNumber numberWithInteger:rgbaPixel2[2]];
                NSNumber *green3 = [NSNumber numberWithInteger:rgbaPixel3[2]];
                NSNumber *green4 = [NSNumber numberWithInteger:rgbaPixel4[2]];
                NSNumber *green5 = [NSNumber numberWithInteger:rgbaPixel5[2]];

                // creating array of greens
                NSMutableArray *greenArray = [NSMutableArray arrayWithObjects:green1, green2, green3, green4, green5, nil];
                // sorting the array
                [greenArray sortUsingDescriptors:[NSArray arrayWithObject:lowToHigh]];
                // getting the median
                int greenMedian = [[greenArray objectAtIndex:2] intValue];

                // setting the pixels green value to median value
                rgbaPixel1[2] = greenMedian;
                // ---------------------------- E N D   O F   G R E E N ------------------------

                // -------------------------- W O R K I N G   O N  B L U E ---------------------
                // getting blue pixel
                NSNumber *blue1 = [NSNumber numberWithInteger:rgbaPixel1[1]];
                NSNumber *blue2 = [NSNumber numberWithInteger:rgbaPixel2[1]];
                NSNumber *blue3 = [NSNumber numberWithInteger:rgbaPixel3[1]];
                NSNumber *blue4 = [NSNumber numberWithInteger:rgbaPixel4[1]];
                NSNumber *blue5 = [NSNumber numberWithInteger:rgbaPixel5[1]];

                // creating array for blue values
                NSMutableArray *blueArray = [NSMutableArray arrayWithObjects:blue1, blue2, blue3, blue4, blue5, nil];
                // sorting the array of blues
                [blueArray sortUsingDescriptors:[NSArray arrayWithObject:lowToHigh]];
                // getting the median
                int blueMedian = [[blueArray objectAtIndex:2] intValue];


                // setting pixel blue value to the median we just got :)
                rgbaPixel1[1] = blueMedian;

                // --------------------------------- E N D   O F   B L U E ----------------------

But it doesn't have that much of an effect! Or maybe I'm reading the RGB values incorrectly. I could really use some help here.
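
For reference, the built-in filter itself can be applied with a few lines of Core Image. A minimal sketch, where `sourceImage` stands in for the input UIImage and the function name is just a placeholder:

    #import <CoreImage/CoreImage.h>
    #import <UIKit/UIKit.h>

    // Apply the built-in CIMedianFilter to a UIImage (illustrative sketch).
    UIImage *applyMedianFilter(UIImage *sourceImage)
    {
        CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];

        // CIMedianFilter only takes an input image; there is no radius parameter to set.
        CIFilter *medianFilter = [CIFilter filterWithName:@"CIMedianFilter"];
        [medianFilter setValue:inputImage forKey:kCIInputImageKey];
        CIImage *outputImage = [medianFilter outputImage];

        // Render the result back into a UIImage.
        CIContext *context = [CIContext contextWithOptions:nil];
        CGImageRef cgImage = [context createCGImage:outputImage fromRect:[outputImage extent]];
        UIImage *filteredImage = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        return filteredImage;
    }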

ios iphone uiimage core-image cgimage
2 Answers

2 votes

I can't speak for CIMedianFilter, because I don't know the specifics of what Core Image is doing there, but I did write a median filter for my GPUImage framework, and I can describe the process used there.

First, I should say that what you're doing above, with the heavy use of NSNumber objects and NSMutableArrays, is going to perform terribly as you iterate over the pixels of an image. The memory management of all of those autoreleased objects will also be tricky. At a minimum, you'll want to move to scalar types and C arrays, with an inline function for the sorting. Better still, you can move this to the GPU.
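
To make that first suggestion concrete, a per-channel median over five samples using only scalar types and a C array could look like this minimal sketch (the function name is just a placeholder, not GPUImage code):

    #include <stdint.h>

    // Median of five 8-bit samples via a small insertion sort (cheap for so few elements).
    static inline uint8_t median5(uint8_t a, uint8_t b, uint8_t c, uint8_t d, uint8_t e)
    {
        uint8_t v[5] = {a, b, c, d, e};
        for (int i = 1; i < 5; i++) {
            uint8_t key = v[i];
            int j = i - 1;
            while (j >= 0 && v[j] > key) {
                v[j + 1] = v[j];
                j--;
            }
            v[j + 1] = key;
        }
        return v[2]; // the middle element of the sorted five is the median
    }

    // Usage with the same rgbaPixelN buffers as in the question:
    // rgbaPixel1[3] = median5(rgbaPixel1[3], rgbaPixel2[3], rgbaPixel3[3], rgbaPixel4[3], rgbaPixel5[3]);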

My GPU-based implementation in GPUImage is based on the "A Fast, Small-Radius GPU Median Filter" chapter by Morgan McGuire and Kyle Whitson in ShaderX6. That chapter describes several optimizations that can be used to accelerate GPU-side median filtering in a fragment shader. The 3x3 median filter I implement in a fragment shader looks like this:

 precision highp float;

 varying vec2 textureCoordinate;
 varying vec2 leftTextureCoordinate;
 varying vec2 rightTextureCoordinate;

 varying vec2 topTextureCoordinate;
 varying vec2 topLeftTextureCoordinate;
 varying vec2 topRightTextureCoordinate;

 varying vec2 bottomTextureCoordinate;
 varying vec2 bottomLeftTextureCoordinate;
 varying vec2 bottomRightTextureCoordinate;

 uniform sampler2D inputImageTexture;

#define s2(a, b)                temp = a; a = min(a, b); b = max(temp, b);
#define mn3(a, b, c)            s2(a, b); s2(a, c);
#define mx3(a, b, c)            s2(b, c); s2(a, c);

#define mnmx3(a, b, c)          mx3(a, b, c); s2(a, b);                                   // 3 exchanges
#define mnmx4(a, b, c, d)       s2(a, b); s2(c, d); s2(a, c); s2(b, d);                   // 4 exchanges
#define mnmx5(a, b, c, d, e)    s2(a, b); s2(c, d); mn3(a, c, e); mx3(b, d, e);           // 6 exchanges
#define mnmx6(a, b, c, d, e, f) s2(a, d); s2(b, e); s2(c, f); mn3(a, b, c); mx3(d, e, f); // 7 exchanges

 void main()
 {
     vec3 v[6];

     v[0] = texture2D(inputImageTexture, bottomLeftTextureCoordinate).rgb;
     v[1] = texture2D(inputImageTexture, topRightTextureCoordinate).rgb;
     v[2] = texture2D(inputImageTexture, topLeftTextureCoordinate).rgb;
     v[3] = texture2D(inputImageTexture, bottomRightTextureCoordinate).rgb;
     v[4] = texture2D(inputImageTexture, leftTextureCoordinate).rgb;
     v[5] = texture2D(inputImageTexture, rightTextureCoordinate).rgb;
//     v[6] = texture2D(inputImageTexture, bottomTextureCoordinate).rgb;
//     v[7] = texture2D(inputImageTexture, topTextureCoordinate).rgb;
     vec3 temp;

     mnmx6(v[0], v[1], v[2], v[3], v[4], v[5]);

     v[5] = texture2D(inputImageTexture, bottomTextureCoordinate).rgb;

     mnmx5(v[1], v[2], v[3], v[4], v[5]);

     v[5] = texture2D(inputImageTexture, topTextureCoordinate).rgb;

     mnmx4(v[2], v[3], v[4], v[5]);

     v[5] = texture2D(inputImageTexture, textureCoordinate).rgb;

     mnmx3(v[3], v[4], v[5]);

     gl_FragColor = vec4(v[4], 1.0);
}

This is fast enough to run against realtime video on iOS devices, but a 3x3 area is small enough that you won't see a dramatic change in the resulting image. It provides a small amount of spatial denoising; you'd probably need to extend this to a 5x5 area to see a more dramatic reduction in noise. That will also start to blur the image slightly, so there's a tradeoff there. With video, you can combine this with a low pass filter to do some temporal denoising in a gentler manner.
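
On the temporal side, a per-pixel low pass filter across frames is essentially an exponential moving average. A minimal CPU sketch of the idea (buffer and function names are placeholders; in practice this kind of blending would be done on the GPU):

    #include <stdint.h>
    #include <stddef.h>

    // Temporal low pass: blend each new frame into a running average.
    // `accumulator` persists between frames; `alpha` near 1.0 favors the newest frame.
    static void temporalLowPass(uint8_t *accumulator, const uint8_t *currentFrame,
                                size_t byteCount, float alpha)
    {
        for (size_t i = 0; i < byteCount; i++) {
            accumulator[i] = (uint8_t)(alpha * currentFrame[i] + (1.0f - alpha) * accumulator[i]);
        }
    }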

I'll leave adapting the approach from the above paper to cases larger than 3x3 as an exercise for the reader.


0 votes

An OpenGL ES 3.0 alternative to the code above is the following:

kernel vec4 medianUnsharpKernel(sampler u) {
    vec4 pixel = unpremultiply(sample(u, samplerCoord(u)));
    vec2 xy = destCoord();
    int radius = 3;
    int bounds = (radius - 1) / 2;
    vec4 sum = vec4(0.0);
    for (int i = (0 - bounds); i <= bounds; i++)
    {
        for (int j = (0 - bounds); j <= bounds; j++)
        {
            sum += unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
        }
    }
    vec4 mean = vec4(sum / vec4(pow(float(radius), 2.0)));
    float mean_avg = float(mean);
    float comp_avg = 0.0;
    vec4 comp = vec4(0.0);
    vec4 median = mean;
    for (int i = (0 - bounds); i <= bounds; i++)
    {
        for (int j = (0 - bounds); j <= bounds; j++)
        {
            comp = unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
            comp_avg = float(comp);
            median = (comp_avg < mean_avg) ? max(median, comp) : median;
        }
    }

    return premultiply(vec4(vec3(abs(pixel.rgb - median.rgb)), 1.0));
}

As you can see, this is more compatible with older devices that only support a subset of OpenGL, and I believe it may run faster (it's shorter). It could be sped up further by storing the pixel values from the first pass in an array and reading them back from that array during the second pass.

It's also easier to understand, because it basically consists of two steps (plus an optional third):

1. Compute the mean of the pixel values in the 3x3 neighborhood around the source pixel.
2. Find the maximum pixel value among the pixels in that same neighborhood whose values are less than the mean.
3. [Optional] Subtract that median pixel value from the source pixel value for edge detection.
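
Restated per channel on the CPU, those first two steps look roughly like the sketch below. This is a simplification for illustration only (the kernel above compares per-pixel averages and keeps full RGB vectors), and the function name is a placeholder:

    #include <stdint.h>

    // Pseudo-median for one channel of a 3x3 neighborhood:
    // 1) compute the neighborhood mean, 2) take the largest sample below that mean.
    static uint8_t pseudoMedian3x3(const uint8_t samples[9])
    {
        int sum = 0;
        for (int i = 0; i < 9; i++) {
            sum += samples[i];
        }
        float mean = sum / 9.0f;

        uint8_t largestBelowMean = 0;
        int found = 0;
        for (int i = 0; i < 9; i++) {
            if ((float)samples[i] < mean && (!found || samples[i] > largestBelowMean)) {
                largestBelowMean = samples[i];
                found = 1;
            }
        }
        // If no sample falls below the mean (a flat neighborhood), fall back to the mean,
        // which is effectively what the kernel above does.
        return found ? largestBelowMean : (uint8_t)(mean + 0.5f);
    }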

If you're using the median for edge detection, there are a couple of ways the code above can be modified for better results, namely hybrid median filtering and truncated median filtering (a substitute for, and improvement on, "mode" filtering). If you're interested, just ask.
