CIPixellate image output sizes vary

Question · Votes: 4 · Answers: 3

I am doing some testing with the CIPixellate filter and I have it working, but the resulting images vary in size. I suppose that makes sense, since I am varying the input scale, but it is not what I expected - I thought it would scale within the rect of the image.

Am I misunderstanding or misusing the filter, or do I just need to crop the output image to the size I want?

Also, the inputCenter param isn't clear to me from reading the headers or from trial and error. Can anyone explain what that parameter does?

NSMutableArray * tmpImages = [[NSMutableArray alloc] init];
for (int i = 0; i < 10; i++) {
    double scale = i * 4.0;
    UIImage* tmpImg = [self applyCIPixelateFilter:self.faceImage withScale:scale];
    printf("tmpImg    width: %f height: %f\n",  tmpImg.size.width, tmpImg.size.height);
    [tmpImages addObject:tmpImg];
}

tmpImg    width: 480.000000 height: 640.000000
tmpImg    width: 484.000000 height: 644.000000
tmpImg    width: 488.000000 height: 648.000000
tmpImg    width: 492.000000 height: 652.000000
tmpImg    width: 496.000000 height: 656.000000
tmpImg    width: 500.000000 height: 660.000000
tmpImg    width: 504.000000 height: 664.000000
tmpImg    width: 508.000000 height: 668.000000
tmpImg    width: 512.000000 height: 672.000000
tmpImg    width: 516.000000 height: 676.000000

- (UIImage *)applyCIPixelateFilter:(UIImage*)fromImage withScale:(double)scale
{
    /*
     Makes an image blocky by mapping the image to colored squares whose color is defined by the replaced pixels.
     Parameters

     inputImage: A CIImage object whose display name is Image.

     inputCenter: A CIVector object whose attribute type is CIAttributeTypePosition and whose display name is Center.
     Default value: [150 150]

     inputScale: An NSNumber object whose attribute type is CIAttributeTypeDistance and whose display name is Scale.
     Default value: 8.00
     */
    CIContext *context = [CIContext contextWithOptions:nil];
    CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
    CIImage *inputImage = [[CIImage alloc] initWithImage:fromImage];
    CIVector *vector = [CIVector vectorWithX:fromImage.size.width / 2.0f Y:fromImage.size.height / 2.0f];
    [filter setDefaults];
    [filter setValue:vector forKey:@"inputCenter"];
    [filter setValue:[NSNumber numberWithDouble:scale] forKey:@"inputScale"];
    [filter setValue:inputImage forKey:@"inputImage"];

    CGImageRef cgiimage = [context createCGImage:filter.outputImage fromRect:filter.outputImage.extent];
    UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:1.0f orientation:fromImage.imageOrientation];

    CGImageRelease(cgiimage);

    return newImage;
}
ios core-image cifilter
3 Answers
0 votes

Sometimes inputScale doesn't evenly divide your image, which is when I found I got output images of different sizes.

For example, with inputScale = 0 or 1, the output image size is exact.

I found that the way the extra space around the image is centered varies "opaquely" with inputCenter. That is, I didn't take the time to figure out exactly how (I was setting it via tap location).

My solution to the different sizes was to re-render the image into the extent of the input image size; I used a black background for my use on Apple Watch.

CIFilter *pixelateFilter = [CIFilter filterWithName:@"CIPixellate"];
[pixelateFilter setDefaults];
[pixelateFilter setValue:[CIImage imageWithCGImage:editImage.CGImage] forKey:kCIInputImageKey];
[pixelateFilter setValue:@(amount) forKey:@"inputScale"];
[pixelateFilter setValue:vector forKey:@"inputCenter"];
CIImage *result = [pixelateFilter valueForKey:kCIOutputImageKey];
CIContext *context = [CIContext contextWithOptions:nil];
CGRect extent = [result extent];
CGImageRef cgImage = [context createCGImage:result fromRect:extent];

// Re-render into a context the size of the input image; flip the coordinate
// system because CGContextDrawImage draws with a flipped origin.
UIGraphicsBeginImageContextWithOptions(editImage.size, YES, [editImage scale]);
CGContextRef ref = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ref, 0, editImage.size.height);
CGContextScaleCTM(ref, 1.0, -1.0);

// Fill the background, then draw the (larger) filtered image into the input-sized rect.
CGContextSetFillColorWithColor(ref, backgroundFillColor.CGColor);
CGRect drawRect = (CGRect){{0, 0}, editImage.size};
CGContextFillRect(ref, drawRect);
CGContextDrawImage(ref, drawRect, cgImage);
UIImage* filledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
returnImage = filledImage;

CGImageRelease(cgImage);

If you are going to stick with your implementation, I would suggest at least changing the way you extract your UIImage so that it uses the scale of the original image, rather than confusing it with the CIFilter scale:

UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:fromImage.scale orientation:fromImage.imageOrientation];

0 votes

The problem is just with the scale.

Simply do:

let result = UIImage(cgImage: cgimgresult!, scale: (originalImageView.image?.scale)!, orientation: (originalImageView.image?.imageOrientation)!)
originalImageView.image = result

0 votes

As mentioned in the Apple Core Image Programming Guide and this post,

By default, a blur filter also softens the edges of an image by blurring image pixels together with the transparent pixels that (in the filter's image processing space) surround the image.

This is why your output image size varies with your scale.
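If you just need the output to match the input size, a minimal sketch (not from this answer; it reuses fromImage and scale from the question's code) is to crop the filter output back to the input's extent before rendering:

CIImage *inputImage = [[CIImage alloc] initWithImage:fromImage];
CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
[filter setDefaults];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:@(scale) forKey:@"inputScale"];

// Cropping discards the edge padding the filter adds, so every output
// has exactly the input's dimensions.
CIImage *cropped = [filter.outputImage imageByCroppingToRect:inputImage.extent];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:cropped fromRect:cropped.extent];
UIImage *fixedSizeImage = [UIImage imageWithCGImage:cgImage scale:fromImage.scale orientation:fromImage.imageOrientation];
CGImageRelease(cgImage);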

As for inputCenter, as Joshua Sullivan mentioned in the comments of this post on CIFilter, "it adjusts the offset of the pixel grid from the source image". So if your inputCenter coordinates are not a multiple of your CIPixellate inputScale, it will slightly offset the pixel squares (mostly visible with large values of inputScale).
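Based on that description, one way to keep the grid aligned with the image edges (a sketch of my own; the snapping is an illustration, not from the answer) is to round the center coordinates to a multiple of the scale:

double scale = 32.0; // a large scale makes any grid offset easy to see
// Snap the center to the pixellation grid so the squares line up with the image origin.
CGFloat centerX = round((fromImage.size.width / 2.0) / scale) * scale;
CGFloat centerY = round((fromImage.size.height / 2.0) / scale) * scale;
CIVector *center = [CIVector vectorWithX:centerX Y:centerY];
[filter setValue:center forKey:kCIInputCenterKey];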
