
I have two images of different sizes that I want to composite together using a CIFilter. ImageA size is 2400x1800. ImageB size is 1200x900.

Because the two images are different sizes, when they are composited together ImageB ends up positioned at the bottom left, at a quarter the size of ImageA (Core Image places both images in the same coordinate space, with the origin at the bottom left).

This makes sense, but it is not what I intended: I would like ImageB to be resized to the same full size as ImageA.

I use an extension to resize ImageB, but it is very slow and doesn't seem to have any effect on the composited result.


Questions

How do I efficiently resize ImageB to the same full size as ImageA, keeping both images center-aligned and maintaining the aspect ratio?

Does CIFilter have an inbuilt option to resize images before applying a filter?

Note: As a side thought, I considered an unconventional approach: using a UIImageView with the size of ImageA, loading ImageB into it, and taking a snapshot. That would seem to guarantee the correct size and aspect ratio with good performance.
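A rough, untested sketch of that idea (the hard-coded 2400x1800 stands in for ImageA's size, and the ImageB name matches the code below):

let containerSize = CGSize(width: 2400, height: 1800) // ImageA's size
let imageView = UIImageView(frame: CGRect(origin: .zero, size: containerSize))
imageView.contentMode = .scaleAspectFit // keeps ImageB's aspect ratio and centers it
imageView.image = ImageB

// layer.render(in:) also works for views that are not in a window
let snapshot = UIGraphicsImageRenderer(size: containerSize).image { context in
    imageView.layer.render(in: context.cgContext)
}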


Code

// Resize image extension
extension UIImage {
    func resized(to size: CGSize) -> UIImage {
        return UIGraphicsImageRenderer(size: size).image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}

// Resize image usage
let ImageA = UIImage()
var ImageB = UIImage()
let resizedImage = ImageB.resized(to: CGSize(width: ImageA.size.width, height: ImageA.size.height))
ImageB = resizedImage
    
// Composite filter (inputImage is the foreground, inputBackgroundImage the background)
let addCIFilter = CIFilter(name: "CIColorDodgeBlendMode")!
addCIFilter.setValue(CIImage(image: ImageA), forKey: kCIInputImageKey)
addCIFilter.setValue(CIImage(image: ImageB), forKey: kCIInputBackgroundImageKey)
let outputImage = addCIFilter.outputImage

2 Answers


Does CIFilter have an inbuilt option to resize images before applying a filter?

Yes, Core Image has built-in support for applying arbitrary transformations to an image at any point in the processing pipeline.

This is what a resized helper for a CIImage could look like:

extension CIImage {
    func resized(to targetSize: CGSize) -> CIImage {
        // Nothing to do if the image already has the target size; images with
        // an infinite extent (e.g. from generators) can't be scaled this way.
        guard self.extent.size != targetSize, !self.extent.isInfinite else { return self }

        // Scale to the target size. Note that this stretches the image
        // if the aspect ratios differ.
        let transform = CGAffineTransform(scaleX: targetSize.width  / self.extent.width,
                                          y:      targetSize.height / self.extent.height)
        return self.transformed(by: transform)
    }
}

I highly recommend using Core Image's own mechanisms for transforming images, as this yields better performance: the UIImage API has to create a new resized bitmap in memory, whereas Core Image keeps operating on the original image and simply transforms the pixel sampling.
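To also cover the center-aligned, aspect-preserving part of the question, this is a sketch of what the full pipeline could look like with this approach; the aspect-fit math, the force unwraps, and the CIContext setup are my additions, not code from the question:

import CoreImage
import UIKit

let ciImageA = CIImage(image: ImageA)!
let ciImageB = CIImage(image: ImageB)!

// Aspect-fit scale: the smaller ratio, so ImageB fits entirely inside ImageA
let scale = min(ciImageA.extent.width  / ciImageB.extent.width,
                ciImageA.extent.height / ciImageB.extent.height)

// Translation that centers the scaled ImageB over ImageA
let dx = (ciImageA.extent.width  - ciImageB.extent.width  * scale) / 2
let dy = (ciImageA.extent.height - ciImageB.extent.height * scale) / 2

let fittedB = ciImageB
    .transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    .transformed(by: CGAffineTransform(translationX: dx, y: dy))

let blend = CIFilter(name: "CIColorDodgeBlendMode")!
blend.setValue(ciImageA, forKey: kCIInputImageKey)
blend.setValue(fittedB, forKey: kCIInputBackgroundImageKey)

// Nothing is rendered until this point; the transforms above only
// change how pixels are sampled, which is why this stays fast.
let context = CIContext() // expensive to create, reuse it in real code
if let output = blend.outputImage,
   let cgImage = context.createCGImage(output, from: ciImageA.extent) {
    let composited = UIImage(cgImage: cgImage)
    // ... use composited ...
}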



One way is to use a Lanczos scale transform filter.

Example

let ImageA = UIImage()
let ImageB = UIImage()

// Scale factor that brings ImageB up to ImageA's size (2400 / 1200 = 2)
let imageScale = ImageA.size.width / ImageB.size.width

let scaleFilter = CIFilter(name: "CILanczosScaleTransform")!
scaleFilter.setValue(CIImage(image: ImageB), forKey: kCIInputImageKey)
scaleFilter.setValue(imageScale, forKey: kCIInputScaleKey)
scaleFilter.setValue(1.0, forKey: kCIInputAspectRatioKey)
let outputImage = scaleFilter.outputImage
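On iOS 13 and later, the same filter is also available through the type-safe CIFilterBuiltins API, which avoids the stringly-typed keys. A sketch using the same scale value as above:

import CoreImage.CIFilterBuiltins

let lanczos = CIFilter.lanczosScaleTransform()
lanczos.inputImage = CIImage(image: ImageB)
lanczos.scale = Float(imageScale)
lanczos.aspectRatio = 1.0
let scaledImageB = lanczos.outputImage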

Another way is to directly resize or scale the input image.

Example

addCIFilter.setValue(CIImage(image: ImageB.resized(to: CGSize(width: 2400, height: 1800))), forKey: kCIInputBackgroundImageKey)

Reference

.resized(to:)

This resizes an image to exactly the given size; the aspect ratio is not preserved if it differs from the original.

let myImage1 = UIImage()
myImage1.resized(to: CGSize(width: 100, height: 100))

.scaled(to:)

This scales an image to fit within a given size while maintaining its aspect ratio.

let myImage1 = UIImage()
myImage1.scaled(to: CGSize(width: 100, height: 100))
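Note that scaled(to:) is not a UIKit API; a minimal sketch of an aspect-fit implementation matching the description above could look like this:

extension UIImage {
    // Aspect-fit: draw the image centered inside the target size
    func scaled(to targetSize: CGSize) -> UIImage {
        let scale = min(targetSize.width / size.width,
                        targetSize.height / size.height)
        let scaledSize = CGSize(width: size.width * scale, height: size.height * scale)
        let origin = CGPoint(x: (targetSize.width  - scaledSize.width)  / 2,
                             y: (targetSize.height - scaledSize.height) / 2)
        return UIGraphicsImageRenderer(size: targetSize).image { _ in
            draw(in: CGRect(origin: origin, size: scaledSize))
        }
    }
}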

Thanks to this Objective-C CGAffineTransform answer for the hint: https://stackoverflow.com/a/19778622

