
Custom filters with Core Image Kernel Language

Nicholas Ollis

8 min read

Nov 4, 2018


If you’ve ever played with Core Image’s filter API, you might have wondered, “What would it take to make my own filter and launch my own Snapchat?” You can build custom effects by chaining existing Core Image filters together, but that approach is expensive compared to writing your own kernel. When I first went on this journey, I found that much of the API documentation was Objective-C-only and aimed at the desktop; moving everything to iOS and Swift proved to be quite the undertaking.
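For context, chaining built-in filters looks something like the sketch below. This is a minimal example, not the approach this post takes; it uses the built-in CISepiaTone and CIGaussianBlur filters, and the function name and input are placeholders:

import CoreImage

// A minimal sketch of chaining built-in filters: sepia first, then blur.
func sepiaBlur(_ inputImage: CIImage) -> CIImage? {
  let sepia = CIFilter(name: "CISepiaTone", withInputParameters: [
    kCIInputImageKey: inputImage,
    kCIInputIntensityKey: 0.8
  ])
  guard let sepiaOutput = sepia?.outputImage else { return nil }

  // Feed the first filter's output into the second.
  let blur = CIFilter(name: "CIGaussianBlur", withInputParameters: [
    kCIInputImageKey: sepiaOutput,
    kCIInputRadiusKey: 4.0
  ])
  return blur?.outputImage
}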

Creating our Custom Filter


Let’s jump into the deep end and take a look at the Core Image Kernel file. For this post, we are creating a CIColorKernel that removes haze from an image, in a file called HazeRemove.cikernel.

kernel vec4 HazeRemovalKernel(sampler src, __color color, float distance, float slope) {
  vec4 t;
  float d;

  // Haze amount grows linearly with the pixel's vertical position.
  d = destCoord().y * slope + distance;
  // Fetch the source pixel and strip its premultiplied alpha.
  t = unpremultiply(sample(src, samplerCoord(src)));
  // Subtract the haze contribution, then rescale the remaining color.
  t = (t - d*color) / (1.0-d);

  return premultiply(t);
}

So what’s going on here? We’ll break it down line by line. First, it’s important to note that this is a per-pixel operation: the kernel runs once for each pixel and returns that pixel’s modified value. As for the syntax, the Core Image Kernel Language sits on top of the OpenGL Shading Language, so its rules differ from those of Swift or Objective-C.

kernel vec4 HazeRemovalKernel(
On the first line we declare that this is a kernel routine, so the system knows to hand it off to the CIKernel class for execution. We specify a return type of vec4 because Core Image requires the kernel to return the output pixel in this form.

sampler src, __color color, float distance, float slope)
In our HazeRemovalKernel function we pass in a CISampler object that we treat as the pixel source. __color is a color matched to the CIContext’s color space, which helps keep colors looking as expected even when the user has True Tone or Night Shift turned on. Finally, we pass in slope and distance as floats; these are the tunable parameters of a typical haze-removal algorithm.

vec4 t;
float d;

Next, we define the two variables we’ll use and modify in our routine. The first is our modified pixel, which we make a vec4. In OpenGL, a vec4 is a vector type holding four single-precision floating-point components; in our case it holds RGBA values. The second is a float that holds the calculated haze amount for our removal algorithm.

d = destCoord().y * slope + distance;
To figure out how much to correct, we use a simple linear formula: multiply the slope by the pixel’s vertical position, then add the distance. destCoord() returns the position of the pixel in the current working space, so its y component gives us a slope that varies over the full height of the image.
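To make that concrete, here is a worked example with illustrative values: with slope = 0.002 and distance = 0.2, a pixel at y = 100 gets d = 100 × 0.002 + 0.2 = 0.4, while a pixel at y = 300 gets d = 0.8. Pixels higher up the image receive a stronger correction, which matches how haze typically builds toward the horizon.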

t = unpremultiply(sample(src, samplerCoord(src)));
The next thing we need to do is account for any transparency applied to the image. Before we do the color correction, we want to remove the alpha to get the pixel’s pure color; for this we use a function called unpremultiply. It takes a vec4 color, which we get from sample, a function that returns the color of a given pixel as a vec4. To help sample do its job, we pass in our src variable of type sampler, along with a vec2 containing that pixel’s coordinate. We get the vec2 by calling samplerCoord(src), which uses the sampler variable to find the coordinates for us.
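If premultiplied alpha is new to you, the arithmetic is simple: a premultiplied pixel stores (r·a, g·a, b·a, a). So a 50%-transparent pixel stored as (0.4, 0.4, 0.45, 0.5) unpremultiplies to its pure color (0.8, 0.8, 0.9, 0.5). Working on the pure color keeps our correction from being skewed by transparency.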

With all of that done, we now have two variables set: d is our haze amount, and t is the pure color of the pixel we are trying to change.

t = (t - d*color) / (1.0-d);
Now let’s remove that haze! The correction is a pretty simple calculation on our end: t and color are both vec4 types, so they subtract component-wise. We subtract the haze contribution d*color, then rescale what remains by dividing by (1.0 - d).
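For example, with a white haze color of (1, 1, 1, 1) and d = 0.4, a washed-out pixel t = (0.8, 0.8, 0.9, 1.0) becomes ((0.8 − 0.4) / 0.6, (0.8 − 0.4) / 0.6, (0.9 − 0.4) / 0.6, (1.0 − 0.4) / 0.6) ≈ (0.67, 0.67, 0.83, 1.0): the colors deepen and contrast returns.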

return premultiply(t);
Once we have our new haze-free pixel, we need to reapply the transparency if needed and return the result. We do this with premultiply, returning the vec4 it generates.

Building our Core Image Filter

Great, we have our CIKernel file, but to use it we’ll need to wrap it in a CIFilter so our app can call it. A word of warning: we are playing with a C-level API here, so even though we are writing Swift, some knowledge of C and Objective-C interoperability is needed.

import Foundation
import CoreImage

class HazeRemoveFilter: CIFilter {
  @objc dynamic var inputImage: CIImage?
  @objc dynamic var inputColor: CIColor = CIColor.white
  @objc dynamic var inputDistance: NSNumber = 0.2
  @objc dynamic var inputSlope: NSNumber = 0

First, we set up the filter by importing the essentials, namely CoreImage, and creating a new class that inherits from CIFilter. We then define the variables we’ll pass into our filter, giving all but inputImage a default value.

override var attributes: [String: Any] {
  return [
    kCIAttributeFilterDisplayName: "Remove Haze",

    "inputImage": [kCIAttributeIdentity: 0,
                   kCIAttributeClass: "CIImage",
                   kCIAttributeDisplayName: "Image",
                   kCIAttributeType: kCIAttributeTypeImage],

    "inputDistance": [kCIAttributeIdentity: 0,
                      kCIAttributeClass: "NSNumber",
                      kCIAttributeDisplayName: "Distance Factor",
                      kCIAttributeDefault: 0.2,
                      kCIAttributeMin: 0,
                      kCIAttributeMax: 1,
                      kCIAttributeSliderMin: 0,
                      kCIAttributeSliderMax: 0.7,
                      kCIAttributeType: kCIAttributeTypeScalar],

    "inputSlope": [kCIAttributeIdentity: 0,
                   kCIAttributeClass: "NSNumber",
                   kCIAttributeDisplayName: "Slope Factor",
                   kCIAttributeDefault: 0.2,
                   kCIAttributeSliderMin: -0.01,
                   kCIAttributeSliderMax: 0.01,
                   kCIAttributeType: kCIAttributeTypeScalar],

    kCIInputColorKey: [
      kCIAttributeDefault: CIColor.white
    ]
  ]
}

The next part is a bit tricky. Because we are defining some custom inputs (distance and slope), we need to override CIFilter’s attributes property so it knows about them. Since we are overriding attributes, we also have to redefine some that CIFilter already had: the display name, image, and color. I won’t go into all the details; the main thing to know is that we are setting up a map so the C-level API knows how to interpret the Objective-C objects, along with any minimums, maximums, defaults, and other constraints we want it to adhere to. There are many available; check out Apple’s documentation on the Filter Attribute Keys in CIFilter.

private lazy var hazeRemovalKernel: CIColorKernel? = {
  guard let path = Bundle.main.path(forResource: "HazeRemove", ofType: "cikernel"), 
    let code = try? String(contentsOfFile: path) else { fatalError("Failed to load HazeRemove.cikernel from bundle") }
  let kernel = CIColorKernel(source: code)
  return kernel
}()

Now let’s load in that kernel! However, let us be lazy about it, so the file isn’t loaded until the first time we actually need it. Since we wrote a CI Color Kernel, we make sure our property is of the matching type, CIColorKernel. To create one, you pass its source code as a string to CIColorKernel, so we load our file from the bundle and check that it came in as a string. In a production application we might not want a fatal error, but for our purposes here it’ll work just fine.

override var outputImage: CIImage? {
  get {
    if let inputImage = self.inputImage {
      return hazeRemovalKernel?.apply(extent: inputImage.extent, arguments: [
        inputImage as Any,
        inputColor,
        inputDistance,
        inputSlope
      ])
    } else {
      return nil
    }
  }
}

Finally, down to business. To actually filter the image, we override a computed property called outputImage. Here we take our hazeRemovalKernel and call the CIColorKernel method apply with our input image’s extent and our arguments, and we get back our new filtered output.
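At this point the filter already works when instantiated directly. A minimal sketch, where ciImage is a placeholder for any CIImage you have on hand:

// Direct use, before any registration with Core Image:
let filter = HazeRemoveFilter()
filter.inputImage = ciImage       // any CIImage you already have
filter.inputDistance = 0.3        // optional: tweak the defaults
let result = filter.outputImage   // the haze-corrected CIImage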

Registering our Filter

To make our new shiny filter available to Core Image clients, we need to create a vendor that implements the CIFilterConstructor protocol.

import CoreImage

class CustomFiltersVendor: NSObject, CIFilterConstructor {

We’ll set up our vendor with the awesome name of CustomFiltersVendor. As CIFilterConstructor is an Objective-C protocol, we’ll have to inherit from NSObject. New to Swift interoperability? We teach it in our Advanced iOS Bootcamp!

public static let HazeRemoveFilterName = "HazeRemoveFilter"

I’m not a big fan of stringly typed names, so let’s define a public static constant holding our filter name so we don’t misspell anything by accident.

static func registerFilters() {
  let classAttributes = [kCIAttributeFilterCategories: ["CustomFilters"]]
  HazeRemoveFilter.registerName(HazeRemoveFilterName, constructor: CustomFiltersVendor(), classAttributes: classAttributes)
}

Next, we register our filter in a pretty straightforward way. If you have made more filters, throw them in here by calling registerName for each filter class. This is the call that tells Core Image which vendor is responsible for a filter when it is requested by name.
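Registration needs to happen before the first time the filter is requested, so one reasonable place to call it is at app launch. A minimal sketch, assuming a standard UIKit app delegate:

import UIKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
  var window: UIWindow?

  func application(_ application: UIApplication,
                   didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    // Register our custom filters once, before anyone asks Core Image for them.
    CustomFiltersVendor.registerFilters()
    return true
  }
}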

func filter(withName name: String) -> CIFilter? {
  switch name {
  case CustomFiltersVendor.HazeRemoveFilterName:
    return HazeRemoveFilter()
  default:
    return nil
  }
}

Finally, when Core Image is asked for one of our vendor’s filters, it calls the vendor’s filter(withName:) method with the filter name to figure out which filter object to return. Here we’re assuming you are going to love writing filters so much that you’ll want to start up your own Snapchat, so we set up a switch that makes it easy to keep adding more.
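With registration in place, our filter can be requested through the standard Core Image lookup, just like a built-in one. A quick sketch, with ciImage again standing in for any CIImage:

CustomFiltersVendor.registerFilters()

// Core Image now resolves our name through the vendor we registered.
if let filter = CIFilter(name: CustomFiltersVendor.HazeRemoveFilterName) {
  filter.setValue(ciImage, forKey: kCIInputImageKey)
  let output = filter.outputImage
}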

Using our Custom Filter

To use the custom filter, I like to extend CIImage. That way we already have access to the image being filtered, and applying our filter becomes as simple as calling a method on any CIImage.

enum Filter {
  case none
  case gloom(intensity: Double, radius: Double)
  case sepia(intensity: Double)
  case blur(intensity: Double)
  case removeHaze
}

First, I like to define an enumeration of the filters we plan on using. Its cases can carry parameters like the slope and distance, but in my implementation I was happy with the defaults we defined earlier.

func filtered(_ filter: Filter) throws -> CIImage {
  let parameters: [String: AnyObject]
  let filterName: String
  let shouldCrop: Bool
  // Configure the CIFilter() inputs based on the chosen filter
  switch filter {
  case .none:
    return self
  case .gloom(let intensity, let radius):
    parameters = [kCIInputImageKey: self,
                  kCIInputIntensityKey: intensity as NSNumber,
                  kCIInputRadiusKey: radius as NSNumber]
    filterName = "CIGloom"
    shouldCrop = true
  case .sepia(let intensity):
    parameters = [kCIInputImageKey: self,
                  kCIInputIntensityKey: intensity as NSNumber]
    filterName = "CISepiaTone"
    shouldCrop = false
  case .blur(let intensity):
    // CIGaussianBlur takes a radius; we treat "intensity" as that radius.
    parameters = [kCIInputImageKey: self,
                  kCIInputRadiusKey: intensity as NSNumber]
    filterName = "CIGaussianBlur"
    shouldCrop = true
  case .removeHaze:
    parameters = [kCIInputImageKey: self]
    filterName = CustomFiltersVendor.HazeRemoveFilterName
    shouldCrop = false
  }

Here we set up a filtered method that takes the enum type we defined and finds the corresponding filter via a switch statement. When we reach our own case, we set up the parameters we want to pass in; because we are happy with the defaults, we only pass self in under the input image key. We also make sure to set filterName to the constant we defined in CustomFiltersVendor. (The other cases map onto built-in Core Image filters; the parameter mappings shown for them are illustrative.)

// Actually create and apply the filter
guard let filter = CIFilter(name: filterName, withInputParameters: parameters),
let output = filter.outputImage else {
  throw ImageProcessor.Error.filterConfiguration(name: filterName, params: parameters)
}

// Crop back to the extent if necessary
if shouldCrop {
  let croppedImage = output.cropped(to: extent)
  return croppedImage
} else {
  return output
}

Once we have our filterName and parameters set, we hand them to CIFilter, and it takes care of the rest for us. Assuming nothing went wrong and we didn’t throw an error, we get the output by reading outputImage on our new filter object. Some filters draw outside the original bounds, which is why we defined a shouldCrop flag: for those filters we crop the result back to the image’s original extent. Extending the image is not something our haze filter does, but look into CIGaussianBlur for an example of one that would.

After all of that, the filter is complete! You can call it on any CIImage as simply as running let newImage = try image.filtered(.removeHaze).
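To tie everything together, here is a minimal end-to-end sketch going from a UIImage to a filtered UIImage. The image name "hike" is a hypothetical placeholder, and error handling is kept deliberately simple:

import UIKit
import CoreImage

func hazeFreeImage() -> UIImage? {
  // Register the custom filters before Core Image needs them.
  CustomFiltersVendor.registerFilters()

  guard let uiImage = UIImage(named: "hike"),          // hypothetical asset name
        let ciImage = CIImage(image: uiImage),
        let output = try? ciImage.filtered(.removeHaze) else { return nil }

  // Render the filtered CIImage back into a displayable UIImage.
  let context = CIContext()
  guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
  return UIImage(cgImage: cgImage)
}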

I hope you have enjoyed going down this rabbit hole of custom filters implemented in Swift for iOS.

If you want to learn more about writing shaders and making more complex filters, check out OpenGL Shading Language, published by Pearson Education. Apple’s documentation also covers other kinds of kernels you can create beyond CIColorKernel.

Steve Sparks

Reviewer, Big Nerd Ranch

Steve Sparks has been a Nerd since 2011 and an electronics geek since age five. His primary focus is on iOS, watchOS, and tvOS development. He plays guitar and makes strange and terrifying contraptions. He lives in Atlanta with his family.
