UIImage+CVPixelBuffer: Converting a UIImage to a Core Video Pixel Buffer

With this UIImage extension you will be able to convert a UIImage to a CVPixelBuffer. The Core Video pixel buffer is an image buffer that holds pixels in main memory.

In some contexts you have to work with the data types of lower-level frameworks. Regarding image and video data, the Core Video and Core Image frameworks serve to process digital image or video data. If you need to manipulate or work on individual video frames, the pipeline-based API of Core Video uses a CVPixelBuffer to hold pixel data in main memory for manipulation.

Use Case example:

The Core Video pixel buffer is used by the Vision framework, for example, to execute algorithms such as face detection, barcode recognition, or feature tracking on input images or videos. It can also be used with Core ML models for image classification or object detection.

Input image data for Vision may be a CGImage, the Core Graphics format that can be generated from a UIImage through the integrated instance property cgImage, or a CIImage, the Core Image format that can be generated from a UIImage through the integrated instance property ciImage.

Then, Vision also works with CVPixelBuffer, the Core Video format for data from live video feeds or movie files, which can also be generated from a UIImage with the extension provided below.
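As a sketch of how these three image types reach Vision, each has a dedicated VNImageRequestHandler initializer. The function and image names below are illustrative, and the last option assumes the convertToBuffer() extension from this article:

```swift
import UIKit
import Vision

// Illustrative sketch: runs a face detection request on a UIImage
// using each of the three input types Vision accepts.
func runFaceDetection(on image: UIImage) {
    let request = VNDetectFaceRectanglesRequest()

    // Option 1: a CGImage obtained from the UIImage
    if let cgImage = image.cgImage {
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }

    // Option 2: a CIImage obtained from the UIImage
    if let ciImage = image.ciImage {
        let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
        try? handler.perform([request])
    }

    // Option 3: a CVPixelBuffer created with the extension below
    if let pixelBuffer = image.convertToBuffer() {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }
}
```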

import Foundation
import UIKit

extension UIImage {
    func convertToBuffer() -> CVPixelBuffer? {
        // Attributes needed to create the CVPixelBuffer
        let attributes = [
            kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
            kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue
        ] as CFDictionary

        // Create the pixel buffer
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(
            kCFAllocatorDefault,
            Int(self.size.width),
            Int(self.size.height),
            kCVPixelFormatType_32ARGB,
            attributes,
            &pixelBuffer)

        guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
            return nil
        }

        // Lock the buffer while its memory is accessed directly
        CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(buffer)

        // Create a Core Graphics context that draws into the buffer's memory
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        guard let context = CGContext(
            data: pixelData,
            width: Int(self.size.width),
            height: Int(self.size.height),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
            space: rgbColorSpace,
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
            CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
            return nil
        }

        // Flip the coordinate system, since Core Graphics has its origin
        // at the bottom-left corner
        context.translateBy(x: 0, y: self.size.height)
        context.scaleBy(x: 1.0, y: -1.0)

        // Draw the image into the context
        UIGraphicsPushContext(context)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        UIGraphicsPopContext()

        CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        return buffer
    }
}

Through this UIImage+CVPixelBuffer extension, any UIImage can be conveniently converted into a Core Video pixel buffer. For example, it can then be used with the Vision framework and a custom Core ML machine learning model to classify the image or detect objects within the image.
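As a usage sketch of that classification scenario, the buffer produced by the extension can feed a Vision request backed by a Core ML model. The model name YourClassifier is hypothetical and stands in for any image classification model added to an Xcode project:

```swift
import UIKit
import Vision
import CoreML

// Hypothetical sketch: YourClassifier stands in for a Core ML
// image classification model added to the Xcode project.
func classify(_ image: UIImage) {
    guard let pixelBuffer = image.convertToBuffer(),
          let coreMLModel = try? YourClassifier(configuration: MLModelConfiguration()).model,
          let model = try? VNCoreMLModel(for: coreMLModel) else { return }

    // Build a Vision request that runs the Core ML model
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    // Perform the request on the converted pixel buffer
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```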

Commonly, Core ML models also have requirements for specific resolutions when providing input image data. Therefore, this extension might be used in conjunction with UIImage+Resize to conveniently resize a UIImage to any size, using a CGSize structure with width and height values as a parameter. For an extension to UIImage that combines both the resizing and the conversion to CVPixelBuffer, also consider the UIImage+Resize+CVPixelBuffer extension.
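As a minimal sketch of that resizing step (the 299×299 target size is just an example of what some classification models expect; see the UIImage+Resize reference for the full extension), a UIImage can be redrawn with UIGraphicsImageRenderer before conversion:

```swift
import UIKit

extension UIImage {
    // Minimal resizing sketch: redraws the image at the given size.
    func resized(to size: CGSize) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: .zero, size: size))
        }
    }
}

// Usage: resize first, then convert, for a model expecting 299×299 input.
// let buffer = image.resized(to: CGSize(width: 299, height: 299)).convertToBuffer()
```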

This reference is part of a series of articles derived from the presentation Creating Machine Learning Models with Create ML, presented as a one-time event at the Swift Heroes 2021 Digital Conference on April 16th, 2021.

Where to go next?

If you are interested in knowing more about use cases for the CVPixelBuffer, you can check our other tutorials on the topic.
