Monday, 8 July 2019

Cross-platform imaging with Xamarin.Forms and SkiaSharp

Back in the Xamarin.Forms 1.x days, I attempted to show the power of Xamarin.Forms development by writing a cross-platform imaging app. This was a mistake. While I produced a working cross-platform app, the majority of the code was platform code, joined together through DependencyService calls from a shared UI. If anything, it showed that it wasn’t easily possible to create a cross-platform imaging app with shared code. So it never saw the light of day.

I’d been thinking about this project recently, and while I knew that it’s possible to write cross-platform imaging apps with Xamarin.Forms and SkiaSharp, I wasn’t sure if it was advisable from an execution speed point of view. In particular, I was worried about the execution speed of imaging algorithms on Android, especially when considering the resolution of photos taken with recent mobile devices. So I decided to write a proof of concept app to find out whether the combination of Xamarin.Forms and SkiaSharp was a viable platform for writing cross-platform imaging apps.

App requirements and assumptions

When I talk about writing a cross-platform imaging app, I’m not particularly interested in calling platform APIs to resize images, crop images, and so on. I’m interested in accessing pixel data quickly, and being able to manipulate that data.

The core platforms I wanted to support were iOS and Android. UWP support would be a bonus, but I’d be happy to drop UWP support at the first sign of any issues.

The core functionality of the app is to load/display/save images, and manipulate the pixel values of the images as quickly as possible, with as much of this happening through shared code as possible. I wanted to support the common image file formats, but was only interested in supporting 32-bit images. The consequence of this is that when loading a colour image and converting it to greyscale, it would be saved back out as a 32-bit image, rather than an 8-bit image.

Note that the app is just a proof of concept app. Therefore, I wasn’t bothered about creating a slick UI. I just needed a functional UI. Similarly, I didn’t get hung up on architectural decisions. At one point I was going to implement each imaging algorithm using a plugin architecture, so the app would detect the algorithms and let the user choose them. But that was missing the point. It’s only a proof of concept. So it’s code-behind all the way, and the algorithms are hard-coded into the app.

App overview

The app was created in Xamarin.Forms and SkiaSharp, and the vast majority of the code is shared code. Platform code was required for choosing images on each platform, but that was about it. Image load/display/save/manipulation is handled with SkiaSharp shared code. Code for the sample app can be found on GitHub.

As part of our SkiaSharp docs, we’ve covered how to load and display an image using SkiaSharp. We’ve also covered how to save images using SkiaSharp. Our docs also explain how to write code to pick photos from the device’s photo library. I won’t regurgitate these topics here. Instead, just know that the app uses the techniques covered in these docs. The only difference is that while I started by using the SKBitmap class, I soon moved to using the SKImage class, after discovering that Google have plans to deprecate SKBitmap. Here’s a screenshot of the app, which shows an image of my magilyzer, which I’ll use as a test image in this blog post:

We’ve also got docs on accessing pixel data in SkiaSharp. SkiaSharp offers a number of different approaches for doing this, and understanding them is key to creating a performant app. In particular, take a look at the table in the Comparing the techniques section of the doc. This table shows execution times in milliseconds for these different approaches. The TL;DR is that the fastest approach is to use the GetPixels method to return a pointer to the pixel data, dereference the pointer whenever you want to read/write a pixel value, and use pointer arithmetic to move the pointer to process other pixels.

Using this approach requires knowledge of how pixel data is stored in memory on different platforms. On iOS and Android, each pixel is stored as four bytes in RGBA format, which is represented in SkiaSharp with the SKColorType.Rgba8888 type. On UWP, each pixel is stored as four bytes in BGRA format, which is represented in SkiaSharp with the SKColorType.Bgra8888 type. Initially, I coded my imaging algorithms for all three platforms, but I got sick of having to handle UWP’s special case, so at that point it was goodbye UWP!
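For what it’s worth, had I persevered with UWP, one way to keep the algorithms in shared code would be to derive per-channel byte offsets from the colour type, rather than assuming a fixed byte order. Here’s a minimal sketch of the idea (the ChannelOffsets helper is my own invention, not code from the sample app; in a real implementation the flag would come from comparing the pixmap’s ColorType property against SKColorType.Bgra8888):

```csharp
// Sketch: compute the byte offsets of the R, G, B and A channels within
// each 4-byte pixel, given whether the pixmap is BGRA (UWP) or RGBA
// (iOS/Android). A bool parameter keeps the sketch self-contained; in
// practice you'd test SKPixmap.ColorType == SKColorType.Bgra8888.
public static class ChannelOffsets
{
    public static (int R, int G, int B, int A) For(bool isBgra8888) =>
        isBgra8888
            ? (R: 2, G: 1, B: 0, A: 3)  // bytes stored as B, G, R, A
            : (R: 0, G: 1, B: 2, A: 3); // bytes stored as R, G, B, A
}
```

An algorithm can then read `ptr[offsets.R]` and friends, rather than relying on increment order, at the cost of a little extra indexing per pixel.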

Basic algorithms

As I mentioned earlier, the focus of the app isn’t on calling platform APIs to perform imaging operations. It’s on accessing pixel data and manipulating that data. If you want to know how to crop images with SkiaSharp, see Cropping SkiaSharp bitmaps. Similarly, SkiaSharp has functionality for resizing images. With all that said, the first imaging algorithm I always implement when getting to grips with a new platform is converting a colour image to greyscale, as it’s a simple algorithm. The following code example shows how I accomplished this in SkiaSharp:

public static unsafe SKPixmap ToGreyscale(this SKImage image)
{
    SKPixmap pixmap = image.PeekPixels();
    byte* bmpPtr = (byte*)pixmap.GetPixels().ToPointer();
    int width = image.Width;
    int height = image.Height;
    byte* tempPtr;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            tempPtr = bmpPtr;
            byte red = *bmpPtr++;
            byte green = *bmpPtr++;
            byte blue = *bmpPtr++;
            byte alpha = *bmpPtr++;

            // Assuming SKColorType.Rgba8888 - used by iOS and Android
            // (UWP uses SKColorType.Bgra8888)
            byte result = (byte)(0.2126 * red + 0.7152 * green + 0.0722 * blue);

            bmpPtr = tempPtr;
            *bmpPtr++ = result; // red
            *bmpPtr++ = result; // green
            *bmpPtr++ = result; // blue
            *bmpPtr++ = alpha;  // alpha
        }
    }
    return pixmap;
}

This method converts a colour image to greyscale by retrieving a pointer to the start of the pixel data, then retrieving the R, G, B, and A components of each pixel by dereferencing the pointer and incrementing its address. The greyscale pixel value is obtained by multiplying the R value by 0.2126, the G value by 0.7152, and the B value by 0.0722, and then summing the results. Note that both the input to and the output of this method are images in RGBA8888 format, despite the output being a greyscale image. Therefore the R, G, and B components of each pixel are all set to the same value. The following screenshot shows the test image converted to greyscale, on iOS:
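Those weighting factors (0.2126, 0.7152, 0.0722) are the Rec. 709 luma coefficients, and they sum to 1.0, so the result always fits back into a byte. Pulled out into a standalone helper (my own naming, not code from the sample app), the per-pixel conversion is just:

```csharp
public static class Grey
{
    // Rec. 709 luma: Y = 0.2126 R + 0.7152 G + 0.0722 B.
    // The coefficients sum to 1.0, so the weighted sum of three bytes
    // can never exceed 255, and the cast back to byte is safe.
    public static byte Luma(byte red, byte green, byte blue) =>
        (byte)(0.2126 * red + 0.7152 * green + 0.0722 * blue);
}
```

The heavy weighting of green reflects the eye’s sensitivity: a pure green pixel comes out much brighter than a pure red or pure blue one of the same intensity.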

As an example of colour processing, I implemented an algorithm for converting an image to sepia, which is shown in the following example:

public static unsafe SKPixmap ToSepia(this SKImage image)
{
    SKPixmap pixmap = image.PeekPixels();
    byte* bmpPtr = (byte*)pixmap.GetPixels().ToPointer();
    int width = image.Width;
    int height = image.Height;
    byte* tempPtr;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            tempPtr = bmpPtr;
            byte red = *bmpPtr++;
            byte green = *bmpPtr++;
            byte blue = *bmpPtr++;
            byte alpha = *bmpPtr++;

            // Assuming SKColorType.Rgba8888 - used by iOS and Android
            // (UWP uses SKColorType.Bgra8888)
            byte intensity = (byte)(0.299 * red + 0.587 * green + 0.114 * blue);

            bmpPtr = tempPtr;
            *bmpPtr++ = (byte)((intensity > 206) ? 255 : intensity + 49); // red
            *bmpPtr++ = (byte)((intensity < 14) ? 0 : intensity - 14);    // green
            *bmpPtr++ = (byte)((intensity < 56) ? 0 : intensity - 56);    // blue
            *bmpPtr++ = alpha;                                            // alpha
        }
    }
    return pixmap;
}

This method first derives an intensity value for the pixel (essentially a greyscale representation of the pixel), based on its R, G, and B components, and then sets the R, G, and B components based on this intensity value. The following screenshot shows the test image converted to sepia, on iOS:

I also implemented Otsu’s thresholding algorithm, as an example of binarisation. This algorithm typically derives the threshold for an image by minimising intra-class variance. However, the implementation I’ve used derives the threshold by maximising inter-class variance, which is equivalent. The threshold is then used to separate pixels into foreground and background classes. For more information about this algorithm, see Otsu’s method. The code for the algorithm can be found on GitHub. The following screenshot shows the test image thresholded with this algorithm, on iOS:
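The GitHub repo is the place to look for the app’s actual code, but the core of the technique — choosing the threshold that maximises the between-class variance over a 256-bin greyscale histogram — can be sketched as follows. This is a generic textbook implementation under my own naming, not the sample app’s exact code:

```csharp
public static class Otsu
{
    // Returns the threshold t (0-255) that maximises the between-class
    // variance wB * wF * (meanB - meanF)^2 over a 256-bin histogram.
    public static int Threshold(int[] histogram)
    {
        int total = 0;
        long sumAll = 0;
        for (int i = 0; i < 256; i++)
        {
            total += histogram[i];
            sumAll += (long)i * histogram[i];
        }

        long sumB = 0;      // sum of intensities in the background class
        int wB = 0;         // background weight (pixel count)
        double maxVar = 0;
        int threshold = 0;

        for (int t = 0; t < 256; t++)
        {
            wB += histogram[t];         // grow background class to include t
            if (wB == 0) continue;
            int wF = total - wB;        // foreground weight
            if (wF == 0) break;

            sumB += (long)t * histogram[t];
            double meanB = (double)sumB / wB;
            double meanF = (double)(sumAll - sumB) / wF;

            double betweenVar = (double)wB * wF * (meanB - meanF) * (meanB - meanF);
            if (betweenVar > maxVar)
            {
                maxVar = betweenVar;
                threshold = t;
            }
        }
        return threshold;
    }
}
```

For a bimodal histogram the returned threshold cleanly separates the two modes: pixels at or below the threshold go to one class, and the rest to the other.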

Wrapping up

The question I set out to answer is as follows: is the combination of Xamarin.Forms and SkiaSharp a viable platform for writing cross-platform imaging apps? My main criteria for answering this question are:

  1. Can most of the app be written in shared code?
  2. Can imaging algorithms be implemented so that they have a fast execution speed, particularly on Android?

The answer to both questions, at this stage, is yes. In particular, I was impressed with the execution speed of the algorithms on both platforms (even Android!), especially given the size of the source image (4032x3024). The reason I say “at this stage” is that the algorithms I’ve implemented are quite basic, and don’t really do any heavy processing. Therefore, in my next blog post I’ll look at performing convolution operations, which up the amount of processing performed on an image.

The sample this code comes from can be found on GitHub.
