Tuesday, 16 July 2019

Cross-platform imaging with Xamarin.Forms and SkiaSharp II

Previously, I wrote about combining Xamarin.Forms and SkiaSharp to create a cross-platform imaging app. SkiaSharp offers a number of different approaches for accessing pixel data. I went with the most performant approach for doing this, which is to use the GetPixels method to return a pointer to the pixel data, dereference the pointer whenever you want to read/write a pixel value, and use pointer arithmetic to move the pointer to other pixels.

I created a basic app that loads/displays/saves images, and performs basic imaging algorithms. Surprisingly, the execution speed of the algorithms was excellent on both iOS and Android, despite the source image size (4032x3024). However, the imaging algorithms (greyscale, threshold, sepia) were pretty basic and so I decided to move things up a gear and see what the execution speed would be like when performing convolution, which increases the amount of processing performed on an image.

In this blog post I’ll discuss implementing convolution in SkiaSharp. The sample this code comes from can be found on GitHub.

Implementing convolution

In image processing, convolution is the process of adding each element of the image to its local neighbours, weighted by a convolution kernel. The kernel is a small matrix that defines the imaging operation, such as blurring, sharpening, embossing, edge detection, and more. For more information about convolution, see Kernel image processing.
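Concretely, for an image I and an N×N kernel K, each output pixel is a weighted sum over its neighbourhood. A standard statement of the discrete definition (restated here for reference) is:

O(x, y) = \sum_{i=-k}^{k} \sum_{j=-k}^{k} K(i, j)\, I(x - i, y - j), \qquad k = \lfloor N/2 \rfloor

Strictly, convolution flips the kernel relative to the image; for symmetric kernels, which most of those below are, this is the same as a straightforward weighted sum over the neighbourhood.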

The ConvolutionKernels class in the app defines a number of kernels, each of which implements a different imaging algorithm when convolved with an image. The following code shows three kernels from this class:

namespace Imaging
{
    public class ConvolutionKernels
    {
        public static float[] EdgeDetection => new float[9]
        {
            -1, -1, -1,
            -1,  8, -1,
            -1, -1, -1
        };

        public static float[] LaplacianOfGaussian => new float[25]
        {
             0,  0, -1,  0,  0,
             0, -1, -2, -1,  0,
            -1, -2, 16, -2, -1,
             0, -1, -2, -1,  0,
             0,  0, -1,  0,  0
        };

        public static float[] Emboss => new float[9]
        {
            -2, -1, 0,
            -1,  1, 1,
             0,  1, 2
        };
    }
}

I implemented my own convolution algorithm for performing convolution with 3x3 kernels and was reasonably happy with its execution speed. However, as my ConvolutionKernels class included kernels of different sizes, I had to extend the algorithm to handle NxN sized kernels. Unfortunately, for larger kernel sizes the execution speed slowed quite dramatically. This is because convolving an image with an N×N kernel has a complexity of O(N²) per pixel. However, there are fast convolution algorithms that reduce the complexity to O(N log N). For more information, see Convolution theorem.
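For reference, the following is a minimal sketch of the naive approach, in the spirit of the pointer-based pixel access described previously. It's my own illustration rather than the sample's exact code: it assumes RGBA8888 data with no row padding, skips border pixels rather than handling image edges, and makes the O(N²)-per-pixel cost explicit in the inner loops.

public static unsafe void Convolve(SKPixmap pixmap, float[] kernel, int n)
{
    int width = pixmap.Width;
    int height = pixmap.Height;
    int half = n / 2;
    byte* basePtr = (byte*)pixmap.GetPixels().ToPointer();

    // Work from a copy of the pixel data, so that already-written results
    // don't feed back into neighbouring calculations.
    byte[] source = new byte[width * height * 4];
    System.Runtime.InteropServices.Marshal.Copy(pixmap.GetPixels(), source, 0, source.Length);

    for (int row = half; row < height - half; row++)
    {
        for (int col = half; col < width - half; col++)
        {
            float r = 0, g = 0, b = 0;
            // O(N²) work per pixel: every kernel element is visited.
            // The kernel is applied without flipping, which matches common
            // image-processing practice and is identical for symmetric kernels.
            for (int ky = 0; ky < n; ky++)
            {
                for (int kx = 0; kx < n; kx++)
                {
                    int index = ((row + ky - half) * width + (col + kx - half)) * 4;
                    float weight = kernel[ky * n + kx];
                    r += source[index] * weight;
                    g += source[index + 1] * weight;
                    b += source[index + 2] * weight;
                }
            }
            byte* dst = basePtr + (row * width + col) * 4;
            dst[0] = ClampToByte(r);
            dst[1] = ClampToByte(g);
            dst[2] = ClampToByte(b);
            // dst[3] (alpha) is left unchanged.
        }
    }
}

static byte ClampToByte(float value) =>
    (byte)(value < 0 ? 0 : value > 255 ? 255 : value);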

I was about to implement a fast convolution algorithm when I discovered the CreateMatrixConvolution method in the SKImageFilter class. While I set out to avoid using any filters baked into SkiaSharp, I was happy to use this method because (1) it allows you to specify kernels of arbitrary size, (2) it allows you to specify how edge pixels in the image are handled, and (3) it turned out it was lightning fast (I’m assuming it uses fast convolution under the hood, amongst other optimisation techniques).

After investigating this method, it seemed there were a number of obvious approaches to using it:

  1. Load the image, select a kernel, apply an SKImageFilter, then draw the resulting image.
  2. Load the image, select a kernel, and apply an SKImageFilter while redrawing the image.
  3. Select a kernel, load the image and apply an SKImageFilter while drawing it.

I implemented both (1) and (2), and settled on (2) as my final implementation as it was less code and offered slightly better performance (presumably due to SkiaSharp being optimised for drawing). I discounted (3) purely because I like to see the source image before processing it.

The following code example shows how a selected kernel is applied to an image, when drawing it:

void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs e)
{
    SKImageInfo info = e.Info;
    SKCanvas canvas = e.Surface.Canvas;

    canvas.Clear();
    if (image != null)
    {
        if (kernelSelected)
        {
            using (SKPaint paint = new SKPaint())
            {
                paint.FilterQuality = SKFilterQuality.High;
                paint.IsAntialias = false;
                paint.IsDither = false;
                // Convolve the image with the selected kernel while drawing it.
                paint.ImageFilter = SKImageFilter.CreateMatrixConvolution(
                    sizeI, kernel, 1f, 0f, new SKPointI(1, 1),
                    SKMatrixConvolutionTileMode.Clamp, false);

                canvas.DrawImage(image, info.Rect, ImageStretch.Uniform, paint: paint);
                // Snapshot the surface so that subsequent kernels are applied
                // to the convolved image, not the originally loaded one.
                image = e.Surface.Snapshot();
                kernel = null;
                kernelSelected = false;
            }
        }
        else
        {
            canvas.DrawImage(image, info.Rect, ImageStretch.Uniform);
        }
    }
}

The OnCanvasViewPaintSurface handler is invoked to draw an image on the UI, when the image is loaded and whenever processing is performed on it. This method will be invoked whenever the InvalidateSurface method is called on the SKCanvasView object. The code in the else clause executes when an image is loaded, and draws the image on the UI. The code in the if clause executes when the user has selected a kernel and taps a Button to perform convolution. Convolution is performed by creating an SKPaint object and setting various properties on it. Importantly, this includes setting the ImageFilter property to the SKImageFilter object returned by the CreateMatrixConvolution method. The arguments to the CreateMatrixConvolution method are:

  • The kernel size in pixels, as an SKSizeI struct.
  • The image processing kernel, as a float[].
  • A scale factor applied to each pixel after convolution, as a float. I use a value of 1, so no scaling is applied.
  • A bias factor added to each pixel after convolution, as a float. I use a value of 0, representing no bias.
  • A kernel offset, which is applied to each pixel before convolution, as an SKPointI struct. I use values of (1,1), which centres a 3x3 kernel on the pixel being processed.
  • A tile mode that represents how pixel accesses outside the image are treated, as an SKMatrixConvolutionTileMode enumeration value. I used the Clamp enumeration member to specify that the convolution should be clamped to the image’s edge pixels.
  • A boolean value that indicates whether the alpha channel should be included in the convolution. I specified false to ensure that only the RGB channels are processed.

In addition, further arguments for the CreateMatrixConvolution method can be specified, but weren’t required here. For example, you could choose to perform convolution only on a specified region in the image.
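For example, the following sketch restricts convolution to a region via the optional crop rect argument. The overload shape shown here is an assumption based on my reading of the SkiaSharp API, and the region values are illustrative:

paint.ImageFilter = SKImageFilter.CreateMatrixConvolution(
    sizeI, kernel, 1f, 0f, new SKPointI(1, 1),
    SKMatrixConvolutionTileMode.Clamp, false,
    null,                                                       // input filter
    new SKImageFilter.CropRect(SKRect.Create(0, 0, 500, 500))); // region to convolve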

After defining the SKImageFilter, the image is re-drawn using the SKPaint object that includes the SKImageFilter object. The result is an image that has been convolved with the kernel selected by the user. Then, the SKSurface.Snapshot method is called, so that the re-drawn image is returned as an SKImage. This ensures that if the user selects another kernel, convolution occurs against the new image, rather than the originally loaded image.
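For completeness, the if clause only executes once kernelSelected has been set. A hypothetical button handler that triggers this (the handler and canvasView field names here are assumptions, not the sample's exact code) might look like:

void OnEdgeDetectionButtonClicked(object sender, EventArgs e)
{
    // Record the chosen kernel, then invalidate the canvas so that
    // OnCanvasViewPaintSurface runs and performs the convolution.
    kernel = ConvolutionKernels.EdgeDetection;
    sizeI = new SKSizeI(3, 3);
    kernelSelected = true;
    canvasView.InvalidateSurface();
}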

The following iOS screenshot shows the source image convolved with a simple edge detection kernel:

The following iOS screenshot shows the source image convolved with a kernel designed to create an emboss effect:

The following iOS screenshot shows the source image convolved with a kernel that implements the Laplacian of a Gaussian:

The Laplacian of a Gaussian is an interesting kernel that performs edge detection on smoothed image data. The Laplacian operator highlights regions of rapid intensity change, and is applied to an image that has first been smoothed with a Gaussian smoothing filter in order to reduce its sensitivity to noise.
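For reference, the 5x5 kernel shown earlier is a discrete approximation to the continuous Laplacian of Gaussian:

\nabla^2 G(x, y) = -\frac{1}{\pi \sigma^4} \left(1 - \frac{x^2 + y^2}{2\sigma^2}\right) e^{-\frac{x^2 + y^2}{2\sigma^2}}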

Wrapping up

In undertaking this work, the question I set out to answer is as follows: is the combination of Xamarin.Forms and SkiaSharp a viable platform for writing cross-platform imaging apps, when performing more substantial image processing? My main criteria for answering this question are:

  1. Can most of the app be written in shared code?
  2. Can imaging algorithms be implemented so that they have a fast execution speed, particularly on Android?

The answer to both questions, at this stage, is yes. I was impressed with the execution speed of SkiaSharp’s convolution algorithm on both platforms. I was particularly impressed when considering the size of the source image (4032x3024), and the amount of processing that’s performed when convolving an image with a kernel, particularly for larger kernels (the largest one in the app is 7x7).

The reason I say at this stage is that while performing convolution is a step up from basic imaging algorithms (greyscale, thresholding, sepia), it’s still considered relatively basic in image processing terms, despite the processing performed during convolution. Therefore, in my next blog post I’ll look at performing frequency filtering, which significantly increases the amount of processing performed on an image.

The sample this code comes from can be found on GitHub.

Monday, 8 July 2019

Cross-platform imaging with Xamarin.Forms and SkiaSharp

Back in the Xamarin.Forms 1.x days, I attempted to show the power of Xamarin.Forms development by writing a cross-platform imaging app. This was a mistake. While I produced a working cross-platform app, the majority of the code was platform code, joined together through DependencyService calls from a shared UI. If anything, it showed that it wasn’t easily possible to create a cross-platform imaging app with shared code. So it never saw the light of day.

I’d been thinking about this project recently, and while I knew that it’s possible to write cross-platform imaging apps with Xamarin.Forms and SkiaSharp, I wasn’t sure if it was advisable, from an execution speed point of view. In particular, I was worried about the execution speed of imaging algorithms on Android, especially when considering the resolution of photos taken with recent mobile devices. So I decided to write a proof of concept app to find out if Xamarin.Forms and SkiaSharp was a viable platform for writing cross-platform imaging apps.

App requirements and assumptions

When I talk about writing a cross-platform imaging app, I’m not particularly interested in calling platform APIs to resize images, crop images etc. I’m interested in accessing pixel data quickly, and being able to manipulate that data.

The core platforms I wanted to support were iOS and Android. UWP support would be a bonus, but I’d be happy to drop UWP support at the first sign of any issues.

The core functionality of the app is to load/display/save images, and manipulate the pixel values of the images as quickly as possible, with as much of this happening through shared code as possible. I wanted to support the common image file formats, but was only interested in supporting 32 bit images. The consequence of this is that when loading a colour image and converting it to greyscale, it would be saved back out as a 32 bit image, rather than an 8 bit image.

Note that the app is just a proof of concept app. Therefore, I wasn’t bothered about creating a slick UI. I just needed a functional UI. Similarly, I didn’t get hung up on architectural decisions. At one point I was going to implement each imaging algorithm using a plugin architecture, so the app would detect the algorithms and let the user choose them. But that was missing the point. It’s only a proof of concept. So it’s code-behind all the way, and the algorithms are hard-coded into the app.

App overview

The app was created in Xamarin.Forms and SkiaSharp, and the vast majority of the code is shared code. Platform code was required for choosing images on each platform, but that was about it. Image load/display/save/manipulation is handled with SkiaSharp shared code. Code for the sample app can be found on GitHub.

As part of our SkiaSharp docs, we’ve covered how to load and display an image using SkiaSharp. We’ve also covered how to save images using SkiaSharp. Our docs also explain how to write code to pick photos from the device’s photo library. I won’t regurgitate these topics here. Instead, just know that the app uses the techniques covered in these docs. The only difference is that while I started by using the SKBitmap class, I soon moved to using the SKImage class, after discovering that Google have plans to deprecate SKBitmap. Here’s a screenshot of the app, showing an image of my magilyzer, which I’ll use as a test image in this blog post:

We’ve also got docs on accessing pixel data in SkiaSharp. SkiaSharp offers a number of different approaches for doing this, and understanding them is key to creating a performant app. In particular, take a look at the table in the Comparing the techniques section of the doc. This table shows execution times in milliseconds for these different approaches. The TL;DR is that the fastest approach is to use the GetPixels method to return a pointer to the pixel data, dereference the pointer whenever you want to read/write a pixel value, and use pointer arithmetic to move the pointer to process other pixels.

Using this approach requires knowledge of how pixel data is stored in memory on different platforms. On iOS and Android, each pixel is stored as four bytes in RGBA format, which is represented in SkiaSharp with the SKColorType.Rgba8888 type. On UWP, each pixel is stored as four bytes in BGRA format, which is represented in SkiaSharp with the SKColorType.Bgra8888 type. Initially, I coded my imaging algorithms for all three platforms, but I got sick of having to handle UWP’s special case, so at that point it was goodbye UWP!
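A cheap way to keep the remaining shared code honest is to check the colour type before performing any pointer arithmetic. A minimal sketch (my own guard, not from the sample):

// Fail fast if the pixel layout isn't the RGBA8888 format
// that the pointer arithmetic below assumes.
SKPixmap pixmap = image.PeekPixels();
if (pixmap.ColorType != SKColorType.Rgba8888)
    throw new NotSupportedException($"Unexpected colour type: {pixmap.ColorType}");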

Basic algorithms

As I mentioned earlier, the focus of the app isn’t on calling platform APIs to perform imaging operations. It’s on accessing pixel data and manipulating that data. If you want to know how to crop images with SkiaSharp, see Cropping SkiaSharp bitmaps. Similarly, SkiaSharp has functionality for resizing images. With all that said, the first imaging algorithm I always implement when getting to grips with a new platform is converting a colour image to greyscale, as it’s a simple algorithm. The following code example shows how I accomplished this in SkiaSharp:

public static unsafe SKPixmap ToGreyscale(this SKImage image)
{
    SKPixmap pixmap = image.PeekPixels();
    byte* bmpPtr = (byte*)pixmap.GetPixels().ToPointer();
    int width = image.Width;
    int height = image.Height;
    byte* tempPtr;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            tempPtr = bmpPtr;
            byte red = *bmpPtr++;
            byte green = *bmpPtr++;
            byte blue = *bmpPtr++;
            byte alpha = *bmpPtr++;

            // Assuming SKColorType.Rgba8888 - used by iOS and Android
            // (UWP uses SKColorType.Bgra8888)
            byte result = (byte)(0.2126 * red + 0.7152 * green + 0.0722 * blue);

            bmpPtr = tempPtr;
            *bmpPtr++ = result; // red
            *bmpPtr++ = result; // green
            *bmpPtr++ = result; // blue
            *bmpPtr++ = alpha;  // alpha
        }
    }
    return pixmap;
}

This method converts a colour image to greyscale by retrieving a pointer to the start of the pixel data, and then retrieving the R, G, B, and A components of each pixel by dereferencing the pointer and incrementing its address. The greyscale pixel value is obtained by multiplying the R value by 0.2126, multiplying the G value by 0.7152, multiplying the B value by 0.0722, and then summing the results. Note that the input to this method is an image in RGBA8888 format, and the output is an image in RGBA8888 format, despite being a greyscale image. Therefore the R, G, and B components of each pixel are all set to the same value. The following screenshot shows the test image converted to greyscale, on iOS:

As an example of colour processing, I implemented an algorithm for converting an image to sepia, which is shown in the following example:

public static unsafe SKPixmap ToSepia(this SKImage image)
{
    SKPixmap pixmap = image.PeekPixels();
    byte* bmpPtr = (byte*)pixmap.GetPixels().ToPointer();
    int width = image.Width;
    int height = image.Height;
    byte* tempPtr;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            tempPtr = bmpPtr;
            byte red = *bmpPtr++;
            byte green = *bmpPtr++;
            byte blue = *bmpPtr++;
            byte alpha = *bmpPtr++;

            // Assuming SKColorType.Rgba8888 - used by iOS and Android
            // (UWP uses SKColorType.Bgra8888)
            byte intensity = (byte)(0.299 * red + 0.587 * green + 0.114 * blue);

            bmpPtr = tempPtr;
            *bmpPtr++ = (byte)((intensity > 206) ? 255 : intensity + 49); // red
            *bmpPtr++ = (byte)((intensity < 14) ? 0 : intensity - 14);    // green
            *bmpPtr++ = (byte)((intensity < 56) ? 0 : intensity - 56);    // blue
            *bmpPtr++ = alpha;                                            // alpha
        }
    }
    return pixmap;
}

This method first derives an intensity value for the pixel (essentially a greyscale representation of the pixel), based on its R, G, and B components, and then sets the R, G, and B components based on this intensity value. The following screenshot shows the test image converted to sepia, on iOS:

I also implemented Otsu’s thresholding algorithm, as an example of binarisation. This algorithm typically derives the threshold for an image by minimising intra-class variance. However, the implementation I’ve used derives the threshold by maximising inter-class variance, which is equivalent. The threshold is then used to separate pixels into foreground and background classes. For more information about this algorithm, see Otsu’s method. The code for the algorithm can be found on GitHub; a minimal sketch of the threshold derivation follows.
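The sketch below illustrates deriving the threshold by maximising inter-class variance, assuming a 256-bin histogram of greyscale intensities has already been built. It’s my own illustration, not the sample’s exact code:

public static byte OtsuThreshold(int[] histogram, int totalPixels)
{
    // Sum of all intensities, used to derive the class means incrementally.
    float sumAll = 0;
    for (int i = 0; i < 256; i++)
        sumAll += i * histogram[i];

    float sumBackground = 0;
    int weightBackground = 0;
    float maxVariance = 0;
    byte threshold = 0;

    for (int t = 0; t < 256; t++)
    {
        weightBackground += histogram[t];
        if (weightBackground == 0)
            continue;

        int weightForeground = totalPixels - weightBackground;
        if (weightForeground == 0)
            break;

        sumBackground += t * histogram[t];
        float meanBackground = sumBackground / weightBackground;
        float meanForeground = (sumAll - sumBackground) / weightForeground;

        // Inter-class variance; Otsu's threshold is the value that maximises it.
        float meanDiff = meanBackground - meanForeground;
        float variance = (float)weightBackground * weightForeground * meanDiff * meanDiff;
        if (variance > maxVariance)
        {
            maxVariance = variance;
            threshold = (byte)t;
        }
    }
    return threshold;
}

The following screenshot shows the test image thresholded with this algorithm, on iOS: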

Wrapping up

The question I set out to answer is as follows: is the combination of Xamarin.Forms and SkiaSharp a viable platform for writing cross-platform imaging apps? My main criteria for answering this question are:

  1. Can most of the app be written in shared code?
  2. Can imaging algorithms be implemented so that they have a fast execution speed, particularly on Android?

The answer to both questions, at this stage, is yes. In particular, I was impressed with the execution speed of the algorithms on both platforms (even Android!). I was particularly impressed when considering the size of the source image (4032x3024). The reason I say at this stage is because the algorithms I’ve implemented are quite basic. They don’t really do any heavy processing. Therefore, in my next blog post I’ll look at performing convolution operations, which increase the amount of processing performed on an image.

The sample this code comes from can be found on GitHub.

Thursday, 4 July 2019

OAuth 2.0 for Native Apps using Xamarin.Forms

About two years ago I wrote some samples that demonstrate using Xamarin.Forms to implement OAuth 2.0 for Native Apps. This spec represents the best practices for OAuth 2.0 authentication flows from mobile apps. These include:

  • Authentication requests should only be made through external user agents, such as the browser. This results in better security, and enables use of the user’s current authentication state, making single sign-on possible. Conversely, this means that authentication requests should never be made through a WebView. WebView controls are unsafe for third parties, as they leave the authorization grant and user’s credentials vulnerable to recording or malicious use. In addition, WebView controls don’t share authentication state, meaning single sign-on isn’t possible.
  • Native apps must request user authorization by creating a URI with the appropriate grant types. The app then redirects the user to this request URI. A redirect URI that the native app can receive and parse must also be supplied.
  • Native apps must use the Proof Key for Code Exchange (PKCE) protocol, to defend against apps on the same device potentially intercepting the authorization code (a sketch of PKCE follows this list).
  • Native apps should use the authorization code grant flow with PKCE. Conversely, native apps shouldn’t use the implicit grant flow.
  • Cross-Site Request Forgery (CSRF) attacks should be mitigated by using the state parameter to link requests and responses.
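To make the PKCE bullet concrete, the following is a minimal sketch of generating a code verifier and its S256 code challenge using only BCL types. It’s an illustration of the protocol step, not the samples’ exact code:

using System;
using System.Security.Cryptography;
using System.Text;

static class Pkce
{
    public static (string Verifier, string Challenge) Create()
    {
        // 32 random bytes produce a 43-character base64url code verifier.
        byte[] bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(bytes);
        string verifier = Base64Url(bytes);

        // The S256 challenge is the base64url-encoded SHA256 hash of the verifier.
        using (var sha256 = SHA256.Create())
        {
            byte[] hash = sha256.ComputeHash(Encoding.ASCII.GetBytes(verifier));
            return (verifier, Base64Url(hash));
        }
    }

    static string Base64Url(byte[] data) =>
        Convert.ToBase64String(data).TrimEnd('=').Replace('+', '-').Replace('/', '_');
}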

More details can be found in the OAuth 2.0 for Native Apps spec. Ultimately though, it leads to the OAuth 2.0 authentication flow for native apps being:

  1. The native app opens a browser tab with the authorisation request (an example request URI is sketched after this list).
  2. The authorisation endpoint receives the authorisation request, authenticates the user, and obtains authorisation.
  3. The authorisation server issues an authorization code to the redirect URI.
  4. The native app receives the authorisation code from the redirect URI.
  5. The native app presents the authorization code at the token endpoint.
  6. The token endpoint validates the authorization code and issues the requested tokens.
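As an illustration of step 1, the authorisation request URI might be built along these lines. Every value below is a placeholder rather than one of the samples’ real values; challenge and state come from the PKCE and state notes above:

string authorizeUri =
    "https://demo.example.com/connect/authorize" +
    "?client_id=native.app" +
    "&response_type=code" +
    "&scope=" + Uri.EscapeDataString("openid profile") +
    "&redirect_uri=" + Uri.EscapeDataString("com.example.app:/callback") +
    "&code_challenge=" + challenge +
    "&code_challenge_method=S256" +
    "&state=" + state;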

For a whole variety of reasons, the samples that demo this using Xamarin.Forms never saw the light of day, but they can now be found in my GitHub repo. There are two samples.

Both samples consume endpoints on a publicly available IdentityServer site. The main things to note about the samples are that (1) they use custom URL schemes defined in the platform projects, and (2) each platform project has code to open/close the browser as required, which is invoked with the Xamarin.Forms DependencyService.

Hopefully the samples will be of use to people, and if you want to know how the code works you should thoroughly read the OAuth 2.0 for Native Apps spec.

Wednesday, 3 July 2019

What’s new in CollectionView in Xamarin.Forms 4.1

Xamarin.Forms 4.1 was released on Monday, and as well as new functionality such as CheckBox, it includes a number of updates to CollectionView. The main CollectionView updates are outlined below.

Item Spacing

By default, there is no empty space around each item in a CollectionView. This can now be changed by setting properties on the items layout used by the CollectionView.

For a ListItemsLayout, set the ItemSpacing property to a double that represents the empty space around each item. For a GridItemsLayout, set the VerticalItemSpacing and HorizontalItemSpacing properties to double values that represent the empty space vertically and horizontally around each item.
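For example, the following XAML sketch (based on my reading of the 4.1 API) gives each item in a two-column grid 20 device-independent units of surrounding space:

<CollectionView ItemsSource="{Binding Monkeys}">
    <CollectionView.ItemsLayout>
        <GridItemsLayout Orientation="Vertical"
                         Span="2"
                         VerticalItemSpacing="20"
                         HorizontalItemSpacing="20" />
    </CollectionView.ItemsLayout>
</CollectionView>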

For more info, see Item spacing.

Specifying Layout

The static VerticalList and HorizontalList members in the ListItemsLayout class have been renamed to Vertical and Horizontal.

In addition, CollectionView has gained some converters so that vertical and horizontal lists can be specified in XAML using strings, rather than static members:

<CollectionView ItemsSource="{Binding Monkeys}" ItemsLayout="HorizontalList" />

For more info, see CollectionView Layout.

Item Sizing Strategy

The ItemSizingStrategy enumeration is now implemented on Android. For more info, see Item sizing.
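For example, a sketch that opts into faster measurement by sizing all items from the first one:

<CollectionView ItemsSource="{Binding Monkeys}"
                ItemSizingStrategy="MeasureFirstItem" />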

SelectedItem and SelectedItems

The SelectedItem property now uses a TwoWay binding by default, and the selection can be cleared by setting the property, or the object it binds to, to null.

The SelectedItems property now uses a OneWay binding by default, and is now bindable to view model properties. However, note that this property is defined as IList<object>, and must bind to a collection that implements IList, and that has an object generic type. Therefore, the bound collection should be, for example, ObservableCollection<object> rather than ObservableCollection<Monkey>. In addition, selections can be cleared by setting this property, or the collection it binds to, to null.
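For example, a minimal sketch of a bindable view model property (the property name is illustrative):

// Must be a collection with an object generic type, not the concrete item type.
public ObservableCollection<object> SelectedMonkeys { get; set; } =
    new ObservableCollection<object>();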

For more info, see CollectionView Selection.