Monday, 29 July 2019

Frequency filtering with Xamarin.Forms and SkiaSharp

Previously, I wrote about performing convolution in SkiaSharp. I used the CreateMatrixConvolution method from the SKImageFilter class to convolve kernels with the source image. This method lets you specify kernels of arbitrary size, lets you control how the edge pixels in the image are handled, and is lightning fast. I was particularly impressed with the execution speed of the method when considering the size of the source image (4032x3024), and the amount of processing that’s performed when convolving an image with a kernel, particularly for larger kernels.

However, convolution is still considered relatively basic in image processing terms, despite the amount of processing performed. Therefore, I decided to move things up a gear and see what the execution speed would be like when performing frequency filtering, which substantially increases the amount of processing performed on an image.

In this blog post I’ll discuss implementing frequency filtering in SkiaSharp. The sample this code comes from can be found on GitHub.

Implementing frequency filtering

As any computer science undergrad can tell you, a Fourier transform takes a signal from the time domain and transforms it into the frequency domain. In other words, it decomposes a signal into its constituent frequencies. The world of signal processing is full of applications for the Fourier transform, one of the standard applications being musical instrument tuners, which compare the frequency of sound input to the known frequencies of specific notes. Fourier transforms can be performed using analog electronics, digital electronics, and even at the speed of light using optics. For more info about Fourier transforms, see Fourier transform.

Another standard application of the Fourier transform is frequency filtering. A signal can be transformed to the frequency domain, where specific frequencies, or frequency ranges, can be removed. This enables a class of filters to be implemented, known as pass filters: specifically high-pass, low-pass, and band-pass filters. In terms of image processing, a low-pass filter attenuates high frequencies while retaining low frequencies unchanged, with the result being similar to that of a smoothing filter. A high-pass filter attenuates low frequencies while retaining high frequencies unchanged, with the result being similar to that of edge detection. A band-pass filter attenuates very low and very high frequencies, but retains a middle band of frequencies, and is often used to enhance edges while reducing noise.

In terms of image processing, the Fourier transform actually transforms an image from the spatial domain to the frequency domain, where frequency filtering can then be performed. The process of performing frequency filtering on an image is:

  • Fourier transform the image.
  • Filter the image frequencies in the frequency domain.
  • Inverse Fourier transform the frequency data back into the spatial domain.

In reality, the process of Fourier transforming an image is more complex than the bullet points suggest. The frequency data that results from a Fourier transform is expressed as complex numbers (numbers that have a real and an imaginary component). The magnitude of each complex number represents the amount of that frequency present in the original signal, while its phase is the phase offset of the basic sinusoid at that frequency. For more information about complex numbers, see complex numbers.
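
For reference, using standard notation rather than anything specific to the sample, the 2D discrete Fourier transform of an N×M image f(x, y) is:

$$F(u, v) = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} f(x, y)\, e^{-i 2\pi \left( ux/N + vy/M \right)}$$

The magnitude and phase of each coefficient are then $|F(u, v)| = \sqrt{\mathrm{Re}^2 + \mathrm{Im}^2}$ and $\phi(u, v) = \operatorname{atan2}(\mathrm{Im}, \mathrm{Re})$, both of which are recoverable from the stored real and imaginary parts.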

In addition, as well as handling complex numbers, there are other issues to be dealt with when Fourier transforming an image. Firstly, computing a Fourier transform directly is too slow to be practical, because it has a complexity of O(N²). However, this issue can be overcome by using a Fast Fourier Transform (FFT), which has a complexity of O(N log N). For more information about this algorithm, see Fast Fourier transform. The second issue is that the FFT algorithm used here only operates on image dimensions that are a power of 2 (512x512, 1024x1024 etc.). While there are techniques that allow arbitrarily dimensioned images to be Fourier transformed, they are beyond the scope of this blog post. Therefore, any images processed by the algorithm must first be manually resized. The final issue is which image data to Fourier transform. The answer is to Fourier transform a greyscale representation of the image, as it’s the intensity data that carries more information than the colour channels.
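
As an aside, the power-of-2 requirement is cheap to check with a bit trick. A hypothetical guard (my own helpers, not necessarily what the sample does) might look like this:

// A positive integer is a power of 2 if it has exactly one bit set.
static bool IsPowerOfTwo(int n) => n > 0 && (n & (n - 1)) == 0;

// Reject images that the Radix-2 FFT can't process.
static void EnsurePowerOfTwoDimensions(SKImage image)
{
    if (!IsPowerOfTwo(image.Width) || !IsPowerOfTwo(image.Height))
    {
        throw new ArgumentException("Image dimensions must be a power of 2 for the Radix-2 FFT.");
    }
}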

For clarity, restating all this leads to the following assumptions for my implementation:

  • The input image for the Fourier transform will be a 32-bit image, with RGBA channels.
  • The input image dimensions must be a power of 2. This is due to the FFT algorithm (Radix-2 FFT) being used.
  • The input image will first be converted to a greyscale representation.
  • The greyscale representation will then be converted to a complex number representation.
  • The complex number data will undergo a 2D FFT.
  • Filtering will then be performed in the frequency domain.
  • The filtered complex data will be inverse FFT’d.
  • The filtered complex data will be converted back from complex numbers to pixel data for display.
  • The output image will be in greyscale, but still stored as an RGBA image.

Implementation

It takes more code to implement a 2D FFT than can be covered in a blog post, particularly as there are complexities I haven’t outlined here. Rather than show every last bit of code, I’ll just state that there’s a Complex type that represents a complex number, and a ComplexImage type that represents an image as a 2D array of complex numbers. The full source code can be found on GitHub.

The following code example shows a high level overview of how the frequency filtering process is implemented:

        public static unsafe SKPixmap FrequencyFilter(this SKImage image, int min, int max)
        {
            ComplexImage complexImage = image.ToComplexImage();

            complexImage.FastFourierTransform();
            FrequencyFilter filter = new FrequencyFilter(new FrequencyRange(min, max));
            filter.Apply(complexImage);
            complexImage.ReverseFastFourierTransform();

            SKPixmap pixmap = complexImage.ToSKPixmap(image);
            return pixmap;
        }

The FrequencyFilter method converts the image to a ComplexImage object, which then undergoes a 2D FFT. A FrequencyFilter object is then created, based on the values of the Slider objects displayed on the UI. The filter is applied to the ComplexImage object, which is then inverse FFT’d, before being converted back to pixel data.

A Fourier transform produces a complex-valued output image that can be displayed as two images, for the real and imaginary coefficients, or for their magnitude and phase. In image processing, it’s usually the magnitude of the Fourier transform that’s displayed, as it contains most of the information about the geometric structure of the source image. However, to inverse transform the data after processing in the frequency domain, both the magnitude and phase of the Fourier data are required, and so must be preserved.

It’s possible to view the Fourier transformed image (known as a frequency spectrum) by commenting out a couple of lines in the FrequencyFilter method shown above. I mentioned earlier that there are additional complexities when implementing an FFT, and one of them is that the dynamic range of the Fourier coefficients is too large to be displayed in an image, so the result would appear all black. However, if a logarithmic transformation is applied to the coefficients (which the source code does), the Fourier transformed image can be displayed, as the screenshot below shows.
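
The scaling in question looks roughly like this (a sketch: the exact constant and mapping are my assumptions rather than the sample’s code, and magnitude/maxMagnitude are assumed locals):

        // Compress the huge dynamic range of a Fourier magnitude into a
        // displayable byte, mapping the largest magnitude to 255.
        double scale = 255.0 / Math.Log(1 + maxMagnitude);
        byte intensity = (byte)(scale * Math.Log(1 + magnitude));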

The image is shifted so that Frequency(0,0) is displayed at the center of the image. The further away from the center a point in the image is, the higher its corresponding frequency. Therefore, this image tells us that the image largely consists of low frequencies. In addition, low frequencies contain more image information than higher frequencies, a fact that image compression algorithms take advantage of. The spectrum also tells us that there’s one dominating direction in the image, which passes vertically through the center. This originates from the many vertical lines present in the source image.
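
One common way of achieving this centring, shown here as a sketch of the idea rather than necessarily the sample’s mechanism, is to negate alternate pixels before the forward transform, since multiplying f(x, y) by (-1)^(x+y) shifts F(0,0) to the center of the spectrum:

        // Multiply each pixel by (-1)^(x+y) so the zero-frequency term
        // ends up at the center of the transformed image.
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                if (((x + y) & 1) == 1)
                {
                    data[y, x].Re = -data[y, x].Re;
                    data[y, x].Im = -data[y, x].Im;
                }
            }
        }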

Frequency filtering is performed by the Apply method in the FrequencyFilter class:

        public void Apply(ComplexImage complexImage)
        {
            if (!complexImage.IsFourierTransformed)
            {
                throw new ArgumentException("The source complex image should be Fourier transformed.");
            }

            int width = complexImage.Width;
            int height = complexImage.Height;
            int halfWidth = width >> 1;
            int halfHeight = height >> 1;
            int min = frequencyRange.Min;
            int max = frequencyRange.Max;

            Complex[,] data = complexImage.Data;
            for (int i = 0; i < height; i++)
            {
                int y = i - halfHeight;
                for (int j = 0; j < width; j++)
                {
                    int x = j - halfWidth;
                    int d = (int)Math.Sqrt(x * x + y * y);

                    if ((d > max) || (d < min))
                    {
                        data[i, j].Re = 0;
                        data[i, j].Im = 0;
                    }
                }
            }
        }

This method iterates over the complex image data, and zeros the real and imaginary values that lie outside the frequency range specified by the min and max values. Conversely, frequency data within the min and max values is passed through. This method therefore implements a band-pass filter, which can be configured to operate at any frequency range.
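
For completeness, invoking the whole pipeline from app code then looks like this, using the FrequencyFilter extension method shown earlier:

        // Retain only frequencies whose distance from the centre lies
        // between 10 and 128.
        SKPixmap filtered = image.FrequencyFilter(10, 128);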

It should therefore follow that if a frequency filter with a min value of 0 and a max value of 1024 is applied, the resulting inverse transformed frequency filtered image should be a perfect greyscale representation of the original source image. The following screenshot shows this:

Furthermore, because the earlier frequency spectrum shows that the image is largely comprised of low frequency data, a frequency filter with a min value of 0 and a max value of 128 still results (after inverse FFT) in a visually perfect greyscale representation of the original source image. The following screenshot shows this:

However, a frequency filter with a min value of 10 and a max value of 128 yields the following image:

In this image, because some of the low frequency data has been removed, only sharp changes in intensity values are being preserved, with the resulting image beginning to look as if it’s been edge detected. Similarly, a frequency filter with a min value of 20 and a max value of 128 furthers this effect:

Again, the result now looks even more like the output of an edge detector.

While this example is of no immediate practical use, it hopefully shows the potential of what can be achieved with frequency filtering. One of the main uses of frequency filtering in image processing is to remove noise. If the frequency range of the noise can be identified, which it often can be, that frequency range can be removed, resulting in a denoised image. Another real-world application of the Fourier transform in imaging is assessing the quality of steel rebar inside concrete (think of the rebar inside concrete walls, bridges etc.). In this case, the transducer data can be deconvolved (in the frequency domain) with the point spread function of the transducer, to yield images of the rebar from which deterioration can be identified.

Wrapping up

What I set out to address here is whether the combination of Xamarin.Forms and SkiaSharp is a viable platform for writing cross-platform imaging apps, when performing substantial image processing. My main criteria for answering this question are:

  1. Can most of the app be written in shared code?
  2. Can imaging algorithms be implemented so that they have a fast execution speed, particularly on Android?

The answer to both questions is yes. I was happy with the speed of execution of the Fourier transform on both platforms, especially when considering the size of the source image (2048x2048), and the sheer amount of processing that’s performed when frequency filtering an image. In addition, there are plenty of opportunities to further optimise my implementation, as my focus was clarity rather than optimisation. In particular, a 2D FFT naturally lends itself to parallelisation.

While consumer imaging apps don’t typically contain operations that Fourier transform image data, it’s a mainstay of scientific imaging. Therefore, it’s safe to say that Xamarin.Forms and SkiaSharp also make a good combination for scientific imaging apps.

The sample this code comes from can be found on GitHub.

Tuesday, 16 July 2019

Cross-platform imaging with Xamarin.Forms and SkiaSharp II

Previously, I wrote about combining Xamarin.Forms and SkiaSharp to create a cross-platform imaging app. SkiaSharp offers a number of different approaches for accessing pixel data. I went with the most performant approach for doing this, which is to use the GetPixels method to return a pointer to the pixel data, dereference the pointer whenever you want to read/write a pixel value, and use pointer arithmetic to move the pointer to other pixels.

I created a basic app that loads/displays/saves images, and performs basic imaging algorithms. Surprisingly, the execution speed of the algorithms was excellent on both iOS and Android, despite the source image size (4032x3024). However, the imaging algorithms (greyscale, threshold, sepia) were pretty basic and so I decided to move things up a gear and see what the execution speed would be like when performing convolution, which increases the amount of processing performed on an image.

In this blog post I’ll discuss implementing convolution in SkiaSharp. The sample this code comes from can be found on GitHub.

Implementing convolution

In image processing, convolution is the process of adding each element of the image to its local neighbours, weighted by a convolution kernel. The kernel is a small matrix that defines the imaging operation, such as blurring, sharpening, embossing, edge detection, and more. For more information about convolution, see Kernel image processing.

The ConvolutionKernels class in the app defines a number of kernels, each of which implements a different imaging algorithm when convolved with an image. The following code shows three kernels from this class:

namespace Imaging
{
    public class ConvolutionKernels
    {
        public static float[] EdgeDetection => new float[9]
        {
            -1, -1, -1,
            -1,  8, -1,
            -1, -1, -1
        };

        public static float[] LaplacianOfGaussian => new float[25]
        {
             0,  0, -1,  0,  0,
             0, -1, -2, -1,  0,
            -1, -2, 16, -2, -1,
             0, -1, -2, -1,  0,
             0,  0, -1,  0,  0
        };

        public static float[] Emboss => new float[9]
        {
            -2, -1, 0,
            -1,  1, 1,
             0,  1, 2
        };
    }
}

I implemented my own convolution algorithm for performing convolution with 3x3 kernels and was reasonably happy with its execution speed. However, as my ConvolutionKernels class included kernels of different sizes, I had to extend the algorithm to handle NxN sized kernels. Unfortunately, for larger kernel sizes the execution speed slowed quite dramatically. This is because convolution has a complexity of O(N²). However, there are fast convolution algorithms that reduce the complexity to O(N log N). For more information, see Convolution theorem.
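
For reference, naive convolution with a 3x3 kernel over single-channel intensity data looks something like the following sketch (an illustration of the idea rather than the sample’s implementation; edge pixels are skipped for brevity):

// Naive 3x3 convolution: each output pixel is the weighted sum of its
// 3x3 neighbourhood, clamped to the 0-255 range.
static byte[] Convolve3x3(byte[] src, int width, int height, float[] kernel)
{
    byte[] dst = new byte[src.Length];
    for (int y = 1; y < height - 1; y++)
    {
        for (int x = 1; x < width - 1; x++)
        {
            float sum = 0;
            for (int ky = -1; ky <= 1; ky++)
            {
                for (int kx = -1; kx <= 1; kx++)
                {
                    sum += src[(y + ky) * width + (x + kx)] * kernel[(ky + 1) * 3 + (kx + 1)];
                }
            }
            dst[y * width + x] = (byte)Math.Max(0f, Math.Min(255f, sum));
        }
    }
    return dst;
}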

I was about to implement a fast convolution algorithm when I discovered the CreateMatrixConvolution method in the SKImageFilter class. While I set out to avoid using any filters baked into SkiaSharp, I was happy to use this method because (1) it allows you to specify kernels of arbitrary size, (2) it allows you to specify how edge pixels in the image are handled, and (3) it turned out it was lightning fast (I’m assuming it uses fast convolution under the hood, amongst other optimisation techniques).

After investigating this method, it seemed there were a number of obvious approaches to using it:

  1. Load the image, select a kernel, apply an SKImageFilter, then draw the resulting image.
  2. Load the image, select a kernel, and apply an SKImageFilter while redrawing the image.
  3. Select a kernel, load the image and apply an SKImageFilter while drawing it.

I implemented both (1) and (2) and settled on (2) as my final implementation as it was less code and offered slightly better performance (presumably due to SkiaSharp being optimised for drawing). In addition, I discounted (3), purely because I like to see the source image before I process it.

The following code example shows how a selected kernel is applied to an image, when drawing it:

        void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs e)
        {
            SKImageInfo info = e.Info;
            SKCanvas canvas = e.Surface.Canvas;

            canvas.Clear();
            if (image != null)
            {
                if (kernelSelected)
                {
                    using (SKPaint paint = new SKPaint())
                    {
                        paint.FilterQuality = SKFilterQuality.High;
                        paint.IsAntialias = false;
                        paint.IsDither = false;
                        paint.ImageFilter = SKImageFilter.CreateMatrixConvolution(
                            sizeI, kernel, 1f, 0f, new SKPointI(1, 1),
                            SKMatrixConvolutionTileMode.Clamp, false);

                        canvas.DrawImage(image, info.Rect, ImageStretch.Uniform, paint: paint);
                        image = e.Surface.Snapshot();
                        kernel = null;
                        kernelSelected = false;
                    }
                }
                else
                {
                    canvas.DrawImage(image, info.Rect, ImageStretch.Uniform);
                }
            }
        }

The OnCanvasViewPaintSurface handler is invoked to draw an image on the UI, when the image is loaded and whenever processing is performed on it. This method will be invoked whenever the InvalidateSurface method is called on the SKCanvasView object. The code in the else clause executes when an image is loaded, and draws the image on the UI. The code in the if clause executes when the user has selected a kernel, and taps a Button to perform convolution. Convolution is performed by creating an SKPaint object, and setting various properties on it. Importantly, this includes setting the ImageFilter property to the SKImageFilter object returned by the CreateMatrixConvolution method. The arguments to the CreateMatrixConvolution method are:

  • The kernel size in pixels, as an SKSizeI struct.
  • The image processing kernel, as a float[].
  • A scale factor applied to each pixel after convolution, as a float. I use a value of 1, so no scaling is applied.
  • A bias factor added to each pixel after convolution, as a float. I use a value of 0, representing no bias factor.
  • A kernel offset, which is applied to each pixel before convolution, as an SKPointI struct. I use a value of (1,1).
  • A tile mode that represents how pixel accesses outside the image are treated, as an SKMatrixConvolutionTileMode enumeration value. I used the Clamp enumeration member to specify that the convolution should be clamped to the image’s edge pixels.
  • A boolean value that indicates whether the alpha channel should be included in the convolution. I specified false to ensure that only the RGB channels are processed.

In addition, further arguments for the CreateMatrixConvolution method can be specified, but weren’t required here. For example, you could choose to perform convolution only on a specified region in the image.

After defining the SKImageFilter, the image is re-drawn using the SKPaint object that includes the SKImageFilter object. The result is an image that has been convolved with the kernel selected by the user. Then, the SKSurface.Snapshot method is called, so that the re-drawn image is returned as an SKImage. This ensures that if the user selects another kernel, convolution occurs against the new image, rather than the originally loaded image.

The following iOS screenshot shows the source image convolved with a simple edge detection kernel:

The following iOS screenshot shows the source image convolved with a kernel designed to create an emboss effect:

The following iOS screenshot shows the source image convolved with a kernel that implements the Laplacian of a Gaussian:

The Laplacian of a Gaussian is an interesting kernel that performs edge detection on smoothed image data. The Laplacian operator highlights regions of rapid intensity change, and is applied to an image that has first been smoothed with a Gaussian smoothing filter in order to reduce its sensitivity to noise.

Wrapping up

In undertaking this work, the question I set out to answer is as follows: is the combination of Xamarin.Forms and SkiaSharp a viable platform for writing cross-platform imaging apps, when performing more substantial image processing? My main criteria for answering this question are:

  1. Can most of the app be written in shared code?
  2. Can imaging algorithms be implemented so that they have a fast execution speed, particularly on Android?

The answer to both questions, at this stage, is yes. I was impressed with the execution speed of SkiaSharp’s convolution algorithm on both platforms. I was particularly impressed when considering the size of the source image (4032x3024), and the amount of processing that’s performed when convolving an image with a kernel, particularly for larger kernels (the largest one in the app is 7x7).

The reason I say at this stage is that while performing convolution is a step up from basic imaging algorithms (greyscale, thresholding, sepia), it’s still considered relatively basic in image processing terms, despite the processing performed during convolution. Therefore, in my next blog post I’ll look at performing frequency filtering, which significantly increases the amount of processing performed on an image.

The sample this code comes from can be found on GitHub.

Monday, 8 July 2019

Cross-platform imaging with Xamarin.Forms and SkiaSharp

Back in the Xamarin.Forms 1.x days, I attempted to show the power of Xamarin.Forms development by writing a cross-platform imaging app. This was a mistake. While I produced a working cross-platform app, the majority of the code was platform code, joined together through DependencyService calls from a shared UI. If anything, it showed that it wasn’t easily possible to create a cross-platform imaging app with shared code. So it never saw the light of day.

I’d been thinking about this project recently, and while I knew that it’s possible to write cross-platform imaging apps with Xamarin.Forms and SkiaSharp, I wasn’t sure if it was advisable, from an execution speed point of view. In particular, I was worried about the execution speed of imaging algorithms on Android, especially when considering the resolution of photos taken with recent mobile devices. So I decided to write a proof of concept app to find out if Xamarin.Forms and SkiaSharp was a viable platform for writing cross-platform imaging apps.

App requirements and assumptions

When I talk about writing a cross-platform imaging app, I’m not particularly interested in calling platform APIs to resize images, crop images etc. I’m interested in accessing pixel data quickly, and being able to manipulate that data.

The core platforms I wanted to support were iOS and Android. UWP support would be a bonus, but I’d be happy to drop UWP support at the first sign of any issues.

The core functionality of the app is to load/display/save images, and manipulate the pixel values of the images as quickly as possible, with as much of this happening through shared code as possible. I wanted to support the common image file formats, but was only interested in supporting 32 bit images. The consequence of this is that when loading a colour image and converting it to greyscale, it would be saved back out as a 32 bit image, rather than an 8 bit image.

Note that the app is just a proof of concept app. Therefore, I wasn’t bothered about creating a slick UI. I just needed a functional UI. Similarly, I didn’t get hung up on architectural decisions. At one point I was going to implement each imaging algorithm using a plugin architecture, so the app would detect the algorithms and let the user choose them. But that was missing the point. It’s only a proof of concept. So it’s code-behind all the way, and the algorithms are hard-coded into the app.

App overview

The app was created in Xamarin.Forms and SkiaSharp, and the vast majority of the code is shared code. Platform code was required for choosing images on each platform, but that was about it. Image load/display/save/manipulation is handled with SkiaSharp shared code. Code for the sample app can be found on GitHub.

As part of our SkiaSharp docs, we’ve covered how to load and display an image using SkiaSharp. We’ve also covered how to save images using SkiaSharp. Our docs also explain how to write code to pick photos from the device’s photo library. I won’t regurgitate these topics here. Instead, just know that the app uses the techniques covered in these docs. The only difference is that while I started by using the SKBitmap class, I soon moved to using the SKImage class, after discovering that Google have plans to deprecate SKBitmap. Here’s a screenshot of the app, showing an image of my magilyzer, which I’ll use as a test image in this blog post:

We’ve also got docs on accessing pixel data in SkiaSharp. SkiaSharp offers a number of different approaches for doing this, and understanding them is key to creating a performant app. In particular, take a look at the table in the Comparing the techniques section of the doc. This table shows execution times in milliseconds for these different approaches. The TL;DR is that the fastest approach is to use the GetPixels method to return a pointer to the pixel data, dereference the pointer whenever you want to read/write a pixel value, and use pointer arithmetic to move the pointer to process other pixels.

Using this approach requires knowledge of how pixel data is stored in memory on different platforms. On iOS and Android, each pixel is stored as four bytes in RGBA format, which is represented in SkiaSharp with the SKColorType.Rgba8888 type. On UWP, each pixel is stored as four bytes in BGRA format, which is represented in SkiaSharp with the SKColorType.Bgra8888 type. Initially, I coded my imaging algorithms for all three platforms, but I got sick of having to handle UWP’s special case, so at that point it was goodbye UWP!
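
Had I persevered with UWP, one option would have been to branch on the colour type that SkiaSharp reports, rather than assuming a byte order. A sketch of the idea:

        SKPixmap pixmap = image.PeekPixels();
        bool isRgba = pixmap.ColorType == SKColorType.Rgba8888;

        // Offsets of the red and blue bytes within each 4-byte pixel;
        // green and alpha are at indices 1 and 3 in both formats.
        int redOffset = isRgba ? 0 : 2;
        int blueOffset = isRgba ? 2 : 0;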

Basic algorithms

As I mentioned earlier, the focus of the app isn’t on calling platform APIs to perform imaging operations. It’s on accessing pixel data and manipulating that data. If you want to know how to crop images with SkiaSharp, see Cropping SkiaSharp bitmaps. Similarly, SkiaSharp has functionality for resizing images. With all that said, the first imaging algorithm I always implement when getting to grips with a new platform is converting a colour image to greyscale, as it’s a simple algorithm. The following code example shows how I accomplished this in SkiaSharp:

public static unsafe SKPixmap ToGreyscale(this SKImage image)
{
    SKPixmap pixmap = image.PeekPixels();
    byte* bmpPtr = (byte*)pixmap.GetPixels().ToPointer();
    int width = image.Width;
    int height = image.Height;
    byte* tempPtr;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            tempPtr = bmpPtr;
            byte red = *bmpPtr++;
            byte green = *bmpPtr++;
            byte blue = *bmpPtr++;
            byte alpha = *bmpPtr++;

            // Assuming SKColorType.Rgba8888 - used by iOS and Android
            // (UWP uses SKColorType.Bgra8888)
            byte result = (byte)(0.2126 * red + 0.7152 * green + 0.0722 * blue);

            bmpPtr = tempPtr;
            *bmpPtr++ = result; // red
            *bmpPtr++ = result; // green
            *bmpPtr++ = result; // blue
            *bmpPtr++ = alpha;  // alpha
        }
    }
    return pixmap;
}

This method converts a colour image to greyscale by retrieving a pointer to the start of the pixel data, then retrieving the R, G, B, and A components of each pixel by dereferencing the pointer and incrementing its address. The greyscale pixel value is obtained by multiplying the R value by 0.2126, the G value by 0.7152, and the B value by 0.0722, and then summing the results. Note that the input to this method is an image in RGBA8888 format, and the output is an image in RGBA8888 format, despite being a greyscale image. Therefore, the R, G, and B components of each pixel are all set to the same value. The following screenshot shows the test image converted to greyscale, on iOS:

As an example of colour processing, I implemented an algorithm for converting an image to sepia, which is shown in the following example:

public static unsafe SKPixmap ToSepia(this SKImage image)
{
    SKPixmap pixmap = image.PeekPixels();
    byte* bmpPtr = (byte*)pixmap.GetPixels().ToPointer();
    int width = image.Width;
    int height = image.Height;
    byte* tempPtr;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            tempPtr = bmpPtr;
            byte red = *bmpPtr++;
            byte green = *bmpPtr++;
            byte blue = *bmpPtr++;
            byte alpha = *bmpPtr++;

            // Assuming SKColorType.Rgba8888 - used by iOS and Android
            // (UWP uses SKColorType.Bgra8888)
            byte intensity = (byte)(0.299 * red + 0.587 * green + 0.114 * blue);

            bmpPtr = tempPtr;
            *bmpPtr++ = (byte)((intensity > 206) ? 255 : intensity + 49); // red
            *bmpPtr++ = (byte)((intensity < 14) ? 0 : intensity - 14);    // green
            *bmpPtr++ = (byte)((intensity < 56) ? 0 : intensity - 56);    // blue
            *bmpPtr++ = alpha;                                            // alpha
        }
    }
    return pixmap;
}

This method first derives an intensity value for the pixel (essentially a greyscale representation of the pixel), based on its R, G, and B components, and then sets the R, G, and B components based on this intensity value. The following screenshot shows the test image converted to sepia, on iOS:

I also implemented Otsu’s thresholding algorithm, as an example of binarisation. This algorithm typically derives the threshold for an image by minimising intra-class variance. However, the implementation I’ve used derives the threshold by maximising inter-class variance, which is equivalent. The threshold is then used to separate pixels into foreground and background classes. For more information about this algorithm, see Otsu’s method. The code for the algorithm can be found on GitHub. The following screenshot shows the test image thresholded with this algorithm, on iOS:
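
To give a flavour of the approach, here’s a compact sketch of the inter-class variance computation, assuming a 256-bin intensity histogram has already been built (an illustration; the repo’s code differs in its details):

// Find the threshold that maximises inter-class variance.
static int OtsuThreshold(int[] histogram, int totalPixels)
{
    long sumAll = 0;
    for (int i = 0; i < 256; i++)
        sumAll += (long)i * histogram[i];

    long sumBackground = 0;
    int weightBackground = 0;
    double maxVariance = 0;
    int threshold = 0;

    for (int t = 0; t < 256; t++)
    {
        weightBackground += histogram[t];
        if (weightBackground == 0) continue;

        int weightForeground = totalPixels - weightBackground;
        if (weightForeground == 0) break;

        sumBackground += (long)t * histogram[t];
        double meanBackground = (double)sumBackground / weightBackground;
        double meanForeground = (double)(sumAll - sumBackground) / weightForeground;

        // Inter-class variance for this candidate threshold.
        double variance = (double)weightBackground * weightForeground *
                          (meanBackground - meanForeground) * (meanBackground - meanForeground);
        if (variance > maxVariance)
        {
            maxVariance = variance;
            threshold = t;
        }
    }
    return threshold;
}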

Wrapping up

The question I set out to answer is as follows: is the combination of Xamarin.Forms and SkiaSharp a viable platform for writing cross-platform imaging apps? My main criteria for answering this question are:

  1. Can most of the app be written in shared code?
  2. Can imaging algorithms be implemented so that they have a fast execution speed, particularly on Android?

The answer to both questions, at this stage, is yes. In particular, I was impressed with the execution speed of the algorithms on both platforms (even Android!). I was particularly impressed when considering the size of the source image (4032x3024). The reason I say at this stage is because the algorithms I’ve implemented are quite basic. They don’t really do any heavy processing. Therefore, in my next blog post I’ll look at performing convolution operations, which up the amount of processing performed on an image.

The sample this code comes from can be found on GitHub.

Thursday, 4 July 2019

OAuth 2.0 for Native Apps using Xamarin.Forms

About two years ago I wrote some samples that demonstrate using Xamarin.Forms to implement OAuth 2.0 for Native Apps. This spec represents the best practices for OAuth 2.0 authentication flows from mobile apps. These include:

  • Authentication requests should only be made through external user agents, such as the browser. This results in better security, and enables use of the user’s current authentication state, making single sign-on possible. Conversely, this means that authentication requests should never be made through a WebView. WebView controls are unsafe for third-party logins, as they leave the authorization grant and user’s credentials vulnerable to recording or malicious use. In addition, WebView controls don’t share authentication state, meaning single sign-on isn’t possible.
  • Native apps must request user authorization by creating a URI with the appropriate grant types. The app then redirects the user to this request URI. A redirect URI that the native app can receive and parse must also be supplied.
  • Native apps must use the Proof Key for Code Exchange (PKCE) protocol, to defend against apps on the same device potentially intercepting the authorization code (a sketch of generating the PKCE values follows the flow below).
  • Native apps should use the authorization code grant flow with PKCE. Conversely, native apps shouldn’t use the implicit grant flow.
  • Cross-Site Request Forgery (CSRF) attacks should be mitigated by using the state parameter to link requests and responses.

More details can be found in the OAuth 2.0 for Native Apps spec. Ultimately though, it leads to the OAuth 2.0 authentication flow for native apps being:

  1. The native app opens a browser tab with the authorisation request.
  2. The authorisation endpoint receives the authorisation request, authenticates the user, and obtains authorisation.
  3. The authorisation server issues an authorization code to the redirect URI.
  4. The native app receives the authorisation code from the redirect URI.
  5. The native app presents the authorization code at the token endpoint.
  6. The token endpoint validates the authorization code and issues the requested tokens.
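
As a flavour of what the PKCE requirement involves, here’s a minimal sketch of generating a code verifier and its S256 code challenge (helper names are my own; the samples’ implementation may differ):

using System;
using System.Security.Cryptography;
using System.Text;

// Generate a high-entropy code verifier and its S256 code challenge,
// as required by PKCE (RFC 7636).
static (string verifier, string challenge) CreatePkcePair()
{
    byte[] bytes = new byte[32];
    using (var rng = RandomNumberGenerator.Create())
    {
        rng.GetBytes(bytes);
    }
    string verifier = Base64UrlEncode(bytes);

    using (var sha256 = SHA256.Create())
    {
        byte[] hash = sha256.ComputeHash(Encoding.ASCII.GetBytes(verifier));
        return (verifier, Base64UrlEncode(hash));
    }
}

// Base64url encoding without padding, per the PKCE spec.
static string Base64UrlEncode(byte[] data) =>
    Convert.ToBase64String(data).TrimEnd('=').Replace('+', '-').Replace('/', '_');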

For a whole variety of reasons, the samples that demo this using Xamarin.Forms never saw the light of day, but they can now be found in my GitHub repo, which contains two samples.

Both samples consume endpoints on a publicly available IdentityServer site. The main things to note about the samples are that (1) they use custom URL schemes defined in the platform projects, and (2) each platform project has code to open/close the browser as required, which is invoked with the Xamarin.Forms DependencyService.

Hopefully the samples will be of use to people, and if you want to know how the code works you should thoroughly read the OAuth 2.0 for Native Apps spec.

Wednesday, 3 July 2019

What’s new in CollectionView in Xamarin.Forms 4.1

Xamarin.Forms 4.1 was released on Monday, and as well as new functionality such as CheckBox, it includes a number of updates to CollectionView. The main CollectionView updates are outlined below.

Item Spacing

By default, each item in a CollectionView lacks empty space around it. This can now be changed by setting properties on the items layout used by the CollectionView.

For a ListItemsLayout, set the ItemSpacing property to a double that represents the empty space around each item. For a GridItemsLayout, set the VerticalItemSpacing and HorizontalItemSpacing properties to double values that represent the empty space vertically and horizontally around each item.
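
In C#, setting the spacing looks something like the following sketch (based on my understanding of the 4.1 API):

CollectionView collectionView = new CollectionView
{
    // A vertical list with 20 device-independent units of space between items.
    ItemsLayout = new ListItemsLayout(ItemsLayoutOrientation.Vertical)
    {
        ItemSpacing = 20
    }
};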

For more info, see Item spacing.

Specifying Layout

The static VerticalList and HorizontalList members in the ListItemsLayout class have been renamed to Vertical and Horizontal.

In addition, CollectionView has gained some converters so that vertical and horizontal lists can be specified in XAML using strings, rather than static members:

<CollectionView ItemsSource="{Binding Monkeys}" ItemsLayout="HorizontalList" />

For more info, see CollectionView Layout.

Item Sizing Strategy

The ItemSizingStrategy enumeration is now implemented on Android. For more info, see Item sizing.

SelectedItem and SelectedItems

The SelectedItem property now uses a TwoWay binding by default, and the selection can be cleared by setting the property, or the object it binds to, to null.

The SelectedItems property now uses a OneWay binding by default, and is now bindable to view model properties. However, note that this property is defined as IList<object>, and must bind to a collection that implements IList, and that has an object generic type. Therefore, the bound collection should be, for example, ObservableCollection<object> rather than ObservableCollection<Monkey>. In addition, selections can be cleared by setting this property, or the collection it binds to, to null.
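
For example, a view model property along these lines should bind correctly (names are hypothetical):

// SelectedItems is defined as IList<object>, so the bound collection
// must use an object generic type, even though it stores Monkey instances.
public ObservableCollection<object> SelectedMonkeys { get; } =
    new ObservableCollection<object>();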

For more info, see CollectionView Selection.