Friday, 1 November 2019

Bind from a ControlTemplate to a ViewModel with Xamarin.Forms

The best new feature in Xamarin.Forms 4.3 is relative bindings. Relative bindings provide the ability to set the binding source relative to the position of the binding target. They are created with the RelativeSource markup extension, and set as the Source property of a binding expression. For more information about relative bindings, see Xamarin.Forms Relative Bindings.

Relative bindings support a number of modes, including binding to self, binding to an ancestor, and binding from within a ControlTemplate to the templated parent (the runtime object instance to which the template is applied). They also support binding to a view model from within a ControlTemplate, even when the ControlTemplate binds to the templated parent. This makes it possible to support scenarios such as a ControlTemplate containing a Button that binds to a view model ICommand, while other controls in the ControlTemplate bind to the templated parent. This blog post will look at doing this.

The sample this code comes from can be found on GitHub.

Implementation

To demonstrate this scenario, I have a PeopleViewModel class that defines an ObservableCollection named People, and an ICommand named DeletePersonCommand:

    public class PeopleViewModel
    {
        public ObservableCollection<Person> People { get; set; }

        public ICommand DeletePersonCommand { get; private set; }

        public PeopleViewModel()
        {
            DeletePersonCommand = new Command((name) =>
            {
                People.Remove(People.FirstOrDefault(p => p.Name.Equals(name)));
            });

            People = new ObservableCollection<Person>
            {
                new Person
                {
                    Name = "John Doe",
                    Description = "Lorem ipsum dolor sit amet, consectetur adipiscing elit..."
                },
                new Person
                {
                    Name = "Jane Doe",
                    Description = "Phasellus eu convallis mi. In tempus augue eu dignissim fermentum..."
                },
                new Person
                {
                    Name = "Xamarin Monkey",
                    Description = "Aliquam sagittis, odio lacinia fermentum dictum, mi erat scelerisque..."
                }
            };
        }
    }
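
The Person class isn’t shown in the post. Based on the properties used above, a minimal version would look like this (the sample on GitHub defines the real one):

    public class Person
    {
        public string Name { get; set; }
        public string Description { get; set; }
    }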

There’s also a ContentPage whose BindingContext is set to a PeopleViewModel instance. The ContentPage contains a StackLayout which uses a bindable layout to bind to the People collection:

<ContentPage ...>
    <ContentPage.BindingContext>
        <local:PeopleViewModel />
    </ContentPage.BindingContext>

    <StackLayout Margin="10,35,10,10"
                 BindableLayout.ItemsSource="{Binding People}"
                 BindableLayout.ItemTemplate="{StaticResource PersonTemplate}" />

</ContentPage>

The ItemTemplate of the bindable layout is set to the PersonTemplate resource:

        <DataTemplate x:Key="PersonTemplate">
            <local:CardView BorderColor="DarkGray"
                            CardName="{Binding Name}"
                            CardDescription="{Binding Description}"
                            ControlTemplate="{StaticResource CardViewControlTemplate}" />
        </DataTemplate>

This DataTemplate specifies that each item in the People collection will be displayed using a CardView object that simply defines CardName, CardDescription, BorderColor, and CardColor bindable properties. The appearance of each CardView object is defined using a ControlTemplate named CardViewControlTemplate:

        <ControlTemplate x:Key="CardViewControlTemplate">
            <Frame BindingContext="{Binding Source={RelativeSource TemplatedParent}}"
                   BackgroundColor="{Binding CardColor}"
                   BorderColor="{Binding BorderColor}"
                   CornerRadius="5"
                   HasShadow="True"
                   Padding="8"
                   HorizontalOptions="Center"
                   VerticalOptions="Center">
                <Grid>
                    <Grid.RowDefinitions>
                        <RowDefinition Height="75" />
                        <RowDefinition Height="4" />
                        <RowDefinition Height="Auto" />
                    </Grid.RowDefinitions>
                    <Label Text="{Binding CardName}"
                           FontAttributes="Bold"
                           FontSize="Large"
                           VerticalTextAlignment="Center"
                           HorizontalTextAlignment="Start" />
                    <BoxView Grid.Row="1"
                             BackgroundColor="{Binding BorderColor}"
                             HeightRequest="2"
                             HorizontalOptions="Fill" />
                    <Label Grid.Row="2"
                           Text="{Binding CardDescription}"
                           VerticalTextAlignment="Start"
                           VerticalOptions="Fill"
                           HorizontalOptions="Fill" />
                    <Button Text="Delete"
                            Command="{Binding Source={RelativeSource AncestorType={x:Type local:PeopleViewModel}}, 
                                              Path=DeletePersonCommand}"
                            CommandParameter="{Binding CardName}"
                            HorizontalOptions="End" />
                </Grid>
            </Frame>
        </ControlTemplate>

The root element of the CardViewControlTemplate is a Frame object, whose BindingContext is set to its templated parent (the CardView). Therefore, the Frame object, and all of its children, will resolve their bindings against CardView properties.

However, the Button within the CardViewControlTemplate binds to both its templated parent (the CardView), and to the ICommand in the PeopleViewModel instance. How is this possible? It’s possible because the Button.Command property redefines its binding source to be the binding context of an ancestor whose binding context type is PeopleViewModel. Let’s delve into this a little more.

The RelativeSource markup extension has a Mode property that can be set to one of the values of the RelativeBindingSourceMode enumeration: Self, FindAncestor, FindAncestorBindingContext, and TemplatedParent. The Mode property is the ContentProperty of the RelativeSourceExtension class, so explicitly setting it with Mode= can be omitted. In addition, the RelativeSource markup extension has an AncestorType property. Setting the AncestorType property to a type that derives from Element (any Xamarin.Forms control, or ContentView) will set the Mode property to FindAncestor. Similarly, setting the AncestorType property to a type that doesn’t derive from Element will set the Mode property to FindAncestorBindingContext.

Therefore, the relative binding expression Command="{Binding Source={RelativeSource AncestorType={x:Type local:PeopleViewModel}}, Path=DeletePersonCommand}" sets the Mode property to FindAncestorBindingContext, because the type specified in the AncestorType property doesn’t derive from Element. The Source property is set to the BindingContext property of the ancestor whose binding context is of type PeopleViewModel, which in this case is the StackLayout. The Path part of the expression can then resolve the DeletePersonCommand property. However, the Button.CommandParameter property doesn’t alter its binding source, instead inheriting it from its parent in the ControlTemplate. Therefore, this property binds to the CardName property of the CardView. The overall effect of the Button bindings is that when the Button is clicked, the DeletePersonCommand in the PeopleViewModel class is executed, with the value of the CardName property being passed to the ICommand.
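
The same binding can also be created in C#. The following is a hedged sketch of the equivalent code, assuming the RelativeBindingSource constructor that takes a mode and an ancestor type, and a button variable that references the Button in the template:

        // Hedged C# equivalent of the Button.Command relative binding.
        button.SetBinding(Button.CommandProperty, new Binding
        {
            Source = new RelativeBindingSource(
                RelativeBindingSourceMode.FindAncestorBindingContext,
                typeof(PeopleViewModel)),
            Path = "DeletePersonCommand"
        });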

Summary

The overall effect of this code is that the StackLayout uses a bindable layout to display a collection of CardView objects:

The appearance of each CardView object is defined by a ControlTemplate, whose controls bind to properties on its templated parent (the CardView). However, the Button in the ControlTemplate redefines its binding source to be an ICommand in a view model. When clicked, the Button removes the specified CardView from the bindable layout:

The sample this code comes from can be found on GitHub.

Thursday, 17 October 2019

Xamarin: Connecting to localhost over HTTPS from simulators and emulators

Most mobile apps consume web services. During the development phase, it’s common to deploy a web service locally and consume it from a mobile app running in a simulator or emulator.

Consuming localhost web services that run over HTTP is straightforward enough. However, it’s more work when the web service runs over HTTPS. This involves:
  • Creating a self-signed developer certificate on your machine.
  • Choosing the network stack to use.
  • Specifying the address of your local machine.
  • Bypassing the local development certificate check.
This process is documented at Connect to Local Web Services from iOS Simulators and Android Emulators.

On iOS and Android, attempting to invoke a local secure web service from an app running in the iOS simulator or Android emulator results in an exception, even when the managed network stack is used. This is because the local HTTPS development certificate is self-signed, and self-signed certificates aren’t trusted by iOS or Android.

There are a number of approaches that can be used to work around this issue, but the typical approach was to use the managed HttpClient implementation in your DEBUG builds, and then set a callback for System.Net.ServicePointManager.ServerCertificateValidationCallback that ignores the result of the localhost certificate check. This approach worked for both iOS and Android. However, in the last month it stopped working on Android, and has left people wondering why.

In the last month, the new CoreFX implementation of HttpClient was dropped into the following Mono profiles:
  • Desktop Mono on Linux and OS X.
  • WebAssembly
  • Android
This implementation does not include the ServicePointManager API, because it’s not part of .NET Core. Instead, it includes the HttpClientHandler.ServerCertificateCustomValidationCallback property (API doc). Therefore, currently, the process for ignoring SSL certificate errors on Android has diverged from iOS.

SSL errors can be ignored on Android for local secure web services by setting the ServerCertificateCustomValidationCallback property to a callback that ignores the result of the certificate security check for the localhost certificate:
        public HttpClientHandler GetInsecureHandler()
        {
            var handler = new HttpClientHandler();
            handler.ServerCertificateCustomValidationCallback = (message, cert, chain, errors) =>
            {
                if (cert.Issuer.Equals("CN=localhost"))
                    return true;
                return errors == System.Net.Security.SslPolicyErrors.None;
            };
            return handler;
        }
The HttpClientHandler object returned by the GetInsecureHandler method should be passed as an argument to the HttpClient constructor. The advantage of using this new approach is that it hooks into the AndroidClientHandler native network stack, which is the recommended network stack on Android. Therefore, it’s no longer necessary to use the managed network stack on Android during development.
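
For example, a DEBUG-only HttpClient could be created along these lines (a minimal sketch, assuming the GetInsecureHandler method above is in scope):

        HttpClient client;
        #if DEBUG
        // DEBUG builds only: bypass the localhost certificate check.
        client = new HttpClient(GetInsecureHandler());
        #else
        client = new HttpClient();
        #endif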

On iOS, it’s still recommended to use the managed network stack during development, with the old ServicePointManager API. However, iOS support for the CoreFX HttpClient implementation is in the works, and will hook into the NSUrlSession network stack. Once it’s released, the same approach to bypassing localhost certificate checks can be used on iOS and Android.

For a full sample that demonstrates this approach in Xamarin.Forms, see my GitHub repo.

Friday, 23 August 2019

What’s new in CollectionView in Xamarin.Forms 4.2?

Xamarin.Forms 4.2 was released this week, and includes a number of updates to CollectionView. The main updates are outlined below.

Data

CollectionView now supports loading data incrementally as users scroll through items. This enables scenarios such as asynchronously loading a page of data from a web service as the user scrolls. In addition, the point at which more data is loaded is configurable, so that users don’t see blank space or get stopped from scrolling. For more information, see Load data incrementally.
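
Incremental loading centres on the RemainingItemsThreshold property and the RemainingItemsThresholdReached event. As a hedged sketch (viewModel.LoadMorePeopleAsync is a hypothetical page-fetching method):

CollectionView collectionView = new CollectionView
{
    // Fire the event when 5 or fewer items are yet to be scrolled through.
    RemainingItemsThreshold = 5
};
collectionView.RemainingItemsThresholdReached += async (sender, e) =>
{
    await viewModel.LoadMorePeopleAsync();
};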

Layout

It’s no longer necessary to set the layout orientation of a CollectionView that uses a ListItemsLayout with the x:Arguments syntax. Instead, the Orientation property can now be set directly on a ListItemsLayout in XAML. For more information, see Horizontal list.

In addition, CollectionView now supports presenting a header and footer that scroll with the items in the list. The header and footer can be strings, views, or DataTemplate objects. For more information, see Headers and footers.

Scrolling

CollectionView now defines a Scrolled event, which is fired to indicate that scrolling occurred. For more information, see Detect scrolling.

CollectionView also now includes HorizontalScrollBarVisibility and VerticalScrollBarVisibility properties, which represent whether the horizontal or vertical scroll bar is visible. For more information, see Scroll bar visibility.

In addition, CollectionView defines an ItemsUpdatingScrollMode property, which represents the scrolling behaviour of the CollectionView when new items are added to it. This allows, for example, the last item to remain in view when new items are added. For more information, see Control scroll position when new items are added.
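
For example, keeping the last item in view when new items are added is a one-liner (a hedged snippet; collectionView is assumed to be an existing CollectionView instance):

collectionView.ItemsUpdatingScrollMode = ItemsUpdatingScrollMode.KeepLastItemInView;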

Grouping

CollectionView now supports displaying grouped data. However, this functionality is currently only available on iOS, and will be added to Android in due course. For more information, see Grouping.

Monday, 29 July 2019

Frequency filtering with Xamarin.Forms and SkiaSharp

Previously, I wrote about performing convolution in SkiaSharp. I used the CreateMatrixConvolution method from the SKImageFilter class to convolve kernels with the source image. This method allows you to specify kernels of arbitrary size, allows you to specify how the edge pixels in the image are handled, and is lightning fast. I was particularly impressed with the execution speed of the method when considering the size of the source image (4032x3024), and the amount of processing that’s performed when convolving an image with a kernel, particularly for larger kernels.

However, convolution is still considered relatively basic in image processing terms, despite the amount of processing performed. Therefore, I decided to move things up a gear and see what the execution speed would be like when performing frequency filtering, which substantially increases the amount of processing performed on an image.

In this blog post I’ll discuss implementing frequency filtering in SkiaSharp. The sample this code comes from can be found on GitHub.

Implementing frequency filtering

As any computer science undergrad can tell you, a Fourier transform takes a signal from the time domain and transforms it into the frequency domain. What this means is that it takes a signal, and decomposes it into its constituent frequencies. The world of signal processing is full of applications for the Fourier transform, with one of the standard applications being musical instrument tuners, which compare the frequency of sound input to the known frequencies of specific notes. Fourier transforms can be performed using analog electronics, digital electronics, and even at the speed of light using optics. For more info about Fourier transforms, see Fourier transform.

Another standard application of the Fourier transform is to perform frequency filtering. A signal can be transformed to the frequency domain, where specific frequencies, or frequency ranges, can be removed. This enables a class of filters to be implemented, known as pass filters: specifically high-pass, low-pass, and band-pass filters. In terms of image processing, a low-pass filter attenuates high frequencies and retains low frequencies unchanged, with the result being similar to that of a smoothing filter. A high-pass filter attenuates low frequencies and retains high frequencies unchanged, with the result being similar to that of edge detection. A band-pass filter attenuates very low and very high frequencies, but retains a middle band of frequencies, and is often used to enhance edges while reducing noise.

In terms of image processing, the Fourier transform actually transforms an image from the spatial domain to the frequency domain, where frequency filtering can then be performed. The process of performing frequency filtering on an image is:

  • Fourier transform the image.
  • Filter the image frequencies in the frequency domain.
  • Inverse Fourier transform the frequency data back into the spatial domain.

In reality, the process of Fourier transforming an image is more complex than the bullet points suggest. The frequency data that results from a Fourier transform is expressed as complex numbers (numbers that have a real and an imaginary component), where the magnitude of each complex number represents the amount of that frequency present in the original signal, and its phase represents the offset of the basic sinusoid at that frequency. For more information about complex numbers, see complex numbers.

In addition, as well as handling complex numbers, there are other issues to be dealt with when Fourier transforming an image. Firstly, computing a Fourier transform directly is too slow to be practical, because it has a complexity of O(N²). However, this issue can be overcome by implementing a Fast Fourier Transform (FFT), which has a complexity of O(N log N). For more information about this algorithm, see Fast Fourier transform. The second issue is that the FFT algorithm used here only operates on image dimensions that are a power of 2 (512x512, 1024x1024 etc.). While there are techniques to allow arbitrarily dimensioned images to be Fourier transformed, they are beyond the scope of this blog post. Therefore, any images processed by the algorithm must first be manually resized. The final issue is which image data to Fourier transform. The answer is to Fourier transform a greyscale representation of the image, as it’s the intensity data that carries more information than the colour channels.

For clarity, restating all this leads to the following assumptions for my implementation:

  • The input image for the Fourier transform will be a 32-bit image, with RGBA channels.
  • The input image dimensions must be a power of 2. This is due to the FFT algorithm (Radix-2 FFT) being used.
  • The input image will first be converted to a greyscale representation.
  • The greyscale representation will then be converted to a complex number representation.
  • The complex number data will undergo a 2D FFT.
  • Filtering will then be performed in the frequency domain.
  • The filtered complex data will be inverse FFT’d.
  • The filtered complex data will be converted back from complex numbers to pixel data for display.
  • The output image will be in greyscale, but still stored as an RGBA image.

Implementation

It takes more code to implement a 2D FFT than can be covered in a blog post, particularly as there are complexities I haven’t outlined here. Rather than show every last bit of code, I’ll just state that there’s a Complex type that represents a complex number, and a ComplexImage type that represents an image as a 2D array of complex numbers. The full source code can be found on GitHub.
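
That said, the shape of the Complex type matters for following what comes next. A minimal sketch, consistent with the Re and Im fields used in the filtering code below (the version on GitHub has more members), might be:

        public struct Complex
        {
            public double Re;   // real component
            public double Im;   // imaginary component

            // The amount of this frequency present in the original signal.
            public double Magnitude => Math.Sqrt(Re * Re + Im * Im);

            // The offset of the basic sinusoid at this frequency.
            public double Phase => Math.Atan2(Im, Re);
        }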

The following code example shows a high level overview of how the frequency filtering process is implemented:

        public static unsafe SKPixmap FrequencyFilter(this SKImage image, int min, int max)
        {
            ComplexImage complexImage = image.ToComplexImage();

            complexImage.FastFourierTransform();
            FrequencyFilter filter = new FrequencyFilter(new FrequencyRange(min, max));
            filter.Apply(complexImage);
            complexImage.ReverseFastFourierTransform();

            SKPixmap pixmap = complexImage.ToSKPixmap(image);
            return pixmap;
        }

The FrequencyFilter method converts the image to a ComplexImage object, which then undergoes a 2D FFT. A FrequencyFilter object is then created, based on the values of the Slider objects displayed on the UI. The filter is applied to the ComplexImage object, and then inverse FFT’d, before being converted back to pixel data.

A Fourier transform produces a complex-valued output image that can be displayed as two images, for the real and imaginary coefficients, or as their magnitude and phase. In image processing, it’s usually the magnitude of the Fourier transform that’s displayed, as it contains most of the information about the geometric structure of the source image. However, to inverse transform the data after processing in the frequency domain, both the magnitude and phase of the Fourier data are required, and so must be preserved.

It’s possible to view the Fourier transformed image (known as a frequency spectrum) by commenting out a couple of lines in the FrequencyFilter method shown above. I mentioned earlier that there are additional complexities when implementing an FFT, and one of them is that the dynamic range of Fourier coefficients is too large to be displayed in an image, so the resulting image would appear all black. However, if a logarithmic transformation is applied to the coefficients (which the source code does), the Fourier transformed image can be displayed:

The image is shifted so that Frequency(0,0) is displayed at the center of the image. The further away from the center a point in the image is, the higher its corresponding frequency. Therefore, this image tells us that the image largely consists of low frequencies. In addition, it’s a fact that low frequencies contain more image information than higher frequencies (which is taken advantage of by image compression algorithms). The spectrum also tells us that there’s one dominating direction in the image, which passes vertically through the center. This originates from the many vertical lines present in the source image.
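
For illustration, the logarithmic transformation that makes the spectrum displayable can be sketched as follows (hedged; not the exact code from the sample, and maxMagnitude is assumed to be the largest magnitude in the transformed image):

        // Compress the dynamic range of Fourier magnitudes into 0-255 for display.
        byte ToDisplayIntensity(Complex c, double maxMagnitude)
        {
            double scale = 255.0 / Math.Log(1.0 + maxMagnitude);
            return (byte)(scale * Math.Log(1.0 + c.Magnitude));
        }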

Frequency filtering is performed by the Apply method in the FrequencyFilter class:

        public void Apply(ComplexImage complexImage)
        {
            if (!complexImage.IsFourierTransformed)
            {
                throw new ArgumentException("The source complex image should be Fourier transformed.");
            }

            int width = complexImage.Width;
            int height = complexImage.Height;
            int halfWidth = width >> 1;
            int halfHeight = height >> 1;
            int min = frequencyRange.Min;
            int max = frequencyRange.Max;

            Complex[,] data = complexImage.Data;
            for (int i = 0; i < height; i++)
            {
                int y = i - halfHeight;
                for (int j = 0; j < width; j++)
                {
                    int x = j - halfWidth;
                    int d = (int)Math.Sqrt(x * x + y * y);

                    if ((d > max) || (d < min))
                    {
                        data[i, j].Re = 0;
                        data[i, j].Im = 0;
                    }
                }
            }
        }

This method iterates over the complex image data, and zeros the real and imaginary values that lie outside the frequency range specified by the min and max values. Conversely, frequency data within the min and max values is passed through. This method therefore implements a band-pass filter, which can be configured to operate at any frequency range.

It follows from this that if a frequency filter with a min value of 0 and a max value of 1024 is applied, the resulting inverse transformed frequency filtered image should be a perfect greyscale representation of the original source image. The following screenshot shows this:

Furthermore, because the earlier frequency spectra shows that the image is largely comprised of low frequency data, a frequency filter with a min value of 0 and a max value of 128 still results (after inverse FFT) in a perfect greyscale representation of the original source image. The following screenshot shows this:

However, a frequency filter with a min value of 10 and a max value of 128 yields the following image:

In this image, because some of the low frequency data has been removed, only sharp changes in intensity values are being preserved, with the resulting image beginning to look as if it’s been edge detected. Similarly, a frequency filter with a min value of 20 and a max value of 128 furthers this effect:

Again, the output is now looking even more like the output of an edge detector.

While this example is of no immediate practical use, it hopefully shows the potential of what can be achieved with frequency filtering. One of the main uses of frequency filtering in image processing is to remove noise. If the frequency range of the noise can be identified, which it often can be, that frequency range can be removed, resulting in a denoised image. Another real world application of the Fourier transform in imaging, is producing images of steel rebar quality inside concrete (think steel rebar inside concrete walls, bridges etc.). In this case, the transducer data can be deconvolved (in the frequency domain) with the point spread function of the transducer, to yield images of steel rebar quality inside concrete, from which deterioration can be identified.

Wrapping up

What I set out to address here is whether the combination of Xamarin.Forms and SkiaSharp is a viable platform for writing cross-platform imaging apps, when performing substantial image processing. My main criteria for answering this question are:

  1. Can most of the app be written in shared code?
  2. Can imaging algorithms be implemented so that they have a fast execution speed, particularly on Android?

The answer to both questions is yes. I was happy with the speed of execution of the Fourier transform on both platforms, especially when considering the size of the source image (2048x2048), and the sheer amount of processing that’s performed when frequency filtering an image. In addition, there are plenty of opportunities to further optimise my implementation, as my implementation focus was clarity rather than optimisation. In particular, a 2D FFT naturally lends itself to parallelisation.

While consumer imaging apps don’t typically contain operations that Fourier transform image data, it’s a mainstay of scientific imaging. Therefore, it’s safe to say that Xamarin.Forms and SkiaSharp also make a good combination for scientific imaging apps.

The sample this code comes from can be found on GitHub.

Tuesday, 16 July 2019

Cross-platform imaging with Xamarin.Forms and SkiaSharp II

Previously, I wrote about combining Xamarin.Forms and SkiaSharp to create a cross-platform imaging app. SkiaSharp offers a number of different approaches for accessing pixel data. I went with the most performant approach for doing this, which is to use the GetPixels method to return a pointer to the pixel data, dereference the pointer whenever you want to read/write a pixel value, and use pointer arithmetic to move the pointer to other pixels.

I created a basic app that loads/displays/saves images, and performs basic imaging algorithms. Surprisingly, the execution speed of the algorithms was excellent on both iOS and Android, despite the source image size (4032x3024). However, the imaging algorithms (greyscale, threshold, sepia) were pretty basic and so I decided to move things up a gear and see what the execution speed would be like when performing convolution, which increases the amount of processing performed on an image.

In this blog post I’ll discuss implementing convolution in SkiaSharp. The sample this code comes from can be found on GitHub.

Implementing convolution

In image processing, convolution is the process of adding each element of the image to its local neighbours, weighted by a convolution kernel. The kernel is a small matrix that defines the imaging operation, such as blurring, sharpening, embossing, edge detection, and more. For more information about convolution, see Kernel image processing.

The ConvolutionKernels class in the app defines a number of kernels, each of which implements a different imaging algorithm when convolved with an image. The following code shows three kernels from this class:

namespace Imaging
{
    public class ConvolutionKernels
    {
        public static float[] EdgeDetection => new float[9]
        {
            -1, -1, -1,
            -1,  8, -1,
            -1, -1, -1
        };

        public static float[] LaplacianOfGaussian => new float[25]
        {
             0,  0, -1,  0,  0,
             0, -1, -2, -1,  0,
            -1, -2, 16, -2, -1,
             0, -1, -2, -1,  0,
             0,  0, -1,  0,  0
        };

        public static float[] Emboss => new float[9]
        {
            -2, -1, 0,
            -1,  1, 1,
             0,  1, 2
        };
    }
}

I implemented my own convolution algorithm for performing convolution with 3x3 kernels and was reasonably happy with its execution speed. However, as my ConvolutionKernels class included kernels of different sizes, I had to extend the algorithm to handle NxN sized kernels. Unfortunately, for larger kernel sizes the execution speed slowed quite dramatically. This is because convolution has a complexity of O(N²). However, there are fast convolution algorithms that reduce the complexity to O(N log N). For more information, see Convolution theorem.
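
To make the cost concrete, a naive single-channel convolution looks something like the following sketch (not the app’s actual implementation; edge handling is omitted for brevity):

// Naive NxN convolution for a single channel; every output pixel touches
// kernelSize * kernelSize input pixels, which is what makes large kernels slow.
static float[,] Convolve(float[,] source, float[] kernel, int kernelSize)
{
    int height = source.GetLength(0);
    int width = source.GetLength(1);
    int radius = kernelSize / 2;
    float[,] result = new float[height, width];

    for (int row = radius; row < height - radius; row++)
    {
        for (int col = radius; col < width - radius; col++)
        {
            float sum = 0;
            for (int ky = 0; ky < kernelSize; ky++)
            {
                for (int kx = 0; kx < kernelSize; kx++)
                {
                    sum += source[row + ky - radius, col + kx - radius] *
                           kernel[ky * kernelSize + kx];
                }
            }
            result[row, col] = sum;
        }
    }
    return result;
}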

I was about to implement a fast convolution algorithm when I discovered the CreateMatrixConvolution method in the SKImageFilter class. While I set out to avoid using any filters baked into SkiaSharp, I was happy to use this method because (1) it allows you to specify kernels of arbitrary size, (2) it allows you to specify how edge pixels in the image are handled, and (3) it turned out it was lightning fast (I’m assuming it uses fast convolution under the hood, amongst other optimisation techniques).

After investigating this method, it seemed there were a number of obvious approaches to using it:

  1. Load the image, select a kernel, apply an SKImageFilter, then draw the resulting image.
  2. Load the image, select a kernel, and apply an SKImageFilter while redrawing the image.
  3. Select a kernel, load the image and apply an SKImageFilter while drawing it.

I implemented both (1) and (2) and settled on (2) as my final implementation as it was less code and offered slightly better performance (presumably due to SkiaSharp being optimised for drawing). In addition, I discounted (3), purely because I like to see the source image before I process it.

The following code example shows how a selected kernel is applied to an image, when drawing it:

        void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs e)
        {
            SKImageInfo info = e.Info;
            SKCanvas canvas = e.Surface.Canvas;

            canvas.Clear();
            if (image != null)
            {
                if (kernelSelected)
                {
                    using (SKPaint paint = new SKPaint())
                    {
                        paint.FilterQuality = SKFilterQuality.High;
                        paint.IsAntialias = false;
                        paint.IsDither = false;
                        paint.ImageFilter = SKImageFilter.CreateMatrixConvolution(
                            sizeI, kernel, 1f, 0f, new SKPointI(1, 1),
                            SKMatrixConvolutionTileMode.Clamp, false);

                        canvas.DrawImage(image, info.Rect, ImageStretch.Uniform, paint: paint);
                        image = e.Surface.Snapshot();
                        kernel = null;
                        kernelSelected = false;
                    }
                }
                else
                {
                    canvas.DrawImage(image, info.Rect, ImageStretch.Uniform);
                }
            }
        }

The OnCanvasViewPaintSurface handler is invoked to draw an image on the UI, when the image is loaded and whenever processing is performed on it. This method will be invoked whenever the InvalidateSurface method is called on the SKCanvasView object. The code in the else clause executes when an image is loaded, and draws the image on the UI. The code in the if clause executes when the user has selected a kernel, and taps a Button to perform convolution. Convolution is performed by creating an SKPaint object, and setting various properties on the object. Importantly, this includes setting the ImageFilter property to the SKImageFilter object returned by the CreateMatrixConvolution method. The arguments to the CreateMatrixConvolution method are:

  • The kernel size in pixels, as an SKSizeI struct.
  • The image processing kernel, as a float[].
  • A scale factor applied to each pixel after convolution, as a float. I use a value of 1, so no scaling is applied.
  • A bias factor added to each pixel after convolution, as a float. I use a value of 0, representing no bias factor.
  • A kernel offset, which is applied to each pixel before convolution, as an SKPointI struct. I use values of 1,1 to ensure that no offset values are applied.
  • A tile mode that represents how pixel accesses outside the image are treated, as an SKMatrixConvolutionTileMode enumeration value. I used the Clamp enumeration member to specify that the convolution should be clamped to the image’s edge pixels.
  • A boolean value that indicates whether the alpha channel should be included in the convolution. I specified false to ensure that only the RGB channels are processed.

In addition, further arguments for the CreateMatrixConvolution method can be specified, but weren’t required here. For example, you could choose to perform convolution only on a specified region in the image.

After defining the SKImageFilter, the image is re-drawn, using the SKPaint object that includes the SKImageFilter object. The result is an image that has been convolved with the kernel selected by the user. Then, the SKSurface.Snapshot method is called, so that the re-drawn image is returned as an SKImage. This ensures that if the user selects another kernel, convolution occurs against the new image, rather than the originally loaded image.

The following iOS screenshot shows the source image convolved with a simple edge detection kernel:

The following iOS screenshot shows the source image convolved with a kernel designed to create an emboss effect:

The following iOS screenshot shows the source image convolved with a kernel that implements the Laplacian of a Gaussian:

The Laplacian of a Gaussian is an interesting kernel that performs edge detection on smoothed image data. The Laplacian operator highlights regions of rapid intensity change, and is applied to an image that has first been smoothed with a Gaussian smoothing filter in order to reduce its sensitivity to noise.

Wrapping up

In undertaking this work, the question I set out to answer is as follows: is the combination of Xamarin.Forms and SkiaSharp a viable platform for writing cross-platform imaging apps, when performing more substantial image processing? My main criteria for answering this question are:

  1. Can most of the app be written in shared code?
  2. Can imaging algorithms be implemented so that they have a fast execution speed, particularly on Android?

The answer to both questions, at this stage, is yes. I was impressed with the execution speed of SkiaSharp’s convolution algorithm on both platforms. I was particularly impressed when considering the size of the source image (4032x3024), and the amount of processing that’s performed when convolving an image with a kernel, particularly for larger kernels (the largest one in the app is 7x7).

The reason I say at this stage is that while performing convolution is a step up from basic imaging algorithms (greyscale, thresholding, sepia), it’s still considered relatively basic in image processing terms, despite the processing performed during convolution. Therefore, in my next blog post I’ll look at performing frequency filtering, which significantly increases the amount of processing performed on an image.

The sample this code comes from can be found on GitHub.

Monday, 8 July 2019

Cross-platform imaging with Xamarin.Forms and SkiaSharp

Back in the Xamarin.Forms 1.x days, I attempted to show the power of Xamarin.Forms development by writing a cross-platform imaging app. This was a mistake. While I produced a working cross-platform app, the majority of the code was platform code, joined together through DependencyService calls from a shared UI. If anything, it showed that it wasn’t easily possible to create a cross-platform imaging app with shared code. So it never saw the light of day.

I’d been thinking about this project recently, and while I knew that it’s possible to write cross-platform imaging apps with Xamarin.Forms and SkiaSharp, I wasn’t sure if it was advisable, from an execution speed point of view. In particular, I was worried about the execution speed of imaging algorithms on Android, especially when considering the resolution of photos taken with recent mobile devices. So I decided to write a proof of concept app to find out if Xamarin.Forms and SkiaSharp was a viable platform for writing cross-platform imaging apps.

App requirements and assumptions

When I talk about writing a cross-platform imaging app, I’m not particularly interested in calling platform APIs to resize images, crop images etc. I’m interested in accessing pixel data quickly, and being able to manipulate that data.

The core platforms I wanted to support were iOS and Android. UWP support would be a bonus, but I’d be happy to drop UWP support at the first sign of any issues.

The core functionality of the app is to load/display/save images, and manipulate the pixel values of the images as quickly as possible, with as much of this happening through shared code as possible. I wanted to support the common image file formats, but was only interested in supporting 32-bit images. The consequence of this is that when loading a colour image and converting it to greyscale, it would be saved back out as a 32-bit image, rather than an 8-bit image.

Note that the app is just a proof of concept app. Therefore, I wasn’t bothered about creating a slick UI. I just needed a functional UI. Similarly, I didn’t get hung up on architectural decisions. At one point I was going to implement each imaging algorithm using a plugin architecture, so the app would detect the algorithms and let the user choose them. But that was missing the point. It’s only a proof of concept. So it’s code-behind all the way, and the algorithms are hard-coded into the app.

App overview

The app was created in Xamarin.Forms and SkiaSharp, and the vast majority of the code is shared code. Platform code was required for choosing images on each platform, but that was about it. Image load/display/save/manipulation is handled with SkiaSharp shared code. Code for the sample app can be found on GitHub.

As part of our SkiaSharp docs, we’ve covered how to load and display an image using SkiaSharp. We’ve also covered how to save images using SkiaSharp. Our docs also explain how to write code to pick photos from the device’s photo library. I won’t regurgitate these topics here. Instead, just know that the app uses the techniques covered in these docs. The only difference is that while I started by using the SKBitmap class, I soon moved to using the SKImage class, after discovering that Google have plans to deprecate SKBitmap. Here’s a screenshot of the app, that shows an image of my magilyzer, which I’ll use as a test image in this blog post:

We’ve also got docs on accessing pixel data in SkiaSharp. SkiaSharp offers a number of different approaches for doing this, and understanding them is key to creating a performant app. In particular, take a look at the table in the Comparing the techniques section of the doc. This table shows execution times in milliseconds for these different approaches. The TL;DR is that the fastest approach is to use the GetPixels method to return a pointer to the pixel data, dereference the pointer whenever you want to read/write a pixel value, and use pointer arithmetic to move the pointer to process other pixels.

Using this approach requires knowledge of how pixel data is stored in memory on different platforms. On iOS and Android, each pixel is stored as four bytes in RGBA format, which is represented in SkiaSharp with the SKColorType.Rgba8888 type. On UWP, each pixel is stored as four bytes in BGRA format, which is represented in SkiaSharp with the SKColorType.Bgra8888 type. Initially, I coded my imaging algorithms for all three platforms, but I got sick of having to handle UWP’s special case, so at that point it was goodbye UWP!

Basic algorithms

As I mentioned earlier, the focus of the app isn’t on calling platform APIs to perform imaging operations. It’s on accessing pixel data and manipulating that data. If you want to know how to crop images with SkiaSharp, see Cropping SkiaSharp bitmaps. Similarly, SkiaSharp has functionality for resizing images. With all that said, the first imaging algorithm I always implement when getting to grips with a new platform is converting a colour image to greyscale, as it’s a simple algorithm. The following code example shows how I accomplished this in SkiaSharp:

public static unsafe SKPixmap ToGreyscale(this SKImage image)
{
    SKPixmap pixmap = image.PeekPixels();
    byte* bmpPtr = (byte*)pixmap.GetPixels().ToPointer();
    int width = image.Width;
    int height = image.Height;
    byte* tempPtr;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            tempPtr = bmpPtr;
            byte red = *bmpPtr++;
            byte green = *bmpPtr++;
            byte blue = *bmpPtr++;
            byte alpha = *bmpPtr++;

            // Assuming SKColorType.Rgba8888 - used by iOS and Android
            // (UWP uses SKColorType.Bgra8888)
            byte result = (byte)(0.2126 * red + 0.7152 * green + 0.0722 * blue);

            bmpPtr = tempPtr;
            *bmpPtr++ = result; // red
            *bmpPtr++ = result; // green
            *bmpPtr++ = result; // blue
            *bmpPtr++ = alpha;  // alpha
        }
    }
    return pixmap;
}

This method converts a colour image to greyscale by retrieving a pointer to the start of the pixel data, and then retrieving the R, G, B, and A components of each pixel by dereferencing the pointer and then incrementing its address. The greyscale pixel value is obtained by multiplying the R value by 0.2126, multiplying the G value by 0.7152, multiplying the B value by 0.0722, and then summing the results. Note that the input to this method is an image in RGBA8888 format, and the output is an image in RGBA8888 format, despite being a greyscale image. Therefore the R, G, and B components of each pixel are all set to the same value. The following screenshot shows the test image converted to greyscale, on iOS:

As an example of colour processing, I implemented an algorithm for converting an image to sepia, which is shown in the following example:

public static unsafe SKPixmap ToSepia(this SKImage image)
{
    SKPixmap pixmap = image.PeekPixels();
    byte* bmpPtr = (byte*)pixmap.GetPixels().ToPointer();
    int width = image.Width;
    int height = image.Height;
    byte* tempPtr;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            tempPtr = bmpPtr;
            byte red = *bmpPtr++;
            byte green = *bmpPtr++;
            byte blue = *bmpPtr++;
            byte alpha = *bmpPtr++;

            // Assuming SKColorType.Rgba8888 - used by iOS and Android
            // (UWP uses SKColorType.Bgra8888)
            byte intensity = (byte)(0.299 * red + 0.587 * green + 0.114 * blue);

            bmpPtr = tempPtr;
            *bmpPtr++ = (byte)((intensity > 206) ? 255 : intensity + 49); // red
            *bmpPtr++ = (byte)((intensity < 14) ? 0 : intensity - 14);    // green
            *bmpPtr++ = (byte)((intensity < 56) ? 0 : intensity - 56);    // blue
            *bmpPtr++ = alpha;                                            // alpha
        }
    }
    return pixmap;
}

This method first derives an intensity value for the pixel (essentially a greyscale representation of the pixel), based on its R, G, and B components, and then sets the R, G, and B components based on this intensity value. The following screenshot shows the test image converted to sepia, on iOS:

I also implemented Otsu’s thresholding algorithm, as an example of binarisation. This algorithm typically derives the threshold for an image by minimizing intra-class variance. However, the implementation I’ve used derives the threshold by maximising inter-class variance, which is equivalent. The threshold is then used to separate pixels into foreground and background classes. For more information about this algorithm, see Otsu’s method. The code for the algorithm can be found on GitHub, and a condensed sketch of the threshold computation follows.
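
The sketch below is hedged: a condensed version assuming a 256-bin greyscale histogram, rather than the repo’s exact code:

// Compute Otsu's threshold by maximising the inter-class variance.
static int OtsuThreshold(int[] histogram, int totalPixels)
{
    long weightedSum = 0;
    for (int i = 0; i < 256; i++)
        weightedSum += i * (long)histogram[i];

    long sumBackground = 0;
    long weightBackground = 0;
    double maxBetweenVariance = 0;
    int threshold = 0;

    for (int t = 0; t < 256; t++)
    {
        weightBackground += histogram[t];   // pixels in the background class
        if (weightBackground == 0)
            continue;

        long weightForeground = totalPixels - weightBackground;
        if (weightForeground == 0)
            break;

        sumBackground += t * (long)histogram[t];
        double meanBackground = (double)sumBackground / weightBackground;
        double meanForeground = (double)(weightedSum - sumBackground) / weightForeground;

        // Inter-class variance; the threshold that maximises it best
        // separates foreground from background.
        double betweenVariance = (double)weightBackground * weightForeground *
            (meanBackground - meanForeground) * (meanBackground - meanForeground);

        if (betweenVariance > maxBetweenVariance)
        {
            maxBetweenVariance = betweenVariance;
            threshold = t;
        }
    }
    return threshold;
}

The following screenshot shows the test image thresholded with this algorithm, on iOS: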

Wrapping up

The question I set out to answer is as follows: is the combination of Xamarin.Forms and SkiaSharp a viable platform for writing cross-platform imaging apps? My main criteria for answering this question are:

  1. Can most of the app be written in shared code?
  2. Can imaging algorithms be implemented so that they have a fast execution speed, particularly on Android?

The answer to both questions, at this stage, is yes. In particular, I was impressed with the execution speed of the algorithms on both platforms (even Android!). I was particularly impressed when considering the size of the source image (4032x3024). The reason I say at this stage is because the algorithms I’ve implemented are quite basic. They don’t really do any heavy processing. Therefore, in my next blog post I’ll look at performing convolution operations, which up the amount of processing performed on an image.

The sample this code comes from can be found on GitHub.

Thursday, 4 July 2019

OAuth 2.0 for Native Apps using Xamarin.Forms

About two years ago I wrote some samples that demonstrate using Xamarin.Forms to implement OAuth 2.0 for Native Apps. This spec represents the best practices for OAuth 2.0 authentication flows from mobile apps. These include:

  • Authentication requests should only be made through external user agents, such as the browser. This results in better security, and enables use of the user’s current authentication state, making single sign-on possible. Conversely, this means that authentication requests should never be made through a WebView. WebView controls are unsafe for third parties, as they leave the authorization grant and user’s credentials vulnerable to recording or malicious use. In addition, WebView controls don’t share authentication state, meaning single sign-on isn’t possible.
  • Native apps must request user authorization by creating a URI with the appropriate grant types. The app then redirects the user to this request URI. A redirect URI that the native app can receive and parse must also be supplied.
  • Native apps must use the Proof Key for Code Exchange (PKCE) protocol, to defend against apps on the same device potentially intercepting the authorization code (a sketch of PKCE key generation appears after this list).
  • Native apps should use the authorization code grant flow with PKCE. Conversely, native apps shouldn’t use the implicit grant flow.
  • Cross-Site Request Forgery (CSRF) attacks should be mitigated by using the state parameter to link requests and responses.
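
For illustration, generating a PKCE code verifier and deriving its S256 code challenge can be sketched like this (a hedged sketch based on RFC 7636, not code taken from the samples):

public static class Pkce
{
    // A high-entropy random verifier (RFC 7636 requires 43-128 characters).
    public static string CreateCodeVerifier()
    {
        var bytes = new byte[32];
        using (var rng = System.Security.Cryptography.RandomNumberGenerator.Create())
            rng.GetBytes(bytes);
        return Base64UrlEncode(bytes);
    }

    // The S256 challenge: BASE64URL(SHA256(ASCII(verifier))).
    public static string CreateCodeChallenge(string verifier)
    {
        using (var sha256 = System.Security.Cryptography.SHA256.Create())
            return Base64UrlEncode(sha256.ComputeHash(System.Text.Encoding.ASCII.GetBytes(verifier)));
    }

    // Base64url: base64 with no padding, '+' replaced by '-', '/' by '_'.
    static string Base64UrlEncode(byte[] bytes) =>
        System.Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');
}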

More details can be found in the OAuth 2.0 for Native Apps spec. Ultimately though, it leads to the OAuth 2.0 authentication flow for native apps being:

  1. The native app opens a browser tab with the authorisation request.
  2. The authorisation endpoint receives the authorisation request, authenticates the user, and obtains authorisation.
  3. The authorisation server issues an authorization code to the redirect URI.
  4. The native app receives the authorisation code from the redirect URI.
  5. The native app presents the authorization code at the token endpoint.
  6. The token endpoint validates the authorization code and issues the requested tokens.

For a whole variety of reasons, the samples that demo this using Xamarin.Forms never saw the light of day, but they can now be found in my GitHub repo. There are two samples:

Both samples consume endpoints on a publicly available IdentityServer site. The main things to note about the samples are that (1) they use custom URL schemes defined in the platform projects, and (2) each platform project has code to open/close the browser as required, which is invoked with the Xamarin.Forms DependencyService.

Hopefully the samples will be of use to people, and if you want to know how the code works you should thoroughly read the OAuth 2.0 for Native Apps spec.

Wednesday, 3 July 2019

What’s new in CollectionView in Xamarin.Forms 4.1

Xamarin.Forms 4.1 was released on Monday, and as well as new functionality such as CheckBox, it includes a number of updates to CollectionView. The main CollectionView updates are outlined below.

Item Spacing

By default, each item in a CollectionView lacks empty space around it. This can now be changed by setting properties on the items layout used by the CollectionView.

For a ListItemsLayout, set the ItemSpacing property to a double that represents the empty space around each item. For a GridItemsLayout, set the VerticalItemSpacing and HorizontalItemSpacing properties to double values that represent the empty space vertically and horizontally around each item.
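
For example, vertical item spacing could be set in C# along these lines (a hedged sketch, assuming the ListItemsLayout constructor that takes an orientation):

CollectionView collectionView = new CollectionView
{
    // 5 device-independent units of empty space around each item.
    ItemsLayout = new ListItemsLayout(ItemsLayoutOrientation.Vertical)
    {
        ItemSpacing = 5
    }
};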

For more info, see Item spacing.

Specifying Layout

The static VerticalList and HorizontalList members in the ListItemsLayout class have been renamed to Vertical and Horizontal.

In addition, CollectionView has gained some converters so that vertical and horizontal lists can be specified in XAML using strings, rather than static members:

<CollectionView ItemsSource="{Binding Monkeys}" ItemsLayout="HorizontalList" />

For more info, see CollectionView Layout.

Item Sizing Strategy

The ItemSizingStrategy enumeration is now implemented on Android. For more info, see Item sizing.

SelectedItem and SelectedItems

The SelectedItem property now uses a TwoWay binding by default, and the selection can be cleared by setting the property, or the object it binds to, to null.

The SelectedItems property now uses a OneWay binding by default, and is now bindable to view model properties. However, note that this property is defined as IList<object>, and must bind to a collection that implements IList, and that has an object generic type. Therefore, the bound collection should be, for example, ObservableCollection<object> rather than ObservableCollection<Monkey>. In addition, selections can be cleared by setting this property, or the collection it binds to, to null.
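
For example, a view model property suitable for binding SelectedItems to might look like this (a hedged sketch; the property name is hypothetical):

// Must use an object generic type, even though the items are Monkey objects.
public ObservableCollection<object> SelectedMonkeys { get; set; }
    = new ObservableCollection<object>();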

For more info, see CollectionView Selection.

Thursday, 3 January 2019

Using the Retry Pattern with Azure Storage from Xamarin.Forms

Back in 2017 I wrote about transient fault handling in Xamarin.Forms applications with the retry pattern. Transient faults include the momentary loss of network connectivity to services, the temporary unavailability of a service, or timeouts that arise when a service is busy. These faults can have a huge impact on the perceived quality of an application. Therefore, applications that communicate with remote services should ideally be able to:

  1. Detect faults when they occur, and determine if the faults are likely to be transient.
  2. Retry the operation if it’s determined that the fault is likely to be transient, and keep track of the number of times the operation is retried.
  3. Use an appropriate retry strategy, which specifies the number of retries, the delay between each attempt, and the actions to take after a failed attempt.

This transient fault handling can be achieved by wrapping all attempts to access a remote service in code that implements the retry pattern.

Traditionally, the typical approach to using the retry pattern with Azure Storage has been to use a library such as Polly to provide a retry policy. However, this isn’t necessary, as the Azure Storage SDK includes the ability to specify a retry policy. The SDK provides different retry strategies, which define the retry interval and other details. There are classes that provide support for linear (constant delay) retry intervals, and exponential with randomization retry intervals. For more information about using Azure Storage from a Xamarin.Forms application, see Storing and Accessing Data in Azure Storage.

Retry policies are configured programmatically. When writing to/reading from blob storage, this can be accomplished by creating a BlobRequestOptions object and assigning to the DefaultRequestOptions property of the CloudBlobClient object:

public class AzureStorageService : IAzureStorageService
{
    CloudStorageAccount _storageAccount;
    CloudBlobClient _client;

    public AzureStorageService()
    {
        _storageAccount = CloudStorageAccount.Parse(Constants.StorageConnection);
        _client = CreateBlobClient();
    }

    CloudBlobClient CreateBlobClient()
    {
        CloudBlobClient client = _storageAccount.CreateCloudBlobClient();
        client.DefaultRequestOptions = new BlobRequestOptions
        {
            RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(3), 4),
            LocationMode = LocationMode.PrimaryThenSecondary,
            MaximumExecutionTime = TimeSpan.FromSeconds(20)
        };
        return client;
    }
    ...
}

This code creates a retry policy that uses an exponential retry. The arguments to the ExponentialRetry constructor specify the back-off interval between retries, and the maximum number of retry attempts (4). A maximum execution time of 20 seconds is set for all potential retry attempts. The LocationMode property is used to indicate which location should receive the request, if the storage account is configured to use geo-redundant storage. Here, PrimaryThenSecondary specifies that requests are always sent to the primary location first, and if a request fails, it’s sent to the secondary location. Note that if you use this option you must ensure that your application can work with data that may be stale if the replication from the primary store hasn’t completed.

All operations with the CloudBlobClient object will then use the specified request options. For example, the following code, which uploads a file to blob storage, will use the retry policy defined in the CloudBlobClient.DefaultRequestOptions property:

public async Task<string> UploadFileAsync(ContainerType containerType, Stream stream)
{
    var container = _client.GetContainerReference(containerType.ToString().ToLower());
    await container.CreateIfNotExistsAsync();
    var name = Guid.NewGuid().ToString();
    var fileBlob = container.GetBlockBlobReference(name);
    await fileBlob.UploadFromStreamAsync(stream);
    return name;
}

Choosing whether to use the linear retry policy or the exponential retry policy depends upon the business needs of your application. However, a general rule of thumb is to use a linear retry policy for interactive applications that are in the foreground, and to use an exponential retry policy when your application is backgrounded, or performing some batch processing.
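
For comparison, a linear policy is configured in the same way (a hedged sketch, assuming a 3-second constant delay and 4 attempts):

client.DefaultRequestOptions = new BlobRequestOptions
{
    // Retry up to 4 times, waiting a constant 3 seconds between attempts.
    RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(3), 4)
};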

The retry policies provided by the Azure Storage SDK will be sufficient for most applications. However, if there’s a need to implement a custom retry approach, the existing policies can be extended through the IExtendedRetryPolicy interface.

For retry guidance with Azure Storage, see Azure Storage.

The sample this code comes from can be found on GitHub.