Tuesday, 16 July 2019

Cross-platform imaging with Xamarin.Forms and SkiaSharp II

Previously, I wrote about combining Xamarin.Forms and SkiaSharp to create a cross-platform imaging app. SkiaSharp offers a number of different approaches for accessing pixel data. I went with the most performant approach for doing this, which is to use the GetPixels method to return a pointer to the pixel data, dereference the pointer whenever you want to read/write a pixel value, and use pointer arithmetic to move the pointer to other pixels.

I created a basic app that loads/displays/saves images, and performs basic imaging algorithms. Surprisingly, the execution speed of the algorithms was excellent on both iOS and Android, despite the source image size (4032x3024). However, the imaging algorithms (greyscale, threshold, sepia) were pretty basic and so I decided to move things up a gear and see what the execution speed would be like when performing convolution, which increases the amount of processing performed on an image.

In this blog post I’ll discuss implementing convolution in SkiaSharp. The sample this code comes from can be found on GitHub.

Implementing convolution

In image processing, convolution is the process of adding each element of the image to its local neighbours, weighted by a convolution kernel. The kernel is a small matrix that defines the imaging operation, such as blurring, sharpening, embossing, edge detection, and more. For more information about convolution, see Kernel image processing.

The ConvolutionKernels class in the app defines a number of kernels that each implement a different imaging algorithm when convolved with an image. The following code shows three kernels from this class:

namespace Imaging
{
    public class ConvolutionKernels
    {
        public static float[] EdgeDetection => new float[9]
        {
            -1, -1, -1,
            -1,  8, -1,
            -1, -1, -1
        };

        public static float[] LaplacianOfGaussian => new float[25]
        {
             0,  0, -1,  0,  0,
             0, -1, -2, -1,  0,
            -1, -2, 16, -2, -1,
             0, -1, -2, -1,  0,
             0,  0, -1,  0,  0
        };

        public static float[] Emboss => new float[9]
        {
            -2, -1, 0,
            -1,  1, 1,
             0,  1, 2
        };
    }
}

I implemented my own convolution algorithm for performing convolution with 3x3 kernels and was reasonably happy with its execution speed. However, as my ConvolutionKernels class includes kernels of different sizes, I had to extend the algorithm to handle NxN sized kernels. Unfortunately, for larger kernel sizes the execution speed slowed quite dramatically. This is because convolving with an NxN kernel has a complexity of O(N²) per pixel. However, there are fast convolution algorithms that reduce the complexity to O(N log N). For more information, see Convolution theorem.
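For context, a naive spatial-domain implementation looks something like the sketch below. This is a minimal illustration rather than the app’s actual code: it assumes an Rgba8888 image, skips the border pixels rather than handling edges properly, and clamps each result to the 0-255 range:

public static unsafe void Convolve(SKPixmap source, SKPixmap dest, float[] kernel, int n)
{
    byte* src = (byte*)source.GetPixels().ToPointer();
    byte* dst = (byte*)dest.GetPixels().ToPointer();
    int width = source.Width;
    int height = source.Height;
    int half = n / 2;

    for (int row = half; row < height - half; row++)
    {
        for (int col = half; col < width - half; col++)
        {
            // Convolve the R, G, and B channels; alpha is copied unchanged
            for (int channel = 0; channel < 3; channel++)
            {
                float sum = 0;
                for (int ky = 0; ky < n; ky++)
                {
                    for (int kx = 0; kx < n; kx++)
                    {
                        int index = ((row + ky - half) * width + (col + kx - half)) * 4;
                        sum += src[index + channel] * kernel[ky * n + kx];
                    }
                }
                dst[(row * width + col) * 4 + channel] = (byte)Math.Max(0, Math.Min(255, sum));
            }
            dst[(row * width + col) * 4 + 3] = src[(row * width + col) * 4 + 3];
        }
    }
}

The two inner loops over the kernel are what make the naive approach O(N²) per pixel, which is why execution time climbs so quickly with kernel size.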

I was about to implement a fast convolution algorithm when I discovered the CreateMatrixConvolution method in the SKImageFilter class. While I set out to avoid using any filters baked into SkiaSharp, I was happy to use this method because (1) it allows you to specify kernels of arbitrary size, (2) it allows you to specify how edge pixels in the image are handled, and (3) it turned out it was lightning fast (I’m assuming it uses fast convolution under the hood, amongst other optimisation techniques).

After investigating this method, it seemed there were a number of obvious approaches to using it:

  1. Load the image, select a kernel, apply an SKImageFilter, then draw the resulting image.
  2. Load the image, select a kernel, and apply an SKImageFilter while redrawing the image.
  3. Select a kernel, load the image and apply an SKImageFilter while drawing it.

I implemented both (1) and (2), and settled on (2) as my final implementation, as it was less code and offered slightly better performance (presumably due to SkiaSharp being optimised for drawing). I discounted (3) purely because I like to see the source image before I process it.

The following code example shows how a selected kernel is applied to an image, when drawing it:

        void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs e)
        {
            SKImageInfo info = e.Info;
            SKCanvas canvas = e.Surface.Canvas;

            canvas.Clear();
            if (image != null)
            {
                if (kernelSelected)
                {
                    using (SKPaint paint = new SKPaint())
                    {
                        paint.FilterQuality = SKFilterQuality.High;
                        paint.IsAntialias = false;
                        paint.IsDither = false;
                        paint.ImageFilter = SKImageFilter.CreateMatrixConvolution(
                            sizeI, kernel, 1f, 0f, new SKPointI(1, 1),
                            SKMatrixConvolutionTileMode.Clamp, false);

                        canvas.DrawImage(image, info.Rect, ImageStretch.Uniform, paint: paint);
                        image = e.Surface.Snapshot();
                        kernel = null;
                        kernelSelected = false;
                    }
                }
                else
                {
                    canvas.DrawImage(image, info.Rect, ImageStretch.Uniform);
                }
            }
        }

The OnCanvasViewPaintSurface handler is invoked to draw an image on the UI, when the image is loaded and whenever processing is performed on it. This method will be invoked whenever the InvalidateSurface method is called on the SKCanvasView object. The code in the else clause executes when an image is loaded, and draws the image on the UI. The code in the if clause executes when the user has selected a kernel and taps a Button to perform convolution. Convolution is performed by creating an SKPaint object and setting various properties on it. Importantly, this includes setting the ImageFilter property to the SKImageFilter object returned by the CreateMatrixConvolution method. The arguments to the CreateMatrixConvolution method are:

  • The kernel size in pixels, as an SKSizeI struct.
  • The image processing kernel, as a float[].
  • A scale factor applied to each pixel after convolution, as a float. I use a value of 1, so no scaling is applied.
  • A bias factor added to each pixel after convolution, as a float. I use a value of 0, representing no bias factor.
  • A kernel offset, which indicates the position in the kernel that maps onto the pixel being processed, as an SKPointI struct. I use values of 1,1, which centres a 3x3 kernel over each pixel.
  • A tile mode that represents how pixel accesses outside the image are treated, as an SKMatrixConvolutionTileMode enumeration value. I used the Clamp enumeration member to specify that the convolution should be clamped to the image’s edge pixels.
  • A boolean value that indicates whether the alpha channel should be included in the convolution. I specified false to ensure that only the RGB channels are processed.

In addition, further arguments for the CreateMatrixConvolution method can be specified, but they weren’t required here. For example, you could choose to perform convolution only on a specified region of the image.
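For example, a crop rect can restrict the filter to a region of the image. The following is an illustrative sketch (the region values are invented, and sizeI, kernel, and paint are the same variables used in the handler above):

// Sketch: only convolve a 500x500 region at the top-left of the image
var cropRect = new SKImageFilter.CropRect(SKRect.Create(0, 0, 500, 500));
paint.ImageFilter = SKImageFilter.CreateMatrixConvolution(
    sizeI, kernel, 1f, 0f, new SKPointI(1, 1),
    SKMatrixConvolutionTileMode.Clamp, false,
    null, cropRect);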

After defining the SKImageFilter, the image is re-drawn using the SKPaint object that includes the SKImageFilter object. The result is an image that has been convolved with the kernel selected by the user. Then, the SKSurface.Snapshot method is called, so that the re-drawn image is returned as an SKImage. This ensures that if the user selects another kernel, convolution occurs against the new image, rather than the originally loaded image.

The following iOS screenshot shows the source image convolved with a simple edge detection kernel:

The following iOS screenshot shows the source image convolved with a kernel designed to create an emboss effect:

The following iOS screenshot shows the source image convolved with a kernel that implements the Laplacian of a Gaussian:

The Laplacian of a Gaussian is an interesting kernel that performs edge detection on smoothed image data. The Laplacian operator highlights regions of rapid intensity change, and is applied to an image that has first been smoothed with a Gaussian smoothing filter in order to reduce its sensitivity to noise.

Wrapping up

In undertaking this work, the question I set out to answer is as follows: is the combination of Xamarin.Forms and SkiaSharp a viable platform for writing cross-platform imaging apps, when performing more substantial image processing? My main criteria for answering this question are:

  1. Can most of the app be written in shared code?
  2. Can imaging algorithms be implemented so that they have a fast execution speed, particularly on Android?

The answer to both questions, at this stage, is yes. I was impressed with the execution speed of SkiaSharp’s convolution algorithm on both platforms. I was particularly impressed when considering the size of the source image (4032x3024), and the amount of processing that’s performed when convolving an image with a kernel, particularly for larger kernels (the largest one in the app is 7x7).

The reason I say at this stage is that while performing convolution is a step up from basic imaging algorithms (greyscale, thresholding, sepia), it’s still considered relatively basic in image processing terms, despite the processing performed during convolution. Therefore, in my next blog post I’ll look at performing frequency filtering, which significantly increases the amount of processing performed on an image.

The sample this code comes from can be found on GitHub.

Monday, 8 July 2019

Cross-platform imaging with Xamarin.Forms and SkiaSharp

Back in the Xamarin.Forms 1.x days, I attempted to show the power of Xamarin.Forms development by writing a cross-platform imaging app. This was a mistake. While I produced a working cross-platform app, the majority of the code was platform code, joined together through DependencyService calls from a shared UI. If anything, it showed that it wasn’t easily possible to create a cross-platform imaging app with shared code. So it never saw the light of day.

I’d been thinking about this project recently, and while I knew that it’s possible to write cross-platform imaging apps with Xamarin.Forms and SkiaSharp, I wasn’t sure if it was advisable, from an execution speed point of view. In particular, I was worried about the execution speed of imaging algorithms on Android, especially when considering the resolution of photos taken with recent mobile devices. So I decided to write a proof of concept app to find out if Xamarin.Forms and SkiaSharp was a viable platform for writing cross-platform imaging apps.

App requirements and assumptions

When I talk about writing a cross-platform imaging app, I’m not particularly interested in calling platform APIs to resize images, crop images etc. I’m interested in accessing pixel data quickly, and being able to manipulate that data.

The core platforms I wanted to support were iOS and Android. UWP support would be a bonus, but I’d be happy to drop UWP support at the first sign of any issues.

The core functionality of the app is to load/display/save images, and manipulate the pixel values of the images as quickly as possible, with as much of this happening through shared code as possible. I wanted to support the common image file formats, but was only interested in supporting 32 bit images. The consequence of this is that when loading a colour image and converting it to greyscale, it would be saved back out as a 32 bit image, rather than an 8 bit image.

Note that the app is just a proof of concept app. Therefore, I wasn’t bothered about creating a slick UI. I just needed a functional UI. Similarly, I didn’t get hung up on architectural decisions. At one point I was going to implement each imaging algorithm using a plugin architecture, so the app would detect the algorithms and let the user choose them. But that was missing the point. It’s only a proof of concept. So it’s code-behind all the way, and the algorithms are hard-coded into the app.

App overview

The app was created in Xamarin.Forms and SkiaSharp, and the vast majority of the code is shared code. Platform code was required for choosing images on each platform, but that was about it. Image load/display/save/manipulation is handled with SkiaSharp shared code. Code for the sample app can be found on GitHub.

As part of our SkiaSharp docs, we’ve covered how to load and display an image using SkiaSharp. We’ve also covered how to save images using SkiaSharp. Our docs also explain how to write code to pick photos from the device’s photo library. I won’t regurgitate these topics here. Instead, just know that the app uses the techniques covered in these docs. The only difference is that while I started by using the SKBitmap class, I soon moved to using the SKImage class, after discovering that Google have plans to deprecate SKBitmap. Here’s a screenshot of the app, showing an image of my magilyzer, which I’ll use as a test image in this blog post:

We’ve also got docs on accessing pixel data in SkiaSharp. SkiaSharp offers a number of different approaches for doing this, and understanding them is key to creating a performant app. In particular, take a look at the table in the Comparing the techniques section of the doc. This table shows execution times in milliseconds for these different approaches. The TL;DR is that the fastest approach is to use the GetPixels method to return a pointer to the pixel data, dereference the pointer whenever you want to read/write a pixel value, and use pointer arithmetic to move the pointer to process other pixels.

Using this approach requires knowledge of how pixel data is stored in memory on different platforms. On iOS and Android, each pixel is stored as four bytes in RGBA format, which is represented in SkiaSharp with the SKColorType.Rgba8888 type. On UWP, each pixel is stored as four bytes in BGRA format, which is represented in SkiaSharp with the SKColorType.Bgra8888 type. Initially, I coded my imaging algorithms for all three platforms, but I got sick of having to handle UWP’s special case, so at that point it was goodbye UWP!
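For what it’s worth, had I persevered with UWP, one option would have been to branch on SkiaSharp’s reported platform colour type rather than hard-coding the channel order. A sketch of the idea (not code from the app):

// SKImageInfo.PlatformColorType reports the native 32 bit format:
// Rgba8888 on iOS and Android, Bgra8888 on Windows
bool isRgba = SKImageInfo.PlatformColorType == SKColorType.Rgba8888;
int redOffset = isRgba ? 0 : 2;   // R is byte 0 in RGBA, byte 2 in BGRA
int blueOffset = isRgba ? 2 : 0;  // B is byte 2 in RGBA, byte 0 in BGRA
// Green (offset 1) and alpha (offset 3) are the same in both formats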

Basic algorithms

As I mentioned earlier, the focus of the app isn’t on calling platform APIs to perform imaging operations. It’s on accessing pixel data and manipulating that data. If you want to know how to crop images with SkiaSharp, see Cropping SkiaSharp bitmaps. Similarly, SkiaSharp has functionality for resizing images. With all that said, the first imaging algorithm I always implement when getting to grips with a new platform is converting a colour image to greyscale, as it’s a simple algorithm. The following code example shows how I accomplished this in SkiaSharp:

public static unsafe SKPixmap ToGreyscale(this SKImage image)
{
    SKPixmap pixmap = image.PeekPixels();
    byte* bmpPtr = (byte*)pixmap.GetPixels().ToPointer();
    int width = image.Width;
    int height = image.Height;
    byte* tempPtr;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            tempPtr = bmpPtr;
            byte red = *bmpPtr++;
            byte green = *bmpPtr++;
            byte blue = *bmpPtr++;
            byte alpha = *bmpPtr++;

            // Assuming SKColorType.Rgba8888 - used by iOS and Android
            // (UWP uses SKColorType.Bgra8888)
            byte result = (byte)(0.2126 * red + 0.7152 * green + 0.0722 * blue);

            bmpPtr = tempPtr;
            *bmpPtr++ = result; // red
            *bmpPtr++ = result; // green
            *bmpPtr++ = result; // blue
            *bmpPtr++ = alpha;  // alpha
        }
    }
    return pixmap;
}

This method converts a colour image to greyscale by retrieving a pointer to the start of the pixel data, and then retrieving the R, G, B, and A components of each pixel by dereferencing the pointer and incrementing its address. The greyscale pixel value is obtained by multiplying the R value by 0.2126, the G value by 0.7152, and the B value by 0.0722, and then summing the results. Note that both the input to this method and its output are images in RGBA8888 format, despite the output being a greyscale image. Therefore, the R, G, and B components of each pixel are all set to the same value. The following screenshot shows the test image converted to greyscale, on iOS:

As an example of colour processing, I implemented an algorithm for converting an image to sepia, which is shown in the following example:

public static unsafe SKPixmap ToSepia(this SKImage image)
{
    SKPixmap pixmap = image.PeekPixels();
    byte* bmpPtr = (byte*)pixmap.GetPixels().ToPointer();
    int width = image.Width;
    int height = image.Height;
    byte* tempPtr;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            tempPtr = bmpPtr;
            byte red = *bmpPtr++;
            byte green = *bmpPtr++;
            byte blue = *bmpPtr++;
            byte alpha = *bmpPtr++;

            // Assuming SKColorType.Rgba8888 - used by iOS and Android
            // (UWP uses SKColorType.Bgra8888)
            byte intensity = (byte)(0.299 * red + 0.587 * green + 0.114 * blue);

            bmpPtr = tempPtr;
            *bmpPtr++ = (byte)((intensity > 206) ? 255 : intensity + 49); // red
            *bmpPtr++ = (byte)((intensity < 14) ? 0 : intensity - 14);    // green
            *bmpPtr++ = (byte)((intensity < 56) ? 0 : intensity - 56);    // blue
            *bmpPtr++ = alpha;                                            // alpha
        }
    }
    return pixmap;
}

This method first derives an intensity value for the pixel (essentially a greyscale representation of the pixel), based on its R, G, and B components, and then sets the R, G, and B components based on this intensity value. The following screenshot shows the test image converted to sepia, on iOS:

I also implemented Otsu’s thresholding algorithm, as an example of binarisation. This algorithm typically derives the threshold for an image by minimising intra-class variance. However, the implementation I’ve used derives the threshold by maximising inter-class variance, which is equivalent. The threshold is then used to separate pixels into foreground and background classes. For more information about this algorithm, see Otsu’s method. The full code for the algorithm can be found on GitHub.
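The core of the computation is worth sketching, though. The following condensed version (my own sketch, not the repo code verbatim) picks the threshold that maximises between-class variance, given a 256-bin greyscale histogram:

// Condensed sketch of Otsu's method: pick the threshold that maximises
// between-class variance, given a 256-bin greyscale histogram
static byte OtsuThreshold(int[] histogram, int pixelCount)
{
    float sumAll = 0;
    for (int i = 0; i < 256; i++)
        sumAll += i * histogram[i];

    float sumBackground = 0, maxVariance = 0;
    int weightBackground = 0;
    byte threshold = 0;

    for (int t = 0; t < 256; t++)
    {
        weightBackground += histogram[t];           // pixels at or below t
        if (weightBackground == 0) continue;
        int weightForeground = pixelCount - weightBackground;
        if (weightForeground == 0) break;

        sumBackground += t * histogram[t];
        float meanBackground = sumBackground / weightBackground;
        float meanForeground = (sumAll - sumBackground) / weightForeground;

        // Between-class variance for this candidate threshold
        float variance = (float)weightBackground * weightForeground *
                         (meanBackground - meanForeground) * (meanBackground - meanForeground);
        if (variance > maxVariance)
        {
            maxVariance = variance;
            threshold = (byte)t;
        }
    }
    return threshold;
}

The following screenshot shows the test image thresholded with this algorithm, on iOS: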

Wrapping up

The question I set out to answer is as follows: is the combination of Xamarin.Forms and SkiaSharp a viable platform for writing cross-platform imaging apps? My main criteria for answering this question are:

  1. Can most of the app be written in shared code?
  2. Can imaging algorithms be implemented so that they have a fast execution speed, particularly on Android?

The answer to both questions, at this stage, is yes. In particular, I was impressed with the execution speed of the algorithms on both platforms (even Android!). I was particularly impressed when considering the size of the source image (4032x3024). The reason I say at this stage is that the algorithms I’ve implemented are quite basic. They don’t really do any heavy processing. Therefore, in my next blog post I’ll look at performing convolution operations, which up the amount of processing performed on an image.

The sample this code comes from can be found on GitHub.

Thursday, 4 July 2019

OAuth 2.0 for Native Apps using Xamarin.Forms

About two years ago I wrote some samples that demonstrate using Xamarin.Forms to implement OAuth 2.0 for Native Apps. This spec represents the best practices for OAuth 2.0 authentication flows from mobile apps. These include:

  • Authentication requests should only be made through external user agents, such as the browser. This results in better security, and enables use of the user’s current authentication state, making single sign-on possible. Conversely, this means that authentication requests should never be made through a WebView. WebView controls are unsafe for third parties, as they leave the authorization grant and user’s credentials vulnerable to recording or malicious use. In addition, WebView controls don’t share authentication state, meaning single sign-on isn’t possible.
  • Native apps must request user authorization by creating a URI with the appropriate grant types. The app then redirects the user to this request URI. A redirect URI that the native app can receive and parse must also be supplied.
  • Native apps must use the Proof Key for Code Exchange (PKCE) protocol, to defend against apps on the same device potentially intercepting the authorization code (a sketch of generating the PKCE values follows this list).
  • Native apps should use the authorization code grant flow with PKCE. Conversely, native apps shouldn’t use the implicit grant flow.
  • Cross-Site Request Forgery (CSRF) attacks should be mitigated by using the state parameter to link requests and responses.
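To make the PKCE requirement concrete, here’s a minimal sketch of generating the code verifier and code challenge. The class and method names are mine rather than taken from the samples:

using System;
using System.Security.Cryptography;
using System.Text;

static class Pkce
{
    // The code verifier: a high-entropy random string, base64url-encoded
    public static string CreateCodeVerifier()
    {
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(bytes);
        return Base64UrlEncode(bytes);
    }

    // The code challenge: the base64url-encoded SHA256 hash of the verifier
    public static string CreateCodeChallenge(string codeVerifier)
    {
        using (var sha256 = SHA256.Create())
            return Base64UrlEncode(sha256.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier)));
    }

    static string Base64UrlEncode(byte[] data) =>
        Convert.ToBase64String(data).TrimEnd('=').Replace('+', '-').Replace('/', '_');
}

The app sends the challenge with the authorisation request, and the verifier with the token request; the authorisation server hashes the verifier and compares it to the challenge, proving that both requests came from the same client.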

More details can be found in the OAuth 2.0 for Native Apps spec. Ultimately though, it leads to the OAuth 2.0 authentication flow for native apps being:

  1. The native app opens a browser tab with the authorisation request.
  2. The authorisation endpoint receives the authorisation request, authenticates the user, and obtains authorisation.
  3. The authorisation server issues an authorisation code to the redirect URI.
  4. The native app receives the authorisation code from the redirect URI.
  5. The native app presents the authorisation code at the token endpoint.
  6. The token endpoint validates the authorisation code and issues the requested tokens.

For a whole variety of reasons, the samples that demo this using Xamarin.Forms never saw the light of day, but they can now be found in my GitHub repo. There are two samples in the repo.

Both samples consume endpoints on a publicly available IdentityServer site. The main things to note about the samples are that (1) they use custom URL schemes defined in the platform projects, and (2) each platform project has code to open/close the browser as required, which is invoked with the Xamarin.Forms DependencyService.

Hopefully the samples will be of use to people, and if you want to know how the code works you should thoroughly read the OAuth 2.0 for Native Apps spec.

Wednesday, 3 July 2019

What’s new in CollectionView in Xamarin.Forms 4.1

Xamarin.Forms 4.1 was released on Monday, and as well as new functionality such as CheckBox, it includes a number of updates to CollectionView. The main CollectionView updates are outlined below.

Item Spacing

By default, each item in a CollectionView lacks empty space around it. This can now be changed by setting properties on the items layout used by the CollectionView.

For a ListItemsLayout, set the ItemSpacing property to a double that represents the empty space around each item. For a GridItemsLayout, set the VerticalItemSpacing and HorizontalItemSpacing properties to double values that represent the empty space vertically and horizontally around each item.
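In C#, that looks something like the following sketch (the spacing value is illustrative):

CollectionView collectionView = new CollectionView
{
    // 20 device-independent units of empty space around each item
    ItemsLayout = new ListItemsLayout(ItemsLayoutOrientation.Vertical)
    {
        ItemSpacing = 20
    }
};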

For more info, see Item spacing.

Specifying Layout

The static VerticalList and HorizontalList members in the ListItemsLayout class have been renamed to Vertical and Horizontal.

In addition, CollectionView has gained some converters so that vertical and horizontal lists can be specified in XAML using strings, rather than static members:

<CollectionView ItemsSource="{Binding Monkeys}" ItemsLayout="HorizontalList" />

For more info, see CollectionView Layout.

Item Sizing Strategy

The ItemSizingStrategy enumeration is now implemented on Android. For more info, see Item sizing.

SelectedItem and SelectedItems

The SelectedItem property now uses a TwoWay binding by default, and the selection can be cleared by setting the property, or the object it binds to, to null.

The SelectedItems property now uses a OneWay binding by default, and is now bindable to view model properties. However, note that this property is defined as IList<object>, and must bind to a collection that implements IList, and that has an object generic type. Therefore, the bound collection should be, for example, ObservableCollection<object> rather than ObservableCollection<Monkey>. In addition, selections can be cleared by setting this property, or the collection it binds to, to null.
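For example, a view model for multiple selection might declare its properties as follows. This is a sketch, with Monkey standing in for whatever model type the page displays:

public class MonkeysViewModel
{
    public ObservableCollection<Monkey> Monkeys { get; } =
        new ObservableCollection<Monkey>();

    // Must use an object generic argument - ObservableCollection<Monkey>
    // won't bind, even though the items are all monkeys
    public ObservableCollection<object> SelectedMonkeys { get; set; } =
        new ObservableCollection<object>();
}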

For more info, see CollectionView Selection.

Thursday, 3 January 2019

Using the Retry Pattern with Azure Storage from Xamarin.Forms

Back in 2017 I wrote about transient fault handling in Xamarin.Forms applications with the retry pattern. Transient faults include the momentary loss of network connectivity to services, the temporary unavailability of a service, or timeouts that arise when a service is busy. These faults can have a huge impact on the perceived quality of an application. Therefore, applications that communicate with remote services should ideally be able to:

  1. Detect faults when they occur, and determine if the faults are likely to be transient.
  2. Retry the operation if it’s determined that the fault is likely to be transient, and keep track of the number of times the operation is retried.
  3. Use an appropriate retry strategy, which specifies the number of retries, the delay between each attempt, and the actions to take after a failed attempt.

This transient fault handling can be achieved by wrapping all attempts to access a remote service in code that implements the retry pattern.
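A minimal sketch of such a wrapper might look like the following. This is illustrative rather than the code from that post, and isTransient stands in for whatever fault-detection logic suits your services:

// Retry an operation up to maxRetries times, with a fixed delay between
// attempts, retrying only when the fault looks transient
static async Task<T> RetryAsync<T>(
    Func<Task<T>> operation,
    Func<Exception, bool> isTransient,
    int maxRetries = 3,
    int delayMilliseconds = 2000)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return await operation();
        }
        catch (Exception ex) when (attempt <= maxRetries && isTransient(ex))
        {
            await Task.Delay(delayMilliseconds);
        }
    }
}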

Traditionally, the typical approach to using the retry pattern with Azure Storage has been to use a library such as Polly to provide a retry policy. However, this isn’t necessary, as the Azure Storage SDK includes the ability to specify a retry policy. The SDK provides different retry strategies, which define the retry interval and other details. There are classes that provide support for linear (constant delay) retry intervals, and exponential with randomisation retry intervals. For more information about using Azure Storage from a Xamarin.Forms application, see Storing and Accessing Data in Azure Storage.

Retry policies are configured programmatically. When writing to/reading from blob storage, this can be accomplished by creating a BlobRequestOptions object and assigning to the DefaultRequestOptions property of the CloudBlobClient object:

public class AzureStorageService : IAzureStorageService
{
    CloudStorageAccount _storageAccount;
    CloudBlobClient _client;

    public AzureStorageService()
    {
        _storageAccount = CloudStorageAccount.Parse(Constants.StorageConnection);
        _client = CreateBlobClient();
    }

    CloudBlobClient CreateBlobClient()
    {
        CloudBlobClient client = _storageAccount.CreateCloudBlobClient();
        client.DefaultRequestOptions = new BlobRequestOptions
        {
            RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(3), 4),
            LocationMode = LocationMode.PrimaryThenSecondary,
            MaximumExecutionTime = TimeSpan.FromSeconds(20)
        };
        return client;
    }
    ...
}

This code creates a retry policy that uses an exponential retry strategy. The arguments to the ExponentialRetry constructor specify the back-off interval between retries (3 seconds), and the maximum number of retry attempts (4), so retries occur after increasingly long delays (roughly doubling on each attempt, with some randomisation). A maximum execution time of 20 seconds is set for all potential retry attempts. The LocationMode property is used to indicate which location should receive the request, if the storage account is configured to use geo-redundant storage. Here, PrimaryThenSecondary specifies that requests are always sent to the primary location first, and if a request fails, it’s sent to the secondary location. Note that if you use this option you must ensure that your application can work with data that may be stale if replication from the primary store hasn’t completed.

All operations with the CloudBlobClient object will then use the specified request options. For example, the following code, which uploads a file to blob storage, will use the retry policy defined in the CloudBlobClient.DefaultRequestOptions property:

public async Task<string> UploadFileAsync(ContainerType containerType, Stream stream)
{
    var container = _client.GetContainerReference(containerType.ToString().ToLower());
    await container.CreateIfNotExistsAsync();

    var name = Guid.NewGuid().ToString();
    var fileBlob = container.GetBlockBlobReference(name);
    await fileBlob.UploadFromStreamAsync(stream);
    return name;
}

Choosing whether to use the linear retry policy or the exponential retry policy depends upon the business needs of your application. However, a general rule of thumb is to use a linear retry policy for interactive applications that are in the foreground, and to use an exponential retry policy when your application is backgrounded, or performing some batch processing.
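Switching strategies is a one-line change. For example, a foreground scenario might configure the client like this (the values are illustrative):

// Linear retry: wait a constant 2 seconds between attempts, with at most 5 attempts
_client.DefaultRequestOptions.RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(2), 5);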

The retry policies provided by the Azure Storage SDK will be sufficient for most applications. However, if there’s a need to implement a custom retry approach, the existing policies can be extended through the IExtendedRetryPolicy interface.

For retry guidance with Azure Storage, see Azure Storage.

The sample this code comes from can be found on GitHub.

Tuesday, 30 October 2018

Creating a Hyperlink in Xamarin.Forms II

Previously, I wrote about creating a hyperlink in a Xamarin.Forms app by adding a TapGestureRecognizer to the GestureRecognizers collection of a Span, and setting its TextDecorations property to Underline. This can be achieved in XAML as follows:

<Span Text="Xamarin documentation"
      TextColor="Blue"
      TextDecorations="Underline">
    <Span.GestureRecognizers>
        <TapGestureRecognizer Command="{Binding TapCommand}"
                              CommandParameter="https://docs.microsoft.com/xamarin/" />
    </Span.GestureRecognizers>
</Span>

The problem with this approach is that it requires repetitive code every time you need a hyperlink in your app. A better approach would be to sub-class the Span class into a HyperlinkSpan class, with the gesture recognizer and text decoration added there. Unfortunately, the Span class is sealed and so can’t be inherited from. However, there’s an enhancement proposal to unseal the class, so that it can be inherited from. Go and up vote it here!

Therefore, for the purposes of this blog post I’ll demonstrate sub-classing the Label class to create a HyperlinkLabel class. Once the Span class is unsealed, the same approach can be followed.

The sample this code comes from can be found on GitHub.

Creating a HyperlinkLabel Class

The following code shows the HyperlinkLabel class:

public class HyperlinkLabel : Label
{
    public static readonly BindableProperty UrlProperty =
        BindableProperty.Create(nameof(Url), typeof(string), typeof(HyperlinkLabel), null);

    public string Url
    {
        get { return (string)GetValue(UrlProperty); }
        set { SetValue(UrlProperty, value); }
    }

    public HyperlinkLabel()
    {
        TextDecorations = TextDecorations.Underline;
        TextColor = Color.Blue;
        GestureRecognizers.Add(new TapGestureRecognizer
        {
            Command = new Command(() => Device.OpenUri(new Uri(Url)))
        });
    }
}

The class defines a Url property (backed by a BindableProperty), and the class constructor sets the hyperlink appearance and adds the TapGestureRecognizer that responds when the hyperlink is tapped. When a HyperlinkLabel is tapped, the TapGestureRecognizer executes the Device.OpenUri method to open the URL, specified by the Url property, in a web browser.

The HyperlinkLabel class can be consumed simply by adding an XML namespace declaration to your XAML, and then an instance of the class:

<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:HyperlinkLabel"
             x:Class="HyperlinkLabel.MainPage">
    <StackLayout Margin="20">
        <Label Text="Hyperlink Demo"
               HorizontalOptions="Center"
               FontAttributes="Bold" />
        <Label Text="Click the text below to view Xamarin documentation." />
        <local:HyperlinkLabel Text="Xamarin Documentation"
                              Url="https://docs.microsoft.com/xamarin/"
                              HorizontalOptions="Center" />
    </StackLayout>
</ContentPage>

The HyperlinkLabel instance is rendered as follows:


Its Url property is set to the URL to be opened when the hyperlink is tapped. When this occurs, a web browser appears and the URL is navigated to:



Summary

The repetitive code for creating a hyperlink on a Span or Label should be sub-classed into a HyperlinkSpan or HyperlinkLabel class, with the gesture recognizer and text decoration added there. Unfortunately, the Span class is sealed and currently can’t be inherited from. However, there’s an enhancement proposal to unseal the Span class. Once this is achieved, the approach taken in this blog post for the HyperlinkLabel class can be applied to the Span class.

The sample this code comes from can be found on GitHub.

Thursday, 25 October 2018

Xamarin.Forms Compiled Bindings FAQ

We recently announced compiled bindings for Xamarin.Forms. Bindings aren’t cost efficient because they are resolved at runtime using reflection. In some scenarios this can introduce a performance hit. In addition, there isn’t any compile-time validation of binding expressions, and so invalid bindings aren’t detected until runtime.

Compiled bindings aim to improve data binding performance in Xamarin.Forms applications by resolving binding expressions at compile-time, rather than runtime. As well as providing compile-time validation of binding expressions, more importantly they eliminate the reflection used to resolve the bindings at runtime.

How compiled bindings are used is documented here. Rather than repeat all that information, I thought it might be useful to provide a high-level FAQ about them. So here goes.

How do compiled bindings work?

When you create a binding in Xamarin.Forms XAML, it’s accomplished with the Binding markup extension, which in turn creates a Binding, which in turn inherits from the BindingBase class. This is what we call a classic binding. When you create a compiled binding in Xamarin.Forms XAML, it’s accomplished with the Binding markup extension, which in turn creates a TypedBinding, which in turn inherits from the TypedBindingBase class, which in turn inherits from the BindingBase class. This inheritance hierarchy is shown in the following simplified class diagram:


Rather than accept a binding path (like the Binding constructor does), the TypedBinding constructor takes a Func that gets the value from the source, an Action that sets it, and a list of property changed handlers. XAMLC then takes the binding path from your XAML and uses it to create the TypedBinding for you. Also, the Binding markup extension knows whether to return a Binding or a TypedBinding.
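In practical terms, using compiled bindings from XAML means enabling XAMLC (for example, with [assembly: XamlCompilation(XamlCompilationOption.Compile)]) and telling the compiler the type of the binding context with the x:DataType attribute. A minimal sketch, where MyApp and HomePageViewModel are placeholder names:

<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:MyApp"
             x:Class="MyApp.HomePage"
             x:DataType="local:HomePageViewModel">
    <!-- Resolved against HomePageViewModel at compile time; a typo in the
         path is now a build error rather than a silent runtime failure -->
    <Label Text="{Binding Name}" />
</ContentPage>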

Are compiled bindings more performant than classic bindings?

Yes. Classic bindings use reflection. Compiled bindings eliminate this reflection, replacing it with a Func to get data, an Action to set data, and a list of handlers for property change notification.

How much more performant are compiled bindings than classic bindings?

How long is a piece of string? Ultimately it depends on the platform, OS, and device, but internal testing has shown that compiled bindings can be resolved 8-20 times quicker than classic bindings. See Performance for more information.

Which version of Xamarin.Forms supports compiled bindings?

Compiled bindings have been present in Xamarin.Forms for a while, but at the time of writing I’d recommend using Xamarin.Forms 3.3 as more compiled binding scenarios are supported in this release.

Can you create TypedBinding objects in code?

Technically yes, but it's not recommended for the following reasons:

  1. While the TypedBinding type is public, it's not intended to be used by app developers. It's public purely because it's consumed by the IL generated by XAMLC. It should be thought of as internal.
  2. Consequently, the TypedBinding type deliberately won't appear in your IDE's IntelliSense.
  3. Why would you want to? You end up writing code like this:

var binding = new TypedBinding<ComplexMockViewModel, string>(
    cmvm => cmvm.Model.Model.Text,
    (cmvm, s) => cmvm.Model.Model.Text = s,
    new[]
    {
        new Tuple<Func<ComplexMockViewModel, object>, string>(cmvm => cmvm, "Model"),
        new Tuple<Func<ComplexMockViewModel, object>, string>(cmvm => cmvm.Model, "Model"),
        new Tuple<Func<ComplexMockViewModel, object>, string>(cmvm => cmvm.Model.Model, "Text")
    })
{
    Mode = BindingMode.OneWay
};


Why do compiled bindings require XAMLC?

XAMLC takes the binding path from your XAML binding expression and uses it to create the TypedBinding for you, avoiding the need to write code like the example above.

Do compiled bindings work when the BindingContext is set in code, rather than XAML?

Yes.

Should I replace my classic bindings with compiled bindings?

It’s up to you. If you don’t have a performance problem with your classic bindings, why bother replacing them? On the other hand, there’s no harm in replacing classic bindings with compiled bindings.

Really, the key scenario for replacing classic bindings with compiled bindings is where you’ve identified a performance problem. Maybe you have a performance problem in a ListView on Android. Try switching to compiled bindings there to see if it helps.

Can every classic binding be replaced with a compiled binding?

No. Compiled bindings are currently disabled for any binding expressions that define the Source property. This is because the Source property is always set using the x:Reference markup extension, which can’t currently be resolved at compile time.