Monday, 22 December 2014

Encrypting and decrypting data in an Azure service using a certificate

I recently had a requirement to encrypt some data stored in a Web.config file for an Azure hosted service that’s accessed over HTTPS.

To help secure information in configuration files, ASP.NET provides a feature called protected configuration, which enables the encryption of sensitive data in a configuration file. The recommended approach is to protect configuration using either the DpapiProtectedConfigurationProvider class or the RsaProtectedConfigurationProvider class that are both included in the .NET framework.

Unfortunately these protected configuration providers do not work with Azure. The DpapiProtectedConfigurationProvider class uses a machine-specific key that cannot be transferred to Azure. While the RsaProtectedConfigurationProvider enables transferring an RSA key pair in an XML file to different machines, and then importing the key to a key container, the XML file is meant to be removed from the machine after the key has been imported. On Azure, since the account running the Web role doesn’t have permissions to delete files in the web root, it is not possible to remove the XML file.

I needed a solution that would:

  1. Allow data to be decrypted when running the service locally, and when running the service in Azure.
  2. Allow data to be encrypted and decrypted on any machine in our organization.
  3. Not be so onerous as to prevent periodically changing the data to be encrypted/decrypted.

The recommended approach for Azure is to use the Pkcs12 custom protected configuration provider and the Aspnet_regiis.exe tool to encrypt sections of the configuration file. The Pkcs12 format enables the transfer of certificates and their corresponding private keys from one machine to another. This provider is similar to the RsaProtectedConfigurationProvider, the difference being that instead of transferring the RSA key pair in an XML file, the transfer occurs using a certificate in .PFX format. This approach has been used by P&P in the Autoscaling Application Block. While this provider works with the built-in ASP.NET tooling that can read configuration automatically, it is an onerous solution for configuration that may change periodically.

This blog post details my solution, which was to use the SSL certificate for the service to encrypt data once on a local machine, and then use the same certificate to decrypt data both locally and in Azure. The advantage of this approach is that for a service delivered over HTTPS, Azure will already have the SSL certificate in its certificate store, and so no additional key transfer is required.

Implementation

To encrypt a piece of data you must first retrieve the SSL certificate from its location in the certificate store, and then encrypt the data using the certificate. The following code example shows this process.

private static string Encrypt(string plainText)
{
    // Thumbprint of the SSL certificate
    var thumb = "<thumbprint goes here>";

    var plainBytes = Encoding.UTF8.GetBytes(plainText);
    var contentInfo = new ContentInfo(plainBytes);
    var env = new EnvelopedCms(contentInfo);
    X509Store store = null;
    string cipherText = null;

    try
    {
        store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);
        var cert = store.Certificates.Cast<X509Certificate2>().Single(xc => xc.Thumbprint == thumb);
        env.Encrypt(new CmsRecipient(cert));

        cipherText = Convert.ToBase64String(env.Encode());
    }
    finally
    {
        if (store != null)
            store.Close();
    }
    return cipherText;
}

The X509Store class provides access to the X.509 store, which is the physical store where certificates are persisted and managed. Once the store has been opened in read-only mode, the SSL certificate is retrieved by searching for its thumbprint value. The data to be encrypted is placed in a CMS/PKCS #7 enveloped data structure and encrypted using the EnvelopedCms.Encrypt method. It's then base64-encoded before being returned from the method.

Decrypting the data simply reverses the process. The same SSL certificate is retrieved from the certificate store and is then used to decrypt the data. The following code example shows this process.

private static string Decrypt(string cipherText)
{
    // Thumbprint of the SSL certificate
    var thumb = "<thumbprint goes here>";
    X509Store store = null;
    string plainText = null;

    try
    {
        store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);
        var cert = store.Certificates.Cast<X509Certificate2>().Single(xc => xc.Thumbprint == thumb);
        var bytes = Convert.FromBase64String(cipherText);
        var env = new EnvelopedCms();
        env.Decode(bytes);
        // Pass the retrieved certificate as an extra store so its private key is used
        env.Decrypt(new X509Certificate2Collection(cert));
        plainText = Encoding.UTF8.GetString(env.ContentInfo.Content);
    }
    finally
    {
        if (store != null)
            store.Close();
    }
    return plainText;
}

The X509Store class provides access to the X.509 store, with the constructor arguments indicating which part of the certificate store should be opened. Once the store has been opened in read-only mode, the SSL certificate is retrieved by searching for its thumbprint value. The data to be decrypted is converted from its base64 representation to bytes, before being decoded and decrypted by the EnvelopedCms class. The plain text is then returned from the method.

This approach to encryption and decryption enables you to store sensitive information in your configuration file in an encrypted form, which can then be decrypted both when running the service locally and when running it in Azure. It offers the advantage that it's not necessary to transfer additional keys to Azure in order to perform decryption, and it's not too onerous a task to change the encrypted data periodically. It can be further strengthened through additional security techniques, such as using the SecureString class at appropriate places in the code.
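
For example, a minimal usage sketch might read the cipher text from configuration and decrypt it at runtime. The "EncryptedDbPassword" appSettings key here is hypothetical, and ConfigurationManager requires a reference to System.Configuration.

// A minimal sketch, assuming the base64 cipher text produced by Encrypt is stored
// under a hypothetical "EncryptedDbPassword" appSettings key.
var cipherText = ConfigurationManager.AppSettings["EncryptedDbPassword"];
var dbPassword = Decrypt(cipherText);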

Summary

This blog post has demonstrated how to encrypt and decrypt data using an SSL certificate. My requirement was to encrypt/decrypt data stored in a configuration file, but the approach can equally be applied to other data. The advantage of this approach is that it enables you to decrypt data both when running a service locally and when running it in Azure, without the onerous task of copying additional encryption key data to Azure.

Monday, 15 December 2014

Using basic authentication in an Azure Cloud Service

I recently had a requirement to use transport security with basic authentication in a web service hosted in Azure. Basic authentication is a mechanism for an HTTP user agent to provide credentials when making a request to the server, and is supported by all major browsers and servers. It doesn't require cookies, session identifiers, or login pages. Instead it uses a static, standard HTTP header, which means that no handshakes need to be performed.

IIS provides basic authentication against Windows accounts on the server or through Active Directory, which complicates matters for services hosted in Azure. The following code shows how transport security with basic authentication can be specified in a web.config file.

<bindings>
  <basicHttpsBinding>
    <binding name="TransportSecurity">
      <security mode="Transport">
        <transport clientCredentialType="Basic"/>
      </security>
    </binding>
  </basicHttpsBinding>
</bindings>

However, when you run an Azure cloud service with this configuration you’ll receive the following error message:

The authentication schemes configured on the host ('Anonymous') do not allow those configured on the binding 'BasicHttpsBinding' ('Basic').  Please ensure that the SecurityMode is set to Transport or TransportCredentialOnly.  Additionally, this may be resolved by changing the authentication schemes for this application through the IIS management tool, through the ServiceHost.Authentication.AuthenticationSchemes property, in the application configuration file at the <serviceAuthenticationManager> element, by updating the ClientCredentialType property on the binding, or by adjusting the AuthenticationScheme property on the HttpTransportBindingElement.

The initial problem is that basic authentication is unavailable by default for Azure web roles. It can be enabled either by enabling RDP on the virtual machine that the service runs on, connecting over RDP, and adding and enabling basic authentication in IIS; or by writing a PowerShell script that installs basic authentication and configuring it to run from your VS solution when the web role starts up. You then need to create a Windows account on the virtual machine to be used during basic authentication.

This solution was not ideal. Moving forward, the service could have many different users, and I didn't like the thought of having to create Windows accounts for each of them. Furthermore, I'm a firm believer in keeping web services as provider agnostic as possible, in order to reduce problems if the service needs to be moved to another provider.

An alternative solution would be to use a different basic authentication module provided by a third party. This is also not ideal, as it involves additional effort in identifying a suitable third-party module, and then spending much time thoroughly testing it.

In this blog post I’ll outline my solution to this problem, which is to implement your own basic authentication mechanism. Basic authentication uses a simple protocol:

  1. The “username:password” format is used to combine username and password into one string.

  2. The resulting string is then base64 encoded.

  3. The encoded string is sent to the server in an Authorization header included with the web request:
Authorization: Basic <base64 encoded username:password goes here>

Implementation

My solution is in three parts:

  1. Configure the service to use transport security but no authentication.
  2. In the service, intercept all web requests and parse out the Authorization header that contains the basic authentication credentials. The extracted credentials can then be compared against the actual credentials.
  3. In the client, intercept all web requests and add an appropriate Authorization header using the convention specified for basic authentication.

The following code shows how to configure the service to use transport security but not authentication.

<bindings>
  <basicHttpsBinding>
    <binding name="TransportSecurity">
      <security mode="Transport">
        <transport clientCredentialType="None"/>
      </security>
    </binding>
  </basicHttpsBinding>
</bindings>

To intercept web requests to the service I created a class called BasicAuthenticationManager that derives from the ServiceAuthorizationManager class, which provides authorization access checking for service operations. This class overrides the CheckAccessCore method which checks authorization for the given operation context. In this method you can obtain the headers for the web request, and you can then parse out the Authorization header. The following code example shows this.

protected override bool CheckAccessCore(OperationContext operationContext)
{
    var authHeader = WebOperationContext.Current.IncomingRequest.Headers["Authorization"];

    if (!string.IsNullOrWhiteSpace(authHeader) && authHeader.StartsWith("Basic"))
    {
        // Decode the base64 "username:password" string that follows "Basic "
        var credentials = ASCIIEncoding.ASCII.GetString(
            Convert.FromBase64String(authHeader.Substring(6))).Split(':');

        // Compare credentials against stored encrypted credentials
        // If equal return true, otherwise false
    }

    // Deny access if the header is missing or the credentials are invalid
    return false;
}

The logic is straightforward: the Authorization header is extracted from the web request, and then the credentials are extracted from the Authorization header. The credentials can then be validated using your chosen approach (such as comparing them against encrypted credentials stored in configuration). If the credentials are valid, return true. Otherwise return false, or throw an exception of your choosing, to prevent the service operation from being executed.
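
The following sketch shows one possible validation approach, assuming the expected values are held in hypothetical "ServiceUserName" and "ServicePassword" appSettings entries (they could equally be stored encrypted and decrypted first, as in my previous post).

// A minimal sketch of validating the extracted credentials against configuration.
// The appSettings key names are hypothetical; ConfigurationManager requires a
// reference to System.Configuration.
private static bool ValidateCredentials(string userName, string password)
{
    var expectedUserName = ConfigurationManager.AppSettings["ServiceUserName"];
    var expectedPassword = ConfigurationManager.AppSettings["ServicePassword"];

    return string.Equals(userName, expectedUserName, StringComparison.Ordinal) &&
           string.Equals(password, expectedPassword, StringComparison.Ordinal);
}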

The service must then be configured to use the BasicAuthenticationManager class. This can be accomplished by adding a serviceAuthorization element to your web.config. Note that the format for specifying the class is the fully qualified type name followed by the assembly name ("Namespace.ClassName, AssemblyName").

<?xml version="1.0"?>
<configuration>
  <system.serviceModel>
    ...
    <behaviors>
      <serviceBehaviors>
        <behavior>
          ...
          <serviceAuthorization serviceAuthorizationManagerType="FullyQualifiedTypeName, AssemblyName" />
          ...
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

The client that invokes the web service must then be updated to create the Authorization header for every service operation. The following code example shows this.

using (var client = new Proxy.Client())
{
    var userName = "<username goes here>";
    var password = "<password goes here>";

    // Create the authorization header
    var httpRequestProperty = new HttpRequestMessageProperty();
    httpRequestProperty.Headers[HttpRequestHeader.Authorization] = "Basic " +
        Convert.ToBase64String(Encoding.ASCII.GetBytes(userName + ":" + password));

    using (new OperationContextScope(client.InnerChannel))
    {
        // Add the authorization header to every outgoing message
        OperationContext.Current.OutgoingMessageProperties[HttpRequestMessageProperty.Name] = httpRequestProperty;

        // Make web requests
    }
}

The HttpRequestMessageProperty class provides access to the HTTP request, with the Headers property providing access to the HTTP headers of the request. It's then easy to add an Authorization header comprising "Basic " followed by the base64-encoded "username:password" string. The OperationContextScope class is then used to add the Authorization header to every outgoing message.

Summary

This blog post has demonstrated how to use basic authentication in an Azure cloud service, without having to expose the underlying virtual machine that the service runs on, and without then having to undertake messy configuration of the virtual machine. It offers more flexibility than using IIS basic authentication, as you can specify the credentials in your service, instead of having to rely upon basic authentication against a Windows account.

Thursday, 21 August 2014

Reading the response message from a PUT request using PHP and cURL

Previously I’ve mentioned that I had to write some PHP to PUT some JSON data to a RESTful web API. After calling curl_exec() to make the PUT request I called curl_getInfo() to retrieve the HTTP status code from the response message, in order to output a success or failure message.

While debugging this function it was sometimes necessary to examine the request message being sent to the web API, to ensure its format was correct. This required setting some additional CURLOPT parameters, as shown in the code below.

function callRestAPI($uri, $signature, $json) {
    $headers = array (
        "Content-Type: application/json; charset=utf-8",
        "Content-Length: " . strlen($json),
        "X-Signature: " . $signature
    );

    $channel = curl_init($uri);
    curl_setopt($channel, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($channel, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_setopt($channel, CURLOPT_HTTPHEADER, $headers);
    curl_setopt($channel, CURLOPT_POSTFIELDS, $json);
    curl_setopt($channel, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($channel, CURLOPT_CONNECTTIMEOUT, 10);
    curl_setopt($channel, CURLOPT_VERBOSE, true);
    curl_setopt($channel, CURLOPT_HEADER, true);
    curl_setopt($channel, CURLINFO_HEADER_OUT, true);

    $response = curl_exec($channel);
    $request = curl_getInfo($channel, CURLINFO_HEADER_OUT);
    $statusCode = curl_getInfo($channel, CURLINFO_HTTP_CODE);

    echo $request . "<BR>";
    echo $response . "<BR>";
    echo $statusCode . "<BR>";

    curl_close($channel);
    return $statusCode;
}

The additions are to set:

  1. CURLOPT_VERBOSE to true in order to output verbose information.
  2. CURLOPT_HEADER to true to include the header in the output.
  3. CURLINFO_HEADER_OUT to true to track the request message.

Then, after executing the PUT request I called curl_getInfo to get information about the request. Specifically, I requested the request message string by using the CURLINFO_HEADER_OUT constant. This provides the request message, which can then be output for debugging purposes.

Monday, 18 August 2014

Making a PUT request using PHP and cURL

I recently had to write some PHP to PUT some JSON data to a RESTful web API. My final solution involved reading numerous blog posts to piece together exactly what I needed. Bearing that in mind I decided to document my solution in case it’s useful to others, and for my own future use.

The conventional technique for invoking a PUT operation is to set CURLOPT_PUT to true. However, this option is used to PUT a file, with the file to PUT being specified with CURLOPT_INFILE and CURLOPT_INFILESIZE. Using this approach involves writing your JSON data to a file first, and removing the file after the operation. Aside from the inefficiency of this approach, functions such as tmpfile(), fwrite() and fseek() are not supported in some environments.

The standard solution is to set CURLOPT_CUSTOMREQUEST to “PUT” in order to specify the request type, and then set the CURLOPT_POSTFIELDS to the JSON data you want to PUT. In addition, you must set CURLOPT_RETURNTRANSFER to true in order to return the transfer value as a string, rather than it being output directly by curl_exec(). This is a well documented solution, most notably by LornaJane.

However, this solution needed extending in order to enable communication over SSL. In addition, I needed to append some custom header information to the request in order to support the authentication required by the web service. My solution is shown below.

function callRestAPI($uri, $signature, $json) {
    $headers = array (
        "Content-Type: application/json; charset=utf-8",
        "Content-Length: " . strlen($json),
        "X-Signature: " . $signature
    );

    $channel = curl_init($uri);
    curl_setopt($channel, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($channel, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_setopt($channel, CURLOPT_HTTPHEADER, $headers);
    curl_setopt($channel, CURLOPT_POSTFIELDS, $json);
    curl_setopt($channel, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($channel, CURLOPT_CONNECTTIMEOUT, 10);

    curl_exec($channel);
    $statusCode = curl_getInfo($channel, CURLINFO_HTTP_CODE);
    curl_close($channel);
    return $statusCode;
}

The important additions are that I set:

  1. CURLOPT_SSL_VERIFYPEER to false to stop cURL from verifying the peer’s certificate, thus enabling communication over SSL. By default this value is true, and you are meant to set CURLOPT_CAINFO to specify the file holding the certificate to verify the peer with. I opted for stopping cURL from verifying the peer’s certificate in the interests of simplicity.
  2. CURLOPT_HTTPHEADER to an array of HTTP header fields, containing the data required by the web service, including an authentication parameter.

In addition, after executing the PUT request I called curl_getInfo to get information about the PUT request. Specifically I requested the status code of the operation using the CURLINFO_HTTP_CODE constant. This provides the HTTP status code contained within the response message, which is then returned to the calling function, so that an appropriate message can be output.

Monday, 4 August 2014

Delivering and consuming media from Azure Media Services

Previously I’ve summarised how to use the Azure Media Encoder to encode uploaded media for delivery to client apps. To do this you create a media processing job that enables you to schedule and automate the encoding of assets.

In this blog post I’ll summarise how Azure Media Services content can be downloaded, or directly accessed by using locator URLs. For more information about locator URLs see my previous blog post. Content to be delivered can include media assets that are stored in Media Services, or media assets that have been encoded and processed in different ways.

Delivering media

There are a number of approaches that can be used to deliver Media Services content:

  • Direct download from Azure Storage
  • Access media in Azure Storage
  • Stream media to a client application
  • Send media content to another application or to another content provider.

Media content can be directly downloaded as long as you have the Media Services credentials for the account that uploaded and encoded the asset. Content can be downloaded by creating a shared access signature (SAS) locator, which contains a URL to the asset that contains the requested file. The URL can then be used by anyone to download the asset.

Alternatively, you may want to give users access to content stored in Azure Storage. To do this you must create a full SAS URL to each file contained in the media asset. This is achieved by appending the file name to the SAS locator.
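
The following sketch (not from the original project code) shows how this might look with the Media Services .NET SDK, assuming an existing CloudMediaContext named context and an IAsset named asset; the names and the 30-day window are illustrative.

// A minimal sketch, assuming "context" is a CloudMediaContext and "asset" is an IAsset.
// Create a read-only SAS locator that is valid for 30 days.
ILocator sasLocator = context.Locators.Create(
    LocatorType.Sas, asset, AccessPermissions.Read, TimeSpan.FromDays(30));

// Build a full download URL for one file in the asset by inserting the file name
// into the locator path, ahead of the SAS query string.
IAssetFile assetFile = asset.AssetFiles.ToList().First();
var builder = new UriBuilder(sasLocator.Path);
builder.Path += "/" + assetFile.Name;
string downloadUrl = builder.Uri.AbsoluteUri;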

Media Services also provides the means to directly access streaming media content. This is accomplished by creating an on-demand origin locator, which allows direct access to streaming content. With on-demand origin locators, you build a full URL to a streaming manifest file in an asset. You then provide the URL to a client application that can play streaming content.
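
A similar sketch for streaming, again assuming an existing CloudMediaContext and an encoded asset that contains a Smooth Streaming (.ism) manifest file, might look like this.

// A minimal sketch, assuming "context" is a CloudMediaContext and "asset" is an
// encoded IAsset containing a Smooth Streaming (.ism) manifest file.
ILocator originLocator = context.Locators.Create(
    LocatorType.OnDemandOrigin, asset, AccessPermissions.Read, TimeSpan.FromDays(30));

// Compose the URL to the streaming manifest, which can be handed to a client video player.
IAssetFile manifestFile = asset.AssetFiles.ToList().First(f => f.Name.EndsWith(".ism"));
string streamingUrl = originLocator.Path + manifestFile.Name + "/manifest";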

Content can also be delivered by using a Content Delivery Network (CDN), in order to offer improved performance and scalability when streaming media. For more information see How to Manage Origins in a Media Services Account.

When streaming media using on-demand origin locators you can take advantage of dynamic packaging. When using dynamic packaging, your video is stored in one encoded format, usually an adaptive bitrate MP4 file set. When a video player requests the video it specifies the format it requires, and the Media Services Origin Service converts the MP4 file to the format requested by the player. This allows you to store only one format of your videos, therefore reducing storage costs. For more information about the Origin Service see Azure Media Services Origin Service.

The following diagram shows a high-level overview of the media delivery process that we used in the Building an On-Demand Video Service with Microsoft Azure Media Services project.

[Diagram: high-level overview of the media delivery process]

Client apps request media through a REST web interface. The Contoso web service queries the Content Management System, which returns the URL of the media asset in Azure Storage. The media asset could be a single media file, or a manifest file which references multiple media files. The client application then requests the URL content from the Origin Service, which processes the outbound stream from storage to client application. For a code walkthrough of this process see Browsing videos, Playing videos, and Retrieving recommendations.

You can scale Media Services delivery by specifying the number of on-demand streaming reserved units that you would like your account to be provisioned with. For more information see Scale a Media Service.

Summary

This blog post has summarised how Azure Media Services content can be downloaded, or directly accessed by using locator URLs. Content to be delivered can include media assets that are stored in Media Services, or media assets that have been encoded and processed in different ways.

Monday, 28 July 2014

Encoding and processing media in Azure Media Services

Previously I’ve summarised the media upload process from client apps into a video on-demand service that uses Azure Media Services.

In this blog post I’ll summarise how to incorporate Media Services’ encoding and processing functionality into an app. To do this you create processing jobs that enable you to schedule and automate the encoding and processing of assets.

Encoding and processing media

Media Services provides a number of media processors that enable media to be processed. Media processors handle a specific processing task, such as encoding, format conversion, encrypting, or decrypting content. Encoding video is the most common Media Services processing operation, and is performed by the Azure Media Encoder. The Media Encoder is configured using encoder preset strings, with each preset specifying a group of settings required for the encoder. For a list of all the presets see Azure Media Encoder Presets.

Media Services supports both progressive download and streaming of video. When encoding for progressive download you encode to a single bitrate. To be able to stream content it must first be converted into a streaming format. There are two types of streaming offered by Media Services:

  • Single bitrate streaming
  • Adaptive bitrate streaming

With single bitrate streaming a video is encoded to a single bitrate stream and divided into chunks. The stream is delivered to the client one chunk at a time. The chunk is displayed and the client then requests the next chunk. When encoding for adaptive bitrate streaming you encode to an MP4 bitrate set that creates a number of different bitrate streams. These streams are also broken into chunks. However, adaptive bitrate technologies allow the client to determine network conditions and select from among several bitrates. When network conditions degrade, the client can select a lower bitrate allowing the video to continue to play at a lower quality. Once network conditions improve the client can switch back to a higher bitrate with improved quality.

Media Services supports three adaptive bitrate streaming technologies:

  1. Smooth streaming, created by Microsoft
  2. HTTP Live Streaming (HLS), created by Apple
  3. MPEG-DASH, an ISO standard

Accessing media processors

Processing jobs involve calling a specific media processor to process the job. Media Services supports the following media processors:

  • Azure Media Encoder – allows you to run encoding tasks using the Media Encoder.
  • Azure Media Packager – allows you to convert media assets from MP4 to Smooth Streaming format, and Smooth Streaming assets to HLS format.
  • Azure Media Encryptor – allows you to encrypt media assets using PlayReady protection.
  • Storage Decryption – allows you to decrypt media assets that were encrypted using storage encryption.

To use a specific media processor you should pass the name of the processor into the GetLatestMediaProcessorByName method.

private IMediaProcessor GetLatestMediaProcessorByName(string mediaProcessorName)
{
    var processor = this.context.MediaProcessors.Where(p => p.Name == mediaProcessorName)
        .ToList().OrderBy(p => new Version(p.Version)).LastOrDefault();
 
    if (processor == null)
    {
        throw new ArgumentException(string.Format("Unknown media processor: {0}", mediaProcessorName));
    }
                
    return processor;
}

This method retrieves the specified media processor and returns an instance of it. It can be invoked as follows:

IMediaProcessor mediaProcessor = this.GetLatestMediaProcessorByName(MediaProcessorNames.WindowsAzureMediaEncoder);

Creating encoding jobs

After media has been uploaded into Media Services it can be encoded into one of the formats supported by the Media Services Encoder. The Media Services Encoder supports encoding using the H.264 and VC-1 codecs, and can generate MP4 and Smooth Streaming content. However, MP4 and Smooth Streaming content can be converted to HLS v3 or MPEG-DASH by using dynamic packaging.

Encoding jobs are created and controlled using a Job. Each Job contains metadata about the processing to be performed, and contains one or more Tasks that specify a processing task, its input Assets, output Assets, and a media processor and its settings. Tasks within a Job can be chained together, where the output asset of one task is given as the input asset to the next task. By following this approach one Job can contain all of the processing required for a media presentation.
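
As an illustrative sketch (not the project's actual code), a Job with a single encoding Task might be created as follows, assuming an existing CloudMediaContext named context, an uploaded asset named inputAsset, and the media processor retrieved above; the preset name is just an example from the Azure Media Encoder preset list.

// A minimal sketch, assuming "context" is a CloudMediaContext, "inputAsset" is an
// uploaded IAsset, and "processor" was returned by GetLatestMediaProcessorByName.
IJob job = context.Jobs.Create("Encode to adaptive bitrate MP4");

// The preset name here is illustrative; see the Azure Media Encoder preset list.
ITask task = job.Tasks.AddNew("Adaptive bitrate encoding task",
    processor,
    "H264 Adaptive Bitrate MP4 Set 720p",
    TaskOptions.None);

task.InputAssets.Add(inputAsset);
task.OutputAssets.AddNew("Adaptive bitrate MP4 output", AssetCreationOptions.None);

job.Submit();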

The following diagram shows a high-level overview of the media encoding process used in the Building an On-Demand Video Service with Microsoft Azure Media Services project.

[Diagram: high-level overview of the media encoding process]

The EncodingService class retrieves the asset details from the CMS database and passes the encoding job to Media Services, where it's submitted to the Azure Media Encoder. The encoding job and video details are saved to the CMS database while the Media Encoder processes the job, retrieving the input asset from Azure Storage and writing the output assets to Azure Storage. When encoding is complete Media Services notifies the EncodingService class, which generates locator URLs to the output assets in Azure Storage, and updates the encoding job and video details in the CMS database. For a code walkthrough of this process see Encoding process in the Contoso Azure Media Services web service.

By default, each Media Services account can have one active encoding task at a time. However, you can reserve encoding units that allow you to have multiple encoding tasks running concurrently. For more information see How to Scale a Media Service.

Accessing encoded media

Accessing content in Media Services requires a locator, which combines the URL to the media file with a set of time-based access permissions. There are two types of locators – shared access signature (SAS) locators and on-demand origin locators.

A SAS locator grants access rights to a specific media asset through a URL. By using the URL you are granting users who have the URL access to a specific resource for a period of time, in addition to specifying what operations can be performed on the resource.

On-demand origin locators are used when streaming content to a client app, and are exposed by the Media Services Origin Service, which pulls the content from Azure Storage and delivers it to the client. An on-demand origin locator URL points to a streaming manifest file in an asset. For more information about the Origin Service see Origin Service.

Summary

This blog post has summarised how to use the Azure Media Encoder to encode uploaded media for delivery to client apps. To do this you create a media processing job that enables you to schedule and automate the encoding of assets. For more info see Building an On-Demand Video Service with Microsoft Azure Media Services.

In my next blog post I’ll discuss the final step in the Media Services workflow – delivering and consuming media.

Friday, 25 July 2014

Using a custom overlay in a Windows Phone QR code scanning app

Previously I’ve demonstrated how to build a simple Windows Phone app to perform QR code scanning, using the ZXing.Net.Mobile library. This library makes the scanning and decoding of bar codes effortless, leaving you to focus on other user experiences in your app.

In this blog post I’m going to extend the sample app so that it doesn’t use the default UI included with ZXing.Net.Mobile, when QR code scanning. Instead, a custom UI will be created and used during the QR code scanning process. This custom UI is referred to as an overlay.

Implementation

The first step is to define the overlay in the XAML code for the page.

<Grid Name="Overlay" Visibility="Collapsed">
    <Grid Background="Transparent">
        <Grid.RowDefinitions>
            <RowDefinition Height="*" />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="*" />
            <ColumnDefinition Width="*" />
        </Grid.ColumnDefinitions>
        <Button Background="Black" Grid.Row="1" Grid.Column="0" Name="ButtonCancel">Cancel</Button>
        <Button Background="Black" Grid.Row="1" Grid.Column="1" Name="ButtonTorch">Torch</Button>
    </Grid>
</Grid>

This code defines a custom overlay named Overlay, which contains a Cancel button and a Torch button (for toggling the flash on/off) that will appear at the bottom of the page. The next step is to pass the overlay to the ZXing.Net.Mobile library.

private UIElement _overlay = null;
 
private async void MainPage_Loaded(object sender, RoutedEventArgs e)
{
    _scanner = new MobileBarcodeScanner(this.Dispatcher);
 
    if (_overlay == null)
    {
        _overlay = this.Overlay.Children[0];
        this.Overlay.Children.RemoveAt(0);
    }
 
    this.ButtonCancel.Click += (s, e2) =>
        {
            _scanner.Cancel();
        };
    this.ButtonTorch.Click += (s, e2) =>
        {
            _scanner.ToggleTorch();
        };
 
    _scanner.CustomOverlay = _overlay;
    _scanner.UseCustomOverlay = true;
 
    var result = await _scanner.Scan();
    ProcessScanResult(result);
}

After creating an instance of the MobileBarcodeScanner class, the Overlay is retrieved from the visual tree and stored in a UIElement instance named _overlay. The Cancel and Torch button Click events are then wired up to methods on the MobileBarcodeScanner instance, to cancel scanning and toggle the torch respectively. The CustomOverlay property of the MobileBarcodeScanner instance is set to _overlay, and the UseCustomOverlay property is set to true to indicate that a custom overlay will be used during the QR scanning process, before the Scan method is invoked. The following screenshot shows the custom overlay being displayed during the QR code scanning process (QR code image courtesy of Google Images):

[Screenshot: the custom overlay displayed during QR code scanning]

While the overlay shown here is basic, it does show the mechanism used for creating a UI for the QR code scanning process that matches the rest of the UI used in your app.

When a QR code is successfully recognized the decoded result can be passed to a method for processing, in this case the ProcessScanResult method. For an explanation of this method see my previous blog post.
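
As a reminder, a minimal ProcessScanResult might look something like the following sketch, which is illustrative rather than the exact method from that post.

// A minimal sketch: display the decoded text from the scan result.
private void ProcessScanResult(ZXing.Result result)
{
    // Scan returns null if scanning was cancelled
    if (result == null)
        return;

    // Marshal back to the UI thread before touching any UI
    Dispatcher.BeginInvoke(() =>
        MessageBox.Show(result.Text, "QR code contents", MessageBoxButton.OK));
}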

Summary

This blog post has demonstrated how to replace the default UI used by ZXing.Net.Mobile, with a custom UI of your own design. This allows you to create a UI for the QR code scanning process that matches the rest of the UI used in your app.

The sample app can be downloaded here.