Wednesday, 24 March 2021

Adventures in ARKit - display a 3D model

In my previous blog post I discussed how to overlay an image of the earth on a SphereNode, which derives from SCNNode, and manipulate it through touch gestures.

In this blog post I’ll discuss how to display a 3D model in a scene. Specifically, it’ll be a model of the moon that can be manipulated similarly through touch gestures. Although the end result, a rotating moon, appears to be similar to a rotating earth, they are accomplished via different techniques.

The sample this code comes from can be found on GitHub.

Display a 3D model

ARKit and SceneKit support many different 3D model formats, including .dae, .usdz, and .obj (with its accompanying .mtl material files), among others. The exact formats supported depend upon the version of iOS you are using. Apple currently recommends using .usdz files (and has some samples), but this format can’t be consumed by the first release of ARKit. Therefore, for maximum compatibility, I’ve used a .dae model.

free3d.com is a good source of 3D models, both free and paid. However, it’s quite likely that any 3D model you download will first need manipulating to fit your requirements. This can be accomplished in a tool such as Blender. I used Blender to convert the model I downloaded to .dae format, and to scale it to my needs. Note that there’s a learning curve in getting to grips with Blender.

Once you have a 3D model ready to use, it’s worth opening it in Xcode, for two reasons. Firstly, Xcode can be used to reveal the name of the root node in the model, which you may need when adding the model to your scene. Secondly, the model will display in Xcode exactly as it will display in your scene, so you can use Xcode to discover any problems with your model, and even fix some of them. For example, my model of the moon was displaying in red only. This is because, for memory reasons, when a greyscale image is assigned as the Diffuse property, SceneKit stores the greyscale data in the red channel but zeroes the blue and green channels. This can be fixed by converting any greyscale images to RGB, and sometimes by manipulating the Components drop-down for the Diffuse property in Xcode.

Once you have a 3D model that renders correctly in Xcode, it can be added to your ARKit app. 3D models are added to a scene as a SCNNode, which can then be positioned and manipulated as required. As always, this can be accomplished in the ViewDidAppear method in the ViewController class:

public override void ViewDidAppear(bool animated)
{
    base.ViewDidAppear(animated);

    sceneView.Session.Run(new ARWorldTrackingConfiguration
    {
        AutoFocusEnabled = true,
        LightEstimationEnabled = true,
        WorldAlignment = ARWorldAlignment.Gravity
    }, ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);

    SCNScene scene = SCNScene.FromFile("moon.dae");
    SCNNode node = scene.RootNode;
    node.Position = new SCNVector3(0, 0, -25f); // 25m in front of the world origin
    sceneView.Scene.RootNode.AddChildNode(node);
    ... 
}

In this example, the 3D model of the moon is retrieved using the SCNScene type, and its root node is retrieved from the scene as a SCNNode. The node is then positioned and added to the scene. In addition, gesture recognisers (not shown in the code above) are added to the ARSCNView to enable interaction with the node, as sketched below.
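
The wiring for those gesture recognisers follows the same pattern used throughout this series. A minimal sketch, assuming tap, pinch, and rotate handler methods like the ones shown in the earlier posts below:

UITapGestureRecognizer tapGestureRecognizer = new UITapGestureRecognizer(HandleTapGesture);
sceneView.AddGestureRecognizer(tapGestureRecognizer);

UIPinchGestureRecognizer pinchGestureRecognizer = new UIPinchGestureRecognizer(HandlePinchGesture);
sceneView.AddGestureRecognizer(pinchGestureRecognizer);

UIRotationGestureRecognizer rotationGestureRecognizer = new UIRotationGestureRecognizer(HandleRotateGesture);
sceneView.AddGestureRecognizer(rotationGestureRecognizer);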

The overall effect is that when the app runs, a SCNNode that resembles the moon appears:

Tapping on the SCNNode starts it rotating, and tapping it a second time while it’s rotating stops it. In addition, the pinch gesture will resize the SCNNode, and the rotate gesture enables the SCNNode to be rotated around the Z-axis.

If you want to manipulate a particular node in the 3D model, you’ll need to know its name. This can be determined by opening the model in Xcode and navigating its scene graph until you find the name of the required part. That part can then be retrieved as a SCNNode:

SCNNode node = scene.RootNode.FindChildNode("MyModelPart", true);

Once you’ve retrieved the desired part of the model as a SCNNode, it can be manipulated as required. For example, you could use this technique to retrieve the arm from a model of a person, and then animate it, as in the sketch below.
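
A minimal sketch of this, assuming a hypothetical node named "Arm" and the AddRotationAction extension method from my animation post further down this page:

SCNNode armNode = scene.RootNode.FindChildNode("Arm", true);
if (armNode != null)
    armNode.AddRotationAction(SCNActionTimingMode.Linear, 3, true); // Rotate the arm indefinitely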

Friday, 19 March 2021

Adventures in ARKit - rotating earth

In my previous blog post I discussed how to animate a node in a scene. Specifically, I animated a cube by rotating it continuously through 360 degrees on the Y axis. However, I originally wanted to animate a sphere, with a view to creating a rotating earth. In this blog post I’ll do just that.

The sample this code comes from can be found on GitHub.

Rotating earth

In order to add a sphere to the scene, I created a SphereNode type that derives from SCNNode:

using SceneKit;
using UIKit;

namespace ARKitFun.Nodes
{
    public class SphereNode : SCNNode
    {
        public SphereNode(float size, string filename)
        {
            SCNNode node = new SCNNode
            {
                Geometry = CreateGeometry(size, filename),
                Opacity = 0.975f
            };

            AddChildNode(node);
        }

        SCNGeometry CreateGeometry(float size, string filename)
        {
            SCNMaterial material = new SCNMaterial();
            material.Diffuse.Contents = UIImage.FromFile(filename);
            material.DoubleSided = true;

            SCNSphere geometry = SCNSphere.Create(size);
            geometry.Materials = new[] { material };

            return geometry;
        }
    }
}

The SphereNode constructor takes float and string arguments. The float argument represents the size of the sphere, and the string argument represents the filename of an image to overlay on the sphere. The constructor creates the material and geometry for the sphere, and adds the node as a child node to the SCNNode. The power of SceneKit is demonstrated by the CreateGeometry method, which loads the supplied image and maps it onto the geometry as a material. The result is that a regular 2D rectangular image (in this case a map of the world) is automatically mapped onto the sphere geometry.

The ViewDidAppear method in the ViewController class can then be modified to add a SphereNode to the scene:

using System;
using System.Linq;
using ARKit;
using ARKitFun.Extensions;
using ARKitFun.Nodes;
using CoreGraphics;
using SceneKit;
using UIKit;

namespace ARKitFun
{
    public partial class ViewController : UIViewController
    {
        readonly ARSCNView sceneView;
        const float size = 0.1f;
        const float zPosition = -0.5f;
        bool isAnimating;
        float zAngle;
        ...
        
        public override void ViewDidAppear(bool animated)
        {
            base.ViewDidAppear(animated);

            sceneView.Session.Run(new ARWorldTrackingConfiguration
            {
                AutoFocusEnabled = true,
                LightEstimationEnabled = true,
                PlaneDetection = ARPlaneDetection.Horizontal,
                WorldAlignment = ARWorldAlignment.Gravity
            }, ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);

            SphereNode sphereNode = new SphereNode(size, "world-map.jpg");
            sphereNode.Position = new SCNVector3(0, 0, zPosition);

            sceneView.Scene.RootNode.AddChildNode(sphereNode);

            UIRotationGestureRecognizer rotationGestureRecognizer = new UIRotationGestureRecognizer(HandleRotateGesture);
            sceneView.AddGestureRecognizer(rotationGestureRecognizer);
            ...
        }
        ...
    }
}

In this example, a SphereNode is added to the scene and positioned at (0,0,-0.5). The SphereNode constructor specifies a world map image that will be mapped to the geometry of the SCNNode. In addition, a UIRotationGestureRecognizer is added to the scene.

The following code example shows the HandleRotateGesture method:

void HandleRotateGesture(UIRotationGestureRecognizer sender)
{
    // Determine the node, if any, on which the gesture occurred
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hitResults = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hitResults.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;
        zAngle += (float)(-sender.Rotation); // Rotation is in radians
        node.EulerAngles = new SCNVector3(node.EulerAngles.X, node.EulerAngles.Y, zAngle);
    }
}

In this example, the node on which the rotate gesture was detected is determined. Then the node is rotated on the Z-axis by the rotation angle requested by the gesture. Note that the gesture’s Rotation property is in radians, as are the node’s EulerAngles.

The overall effect is that when the app runs, a SphereNode that resembles the earth appears:

Tapping on the SphereNode starts it rotating, and tapping it a second time while it’s rotating stops it. In addition, the pinch gesture will resize the SphereNode, and the rotate gesture enables the SphereNode to be rotated around the Z-axis.

In my next blog post I’ll discuss displaying a 3D model in a scene.

Thursday, 18 March 2021

Adventures in ARKit - animation

In my previous blog post I discussed how to interact with nodes in a scene, using touch. This involved creating gesture recognisers and adding them to the ARSCNView instance with the AddGestureRecognizer method.

In this blog post I’ll examine animating a node in a scene. I originally wanted to animate a sphere, to make it rotate. However, it can be difficult to observe a sphere with a diffuse colour rotating. Therefore, I switched to rotating a cube.

The sample this code comes from can be found on GitHub.

Animate a node

In order to add a cube to the scene, I created a CubeNode type that derives from SCNNode:

using SceneKit;
using UIKit;

namespace ARKitFun.Nodes
{
    public class CubeNode : SCNNode
    {
        public CubeNode(float size, UIColor color)
        {
            SCNMaterial material = new SCNMaterial();
            material.Diffuse.Contents = color;

            SCNBox geometry = SCNBox.Create(size, size, size, 0);
            geometry.Materials = new[] { material };

            SCNNode node = new SCNNode
            {
                Geometry = geometry
            };

            AddChildNode(node);
        }
    }
}

The CubeNode constructor takes a float argument that represents the size of each side of the cube, and a UIColor argument that represents the colour of the cube. The constructor creates the material and geometry for the cube, creates a SCNNode and assigns the geometry to its Geometry property, and adds the node as a child node to the SCNNode.

Nodes can be animated with the SCNAction type, which represents a reusable animation that changes attributes of any node you attach it to. SCNAction objects are created with specific class methods, and are executed by calling a node object’s RunAction method, passing the action object as an argument.

For example, the following code creates a rotate action and applies it to a CubeNode:

SCNAction rotateAction = SCNAction.RotateBy(0, (float)Math.PI, 0, 5); // X,Y,Z,secs
CubeNode cubeNode = new CubeNode(0.1f, UIColor.Blue);
cubeNode.RunAction(rotateAction);
sceneView.Scene.RootNode.AddChildNode(cubeNode);

In this example, the CubeNode is rotated by π radians (180 degrees) on the Y axis over 5 seconds. To rotate the cube indefinitely, use the following code:

SCNAction rotateAction = SCNAction.RotateBy(0, (float)Math.PI, 0, 5);
SCNAction indefiniteRotation = SCNAction.RepeatActionForever(rotateAction);
CubeNode cubeNode = new CubeNode(0.1f, UIColor.Blue);
cubeNode.RunAction(indefiniteRotation);
sceneView.Scene.RootNode.AddChildNode(cubeNode);

In this example, the CubeNode is rotated 180 degrees on the Y axis over 5 seconds, and the animation is then looped. That is, each repetition of the action takes 5 seconds, producing continuous rotation.

This code can be generalised into an extension method that can be called on any SCNNode type:

using System;
using SceneKit;

namespace ARKitFun.Extensions
{
    public static class SCNNodeExtensions
    {
        public static void AddRotationAction(this SCNNode node, SCNActionTimingMode mode, double secs, bool loop = false)
        {
            SCNAction rotateAction = SCNAction.RotateBy(0, (float)Math.PI, 0, secs);
            rotateAction.TimingMode = mode;

            if (loop)
            {
                SCNAction indefiniteRotation = SCNAction.RepeatActionForever(rotateAction);
                node.RunAction(indefiniteRotation, "rotation");
            }
            else
                node.RunAction(rotateAction, "rotation");
        }
    }
}

The AddRotationAction extension method adds a rotate animation to the specified SCNNode. The SCNActionTimingMode argument defines the easing function for the animation. The secs argument defines the number of seconds each rotation step takes, and the loop argument defines whether to animate the node indefinitely. Both RunAction method calls specify a string key argument, which enables the animation to be stopped programmatically by passing the key as an argument to the RemoveAction method.
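
For example, a rotation started with the extension method can later be stopped by passing the same key to RemoveAction. A short sketch, assuming node is a SCNNode in the scene:

node.AddRotationAction(SCNActionTimingMode.Linear, 5, true); // Start rotating indefinitely
...
node.RemoveAction("rotation"); // Stop the rotation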

The ViewDidAppear method in the ViewController class can then be modified to add a CubeNode to the scene, and animate it:

using System;
using System.Linq;
using ARKit;
using ARKitFun.Extensions;
using ARKitFun.Nodes;
using CoreGraphics;
using SceneKit;
using UIKit;

namespace ARKitFun
{
    public partial class ViewController : UIViewController
    {
        readonly ARSCNView sceneView;
        const float size = 0.1f;
        const float zPosition = -0.5f;
        bool isAnimating;        
        ...
        
        public override void ViewDidAppear(bool animated)
        {
            base.ViewDidAppear(animated);

            sceneView.Session.Run(new ARWorldTrackingConfiguration
            {
                AutoFocusEnabled = true,
                LightEstimationEnabled = true,
                PlaneDetection = ARPlaneDetection.Horizontal,
                WorldAlignment = ARWorldAlignment.Gravity
            }, ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);

            CubeNode cubeNode = new CubeNode(size, UIColor.Blue);
            cubeNode.Position = new SCNVector3(0, 0, zPosition);

            sceneView.Scene.RootNode.AddChildNode(cubeNode);

            UITapGestureRecognizer tapGestureRecognizer = new UITapGestureRecognizer(HandleTapGesture);
            sceneView.AddGestureRecognizer(tapGestureRecognizer);
            ...
        }
        ...
    }
}

In this example, a blue CubeNode is created and positioned in the scene at (0,0,-0.5). In addition, a UITapGestureRecognizer is added to the scene.

The following code example shows the HandleTapGesture method:

void HandleTapGesture(UITapGestureRecognizer sender)
{
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hitResults = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hitResults.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;
        if (node != null)
        {
            if (!isAnimating)
            {
                node.AddRotationAction(SCNActionTimingMode.Linear, 3, true);
                isAnimating = true;
            }
            else
            {
                node.RemoveAction("rotation");
                isAnimating = false;
            }
        }                    
    }
}

In this example, the node on which the tap gesture was detected is determined. If the node isn’t being animated, an indefinite rotation SCNAction is added to the node, which rotates the node by 180 degrees every 3 seconds. If the node is already being animated, tapping it again stops the animation by calling the RemoveAction method, specifying the key value for the action.

The overall effect is that when the app runs, tapping on the node animates it. When animated, tapping on the node stops the animation. Then a new animation will begin on the subsequent tap:

As I mentioned at the beginning of this blog post, I originally wanted to rotate a sphere, with a view to creating a rotating earth. However, it can be difficult to see a sphere with a diffuse colour rotating.

In my next blog post I’ll discuss rotating a sphere to create a rotating earth.

Wednesday, 17 March 2021

Adventures in ARKit - respond to touch

In my previous blog post I discussed how to overlay an image on the camera output in an ARKit app.

Objects that you overlay on the camera output are called nodes. By default, nodes don’t have a shape. Instead, you give them a geometry (shape) and apply materials to the geometry to provide a visual appearance.

Overlaying a node, or multiple nodes, on a scene is typically the first step in creating an augmented reality app. However, such apps typically require interaction with the nodes. In this blog post I’ll examine touch interaction with the ImageNode from the previous blog post.

The sample this code comes from can be found on GitHub.

Respond to touch

Augmented reality apps usually allow interaction with the nodes that are overlayed on a scene. This interaction is typically touch-based. The UIGestureRecognizer types can be used to detect gestures on nodes, which can then be manipulated as required.

For an ARKit app to respond to different touch interactions, the ARSCNView instance must be told to listen for gestures. This can be accomplished by creating the required gesture recognisers and adding them to the ARSCNView instance with the AddGestureRecognizer method:

using System;
using System.Linq;
using ARKit;
using ARKitFun.Nodes;
using CoreGraphics;
using SceneKit;
using UIKit;

namespace ARKitFun
{
    public partial class ViewController : UIViewController
    {
	readonly ARSCNView sceneView;
	...
        public override void ViewDidAppear(bool animated)
        {
            ...
            UITapGestureRecognizer tapGestureRecognizer = new UITapGestureRecognizer(HandleTapGesture);
            sceneView.AddGestureRecognizer(tapGestureRecognizer);

            UIPinchGestureRecognizer pinchGestureRecognizer = new UIPinchGestureRecognizer(HandlePinchGesture);
            sceneView.AddGestureRecognizer(pinchGestureRecognizer);
        }
        ...
    }
}

In this example, gesture recognisers are added for the tap gesture and the pinch gesture. The following code example shows the HandleTapGesture and HandlePinchGesture methods that are used to process these gestures:

void HandleTapGesture(UITapGestureRecognizer sender)
{
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hits = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hits.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;
        if (node != null)
            node.RemoveFromParentNode();
    }
}

void HandlePinchGesture(UIPinchGestureRecognizer sender)
{
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hits = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hits.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;

        float scaleX = (float)sender.Scale * node.Scale.X;
        float scaleY = (float)sender.Scale * node.Scale.Y;

        node.Scale = new SCNVector3(scaleX, scaleY, zPosition / 2);
        sender.Scale = 1; // Reset the gesture scale value
    }
}

Both methods share common code that determines the node on which a gesture was detected, before code that interacts with the node is executed. For example, HandleTapGesture removes the node from the scene when it’s tapped, while HandlePinchGesture scales the width and height of the node using the pinch gesture. Similarly, it’s possible to add other gesture recognisers to move nodes, rotate them, and so on.
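
For example, here’s a minimal sketch of a pan handler that moves a node, using the same hit-testing pattern (the divisor that converts view points to metres is an assumption you’d tune for your scene). Like the other handlers, it would be registered with sceneView.AddGestureRecognizer:

void HandlePanGesture(UIPanGestureRecognizer sender)
{
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hits = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hits.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;
        CGPoint translation = sender.TranslationInView(areaPanned);

        // Move the node in X and Y, leaving Z unchanged
        node.Position = new SCNVector3(
            node.Position.X + (float)translation.X / 10000f,
            node.Position.Y - (float)translation.Y / 10000f,
            node.Position.Z);

        sender.SetTranslation(CGPoint.Empty, areaPanned); // Reset so each callback delivers a delta
    }
}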

The overall effect is that the node can be removed from the scene with a tap, or scaled with a pinch:

In my next blog post I’ll discuss animating a node in a scene.

Tuesday, 16 March 2021

Adventures in ARKit - overlay an image

In my previous blog post I discussed how to create a basic ARKit app on Xamarin.iOS, that displays the camera output. In this blog post I’ll take the first steps into augmenting the experience by overlaying an image on the camera output.

Before I got to grips with overlaying images, I first overlayed basic geometric shapes - spheres, cones, cylinders, and so on. There’s nothing I want to call out about doing that. However, my experimentation can be found in a sample. For info on how it works, you could buy the book.

The sample this code comes from can be found on GitHub.

Overlay an image

Objects that you overlay on the camera output are called nodes. By default, nodes don’t have a shape. Instead, you give them a geometry (shape) and apply materials to the geometry to provide a visual appearance. Nodes are represented by the SceneKit SCNNode type.

One of the geometries provided by SceneKit is SCNPlane, which represents a square or rectangle. This type essentially acts as a surface on which to place other objects.

In my sample app, I defined an ImageNode type, which derives from SCNNode and can be re-used when overlaying an image onto a scene:

using SceneKit;
using UIKit;
using Foundation;

namespace ARKitFun.Nodes
{
    public class ImageNode : SCNNode
    {
        public ImageNode(string image, float width, float height)
        {
            SCNNode node = new SCNNode
            {
                Geometry = CreateGeometry(image, width, height)
            };
            AddChildNode(node);
        }

        SCNGeometry CreateGeometry(string resource, float width, float height)
        {
            UIImage image;

            if (resource.StartsWith("http"))
                image = FromUrl(resource);
            else
                image = UIImage.FromFile(resource);

            SCNMaterial material = new SCNMaterial();
            material.Diffuse.Contents = image;
            material.DoubleSided = true; // Ensure geometry viewable from all angles

            SCNPlane geometry = SCNPlane.Create(width, height);
            geometry.Materials = new[] { material };
            return geometry;
        }

        UIImage FromUrl(string url)
        {
            using (NSUrl nsUrl = new NSUrl(url))
            using (NSData imageData = NSData.FromUrl(nsUrl))
                return UIImage.LoadFromData(imageData);
        }
    }
}

The ImageNode constructor takes a string argument that represents the filename or URI of an image, and float arguments that represent the width and height of the image in the scene. The constructor creates a SCNNode, assigns a geometry to its Geometry property, and adds the node as a child node to the SCNNode.

The CreateGeometry method creates a UIImage object that represents the local or remote image, and creates a SCNMaterial object that represents the image. Then, a SCNPlane object of size width x height is created, and the SCNMaterial object is assigned to the geometry. Therefore, the shape of the node is defined by the SCNPlane object, and the material (the image) defines the visual appearance of the node.
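
Because the constructor accepts either a filename or a URL, a remote image can be overlaid in exactly the same way. A sketch, with a hypothetical URL:

ImageNode remoteNode = new ImageNode("https://example.com/logo.png", 0.1f, 0.1f);
remoteNode.Position = new SCNVector3(0, 0.15f, -0.25f); // Slightly above the world origin
sceneView.Scene.RootNode.AddChildNode(remoteNode);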

The code from my previous blog post can be modified to overlay an image on the camera output. This is accomplished by modifying the ViewDidAppear method in the ViewController class:

using System;
using ARKit;
using ARKitFun.Nodes;
using SceneKit;
using UIKit;

namespace ARKitFun
{
    public partial class ViewController : UIViewController
    {
		...
        public override void ViewDidAppear(bool animated)
        {
            base.ViewDidAppear(animated);

            sceneView.Session.Run(new ARWorldTrackingConfiguration
            {
                AutoFocusEnabled = true,
                LightEstimationEnabled = true,
                PlaneDetection = ARPlaneDetection.Horizontal,
                WorldAlignment = ARWorldAlignment.Gravity
            }, ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);

            ImageNode imageNode = new ImageNode("Xamagon.png", 0.1f, 0.1f);
            imageNode.Position = new SCNVector3(0, 0, -0.25f); // X,Y,Z

            sceneView.Scene.RootNode.AddChildNode(imageNode);
        }
        ...
    }
}

When the session for the ARSCNView runs, it automatically sets the camera to be the background of the view. In addition, the initial device location is registered as the world origin (X=0, Y=0, Z=0). Any objects you place in the scene will be relative to the world origin.

If you don’t specify the position of a node within a scene, it will by default be placed at the world origin (0,0,0). However, a node can be positioned in 3D space by setting the Position property of the SCNNode to a SCNVector3 object that defines the X,Y,Z coordinates of the node. The values of X,Y,Z are floats where 1f = 1m, 0.1f = 10cm, and 0.01f = 1cm. For example, new SCNVector3(0.1f, 0.05f, -0.5f) positions a node 10cm to the right of, 5cm above, and 50cm in front of the world origin.

In the example, an ImageNode is created for a local file that’s included in the project, with dimensions of 10cm x 10cm. The ImageNode is placed at the world origin (0,0) for the X and Y coordinates, and 25cm forwards on the Z-axis. The ImageNode is then added to the scene by the AddChildNode method call.

The overall effect is that the Xamagon is placed in the scene at the specified coordinates:

Note that the image contains transparency, and so blends well in the scene.

In my next blog post I’ll discuss interacting with the image in the scene.

Monday, 15 March 2021

Adventures in ARKit - platform setup

Lee Englestone has recently written a book about developing augmented reality apps using ARKit on Xamarin.iOS. It piqued my interest, so I bought it. His book appears to be the best source of information for learning ARKit on Xamarin.iOS, and he has a companion website that’s pretty damn good.

After experimenting with ARKit, I’ve already forgotten some of the early lessons I learnt. So, as a reminder to my future self, I’ve decided to blog about getting to grips with ARKit.

The sample this code comes from can be found on GitHub.

Platform setup

The first thing to note is that you’ll need a physical iPhone to run an ARKit app. ARKit requires the use of the camera, and you won’t have much joy with that in the iOS Simulator.

Deploying an ARKit app to your iPhone will require an Apple ID, and optionally an Apple Developer account. However, I created a new Apple ID that I didn’t link to an Apple Developer account. This made it possible to use free provisioning in VSMac to deploy the app to my device, without having to register for the Apple Developer program.

The most convenient approach to learning ARKit is via a Single View app. Once you’ve created the app, you’ll need to give it permission to use the device camera, in Info.plist:

	<key>NSCameraUsageDescription</key>
	<string>Use camera?</string>

To check you can deploy an app to your phone, it’s useful to create an ARKit app that displays the camera output. This can be accomplished by modifying the ViewController class in your single view app:

using System;
using ARKit;
using UIKit;

namespace ARKitFun
{
    public partial class ViewController : UIViewController
    {
        readonly ARSCNView sceneView;

        public ViewController(IntPtr handle) : base(handle)
        {
            sceneView = new ARSCNView
            {
                AutoenablesDefaultLighting = true,
                ShowsStatistics = true
            };
            View.AddSubview(sceneView);
        }

        public override void ViewDidLoad()
        {
            base.ViewDidLoad();
            sceneView.Frame = View.Frame;
        }

        public override void ViewDidAppear(bool animated)
        {
            base.ViewDidAppear(animated);

            sceneView.Session.Run(new ARWorldTrackingConfiguration
            {
                AutoFocusEnabled = true,
                LightEstimationEnabled = true,
                WorldAlignment = ARWorldAlignment.Gravity
            }, ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);
        }

        public override void ViewDidDisappear(bool animated)
        {
            base.ViewDidDisappear(animated);
            sceneView.Session.Pause();
        }

        public override void DidReceiveMemoryWarning()
        {
            base.DidReceiveMemoryWarning();
        }
    }
}

The core type being used here is ARSCNView, which stands for Augmented Reality Scene View. When the session for the ARSCNView runs, it automatically sets the camera to be the background of the view.

When you first attempt to run your app you’ll have to instruct your device to trust apps from you, the “untrusted” developer. You can do this in the device settings at General > Device Management > Trust developer.

When the app starts, the initial device location is registered as the world origin (X=0, Y=0, Z=0). Any objects you place in the scene will be relative to the world origin.

In my next blog post I’ll discuss overlaying an image onto the scene.

Tuesday, 9 March 2021

Process navigation data using IQueryAttributable in Xamarin.Forms Shell apps

I recently discovered, courtesy of @PureWeen, another mechanism for receiving navigation data in Xamarin.Forms Shell apps.

If you write Shell apps, you’ll know that the mechanism for passing data between pages is to pass it as query parameters using URI-based navigation. Then, in your receiving class (be it a page, or a view model), you decorate the class with a QueryPropertyAttribute for each query parameter. This works quite well, but it can be a pain having to add a QueryPropertyAttribute for each item of data that gets passed. In addition, because each QueryPropertyAttribute sets a property, you can end up with multiple pieces of plumbing code just to receive data.
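
As a reminder, here’s a sketch of the attribute-based approach for two query parameters, with each item of data needing its own attribute and property:

[QueryProperty(nameof(Name), "name")]
[QueryProperty(nameof(Location), "location")]
public class MyViewModel
{
    public string Name { get; set; }
    public string Location { get; set; }
}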

So what’s the solution? Enter the oddly named IQueryAttributable interface.

Process navigation data using a single method

IQueryAttributable is a Xamarin.Forms interface that specifies that an implementing class must implement a method named ApplyQueryAttributes. This method has a query argument, of type IDictionary<string, string>, that contains any data passed during navigation. Each key in the dictionary is a query parameter id, with its value being the query parameter value.

For example, the following code shows a view model class that implements IQueryAttributable:

public class MyViewModel : IQueryAttributable
{
    public void ApplyQueryAttributes(IDictionary<string, string> query)
    {
        // The query parameter requires URL decoding.
        string name = HttpUtility.UrlDecode(query["name"]);
        ...
    }
    ...
}

In this example, the ApplyQueryAttributes method retrieves the value of the name query parameter from the URI in the GoToAsync method call. Then, whatever custom logic is desired can be executed. Note that query parameter values that are received via the IQueryAttributable interface aren’t automatically URL decoded.
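
For context, a sketch of the sending side that would produce this query parameter, assuming a registered route named details (the route and value are hypothetical):

await Shell.Current.GoToAsync($"details?name={HttpUtility.UrlEncode("John Doe")}");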

It’s also possible to process multiple items of navigation data:

public class MyViewModel : IQueryAttributable
{
    public void ApplyQueryAttributes(IDictionary<string, string> query)
    {
        // The query parameters require URL decoding.
        string name = HttpUtility.UrlDecode(query["name"]);
        string location = HttpUtility.UrlDecode(query["location"]);
        ...
    }
    ...
}

In this example, the ApplyQueryAttributes method retrieves the value of the name and location query parameters from the URI in the GoToAsync method call.

The advantage of using this approach is that navigation data can be processed using a single method, which can be useful when you have multiple items of navigation data that require processing as a whole. Compare that against the QueryPropertyAttribute approach, with an attribute and property per item of data, and it’s a no-brainer which is more elegant.

Want to know more about Shell navigation? Then see Xamarin.Forms Shell navigation.