Wednesday, 14 July 2021

Get data from a tilt hydrometer on an Arduino

A tilt hydrometer is a submersible thermometer and hydrometer for monitoring fermentation. It’s a Bluetooth LE device that reports data via the iBeacon protocol. Specifically, it broadcasts a major and minor value, which represent the temperature and gravity of the liquid it’s submerged in, via the manufacturer data. These values can be extracted from the manufacturer data and converted into decimal values.

The manufacturer data it broadcasts is a hex string, such as the following:

4c000215a495bb30c5b14b44b5121370f02d74de0047042b00

This data breaks down as follows:

Apple beacon: 4c 00
Type: 02
Length: 15
Device UUID: a4 95 bb 30 c5 b1 4b 44 b5 12 13 70 f0 2d 74 de
Major (temperature): 00 47
Minor (gravity): 04 2b
TX power: 00

The Device UUID is shared between devices of a specific colour (the meaning of the colours is unimportant here, other than to know that the Device UUID above identifies this as a black tilt).

The temperature is in degrees Fahrenheit, and is a 16 bit unsigned integer in big endian format. The gravity is also a 16 bit unsigned big endian integer, which must be divided by 1000 to obtain the actual value. For the message above, the temperature is 0x0047 = 71F (21.67C) and the gravity is 0x042B = 1067, giving 1.067.
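
The decoding described above can be sketched in Python (an illustrative re-implementation, not code from the post; the Arduino version appears later):

```python
def decode_tilt(manufacturer_data: str):
    """Decode a Tilt hydrometer iBeacon manufacturer-data hex string."""
    uuid = manufacturer_data[8:40]              # 16-byte device UUID (identifies the colour)
    major = int(manufacturer_data[40:44], 16)   # temperature in degrees Fahrenheit
    minor = int(manufacturer_data[44:48], 16)   # gravity * 1000
    temp_c = round((major - 32) * 5 / 9, 2)     # exact Fahrenheit-to-Celsius conversion
    return uuid, major, temp_c, minor / 1000

print(decode_tilt("4c000215a495bb30c5b14b44b5121370f02d74de0047042b00"))
# → ('a495bb30c5b14b44b5121370f02d74de', 71, 21.67, 1.067)
```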

Normally I monitor the data the tilt returns via an app on my phone, but for various reasons I decided to build my own device to display the data.

My microcontroller of choice is Arduino. They are fantastic, cheap, reliable, and powerful devices and I’ve used them in several projects. The Arduino IDE is a bit basic, but I’m still constantly surprised that Arduinos “just work”, particularly when I can’t say that about many other technology stacks.

My Arduino of choice, when I require connectivity, is the Arduino Nano 33 IoT. It’s perfect for small devices that require WiFi and Bluetooth functionality. The process for using the Arduino to get data from the tilt hydrometer is as follows:

  • Start bluetooth.
  • Scan for your tilt hydrometer. Once the tilt is found, stop scanning.
  • Retrieve the manufacturer data from the tilt, and extract the temperature and gravity.
  • Stop bluetooth.

Note that because an iBeacon device broadcasts its data, there’s no need to connect to the device.

The ArduinoBLE library can be used to manage bluetooth connectivity. If you’re interested in how the library works, see its GitHub repo. The problem with the library is that it doesn’t support reading manufacturer data. However, an unmerged PR adds that functionality, and this version of the library must be installed to your Arduino IDE’s library directory for the sketch below to work (clone the ArduinoBLE repo, switch to the branch containing the PR, zip the repo contents, and place the zip in the Arduino IDE’s library directory).

My sketch that gets the manufacturer data from the tilt, and decodes/extracts the temperature and gravity is shown below:

#include <ArduinoBLE.h>
 
char tiltMacAddress[] = "your tilt MAC address goes here e.g. aa:bb:cc:dd:ee:ff";
float temperature = 0;
float gravity = 0;
 
void setup()
{
    Serial.begin(9600);
    while (!Serial); 
    
    BLE.setEventHandler(BLEDiscovered, OnBLEDiscovered);
    StartBluetooth();
}
 
void loop()
{
    if (temperature == 0 || gravity == 0)
    {
        BLE.poll();
    }
}
 
void StartBluetooth()
{
    if (!BLE.begin())
    {
        Serial.println("Can't start BLE");
        return;
    }
    Serial.println("Started bluetooth");
    BLE.scanForAddress(tiltMacAddress);
    Serial.println("Started scan");
}
 
void OnBLEDiscovered(BLEDevice peripheral)
{
    if (peripheral.hasManufacturerData())
    {
        Serial.println("Tilt detected");
        StopScan();
        GetTiltData(peripheral);
        StopBluetooth();
    }
}
 
void StopScan()
{
    BLE.stopScan();
    Serial.println("Stopped scan");
}
 
void StopBluetooth()
{
    BLE.end();
    Serial.println("Stopped bluetooth");
}
 
void GetTiltData(BLEDevice peripheral)
{
    Serial.println("Address: " + peripheral.address());
    Serial.println("RSSI: " + String(peripheral.rssi()));
   
    String tiltData = peripheral.manufacturerData();
    Serial.println("Data: " + tiltData);
   
    String tempHex = tiltData.substring(40, 44);    // Major value: temperature in Fahrenheit
    String gravityHex = tiltData.substring(44, 48); // Minor value: gravity * 1000
   
    char tempHexChar[5];
    tempHex.toCharArray(tempHexChar, 5);
    float tempF = strtol(tempHexChar, NULL, 16);
    temperature = (tempF - 32) * .5556;             // Convert Fahrenheit to Celsius (0.5556 ~ 5/9)
    Serial.println("Temp: " + String(temperature));
   
    char gravChar[5];
    gravityHex.toCharArray(gravChar, 5);
    long grav = strtol(gravChar, NULL, 16);
    gravity = grav / 1000.0f;                       // Gravity is broadcast as gravity * 1000
    Serial.println("Gravity: " + String(gravity, 3));
}

After starting bluetooth on the Arduino, a tilt hydrometer can be scanned for using BLE.scanForAddress(tiltMacAddress). I’d recommend using the scanForAddress method over the scan method, as it will take less time to find your tilt. Obviously, this requires knowing the MAC address of your tilt, which can easily be obtained using a free bluetooth scanner app on most platforms.

Once the tilt with the specified MAC address is discovered, the BLEDiscovered event fires, which in turn executes the OnBLEDiscovered handler. This handler retrieves the manufacturer data with the BLEDevice.manufacturerData method, which returns a hex string. The major and minor values can then be extracted from the hex string, and converted into decimal-based temperature and gravity values.

Outputting the data to the serial port shows that it’s been successfully retrieved:

Having successfully retrieved the tilt data, it’s then possible to output it to Nixie tubes. This involved some refactoring of the above code to make it more robust to the appearance and disappearance of the tilt, and to only retrieve data every hour, rather than continuously (the data only changes very slowly).

So here it is - a one-of-a-kind Nixie device (using 4x IN12A and 2x IN12B tubes) that displays the time (set on startup from an NTP server, and resynchronised every 24 hours):

When a tilt hydrometer is detected, it also displays the fermentation data. Temperature:

Gravity:

Provided that a tilt hydrometer is detected, the device displays the time for a minute, followed by the temperature for 30 seconds, and the gravity for 30 seconds. If no tilt is detected, the time is displayed permanently. The device also includes programmable RGB LEDs, which act as backlights for each Nixie tube.
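
The display cycle described above can be sketched as a simple schedule generator (illustrative Python only; the device firmware isn’t shown in this post):

```python
import itertools

# (screen, seconds) pairs, as described in the text
TILT_CYCLE = [("time", 60), ("temperature", 30), ("gravity", 30)]

def display_schedule(tilt_detected: bool):
    """Yield (screen, duration) pairs for the Nixie display."""
    if not tilt_detected:
        # No tilt detected: show the time permanently, refreshed each minute
        yield from itertools.repeat(("time", 60))
    else:
        yield from itertools.cycle(TILT_CYCLE)

schedule = display_schedule(tilt_detected=True)
print([next(schedule) for _ in range(4)])
# → [('time', 60), ('temperature', 30), ('gravity', 30), ('time', 60)]
```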

Tuesday, 20 April 2021

Adventures in ARKit - image detection

In my previous blog post I discussed how to display a 3D model in a scene. In this blog post I’ll discuss how to perform image detection in a scene. Specifically, the app will identify the following image in a scene, and highlight it:

The sample this code comes from can be found on GitHub.

Image detection

The simplest approach to declaring the image to be detected is to add it to your app’s asset catalog as an AR Reference Image inside an AR Resource Group.

Writing code to detect the image is a two-step process:

  1. Create an ARSCNViewDelegate class that defines the code to be executed when the image is detected.
  2. Consume the ARSCNViewDelegate instance in your ViewController class, to detect the image.

The following code example shows the SceneViewDelegate class, which derives from ARSCNViewDelegate:

using System;
using ARKit;
using ARKitFun.Nodes;
using SceneKit;
using UIKit;

namespace ARKitFun
{
    public class SceneViewDelegate : ARSCNViewDelegate
    {
        public override void DidAddNode(ISCNSceneRenderer renderer, SCNNode node, ARAnchor anchor)
        {
            if (anchor is ARImageAnchor imageAnchor)
            {
                ARReferenceImage image = imageAnchor.ReferenceImage;
                nfloat width = image.PhysicalSize.Width;
                nfloat height = image.PhysicalSize.Height;

                PlaneNode planeNode = new PlaneNode(width, height, new SCNVector3(0, 0, 0), UIColor.Red);
                float angle = (float)(-Math.PI / 2);
                planeNode.EulerAngles = new SCNVector3(angle, 0, 0);
                node.AddChildNode(planeNode);
            }
        }
    }
}

The SceneViewDelegate class overrides the DidAddNode method, which is executed when the image is detected in the scene. This method first checks that the detected anchor is an ARImageAnchor, which represents an anchor for a known image that ARKit has detected in the scene. The dimensions of the detected image are then determined, and a red PlaneNode of the same dimensions is created and overlaid on the detected image. In addition, the overlaid PlaneNode will always orient itself correctly over the detected image.

The PlaneNode class is simply an SCNNode, which uses an SCNPlane geometry that represents a square or rectangle:

using System;
using SceneKit;
using UIKit;

namespace ARKitFun.Nodes
{
    public class PlaneNode : SCNNode
    {
        public PlaneNode(nfloat width, nfloat length, SCNVector3 position, UIColor color)
        {
            SCNNode node = new SCNNode
            {
                Geometry = CreateGeometry(width, length, color),
                Position = position,
                Opacity = 0.5f
            };

            AddChildNode(node);
        }

        SCNGeometry CreateGeometry(nfloat width, nfloat length, UIColor color)
        {
            SCNMaterial material = new SCNMaterial();
            material.Diffuse.Contents = color;
            material.DoubleSided = false;

            SCNPlane geometry = SCNPlane.Create(width, length);
            geometry.Materials = new[] { material };

            return geometry;
        }
    }
}

The PlaneNode constructor takes arguments that represent the width and height of the node, its position, and a colour. The constructor creates a SCNNode, assigns a geometry to its Geometry property, sets its position and opacity, and adds the node as a child of the PlaneNode.

The SceneViewDelegate class can then be consumed in your ViewController class, to detect the image:

using System;
using ARKit;
using Foundation;
using UIKit;

namespace ARKitFun
{
    public partial class ViewController : UIViewController
    {
        readonly ARSCNView sceneView;

        public ViewController(IntPtr handle) : base(handle)
        {
            sceneView = new ARSCNView
            {
                ShowsStatistics = true,
                Delegate = new SceneViewDelegate()
            };
            View.AddSubview(sceneView);
        }

        public override void ViewDidAppear(bool animated)
        {
            base.ViewDidAppear(animated);

            NSSet<ARReferenceImage> images = ARReferenceImage.GetReferenceImagesInGroup("AR Resources", null);

            sceneView.Session.Run(new ARWorldTrackingConfiguration
            {
                AutoFocusEnabled = true,
                LightEstimationEnabled = true,
                DetectionImages = images
            }, ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);
        }
        ...
    }
}

The ViewController constructor creates an instance of the SceneViewDelegate class and sets the instance as the Delegate property of the ARSCNView. In addition, the ViewDidAppear method is modified to retrieve the image to be detected from the asset catalog, and set it as the DetectionImages property of the ARWorldTrackingConfiguration object.

The overall effect is that when the image is detected in the scene, a red rectangle is overlaid on it:

Then, the red rectangle reorients itself in realtime if the orientation of the detected image in the scene changes:

Once an object has been identified in a scene, it can be manipulated, and this will be what I explore in my next blog post.

Wednesday, 24 March 2021

Adventures in ARKit - display a 3D model

In my previous blog post I discussed how to overlay an image of the earth on a SphereNode, which derives from SCNNode, and manipulate it through touch gestures.

In this blog post I’ll discuss how to display a 3D model in a scene. Specifically, it’ll be a model of the moon that can be manipulated similarly through touch gestures. Although the end result, a rotating moon, appears to be similar to a rotating earth, they are accomplished via different techniques.

The sample this code comes from can be found on GitHub.

Display a 3D model

ARKit and SceneKit support many different 3D model formats, including .dae, .usdz, and .obj (with .mtl materials). The exact formats supported depend on the version of iOS you are using. Apple currently recommends using .usdz files (and has some samples), but this format can’t be consumed by the first release of ARKit. Therefore, for maximum compatibility, I’ve used a .dae model.

free3d.com is a good source of 3D models, both free and paid. However, it’s quite likely that any 3D model you download will first need manipulating to fit your requirements. This can be accomplished in a tool such as Blender. I used Blender to convert the model I downloaded to .dae format, and to scale it to my needs. Note that there’s a learning curve in getting to grips with Blender.

Once you have a 3D model ready to use, it’s worth opening it in Xcode, for two reasons. Firstly, Xcode can reveal the name of the root node in the model, which you may need when adding the model to your scene. Secondly, the model will display in Xcode exactly as it will display in your scene, so you can use Xcode to discover any problems with your model, and even fix some of them. For example, my model of the moon displayed in red only. This is because, to save memory when handling greyscale images assigned to the Diffuse property, SceneKit stores the greyscale data in the red channel and zeroes the blue and green channels. This can be fixed by converting any greyscale images to RGB, and sometimes by manipulating the Components drop-down for the Diffuse property in Xcode.
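
The red-only rendering comes down to where the greyscale value is stored per texel: in the red channel, with green and blue zeroed, rather than duplicated across all three channels as an RGB conversion would do. A minimal illustration of the difference (plain Python, no imaging library):

```python
def grey_in_red_channel(g: int):
    # How SceneKit stores a greyscale texel: value in red, green and blue zeroed
    return (g, 0, 0)

def grey_to_rgb(g: int):
    # An RGB conversion duplicates the greyscale value across all channels
    return (g, g, g)

print(grey_in_red_channel(200))  # → (200, 0, 0): renders as red
print(grey_to_rgb(200))          # → (200, 200, 200): renders as grey
```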

Once you have a 3D model that renders correctly in Xcode, it can be added to your ARKit app. 3D models are added to a scene as a SCNNode, which can then be positioned and manipulated as required. As always, this can be accomplished in the ViewDidAppear method in the ViewController class:

public override void ViewDidAppear(bool animated)
{
    base.ViewDidAppear(animated);

    sceneView.Session.Run(new ARWorldTrackingConfiguration
    {
        AutoFocusEnabled = true,
        LightEstimationEnabled = true,
        WorldAlignment = ARWorldAlignment.Gravity
    }, ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);

    SCNScene scene = SCNScene.FromFile("moon.dae");
    SCNNode node = scene.RootNode;
    node.Position = new SCNVector3(0, 0, -25f);
    sceneView.Scene.RootNode.AddChildNode(node);
    ... 
}

In this example, the 3D model of the moon is retrieved using the SCNScene type, and its root node is retrieved from the scene as a SCNNode. The node is then positioned and added to the scene. In addition, gesture recognisers (not shown in the code above) are added to the SCNNode.

The overall effect is that when the app runs, a SCNNode that resembles the moon appears:

Tapping on the SCNNode starts it rotating, and while rotating, tapping it a second time stops it rotating. In addition, the pinch gesture will resize the SCNNode, and the rotate gesture enables the Z-axis of the SCNNode to be manipulated.

If you want to manipulate a particular node in the 3D model, you’ll need to know its name. This can be determined by opening the model in Xcode and navigating the scene graph for the model until you find the name for the required part of the model. This can then be retrieved as a SCNNode:

SCNNode node = scene.RootNode.FindChildNode("MyModelPart", true);

Once you’ve retrieved the desired part of the model as a SCNNode, it can be manipulated as required. For example, you could use this technique to retrieve the arm from a model of a person, and then animate it.

Friday, 19 March 2021

Adventures in ARKit - rotating earth

In my previous blog post I discussed how to animate a node in a scene. Specifically, I animated a cube by rotating it continuously through 360 degrees on the Y axis. However, I originally wanted to animate a sphere, with a view to creating a rotating earth. In this blog post I’ll do just that.

The sample this code comes from can be found on GitHub.

Rotating earth

In order to add a sphere to the scene, I created a SphereNode type that derives from SCNNode:

using SceneKit;
using UIKit;

namespace ARKitFun.Nodes
{
    public class SphereNode : SCNNode
    {
        public SphereNode(float size, string filename)
        {
            SCNNode node = new SCNNode
            {
                Geometry = CreateGeometry(size, filename),
                Opacity = 0.975f
            };

            AddChildNode(node);
        }

        SCNGeometry CreateGeometry(float size, string filename)
        {
            SCNMaterial material = new SCNMaterial();
            material.Diffuse.Contents = UIImage.FromFile(filename);
            material.DoubleSided = true;

            SCNSphere geometry = SCNSphere.Create(size);
            geometry.Materials = new[] { material };

            return geometry;
        }
    }
}

The SphereNode constructor takes float and string arguments. The float argument represents the size of the sphere, and the string argument represents the filename of an image to overlay on the sphere. The constructor creates the material and geometry for the sphere, and adds the node as a child of the SphereNode. The power of SceneKit is demonstrated by the CreateGeometry method, which loads the supplied image and maps it onto the geometry as a material. The result is that a regular 2D rectangular image (in this case a map of the world) is automatically mapped onto the sphere geometry.
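
The automatic wrapping of a rectangular map onto a sphere is a standard equirectangular UV mapping. Conceptually, it looks like this (an illustrative Python sketch of the mapping, not SceneKit’s actual implementation):

```python
import math

def equirect_uv(x: float, y: float, z: float):
    """Map a point on the unit sphere to (u, v) texture coordinates."""
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)  # longitude → horizontal position in the image
    v = 0.5 - math.asin(y) / math.pi            # latitude → vertical position in the image
    return u, v

print(equirect_uv(0.0, 1.0, 0.0))  # north pole → (0.5, 0.0), the top edge of the image
```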

The ViewDidAppear method in the ViewController class can then be modified to add a SphereNode to the scene:

using System;
using System.Linq;
using ARKit;
using ARKitFun.Extensions;
using ARKitFun.Nodes;
using CoreGraphics;
using SceneKit;
using UIKit;

namespace ARKitFun
{
    public partial class ViewController : UIViewController
    {
        readonly ARSCNView sceneView;
        const float size = 0.1f;
        const float zPosition = -0.5f;
        bool isAnimating;
        float zAngle;
        ...
        
        public override void ViewDidAppear(bool animated)
        {
            base.ViewDidAppear(animated);

            sceneView.Session.Run(new ARWorldTrackingConfiguration
            {
                AutoFocusEnabled = true,
                LightEstimationEnabled = true,
                PlaneDetection = ARPlaneDetection.Horizontal,
                WorldAlignment = ARWorldAlignment.Gravity
            }, ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);

            SphereNode sphereNode = new SphereNode(size, "world-map.jpg");
            sphereNode.Position = new SCNVector3(0, 0, zPosition);

            sceneView.Scene.RootNode.AddChildNode(sphereNode);

            UIRotationGestureRecognizer rotationGestureRecognizer = new UIRotationGestureRecognizer(HandleRotateGesture);
            sceneView.AddGestureRecognizer(rotationGestureRecognizer);
            ...
        }
        ...
    }
}

In this example, a SphereNode is added to the scene and positioned at (0,0,-0.5). The SphereNode constructor specifies a world map image that will be mapped to the geometry of the SCNNode. In addition, a UIRotationGestureRecognizer is added to the scene.

The following code example shows the HandleRotateGesture method:

void HandleRotateGesture(UIRotationGestureRecognizer sender)
{
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hitResults = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hitResults.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;
        zAngle += (float)(-sender.Rotation);
        node.EulerAngles = new SCNVector3(node.EulerAngles.X, node.EulerAngles.Y, zAngle);
    }
}

In this example, the node on which the rotate gesture was detected is determined. Then the node is rotated on the Z-axis by the rotation angle requested by the gesture.

The overall effect is that when the app runs, a SphereNode that resembles the earth appears:

Tapping on the SphereNode starts it rotating, and while rotating, tapping it a second time stops it rotating. In addition, the pinch gesture will resize the SphereNode, and the rotate gesture enables the Z-axis of the SphereNode to be manipulated.

In my next blog post I’ll discuss displaying a 3D model in a scene.

Thursday, 18 March 2021

Adventures in ARKit - animation

In my previous blog post I discussed how to interact with nodes in a scene, using touch. This involved creating gesture recognisers and adding them to the ARSCNView instance with the AddGestureRecognizer method.

In this blog post I’ll examine animating a node in a scene. I originally wanted to animate a sphere, to make it rotate. However, it can be difficult to observe a sphere with a diffuse colour rotating. Therefore, I switched to rotating a cube.

The sample this code comes from can be found on GitHub.

Animate a node

In order to add a cube to the scene, I created a CubeNode type that derives from SCNNode:

using SceneKit;
using UIKit;

namespace ARKitFun.Nodes
{
    public class CubeNode : SCNNode
    {
        public CubeNode(float size, UIColor color)
        {
            SCNMaterial material = new SCNMaterial();
            material.Diffuse.Contents = color;

            SCNBox geometry = SCNBox.Create(size, size, size, 0);
            geometry.Materials = new[] { material };

            SCNNode node = new SCNNode
            {
                Geometry = geometry
            };

            AddChildNode(node);
        }
    }
}

The CubeNode constructor takes a float argument that represents the size of each side of the cube, and a UIColor argument that represents the colour of the cube. The constructor creates the material and geometry for the cube, creates a SCNNode, assigns the geometry to its Geometry property, and adds the node as a child of the CubeNode.

Nodes can be animated with the SCNAction type, which represents a reusable animation that changes attributes of any node you attach it to. SCNAction objects are created with specific class methods, and are executed by calling a node object’s RunAction method, passing the action object as an argument.

For example, the following code creates a rotate action and applies it to a CubeNode:

SCNAction rotateAction = SCNAction.RotateBy(0, (float)Math.PI, 0, 5); // X,Y,Z,secs
CubeNode cubeNode = new CubeNode(0.1f, UIColor.Blue);
cubeNode.RunAction(rotateAction);
sceneView.Scene.RootNode.AddChildNode(cubeNode);

In this example, the CubeNode is rotated by π radians (180 degrees) on the Y axis over 5 seconds. To rotate the cube indefinitely, use the following code:

SCNAction rotateAction = SCNAction.RotateBy(0, (float)Math.PI, 0, 5);
SCNAction indefiniteRotation = SCNAction.RepeatActionForever(rotateAction);
CubeNode cubeNode = new CubeNode(0.1f, UIColor.Blue);
cubeNode.RunAction(indefiniteRotation);
sceneView.Scene.RootNode.AddChildNode(cubeNode);

In this example, the CubeNode is rotated 180 degrees on the Y axis every 5 seconds, and the animation is repeated indefinitely, producing a continuous rotation.
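
Since SCNAction angles are specified in radians, the rotation rate can be sanity-checked with a little arithmetic (illustrative Python):

```python
import math

angle = math.pi   # the Y angle passed to RotateBy: π radians = 180 degrees
secs = 5          # the duration of one action

deg_per_sec = math.degrees(angle) / secs
print(round(deg_per_sec, 3))        # → 36.0 degrees per second
print(round(360 / deg_per_sec, 3))  # → 10.0 seconds per full revolution
```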

This code can be generalised into an extension method that can be called on any SCNNode type:

using System;
using SceneKit;

namespace ARKitFun.Extensions
{
    public static class SCNNodeExtensions
    {
        public static void AddRotationAction(this SCNNode node, SCNActionTimingMode mode, double secs, bool loop = false)
        {
            SCNAction rotateAction = SCNAction.RotateBy(0, (float)Math.PI, 0, secs);
            rotateAction.TimingMode = mode;

            if (loop)
            {
                SCNAction indefiniteRotation = SCNAction.RepeatActionForever(rotateAction);
                node.RunAction(indefiniteRotation, "rotation");
            }
            else
                node.RunAction(rotateAction, "rotation");
        }
    }
}

The AddRotationAction extension method adds a rotate animation to the specified SCNNode. The SCNActionTimingMode argument defines the easing function for the animation. The secs argument defines the number of seconds each 180 degree rotation takes, and the loop argument defines whether to animate the node indefinitely. Both RunAction calls specify a string key argument, which enables the animation to be stopped programmatically by passing the key to the RemoveAction method.

The ViewDidAppear method in the ViewController class can then be modified to add a CubeNode to the scene, and animate it:

using System;
using System.Linq;
using ARKit;
using ARKitFun.Extensions;
using ARKitFun.Nodes;
using CoreGraphics;
using SceneKit;
using UIKit;

namespace ARKitFun
{
    public partial class ViewController : UIViewController
    {
        readonly ARSCNView sceneView;
        const float size = 0.1f;
        const float zPosition = -0.5f;
        bool isAnimating;        
        ...
        
        public override void ViewDidAppear(bool animated)
        {
            base.ViewDidAppear(animated);

            sceneView.Session.Run(new ARWorldTrackingConfiguration
            {
                AutoFocusEnabled = true,
                LightEstimationEnabled = true,
                PlaneDetection = ARPlaneDetection.Horizontal,
                WorldAlignment = ARWorldAlignment.Gravity
            }, ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);

            CubeNode cubeNode = new CubeNode(size, UIColor.Blue);
            cubeNode.Position = new SCNVector3(0, 0, zPosition);

            sceneView.Scene.RootNode.AddChildNode(cubeNode);

            UITapGestureRecognizer tapGestureRecognizer = new UITapGestureRecognizer(HandleTapGesture);
            sceneView.AddGestureRecognizer(tapGestureRecognizer);
            ...
        }
        ...

In this example, a blue CubeNode is created and positioned in the scene at (0,0,-0.5). In addition, a UITapGestureRecognizer is added to the scene.

The following code example shows the HandleTapGesture method:

void HandleTapGesture(UITapGestureRecognizer sender)
{
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hitResults = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hitResults.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;
        if (node != null)
        {
            if (!isAnimating)
            {
                node.AddRotationAction(SCNActionTimingMode.Linear, 3, true);
                isAnimating = true;
            }
            else
            {
                node.RemoveAction("rotation");
                isAnimating = false;
            }
        }                    
    }
}

In this example, the node on which the tap gesture was detected is determined. If the node isn’t being animated, an indefinite rotation SCNAction is added to the node, which rotates it 180 degrees every 3 seconds. When the animated node is tapped again, the animation ceases by calling the RemoveAction method, specifying the key value for the action.

The overall effect is that when the app runs, tapping on the node animates it. When animated, tapping on the node stops the animation. Then a new animation will begin on the subsequent tap:

As I mentioned at the beginning of this blog post, I originally wanted to rotate a sphere, with a view to creating a rotating earth. However, it can be difficult to see a sphere with a diffuse colour rotating.

In my next blog post I’ll discuss rotating a sphere to create a rotating earth.

Wednesday, 17 March 2021

Adventures in ARKit - respond to touch

In my previous blog post I discussed how to overlay an image on the camera output in an ARKit app.

Objects that you overlay on the camera output are called nodes. By default, nodes don’t have a shape. Instead, you give them a geometry (shape) and apply materials to the geometry to provide a visual appearance.

Overlaying a node, or multiple nodes, on a scene is typically the first step in creating an augmented reality app. However, such apps typically require interaction with the nodes. In this blog post I’ll examine touch interaction with the ImageNode from the previous blog post.

The sample this code comes from can be found on GitHub.

Respond to touch

Augmented reality apps usually allow interaction with the nodes that are overlaid on a scene. This interaction is typically touch-based. The UIGestureRecognizer types can be used to detect gestures on nodes, which can then be manipulated as required.

The ARSCNView instance must be told to listen for gestures, in order for an ARKit app to respond to different touch interactions. This can be accomplished by creating the required gesture recognisers and adding them to the ARSCNView instance with the AddGestureRecognizer method:

using System;
using System.Linq;
using ARKit;
using ARKitFun.Nodes;
using CoreGraphics;
using SceneKit;
using UIKit;

namespace ARKitFun
{
    public partial class ViewController : UIViewController
    {
	readonly ARSCNView sceneView;
	...
        public override void ViewDidAppear(bool animated)
        {
            ...
            UITapGestureRecognizer tapGestureRecognizer = new UITapGestureRecognizer(HandleTapGesture);
            sceneView.AddGestureRecognizer(tapGestureRecognizer);

            UIPinchGestureRecognizer pinchGestureRecognizer = new UIPinchGestureRecognizer(HandlePinchGesture);
            sceneView.AddGestureRecognizer(pinchGestureRecognizer);
        }
        ...
    }
}

In this example, gesture recognisers are added for the tap gesture and the pinch gesture. The following code example shows the HandleTapGesture and HandlePinchGesture methods that are used to process these gestures:

void HandleTapGesture(UITapGestureRecognizer sender)
{
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hits = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hits.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;
        if (node != null)
            node.RemoveFromParentNode();
    }
}

void HandlePinchGesture(UIPinchGestureRecognizer sender)
{
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hits = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hits.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;

        float scaleX = (float)sender.Scale * node.Scale.X;
        float scaleY = (float)sender.Scale * node.Scale.Y;

        node.Scale = new SCNVector3(scaleX, scaleY, zPosition / 2);
        sender.Scale = 1; // Reset the node scale value
    }
}

Both methods share common code that determines the node on which a gesture was detected, before interacting with the node. For example, HandleTapGesture removes the node from the scene when it’s tapped, while HandlePinchGesture scales the width and height of the node using the pinch gesture. Similarly, it’s possible to add other gesture recognisers to move nodes, rotate them, and so on.
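
Resetting sender.Scale to 1 after each update is what makes the recognizer deliver incremental factors, which HandlePinchGesture multiplies into the node’s current scale. The cumulative effect can be sketched as follows (illustrative Python, with hypothetical pinch factors):

```python
def apply_pinch(node_scale: float, pinch_scale: float) -> float:
    # Mirrors HandlePinchGesture: new scale = gesture scale * current node scale.
    # The gesture's scale is then reset to 1, so each update is incremental.
    return pinch_scale * node_scale

scale = 1.0
for factor in (1.1, 1.1, 0.5):  # three successive pinch updates
    scale = apply_pinch(scale, factor)

print(round(scale, 3))  # → 0.605 (i.e. 1.1 * 1.1 * 0.5)
```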

The overall effect is that the node can be removed from the scene with a tap, or scaled with a pinch:

In my next blog post I’ll discuss animating a node in a scene.

Tuesday, 16 March 2021

Adventures in ARKit - overlay an image

In my previous blog post I discussed how to create a basic ARKit app on Xamarin.iOS, that displays the camera output. In this blog post I’ll take the first steps into augmenting the experience by overlaying an image on the camera output.

Before I got to grips with overlaying images, I first overlaid basic geometric shapes - spheres, cones, cylinders etc. There's nothing I want to call out about doing that, but my experimentation can be found in a sample. For info on how it works, you could buy the book.

The sample this code comes from can be found on GitHub.

Overlay an image

Objects that you overlay on the camera output are called nodes. By default, nodes don’t have a shape. Instead, you give them a geometry (shape) and apply materials to the geometry to provide a visual appearance. Nodes are represented by the SceneKit SCNNode type.

One of the geometries provided by SceneKit is SCNPlane, which represents a square or rectangle. This type essentially acts as a surface on which to place other objects.
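As a minimal illustration of the geometry-plus-material pattern (a sketch, not from the sample), a plane can be given a visual appearance by assigning a material before wrapping it in a node:

```csharp
using SceneKit;
using UIKit;

// A 10cm x 10cm plane with a solid colour material
SCNMaterial material = new SCNMaterial();
material.Diffuse.Contents = UIColor.Red; // a colour or an image can be a material's contents

SCNPlane plane = SCNPlane.Create(0.1f, 0.1f);
plane.Materials = new[] { material };

// The node has no appearance of its own until the geometry is assigned
SCNNode node = new SCNNode { Geometry = plane };
```

The ImageNode type below follows exactly this pattern, but uses an image rather than a colour as the material's contents.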

In my sample app, I defined an ImageNode type that derives from SCNNode, which can be re-used when overlaying an image onto a scene:

using SceneKit;
using UIKit;
using Foundation;

namespace ARKitFun.Nodes
{
    public class ImageNode : SCNNode
    {
        public ImageNode(string image, float width, float height)
        {
            SCNNode node = new SCNNode
            {
                Geometry = CreateGeometry(image, width, height)
            };
            AddChildNode(node);
        }

        SCNGeometry CreateGeometry(string resource, float width, float height)
        {
            UIImage image;

            if (resource.StartsWith("http"))
                image = FromUrl(resource);
            else
                image = UIImage.FromFile(resource);

            SCNMaterial material = new SCNMaterial();
            material.Diffuse.Contents = image;
            material.DoubleSided = true; // Ensure geometry viewable from all angles

            SCNPlane geometry = SCNPlane.Create(width, height);
            geometry.Materials = new[] { material };
            return geometry;
        }

        UIImage FromUrl(string url)
        {
            using (NSUrl nsUrl = new NSUrl(url))
            using (NSData imageData = NSData.FromUrl(nsUrl))
                return UIImage.LoadFromData(imageData);
        }
    }
}

The ImageNode constructor takes a string argument that represents the filename or URI of an image, and float arguments that represent the width and height of the image in the scene. The constructor creates an SCNNode, assigns a geometry to its Geometry property, and adds the node as a child of the ImageNode.

The CreateGeometry method creates a UIImage object that represents the local or remote image, and an SCNMaterial object whose contents are the image. Then, an SCNPlane object is created, of size width x height, and the SCNMaterial object is assigned to it. Therefore, the shape of the node is defined by the SCNPlane object, and the material (the image) defines the visual appearance of the node.

The code from my previous blog post can be modified to overlay an image on the camera output. This is accomplished by modifying the ViewDidAppear method in the ViewController class:

using System;
using ARKit;
using ARKitFun.Nodes;
using SceneKit;
using UIKit;

namespace ARKitFun
{
    public partial class ViewController : UIViewController
    {
        ...
        public override void ViewDidAppear(bool animated)
        {
            base.ViewDidAppear(animated);

            sceneView.Session.Run(new ARWorldTrackingConfiguration
            {
                AutoFocusEnabled = true,
                LightEstimationEnabled = true,
                PlaneDetection = ARPlaneDetection.Horizontal,
                WorldAlignment = ARWorldAlignment.Gravity
            }, ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);

            ImageNode imageNode = new ImageNode("Xamagon.png", 0.1f, 0.1f);
            imageNode.Position = new SCNVector3(0, 0, -0.25f); // X,Y,Z

            sceneView.Scene.RootNode.AddChildNode(imageNode);
        }
        ...
    }
}

When the session for the ARSCNView runs, it automatically sets the camera to be the background of the view. In addition, the initial device location is registered as the world origin (X=0, Y=0, Z=0). Any objects you place in the scene will be relative to the world origin.

If you don’t specify the position of a node within a scene, it will by default be placed at the world origin (0,0,0). However, a node can be positioned in 3D space by setting the Position property of the SCNNode to a SCNVector3 object that defines the X,Y,Z coordinates of the node. The values of X,Y,Z are floats where 1f = 1m, 0.1f = 10cm, and 0.01f = 1cm.

In the example, an ImageNode is then created for a local file that's included in the project, with dimensions of 10cm x 10cm. The node is placed at the world origin (0,0) for the X and Y coordinates, and 25cm forwards on the Z-axis (negative Z points in front of the camera). The ImageNode is then added to the scene by the AddChildNode method call.

The overall effect is that the Xamagon is placed in the scene at the specified coordinates:

Note that the image contains transparency, and so blends well in the scene.

In my next blog post I’ll discuss interacting with the image in the scene.