Wednesday, 17 March 2021

Adventures in ARKit - respond to touch

In my previous blog post I discussed how to overlay an image on the camera output in an ARKit app.

Objects that you overlay on the camera output are called nodes. By default, nodes don’t have a shape. Instead, you give them a geometry (shape) and apply materials to the geometry to provide a visual appearance.
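As an illustration of geometry and materials, an image-displaying node might be built along the following lines. This is a sketch only, not the sample's actual ImageNode implementation; the class name, constructor arguments, and image file handling are assumptions:

```csharp
using SceneKit;
using UIKit;

namespace ARKitFun.Nodes
{
    // Sketch of a node that displays an image on a flat plane.
    // The class name and constructor arguments are illustrative.
    public class ImageNode : SCNNode
    {
        public ImageNode(string image, float width, float height)
        {
            // The geometry defines the node's shape - here, a flat plane
            SCNPlane plane = SCNPlane.Create(width, height);

            // A material provides the visual appearance of the geometry
            SCNMaterial material = new SCNMaterial();
            material.Diffuse.Contents = UIImage.FromFile(image);
            plane.Materials = new[] { material };

            Geometry = plane;
        }
    }
}
```

The material's Diffuse.Contents property accepts a UIImage, which SceneKit renders onto the plane's surface.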

Overlaying a node, or multiple nodes, on a scene is typically the first step in creating an augmented reality app. However, such apps typically require interaction with the nodes. In this blog post I’ll examine touch interaction with the ImageNode from the previous blog post.

The sample this code comes from can be found on GitHub.

Respond to touch

Augmented reality apps usually allow interaction with the nodes that are overlaid on a scene. This interaction is typically touch-based. The UIGestureRecognizer types can be used to detect gestures on nodes, which can then be manipulated as required.

For an ARKit app to respond to different touch interactions, the ARSCNView instance must be told to listen for gestures. This is accomplished by creating the required gesture recognisers and adding them to the ARSCNView instance with the AddGestureRecognizer method:

using System;
using System.Linq;
using ARKit;
using ARKitFun.Nodes;
using CoreGraphics;
using SceneKit;
using UIKit;

namespace ARKitFun
{
    public partial class ViewController : UIViewController
    {
        readonly ARSCNView sceneView;
        ...
        public override void ViewDidAppear(bool animated)
        {
            ...
            UITapGestureRecognizer tapGestureRecognizer = new UITapGestureRecognizer(HandleTapGesture);
            sceneView.AddGestureRecognizer(tapGestureRecognizer);

            UIPinchGestureRecognizer pinchGestureRecognizer = new UIPinchGestureRecognizer(HandlePinchGesture);
            sceneView.AddGestureRecognizer(pinchGestureRecognizer);
        }
        ...
    }
}

In this example, gesture recognisers are added for the tap gesture and the pinch gesture. The following code example shows the HandleTapGesture and HandlePinchGesture methods that are used to process these gestures:

void HandleTapGesture(UITapGestureRecognizer sender)
{
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hits = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hits.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;
        if (node != null)
            node.RemoveFromParentNode();
    }
}

void HandlePinchGesture(UIPinchGestureRecognizer sender)
{
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hits = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hits.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;

        // Scale the node's width and height by the gesture's scale factor
        float scaleX = (float)sender.Scale * node.Scale.X;
        float scaleY = (float)sender.Scale * node.Scale.Y;

        node.Scale = new SCNVector3(scaleX, scaleY, node.Scale.Z);
        sender.Scale = 1; // Reset the gesture's scale value, so scaling is incremental
    }
}

Both methods share common code that performs a hit test to determine the node on which the gesture was detected. Code that interacts with the node is then executed. For example, HandleTapGesture removes the node from the scene when it's tapped, and HandlePinchGesture scales the width and height of the node in response to a pinch. Similarly, it's possible to add other gesture recognisers to move nodes, rotate them, and so on.
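For example, a pan gesture could be used to move a node around the scene. The following handler is a sketch, not part of the sample; it follows the same hit-test pattern as above, and the damping factor of 10000 is an arbitrary assumption that would need tuning:

```csharp
void HandlePanGesture(UIPanGestureRecognizer sender)
{
    SCNView areaPanned = sender.View as SCNView;
    CGPoint point = sender.LocationInView(areaPanned);
    SCNHitTestResult[] hits = areaPanned.HitTest(point, new SCNHitTestOptions());
    SCNHitTestResult hit = hits.FirstOrDefault();

    if (hit != null)
    {
        SCNNode node = hit.Node;

        // Translate the gesture movement into a change in the node's position.
        // The divisor is an arbitrary damping factor that converts screen
        // points into scene units.
        CGPoint translate = sender.TranslationInView(areaPanned);
        node.Position = new SCNVector3(
            node.Position.X + (float)translate.X / 10000f,
            node.Position.Y - (float)translate.Y / 10000f, // screen Y is inverted
            node.Position.Z);
    }
}
```

As with the other handlers, the recogniser would be registered with sceneView.AddGestureRecognizer(new UIPanGestureRecognizer(HandlePanGesture)).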

The overall effect is that the node can be removed from the scene with a tap, or scaled with a pinch.

In my next blog post I’ll discuss animating a node in a scene.
