Augmented Reality lets developers create engaging new experiences by projecting digital objects into a real-world environment. Apple has contributed to this technology by releasing powerful frameworks like RealityKit, which can render 3D graphics in real time on recent Apple devices.
In this tutorial, you’ll learn how RealityKit works behind the scenes and how to build a beautiful SwiftUI app that uses RealityKit.
RealityKit vs ARKit
Diving into the world of augmented reality can be confusing for beginners, since Apple offers several technologies for this purpose. ARKit and RealityKit are the main frameworks for building AR experiences, and while they work seamlessly together, they play two different roles:
RealityKit provides high-performance 3D simulation and rendering capabilities you can use to create visionOS apps or to create augmented reality (AR) apps for iOS, macOS, and tvOS. RealityKit is an AR-first 3D framework that leverages ARKit to seamlessly integrate virtual objects into the real world.
ARKit integrates hardware sensing features to produce augmented reality apps and games, combining device motion tracking, world tracking, scene understanding, and display conveniences to simplify building an AR experience.
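The division of labor shows up directly in the API. As a small sketch of this relationship: RealityKit’s `ARView` owns an ARKit `ARSession` under the hood, and you configure that session with ARKit types such as `ARWorldTrackingConfiguration`.

```swift
import RealityKit
import ARKit

// RealityKit's ARView (rendering/simulation) wraps an ARKit ARSession (sensing).
let arView = ARView(frame: .zero)

// The underlying ARKit session is exposed directly by RealityKit:
let session: ARSession = arView.session

// ARKit side: ask the session to track the world and detect horizontal planes.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal]
session.run(configuration)
```

This is why the two frameworks are usually described as AR-first partners rather than alternatives: ARKit senses, RealityKit renders.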
To better understand the workflow behind these two frameworks, consider the process of displaying a 3D asset on a surface, such as a table:
- ARKit is responsible for tracking the device’s position and orientation in the real world and detecting features like surfaces and objects.
- ARKit provides information about the environment, such as camera images, depth information, and tracking status.
- RealityKit uses this information to create and update the AR scene. It places virtual entities in the AR space, aligning them with the detected real-world features.
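The steps above can be sketched as a minimal SwiftUI view. This is an illustrative example, not the finished app from this tutorial: `ARPlacementView` and the blue box entity are hypothetical names chosen for the demo, while `ARView`, `ARWorldTrackingConfiguration`, `AnchorEntity`, and `ModelEntity` are the real RealityKit/ARKit APIs involved.

```swift
import SwiftUI
import RealityKit
import ARKit

// Hypothetical SwiftUI wrapper that anchors a virtual box on a real table.
struct ARPlacementView: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        // ARKit's role: track the device's position and orientation,
        // and detect real-world features (here, horizontal surfaces).
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        arView.session.run(configuration)

        // RealityKit's role: place virtual entities in the AR scene,
        // aligned with the surface ARKit detected.
        let anchor = AnchorEntity(plane: .horizontal)
        let box = ModelEntity(
            mesh: .generateBox(size: 0.1),
            materials: [SimpleMaterial(color: .blue, isMetallic: false)]
        )
        anchor.addChild(box)
        arView.scene.addAnchor(anchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
```

Note how the ARKit configuration and the RealityKit anchor never talk to each other directly: RealityKit consumes ARKit's tracking data behind the scenes and keeps the anchored entity aligned with the detected plane as the camera moves.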