April 8, 2024
As our App Solutions Studio explored Apple Vision Pro and started building apps for the platform, we put together an overview of key resources, best practices, and advice for adapting existing apps – all highlighted in this blog.
Apple’s latest foray into spatial computing, Apple Vision Pro, marks a significant leap towards redefining how we interact with technology. Spatial computing transcends traditional boundaries, allowing users to engage with both 2D and 3D content in a way that’s seamlessly integrated into the physical world.
There are two types of experiences we can focus on: basic experiences that use the 3D space to place familiar windows, and fully immersive experiences that surround the user.
Some examples of basic experiences that just use the 3D space to place windows include:
Source: Screenshots from Apple Keynote WWDC 2023
Here are some examples of experiences we can all imagine now being possible:
To grasp the fundamentals of spatial computing and start building apps that leverage this technology, here are some essential resources:
While developing apps for spatial computing requires, to some extent, a new approach to structure and design, everything is done with the SwiftUI framework, with which we are all familiar.
The architecture follows a logical progression from app struct to scenes, and finally, to windows, volumes, or immersive spaces. Notably, windows have evolved to support 3D content and can now adopt volumetric shapes with .windowStyle(.volumetric). Furthermore, a groundbreaking type of scene, ImmersiveSpace, allows developers to position SwiftUI views outside conventional containers.
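The progression above can be sketched as a minimal app struct declaring all three scene types. Everything here is illustrative (the scene ids and placeholder views are made up for the example), not a definitive implementation:

```swift
import SwiftUI

// A minimal sketch of the hierarchy described above: an app struct
// declaring a standard window, a volumetric window, and an immersive
// space. Scene ids and placeholder views are illustrative only.
@main
struct SpatialSampleApp: App {
    var body: some Scene {
        // A conventional window; on visionOS it can also host 3D content.
        WindowGroup(id: "main") {
            Text("Hello, spatial world")
        }

        // A volume: a window styled to present bounded 3D content.
        WindowGroup(id: "volume") {
            Text("3D content goes here")
        }
        .windowStyle(.volumetric)

        // An immersive space lets views be positioned outside any container.
        ImmersiveSpace(id: "immersive") {
            Text("Immersive content goes here")
        }
    }
}
```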
Source: Get started with building apps for spatial computing, Apple
At the heart of visionOS are Windows, which are akin to SwiftUI Scenes on Mac, capable of housing traditional views and 3D content. Each app can feature one or more windows, offering a versatile platform for developers to present their content. The innovation doesn’t stop at traditional 2D views; windows in visionOS support 3D content, extending the canvas for creators to paint their visions.
Volumes, introduced through WindowGroup + .windowStyle(.volumetric), represent another type of scene specifically designed for showcasing 3D content. Unlike windows, volumes are optimized for 3D, making them ideal for presenting complex models like a heart or the Earth. This distinction allows developers to choose the right container for their content, ensuring the best possible user experience.
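As a sketch of that pattern, a volume showing a 3D model can be declared with SwiftUI's Model3D view. The asset name "Earth" and the volume size are assumptions for the example:

```swift
import SwiftUI
import RealityKit

// A volumetric window presenting a bundled USDZ model.
// "Earth" is an assumed asset name, not part of any Apple API.
struct EarthVolume: Scene {
    var body: some Scene {
        WindowGroup(id: "earthVolume") {
            Model3D(named: "Earth") { model in
                model
                    .resizable()
                    .scaledToFit()
            } placeholder: {
                ProgressView() // shown while the asset loads
            }
        }
        .windowStyle(.volumetric)
        // Suggest a physical size for the volume (assumed values).
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
    }
}
```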
Spaces in visionOS introduce a new dimension of app interaction. With ImmersiveSpace and .immersionStyle(selection:in:), developers can create shared or full spaces, defining how their apps integrate with the user’s environment. Shared Spaces mimic a desktop environment where apps coexist, while Full Spaces offer a more controlled, immersive experience, potentially leveraging ARKit APIs for enhanced reality.
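A hedged sketch of the `.immersionStyle(selection:in:)` modifier follows; the view and scene names are placeholders invented for the example:

```swift
import SwiftUI

// Placeholder content for illustration.
struct SolarSystemView: View {
    var body: some View {
        Text("Immersive solar system goes here")
    }
}

struct SolarSystemSpace: Scene {
    @State private var style: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "solarSystem") {
            SolarSystemView()
        }
        // Limit the space to the listed styles; the binding tracks
        // (and can change) the currently active one.
        .immersionStyle(selection: $style, in: .mixed, .full)
    }
}
```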
If you want to make the most out of SwiftUI, ARKit and RealityKit, we encourage you to use ImmersiveSpace together with the new RealityView, since they were designed to be used together.
USDZ files, co-developed by Apple and Pixar, serve as the cornerstone for placing 3D objects in augmented reality environments.
Image sources: Understand USD fundamentals, Apple
We can create these files with Reality Composer Pro, a pivotal tool that offers a graphical interface for composing, editing, and previewing 3D content directly within Xcode, enhancing the creation process with features like particle emitters and audio authoring.
To dive into the USD ecosystem and Reality Composer Pro, we recommend the sessions below from WWDC22 and WWDC23, which delve into the intricacies of the USD ecosystem: tools, rendering techniques, and the broader landscape of USDZ files. These resources are invaluable for developers looking to master spatial computing with Apple Vision Pro:
Interactions in spatial computing go beyond traditional input methods, incorporating taps, drags, and hand tracking. RealityKit and ARKit form the backbone of this interactive framework, allowing for the augmentation of app windows with 3D content and the creation of fully immersive environments.
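As a hedged sketch of this input model, a tap gesture can be targeted at RealityKit entities from SwiftUI. The entity needs input-target and collision components to receive input; the cube and its behavior here are invented for the example:

```swift
import SwiftUI
import RealityKit

// An entity-targeted tap gesture: tapping the cube rotates it.
struct TapToSpinView: View {
    var body: some View {
        RealityView { content in
            // A simple cube the user can look at and tap.
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.2),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            // Required for the entity to participate in input.
            cube.components.set(InputTargetComponent())
            cube.generateCollisionShapes(recursive: false)
            content.add(cube)
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Spin the tapped entity 90° around the y-axis.
                    value.entity.transform.rotation *=
                        simd_quatf(angle: .pi / 2, axis: [0, 1, 0])
                }
        )
    }
}
```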
RealityKit: A closer look
RealityKit simplifies the integration of AR features, supporting accurate lighting, shadows, and animations. It operates on the entity-component-system (ECS) model, where entities are augmented with components to achieve desired behaviors and appearances. Custom components and built-in animations enrich the spatial experience, further bridged to SwiftUI through RealityView attachments.
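The ECS model can be sketched with a custom component and a system that reads it. `SpinComponent` and `SpinSystem` are illustrative names, not Apple API:

```swift
import RealityKit

// A custom component: plain data attached to entities.
struct SpinComponent: Component {
    var speed: Float = 1.0 // radians per second
}

// A system that updates every entity carrying a SpinComponent.
struct SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query,
                                       updatingSystemWhen: .rendering) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            let angle = spin.speed * Float(context.deltaTime)
            entity.transform.rotation *= simd_quatf(angle: angle, axis: [0, 1, 0])
        }
    }
}

// Both need to be registered once, e.g. in the app initializer:
// SpinComponent.registerComponent()
// SpinSystem.registerSystem()
```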
Here is a code snippet of the RealityView structure, with some comments explaining the purpose of each code block.
RealityView { content, attachments in
    // Load entities here and add attachments to the root entity.
} update: { content, attachments in
    // Update your RealityKit entities when the view re-renders.
} attachments: {
    // Declare SwiftUI views here; each view needs a tag so the
    // RealityView can translate it into an attachment entity.
}
ARKitSession, DataProvider, and Anchor concepts introduce a structured approach to building AR experiences. Features like world tracking, scene geometry, and image tracking enable precise placement of virtual content in the real world, supported by comprehensive sessions and tutorials tailored for spatial computing.
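The session/provider pattern can be sketched as follows: run an `ARKitSession` with a `WorldTrackingProvider`, then query the device pose. This is a simplified sketch (minimal error handling, and on visionOS it requires a Full Space):

```swift
import ARKit
import QuartzCore

// Start world tracking and read the current device transform.
func startWorldTracking() async {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    do {
        // Run the session with the data providers you need.
        try await session.run([worldTracking])
    } catch {
        print("Failed to start ARKit session: \(error)")
        return
    }

    // Query the device anchor for the current moment.
    if let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
        print("Device transform: \(device.originFromAnchorTransform)")
    }
}
```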
The ObjectCapture API and RoomPlan API stand out as tools for generating 3D models from real objects and spaces. These APIs facilitate the creation of highly realistic and context-aware applications, extending the capabilities of developers to include custom object and room scanning functionalities in their apps.
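As a hedged sketch of the RoomPlan flow (an iPhone/iPad API whose scanned output can feed spatial apps), a delegate receives the captured room when scanning ends; error handling is simplified for brevity:

```swift
import RoomPlan

// Drives a room scan and builds a CapturedRoom model when it ends.
final class RoomScanner: NSObject, RoomCaptureSessionDelegate {
    let session = RoomCaptureSession()

    func start() {
        session.delegate = self
        session.run(configuration: RoomCaptureSession.Configuration())
    }

    func captureSession(_ session: RoomCaptureSession,
                        didEndWith data: CapturedRoomData,
                        error: Error?) {
        // Build the final parametric model from the raw capture data.
        Task {
            let room = try await RoomBuilder(options: [.beautifyObjects])
                .capturedRoom(from: data)
            print("Scanned \(room.walls.count) walls")
        }
    }
}
```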
Quick Look offers volumetric windowed previews for spatial computing, while SharePlay is designed for syncing experiences across devices. These features underscore the platform’s commitment to collaborative and interactive applications, supported by detailed WWDC sessions on building spatial SharePlay experiences.
visionOS supports a broad spectrum of existing apps, offering pathways for automatic support and full native experiences. We encourage developers to explore sessions and documentation on elevating their apps for spatial computing, addressing compatibility issues, and harnessing the full potential of visionOS functionalities.
There are two flavors of compatibility: automatic support and a full native experience. With the first you get the same experience you have on iPhone and iPad; with the second you can tap into the core visionOS functionalities. In both cases there are considerations to keep in mind, since some platform differences can result in a poor UX if not handled.
When adding Apple Vision Pro as a supported destination, you can choose between these two options: keep the compatible iPad experience, or adopt the native visionOS SDK, depending on your requirements.
Adapting apps for this platform requires a thoughtful approach to interactivity, compatibility, and user engagement. Here are key considerations to polish your apps for visionOS:
The transition into this new dimension of computing is not without its challenges, including adapting existing apps and mastering new development tools. However, the potential for creating engaging, immersive experiences that seamlessly blend the digital with the physical is unparalleled. In addition, this transition is aided significantly by the fact that everything is done with SwiftUI, meaning developers don’t need to learn a whole new language or framework – rather just the APIs and patterns related to this platform. I hope this article helps in your journey. Below you’ll find additional useful resources.
Development resources
Apple’s example apps
Hello World – Shows how to transition between different visual modes with the 3D globe
Happy Beam – Shows how to create a game that leverages an immersive space including custom hand gestures
Destination Video – Shows how to build a shared immersive playback experience that incorporates 3D video and spatial audio
Design resources
By William Tift