I am a game developer who has worked and studied in game development for about six years, across college and university. For the last two years, I've worked in Mixed Reality game development with the Microsoft HoloLens as part of my degree at Abertay University in Dundee, Scotland.
This consisted of two projects:
· The first project was a team effort, in which we created a networked co-operative multiplayer game between a player in the HTC Vive, using Virtual Reality, and a player using the HoloLens, in Mixed Reality. This project reached the UK Finals of the 2017 Imagine Cup and was shown at various events around Scotland.
· The second project was part of my Dissertation on Augmented/Mixed Reality game development, in which I created a Real-Time Strategy game for the HoloLens that captures the Spatial Mapping of a room and converts it into the game's environment in real time. This project also reached the UK Finals of the 2018 Imagine Cup.
From both projects, I've gathered knowledge and experience in developing games, especially for the HoloLens. My intention with this post is to share some of that knowledge with anyone interested in developing for the HoloLens (not just for games).
Important Information for Getting Started
Before continuing, it should be understood that I developed for the HoloLens using the Unity game engine and its C# scripting language. My perspective is therefore shaped by that platform, but the design and interaction points should still be useful to everyone.
With that said, I must paraphrase the excellent Dr. Iain Donald, who I had the opportunity to learn from at Abertay University, with regards to developing anything, games or otherwise.
“If the wheel has already been invented, consider if inventing your own is actually to your benefit.”
To clarify what I mean: a great deal of the work needed to develop for the HoloLens or other Mixed Reality headsets has, to be frank, already been done and made available for anyone to use with Unity in C#.
This is the Mixed Reality Toolkit (formerly the HoloToolkit), found here: https://github.com/Microsoft/MixedRealityToolkit-Unity.
This set of tools and example projects is, by far, the single most important resource for learning and creating anything for the HoloLens. The toolkit handles gesture and voice input, spatial mapping, spatial understanding, and more, and its example projects give a great deal of useful direction on how you might want something to look or feel. Most of the heavy lifting is done for you, while everything remains open for you to modify as needed.
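As a rough sketch of how little code a basic interaction takes with the toolkit, here is a tap handler using the 2017-era HoloToolkit input module. The interface and event names below are from that version and may differ in newer Mixed Reality Toolkit releases; the behaviour itself (toggling a hologram's visibility) is just an illustrative placeholder.

```csharp
// Sketch: responding to an AirTap via the HoloToolkit's input module.
// The toolkit's InputManager routes the tap to whichever object the
// user's gaze cursor is on, so no per-object raycasting is needed here.
using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class TapToToggle : MonoBehaviour, IInputClickHandler
{
    // Called by the toolkit when this object is gazed at and AirTapped.
    public void OnInputClicked(InputClickedEventData eventData)
    {
        var rend = GetComponent<Renderer>();
        rend.enabled = !rend.enabled; // toggle the hologram's visibility
    }
}
```

Attach the script to any object with a collider and the toolkit's InputManager prefab in the scene, and the tap plumbing is handled for you.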
In addition to this, let me suggest two very important sources of knowledge to go along with this:
· The tutorials provided by Microsoft, which pair well with the toolkit, seen here https://docs.microsoft.com/en-us/windows/mixed-reality/academy.
· The Holodeveloper Slack channel, which is a collective group of HoloLens developers. If you’re having an issue doing something, it is likely one of them has encountered it before or may just know why the issue is occurring. It’s also just a cool place to talk with other developers in Mixed Reality to see what people are working on and what has already been made. You can request to join here https://holodevelopersslack.azurewebsites.net/.
Design and Development Experience in Mixed Reality
Now, what I've come to learn about Augmented/Mixed Reality was researched primarily for its application to games, but it should apply to the development and understanding of HoloLens applications in general.
There are three primary factors when making an application for Mixed Reality:
· What space is needed for holographic elements to work as intended?
· What space is needed for interactions to be clear and doable for the user?
· What inputs are required from the user to interact with the application?
With the HoloLens, or any AR technology that utilises something like Spatial Mapping, space has to be taken into account. Within games, an area is usually plotted out as the boundaries of the “world” the player inhabits. With the HoloLens, these boundaries can vary: the user can define their space at the scale of a room, the scale of an entire building, or even, technically, an infinitely extending area.
As a result, you need an understanding of how elements within an application can interact with, or respond to, different types of space and changes within a space.
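To make this concrete, Unity ships a built-in way to be notified when the scanned environment changes: the SurfaceObserver in the UnityEngine.XR.WSA namespace (Unity 2017-era API; the toolkit wraps similar functionality). The sketch below polls for surface changes inside a fixed volume; the 10-metre box and the logging are illustrative choices, not requirements.

```csharp
// Sketch: watching the scanned room for changes with Unity's SurfaceObserver.
using System;
using UnityEngine;
using UnityEngine.XR.WSA;

public class RoomChangeWatcher : MonoBehaviour
{
    private SurfaceObserver observer;

    void Start()
    {
        observer = new SurfaceObserver();
        // Observe a 10 m cube centred on the scene origin.
        observer.SetVolumeAsAxisAlignedBox(Vector3.zero, Vector3.one * 10f);
    }

    void Update()
    {
        // Poll for surface additions, updates and removals.
        observer.Update(OnSurfaceChanged);
    }

    void OnSurfaceChanged(SurfaceId id, SurfaceChange change,
                          Bounds bounds, DateTime updateTime)
    {
        if (change == SurfaceChange.Added || change == SurfaceChange.Updated)
        {
            // React here, e.g. re-evaluate where holograms can be placed.
            Debug.Log("Surface " + id.handle + " changed within " + bounds);
        }
    }
}
```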
An example of where this can be important: if you want to allow users to play a large-scale game within a small environment, you need to consider how the space can be used without impacting the application. One solution is to have all holographic elements automatically scale themselves based on the size of the space's floor.
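The floor-based scaling idea can be sketched in a few lines. Everything here is hypothetical glue code: `playspaceBounds` is assumed to come from your own spatial mapping or spatial understanding pass, and `designWidth` is whatever floor width your content was originally designed around.

```csharp
// Sketch: scaling game content to fit whatever play space was scanned.
using UnityEngine;

public class FitToRoom : MonoBehaviour
{
    // The floor width (in metres) the content was designed to occupy.
    public float designWidth = 10f;

    // Call this once the spatial mapping pass has produced room bounds.
    public void Fit(Bounds playspaceBounds)
    {
        // Use the smaller horizontal extent so content fits both ways.
        float floorWidth = Mathf.Min(playspaceBounds.size.x,
                                     playspaceBounds.size.z);
        float scale = floorWidth / designWidth;
        transform.localScale = Vector3.one * scale;
    }
}
```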
This also applies to the user interface: the player must not be able to lose sight of essential elements, and the real world must not obscure them, such as a button prompt hidden behind a real-world surface. This can be alleviated by moving and aligning the holographic elements of the user interface in 3D against real-world surfaces.
As a slight addition to this, I've noticed that in Mixed Reality, user interfaces and prompts are easier to utilise in 3D than in 2D: the added depth sits more naturally in the real world, and it opens up the option of letting users place interactive elements into their space, rather than locking those elements to their view when that isn't essential, which can be far more convenient.
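One way to keep a UI panel both in view and out of walls is to raycast against the spatial mapping mesh each frame and pull the panel in front of any surface it would otherwise clip through. This is a sketch under two assumptions: the spatial mapping mesh lives on a layer named "SpatialMapping" (the toolkit's default), and the distances chosen are illustrative.

```csharp
// Sketch: keeping a holographic UI panel from being swallowed by a wall.
using UnityEngine;

public class KeepInFrontOfWalls : MonoBehaviour
{
    public float preferredDistance = 2f; // how far ahead of the user to sit
    public float surfaceOffset = 0.1f;   // gap kept between panel and wall

    void LateUpdate()
    {
        Transform head = Camera.main.transform;
        int mask = LayerMask.GetMask("SpatialMapping");
        float distance = preferredDistance;

        // If a real surface is closer than the preferred distance,
        // place the panel just in front of it instead of inside it.
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit,
                            preferredDistance, mask))
        {
            distance = hit.distance - surfaceOffset;
        }

        transform.position = head.position + head.forward * distance;
        transform.rotation = Quaternion.LookRotation(head.forward);
    }
}
```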
The gesture inputs provided by the Mixed Reality Toolkit are built to be universal across HoloLens applications, and as a result they are quicker to adopt than custom ones. However, the number of gesture-based inputs available at this moment is low, meaning that if voice commands are not utilised, interactions should be designed to function with as few inputs as possible, or to stretch the available inputs through clever use of input criteria.
For example, distinguishing between a single AirTap and multiple AirTaps in quick succession, and between a Tap and Hold with no hand movement and a Tap and Hold with hand movement, would increase the available input variety from two to four.
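That widening of the input vocabulary can be sketched as a small classifier fed by the toolkit's tap and hold events. The thresholds below are made-up illustrative values, and the method names are hypothetical hooks you would wire to your own gesture callbacks.

```csharp
// Sketch: turning two raw gestures (tap, hold) into four distinct inputs.
using UnityEngine;

public class TapClassifier : MonoBehaviour
{
    const float DoubleTapWindow = 0.3f; // seconds allowed between taps
    const float MoveThreshold = 0.05f;  // metres of hand travel = "drag"

    float lastTapTime = -1f;
    Vector3 holdStartPosition;

    // Wire this to the toolkit's tap event.
    public void OnTap()
    {
        if (Time.time - lastTapTime < DoubleTapWindow)
        {
            Debug.Log("Double tap");
            lastTapTime = -1f;
        }
        else
        {
            // A lone tap is confirmed once the window expires without
            // a second tap arriving (handled elsewhere, e.g. a coroutine).
            lastTapTime = Time.time;
        }
    }

    // Wire these to the toolkit's hold started/completed events.
    public void OnHoldStart(Vector3 handPosition)
    {
        holdStartPosition = handPosition;
    }

    public void OnHoldEnd(Vector3 handPosition)
    {
        bool moved = Vector3.Distance(handPosition, holdStartPosition)
                     > MoveThreshold;
        Debug.Log(moved ? "Tap and drag" : "Stationary hold");
    }
}
```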
The HoloLens continues to amaze me as to what is to come in the near future, based on what we can already do with it at this very moment. If there were ever a time to start developing for Mixed Reality, there is no time like the present, given how quickly the technology is coming along and how quickly companies and industries are beginning to utilise it. I wholeheartedly encourage anyone who has even the slightest inkling to develop for the HoloLens, or Mixed Reality in general, to give it a try and see what you can do with it.