High Fidelity Open Alpha

High Fidelity’s open source software is now available for early alpha use, enabling you to download client and server installers, deploy your own domain servers, create user accounts, register unique placenames, and start building and experimenting. This is a very early release, and High Fidelity is still very much a work in progress. The look and visual quality are far from complete, and big features like avatar movement animation and physics are still not in place. There are lots of bugs to fix, and content formats will continue to change. But enough systems are now functional that we feel High Fidelity is useful for some kinds of work, experimentation, and exploration. Having run a small, controlled early alpha to iron out the real show-stopping bugs, we’re now eager to engage a larger group and recruit open source contributions from other developers working on building the metaverse.

You can create your own virtual world by downloading and running the Stack Manager. Interface, the client you run to enter your own world or worlds others have created, works on Windows, Mac, and Linux. We are working on a GearVR/Android version as well, but it isn't ready yet.

You can build content by importing models and using JavaScript to create interactive objects and behaviors. You can communicate with your voice and facial expressions, and you can optionally use the Oculus Rift HMD and other input devices like the Razer Hydra to touch and edit the world. We will also support the HTC Vive HMD and hand controllers as soon as they are available.
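To give a rough feel for what scripting an interactive object involves, here is a minimal sketch in plain JavaScript that builds the kind of properties object an in-world entity-creation call might take. The property names and shape of this object are illustrative assumptions for this post, not a definitive listing of the scripting API:

```javascript
// Sketch: build a properties object describing a simple interactive box.
// The property names here are assumptions for illustration; consult the
// current scripting API documentation for the real ones.
function makeBoxProperties(position, sideLength, color) {
  return {
    type: "Box",                                               // primitive shape
    position: position,                                        // {x, y, z} in meters
    dimensions: { x: sideLength, y: sideLength, z: sideLength },
    color: color,                                              // {red, green, blue} 0-255
    collisionsWillMove: true                                   // let physics move it when bumped
  };
}

var box = makeBoxProperties(
  { x: 0, y: 1, z: 0 },
  0.5,
  { red: 200, green: 30, blue: 30 }
);
console.log(box.dimensions); // a cube: all three dimensions equal
```

A creation script would pass an object like this to the world, then attach behavior (click handlers, timers, motion) to the resulting entity.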

3D audio is operational: if you are using a high-quality headset, you can hear other people and objects in the environment at their correct locations, with very low latency, and with the echoes of your own voice off the virtual walls. Sounds can also be made by interactive in-world objects, and audio is mixed by a server node so that many people can talk together without increasing the audio bit-rate each person receives.
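The bandwidth point can be sketched with a toy mixer: because the server sums all incoming streams into a single outgoing frame, each listener downloads one stream no matter how many people are speaking. This plain-JavaScript sketch is a conceptual illustration only, not the actual mixer code:

```javascript
// Toy audio mixer: combine N speakers' sample frames into one output frame,
// so the per-listener download stays one stream wide no matter how many talk.
function mixFrames(frames) {
  var mixed = new Array(frames[0].length).fill(0);
  for (var i = 0; i < frames.length; i++) {
    for (var s = 0; s < frames[i].length; s++) {
      mixed[s] += frames[i][s];
    }
  }
  // Clamp to the 16-bit sample range to avoid overflow after summing.
  return mixed.map(function (v) {
    return Math.max(-32768, Math.min(32767, v));
  });
}

var out = mixFrames([[1000, -2000], [500, 500], [40000, 0]]);
console.log(out); // [32767, -1500]: first sample clamped after summing
```

The real mixer also spatializes each stream per listener before summing, which is why positions and echoes come through.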

Avatars can be created with a variety of characteristics, and their faces are animated in real time using head motion and audio (for HMD users) or more highly detailed expressions captured by a depth camera (for desktop users), as you can see in this video from our recent funding announcement:


Avatar hands and bodies can also be moved using the Razer Hydra and Leap Motion.

There is also a working physics system, with the ability to create complex movement and behaviors using JavaScript. Here are some quick pictures of building with blocks, guns, dice, and a planetary gravity simulator.
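For a feel of what a scripted planetary-gravity behavior involves, here is a minimal plain-JavaScript sketch of the underlying math: one Euler integration step pulling a body toward an attractor with inverse-square gravity. The function name and constant are illustrative assumptions, not code from the simulator:

```javascript
// Sketch: one Euler step of inverse-square gravity toward a fixed attractor.
// G is an arbitrary illustrative constant, not a real tuning value.
var G = 1.0;

function gravityStep(body, attractor, dt) {
  var dx = attractor.x - body.x;
  var dy = attractor.y - body.y;
  var r2 = dx * dx + dy * dy;
  var r = Math.sqrt(r2);
  var a = (G * attractor.mass) / r2;   // acceleration magnitude, falls off as 1/r^2
  body.vx += (dx / r) * a * dt;        // accelerate along the unit vector to the attractor
  body.vy += (dy / r) * a * dt;
  body.x += body.vx * dt;              // then move by the updated velocity
  body.y += body.vy * dt;
  return body;
}

// A body starting at rest 10 units away falls toward the attractor.
var planet = { x: 0, y: 0, mass: 100 };
var body = { x: 10, y: 0, vx: 0, vy: 0 };
gravityStep(body, planet, 0.1);
console.log(body.x < 10); // true: it moved closer
```

An in-world script would run a step like this on a timer, applying the resulting velocity to each orbiting entity.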


A basic marketplace is also up, enabling digital goods like scripts, games, building components, avatars, and educational supplies to be freely shared between developers wanting to help each other build at this early stage.  Next big steps with the marketplace will be to add payment systems and the ability to create derivative works.

You cannot yet share your servers with each other to increase scaling capacity and run portions of each others’ worlds; this will be available in the coming months. We also have not completed our work on level-of-detail (LOD) rendering for large scenes and large numbers of avatars, so frame rates will drop with lots of people and content in one place.

You can expect continuous and substantial changes as we complete new features; we will likely break content as we continue to design and experiment. The transition from alpha to beta, which we expect to happen over a year or so, will signal greater stability in the content formats. But as an open source project with contributions from many developers and a broad set of features working, we think the time is right to open things up completely for early use.

Have fun, and we look forward to seeing you in your worlds and ours!