Virtual Reality (VR) is the current hot topic, but Augmented Reality (AR) and Mixed Reality (MR) are on their way…
Virtual Reality may be grabbing all the headlines, but Augmented Reality (AR) and Mixed Reality (MR) — technologies that superimpose computer-generated images onto the physical world — are also going to have a huge impact on design, engineering, production and maintenance.
These technologies are not as mature as VR, and many of the more advanced AR/MR headsets are still in pre-production, but they should start to gain real traction in the next few years. The terms AR and MR are often used interchangeably, so it’s worth spending some time explaining the difference between the two.
AR describes the use of a device to overlay digital information on the physical world, whether that’s performance metrics, diagnostic information, geometry or something else.
The device allows you to look at an object (or indeed, a geographic location) and stream information relating to that object, in context, in your line of sight. That line of sight can be direct (overlaid on glasses in front of your eyes) or on a screen, through a camera (think pointing a smartphone or tablet at an object).
Mixed Reality combines the immersive nature of VR with the augmentation of AR. It allows you to visualise a virtual object directly in the real world. Whether that’s a product prototype on your desk, a presentation in a conference room or a work cell in situ on the factory floor. This is done through a headset that overlays the digital artefacts and data directly in your line of sight, allowing you to interact with the digital object, as you would a real object, in the room.
While you could spend all week arguing over the definitions of VR, AR and MR, the reality (pun, absolutely, intended) is nowhere near as clear-cut.
AR is all about adding digital content to real-world objects, but looking at a digital factory work cell on a shop floor is both AR and MR, as it’s an entirely digital set of data, in the context of its use.
Ultimately, the definitions are almost academic and should be treated as such. What really matters is how these technologies, whatever you call them, can benefit design, engineering and production.
Since the inception of the CAD industry, we have been locked into a single interface method: a keyboard and mouse, combined with a 2D screen, which is essentially a digital drawing board. While the rise of 3D design applications in the 1990s made the drawing board analogy less accurate, we have still been stuck with a two-dimensional display device and, with the notable exception of 3D mice, two-dimensional input methods.
AR/MR is set to bring a whole new set of display devices and input methods that move beyond the flat screen, giving us a more natural display and more intuitive ways to interact.
Let’s explore that potential with three little stories.
Rather than sitting in front of your 27-inch monitor, you pop on your headset, fire up your CAD system and see the object in front of you, ready to go.
Using voice commands and hand gestures, you begin creating a new model or editing an existing one. If you need to discuss the model with a colleague, they do the same and you can look at and edit your model collaboratively, just as you would if working on a physical model — pointing out areas of concern, making edits individually and inspecting the results.
Unlike VR, where you are fully disconnected from the world around you, in AR/MR you work on a digital prototype that appears in the real world, right in front of you (and in front of your team). Importantly, you can still see your team members.
Now, to take things further, imagine you’re working with a specialist in Japan. She can’t fly in for the design review and the time difference means she can’t join in live.
Imagine if you could record the session, tracking not only the model changes, but also the discussion, what each participant is doing and where they are pointing. She could wake up, sit through the review, look at what you’ve looked at, make her own edits and continue with her portion of the project.
You’ve just completed the concept models for a new factory work cell and submitted your quote. The customer likes what you’ve done but wants to learn more. You have the digital model built and want to see how it fits in context. You and the client visit the factory floor. The freshly skimmed concrete is there, waiting for the assembly your team has designed.
You fire up your headsets and there it is, in situ, operating as it would. You can both walk through it, inspect it, make notes and crunch through those inevitable changes you’d typically only find once it’s installed and commissioned.
As a final example, you’ve connected your products to the Internet of Things (IoT). A customer’s unit has a fault, and historical data indicates that this usually means a critical failure will develop in the next three days. Your business has switched to a service-based model, so if the product is down, you don’t get paid. So your service team is deployed to fix it.
They arrive on site, fire up their headsets and the product in the field is overlaid with diagnostic information, showing the metrics and fault indicators.
The technicians are then shown how to replace the problematic part or sub-system, stepping through the process with simple voice commands.
There’s no looking up which custom configuration your customer has, no flipping through greasy manuals or stroking an iPad’s screen. The data that’s needed is there, in front of their eyes, overlaid in high resolution directly on the product in front of them.
These three scenarios illustrate the potential for a mix of VR/AR/MR/whatever-you-want-to-call-it. This rich combination of creation, editing, collaboration and distribution of information is converging, and converging soon.
Some parts of these stories are available now. Some are being worked on as we speak. Others are extrapolations of today’s technology and a prediction of what’s to come.
This is the promise of the next generation of computer interaction devices — and the potential for design, engineering and manufacturing is infinite.
The good news is that the hardware and services behind it are being driven by the entertainment sector, a much bigger market than engineering alone.
With our core skill sets of creativity, geometry and data wrangling, we get to take advantage of all of this as soon as it becomes available.
Original article written by Al Dean for DEVELOP3D magazine