Architects at Tsoi/Kobus & Associates in Cambridge have started using the processing systems that power virtual reality games to put clients inside development projects before they are built.
Using a cloud-based system called Revizto, architects can create a digital hospital down to the last brick, and then invite a client to “walk” through the space to see if the ceilings are high enough or the windows provide enough light.
— betaboston.com
Firms like Tsoi/Kobus are beginning to experiment with multiple immersive, interactive media for clients to tour buildings, often in advance of making any physical models. Clients can be virtually transported into the design's space by wearing an Oculus VR headset, or by being inside a specially outfitted room with laser projections, while the architect walks them through. The commentary is recorded and then filed back into the iterative design process.
More on VR's influence on architecture:
To add further thoughts on the topic, let's consider not just VR but AR (augmented reality) and hybrid AR/VR technologies. Among them I would highlight the castAR augmented reality glasses, which I would call a hybrid AR/VR platform thanks to their optional AR/VR clip-on.
Although this technology is still under development, as are the Oculus Rift and others, it is emerging technology worth examining.
For this post, I'll focus on castAR.
How can technology such as castAR apply to architectural visualization when working with clients?
To start, let's understand castAR's components.
The GLASSES plug into your computing device. Right now that means a Windows PC, but other devices are planned, including Android tablets and Windows RT devices running on ARM processors.
The glasses use projector-based technology to project the display into the space in front of you. Unless you use the AR/VR clip-on attachment, you need a retroreflective surface. The company behind castAR sells large silvery-grey retroreflective fabric sheets for this purpose. They are NOT typical projection screens: the retroreflective properties of the fabric allow multiple people to project onto the same sheet at once. The material will be familiar to architects and builders who spend time on construction sites; it's the same kind of silvery-grey retroreflective fabric used in the strips on safety jackets and vests.
One or more of these sheets can be formed into a large box that an architect can use as a surface for representing an architectural model.
The system knows where the user is in relation to the virtual architectural model through IR-based tracking. The glasses are equipped with an IR camera sensor, and one or more IR LED tracking modules are placed in the viewing area of the architect or client wearing the glasses. At least one module serves as the key reference point relating each viewer to the virtual model; additional modules may be placed at a known XYZ offset from that key marker. Each person's position is tracked individually, so each viewer gets an approximately correct view of the model, and the rendering adjusts as they move through the space. Imagine walking around your virtual architectural model the way you would a physical model. Imagine running lighting studies, energy analysis, and other studies on that virtual model with plugins, in ways you never really could with a physical model.
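To make the marker-offset idea concrete, here is a minimal sketch (hypothetical code, not castAR's actual implementation) of translating a viewer's pose, measured relative to whichever tracking module the glasses currently see, into model space using that module's known offset from the key reference marker:

```python
# Hypothetical sketch of marker-relative tracking. Assumes the IR camera
# reports the viewer's position (x, y, z) relative to a tracking module,
# and that each module sits at a known XYZ offset from the key marker
# placed at the model's origin.

def viewer_in_model_space(pose_rel_module, module_offset):
    """Translate a viewer position measured relative to a tracking module
    into model space, using the module's known offset from the key marker."""
    return tuple(p + o for p, o in zip(pose_rel_module, module_offset))

# A viewer seen 0.5 m to one side of a module mounted 2 m from the key marker:
pos = viewer_in_model_space((-0.5, 0.0, 1.2), (2.0, 0.0, 0.0))
# pos is where to place this viewer's virtual camera so the model is
# rendered from their point of view.
```

Each viewer's pose feeds their own render, which is what lets several people stand around the same retroreflective box and each see the model correctly from where they stand.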
Let's take it further. So far this is just a box on a table wrapped in retroreflective (RR) sheets: a person wearing castAR faces the box, the glasses' projectors hit the RR surface, and they see the virtual architectural model from their point of view. Let's take it to another level.
Now... wallpaper an entire room in RR sheets, transforming it into a sort of 'holodeck.' To avoid literally walking into the walls during a walkthrough, use something like the Virtuix Omni treadmill: you can move throughout the virtual environment without moving around the room, with the Omni supplying the movement input. In that setup, tracking is less about your exact position and more about the angular information of where you are looking as you turn your head.
As a building designer, I find the application of castAR to virtual architectural visualization a new and exciting prospect.
It would be extremely exciting to pair castAR visualization with Google SketchUp, even as a plugin. The same goes for Revit, ArchiCAD, and others.
This is exactly the kind of state-of-the-art application I have in mind: multiple castAR units hooked up to a number of devices, linked to a server or host computer holding the model, all brought together in one application. I can see that being exciting not only for me but for all fellow design professionals.
Just some links below for more information, YouTube videos, etc.:
http://www.technicalillusions.com or http://www.castar.com
(Same site)
https://community.technicalillusions.com/
YouTube videos:
https://www.youtube.com/watch?v=H3HGrclGkIE
https://www.youtube.com/watch?v=4FhdqMpTgSk
https://www.youtube.com/watch?v=hL1qT0TK6aw - Although this video is partly CG, it isn't far off from reality, except that the sheet arrangement needs a bit of 3D profile, rather than being laid entirely flat, to get the best 3D effect. That's why I mentioned wrapping a box in the RR fabric as an example: it gives you a good range of viewing angles on the 3D building model, just like a physical model.
Money is what bridges the gap between client and architect.
If you look at the quality of the renderings in these demos, they are quite hideous. They look more like Minecraft than anything resembling good architecture.
Fred,
They are demos, but the rendering can certainly be a lot higher-end. These demos are meant to demonstrate the concepts, and many of them were developed by the same team that is developing the glasses. A small start-up developing hardware just can't spend the time it would take to produce a triple-A video game or a polished development tool.
The game engines currently supported are Unity 4 and 5. castAR doesn't impose any real restrictions on the rendering ability of the computer, game, or app.
We've been using the Oculus technology for over a year now and getting great results. It's a useful tool both for internal design review and for client presentations. The real limitation with this technology for us seems to be persistent issues with motion sickness.
"persistent issues with motion sickness"

LOL
Yes, castAR resolves a lot of those issues by using projected AR: the display is projected onto the retroreflective sheets rather than being a near-eye display. That lets you see the people around you and stay grounded in the real world while you are visualizing the 3D one.
A lot of this comes down to focal depth; the rest relates to positional tracking and rendering fast enough in real time.
castAR addresses this significantly through fast positional tracking, and it includes an IMU as well. But high-end photorealistic rendering that takes long periods per frame is not appropriate for castAR, or even the Oculus Rift, because of the render time involved; rendering needs to be real-time and fluid, approaching film-like smoothness. Computer technology keeps progressing, though: rendering that took a while ten years ago is smooth in real time now, and what takes a minute to render today may be real-time smooth in fifteen years. By the same token, something that takes 5 seconds to render today could render in roughly 1/20th of a second in ten years.
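That back-of-the-envelope projection works out if you assume rendering throughput roughly doubles every 18 months, a Moore's-law-style assumption rather than a guarantee. A quick sketch of the arithmetic:

```python
# Rough projection of per-frame render time, assuming rendering
# throughput doubles every 1.5 years (an optimistic assumption).

def projected_render_time(seconds_today, years_ahead, doubling_years=1.5):
    """Estimated render time after `years_ahead` years of hardware progress."""
    speedup = 2 ** (years_ahead / doubling_years)
    return seconds_today / speedup

# 5 s per frame today -> about 0.05 s (1/20th s) in 10 years,
# since the assumed speedup over 10 years is roughly 100x.
print(round(projected_render_time(5.0, 10), 3))
```

Change the doubling period and the projection shifts accordingly; the point is only that today's offline-quality renders become tomorrow's real-time frames.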
The hardware will continue to progress toward high-end rendering, but you still need to optimize how your architectural models are rendered.
Real-time rendering engines are the appropriate 3D engines for the Oculus Rift, castAR, and similar technologies, because a slow renderer can't update frames within about 40 milliseconds. Ideally, you update in under 17 ms per frame, i.e. about 16.67 ms (60 fps) or 8.33 ms (120 fps). In reality, much of today's mid-range hardware averages around 30 fps, which works out to 30-35 ms per frame.
The reason is that as you move around, you want the render to update in lock step with you, or as close to it as possible, without lagging noticeably behind.
The choice of rendering engine is crucial.
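The frame-time budgets mentioned above follow directly from the refresh rate; a tiny illustrative calculation:

```python
# Milliseconds available per frame at common refresh rates.

def frame_budget_ms(fps):
    """Per-frame time budget in milliseconds for a target frame rate."""
    return 1000.0 / fps

for fps in (30, 60, 120):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):.2f} ms per frame")
```

Everything in a frame, including tracking, culling, shading, and display, has to fit inside that budget, which is why offline-quality renderers are ruled out.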
Well at least now I can blame VR motion sickness for my clients throwing up in meetings....