Assignment 5 – Josh Daghir
- by Joshua Daghir
- November 11
Capturing 3D objects in reality and translating them into virtual reality has been the most difficult assignment of this class yet. A combination of factors on both the user end and the software end make 3D capture hard to execute.
My trouble started with 123D Catch. The idea of capturing 3D models straight from your phone is compelling, but the software is simply not strong enough to create good models. I captured objects in bright, even lighting and usually took 40-50 pictures, but the processed models still came out with holes, strange deformities, and large chunks of the background attached. My trouble continued when I tried to import these models into Unity. I was able to export them as a .DAE file from MeshLab, but importing that file into Unity caused the editor to crash every time.
I had better luck with the model of myself captured on the iPad with the 3D scanner. This model faithfully captured my head and upper torso, with one exception: it mapped my glasses directly onto my face. If a person wears glasses, it would make sense to scan them without the glasses and then add a glasses model in virtual reality.
The difference in effectiveness between 123D Catch and the iPad with the 3D scanner shows that proper hardware is key to capturing real-world 3D information. Maybe someday software will be strong enough to build these models from ordinary photos, but for now, the extra sensors in the scanner clearly produce better 3D models.
I think a fuller understanding of 3D model file types would have let me tinker my way to successfully importing my 123D Catch models into Unity. I had scanned my guitar and amplifier with the app, and I was disappointed that I could not figure out how to get those models to appear in Unity. I ended up using my 3D selfie to put myself in a beach scene, since that was an environment where it made sense for me to appear only from the waist up.