Author Topic: Creating 3D models with a simple webcam (w/ Video)  (Read 4377 times)

Constructing virtual 3D models usually requires heavy, expensive equipment or lengthy processing time. A group of researchers at the University of Cambridge (Qi Pan, Dr Gerhard Reitmayr and Dr Tom Drummond) has created a program able to build 3D models of textured objects in real time, using only a standard computer and webcam. This makes 3D modeling accessible to everybody.
Qi, Gerhard and Tom presented the system at the 20th British Machine Vision Conference (BMVC'09) in London. Over the last few years, many methods have been developed to build realistic 3D models of real objects, using various equipment: 2D/3D lasers (in the visible spectrum or at other wavelengths), scanners, projectors, cameras, etc. This equipment is usually expensive, complicated to use or inconvenient, and the model is not built in real time. The data (for example laser scans or photos) must first be acquired before going through a lengthy reconstruction process to form the model. If the 3D reconstruction is unsatisfactory, the data must be acquired all over again.
Watch the video on YouTube: http://www.youtube.com/watch?v=vEOmzjImsVc&feature=player_embedded#!

The method proposed by Qi and his colleagues needs only a simple webcam. The object is moved about in front of the webcam and the software reconstructs it "on-line" while collecting live video. The system uses points detected on the object to estimate its structure from the motion of the camera or the object, and then computes the Delaunay tetrahedralisation of those points (the 3D extension of the 2D Delaunay triangulation). The result is a mesh of tetrahedra, within which the surface mesh of the object is embedded.
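The tetrahedralisation step can be sketched with SciPy (an illustrative stand-in, not the authors' implementation; the random cloud here stands in for the reconstructed feature points):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
cloud = rng.random((50, 3))     # stand-in for the tracked 3D feature points

# The 3D Delaunay triangulation partitions the cloud's convex hull into
# tetrahedra, within which the object's surface mesh is embedded.
tet = Delaunay(cloud)

# Each row of tet.simplices holds the 4 vertex indices of one tetrahedron.
print(tet.simplices.shape[1])   # 4
```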

Left to right: (a) Object rotated by hand in front of camera. (b) Point cloud obtained from on-line structure from motion estimation followed by bundle adjustment. (c) Delaunay tetrahedralisation of point cloud, partitioning the convex hull into tetrahedra. (d) Carved mesh obtained from recursive probabilistic tetrahedron carving. (e) Texture-mapped surface mesh.

The software then tidies up the reconstruction by removing invalid tetrahedra with a probabilistic carving algorithm to obtain the surface mesh, and the object's texture is applied to the 3D mesh to produce a realistic model. Thanks to this simple and cheap system, 3D reconstruction can become accessible to everybody.
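A toy illustration of the carving idea (not the paper's probabilistic test, which accumulates visibility evidence per tetrahedron over many frames): any tetrahedron crossed by a sight line from the camera to an observed surface point must be empty space, so it is carved away. The ray sampling and example geometry here are simplifications:

```python
import numpy as np

def in_tetra(p, tet):
    # Barycentric test: is point p inside tetrahedron tet (4x3 vertex array)?
    a, b, c, d = tet
    M = np.column_stack([b - a, c - a, d - a])
    try:
        w = np.linalg.solve(M, p - a)
    except np.linalg.LinAlgError:
        return False          # degenerate tetrahedron
    return w.min() >= 0.0 and w.sum() <= 1.0

def carve(tetrahedra, camera, observed_points, samples=20):
    # Keep only tetrahedra that no camera-to-point ray passes through:
    # the space along a sight line to an observed surface point is empty.
    kept = []
    for tet in tetrahedra:
        hit = any(
            in_tetra(camera + t * (q - camera), tet)
            for q in observed_points
            for t in np.linspace(0.05, 0.95, samples)
        )
        if not hit:
            kept.append(tet)
    return kept

# One tetrahedron sits on the sight line to an observed point; one does not.
blocking = np.array([[0.5, -0.5, -0.5], [1.5, -0.5, -0.5],
                     [1.0, 0.5, -0.5], [1.0, 0.0, 0.5]])
clear = blocking + np.array([0.0, 5.0, 0.0])
kept = carve([blocking, clear], camera=np.zeros(3),
             observed_points=[np.array([2.0, 0.0, 0.0])])
print(len(kept))   # 1: only the unobstructed tetrahedron survives
```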

Generating 3D models of real world objects is a common task during development of any augmented reality application. This paper describes how ProFORMA (Probabilistic Feature-based On-line Rapid Model Acquisition), a rapid reconstruction system, has been designed to simplify and speed up this task. ProFORMA uses a fixed video camera to allow on-line reconstruction of objects held in a user's hand. Partial models are generated very quickly and displayed instantly, allowing the user to plan how to manipulate the object's pose in order to generate additional views for reconstruction. We demonstrate how augmented reality can be used to assist the user in view planning, guiding the user to collect new keyframes from desirable views in order to complete and refine the model.

Figure 1: The viewing sphere of the object is discretised into an icosahedron. Unseen faces on the model increase the uncertainty score of icosahedron faces (views) in the direction of the normal, whereas camera views decrease the uncertainty score of icosahedron faces. Icosahedron faces (views) with high uncertainty are coloured red, and low uncertainty green. A 3D arrow is augmented onto the object showing the user which way to rotate the object. Left: (a) First phase of user guidance - the user is guided to continue the current rotation if it will visit views with high uncertainty in the near future. Camera positions of previous keyframes are shown in green. Extrapolated camera positions if the current rotation is continued are shown in cyan. If extrapolated camera positions intersect highly uncertain faces in the next 90 degrees, the user is asked to continue the current rotation, otherwise the second phase of user guidance is initiated. Right: (b) Second phase of user guidance - the user is guided to the most uncertain view (shown in cyan). The user continues to be guided to this face until the uncertainty is below a threshold, at which point the user is then guided to the face which now has highest uncertainty. Once all faces have an uncertainty level below a threshold, reconstruction is complete and keyframe acquisition is halted.
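The view planning in the caption can be sketched as follows (an assumed illustration, not the authors' code: the icosahedron construction is standard, but the uncertainty update rule here is a simplified stand-in for the paper's scoring):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Build the 12 icosahedron vertices: cyclic permutations of (0, ±1, ±phi).
phi = (1 + 5 ** 0.5) / 2
verts = []
for s1 in (-1, 1):
    for s2 in (-1, 1):
        verts += [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s2 * phi, 0, s1)]
verts = np.array(verts)

hull = ConvexHull(verts)                    # 20 triangular faces
centroids = verts[hull.simplices].mean(axis=1)
normals = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)

uncertainty = np.ones(len(normals))         # every view starts fully unseen

def record_keyframe(cam_dir):
    """Lower the uncertainty of faces the camera direction aligns with."""
    align = normals @ (cam_dir / np.linalg.norm(cam_dir))
    uncertainty[:] *= 1.0 - np.clip(align, 0.0, None)

record_keyframe(np.array([0.0, 0.0, 1.0]))  # one keyframe from "above"
best = normals[np.argmax(uncertainty)]      # guide the user toward this view
```

After each keyframe, the face with the highest remaining uncertainty gives the direction the user should rotate the object toward next.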



More details at Qi Pan's homepage: http://mi.eng.cam.ac.uk/~qp202/
« Last Edit: January 25, 2013, 04:28:06 PM by Cubelands »





So it's pretty much videotrace

I had signed up for the beta on that a while ago; it didn't work as well as advertised.



This is seriously all of Cubelands' threads in a nutshell.

As much as I don't like Cubelands, this topic is pretty cool, and that wasn't needed.

This is seriously all of Cubelands' threads in a nutshell.
I don't care if they're copied and pasted.

This stuff is usually cool and interesting.

What about cylindrical/spherical stuff?

-snip-
This is seriously all of Cubelands' threads in a nutshell.
Well put.

[img]http://puu.sh/1SwSz[/img]

This is seriously all of Cubelands' threads in a nutshell.

this is awesome

-snip-
This is seriously all of Cubelands' threads in a nutshell.



sig'd

« Last Edit: January 25, 2013, 04:34:31 PM by Cubelands »

There is a program like this out there. I think I got a beta of it but forgot which email.

You basically take a video of an object, go into the program, draw the lines and rotate them, and you get the textures and a 3D model. It was easy.

Edit: Here's the link.
http://punchcard.com.au/?page_id=46

Gotta sign up and wait like a month before you get your copy of the beta. Also this is more manual. You don't just rotate the object. You draw the lines.
« Last Edit: January 25, 2013, 04:44:07 PM by Blockzillahead »