Posts Tagged ‘augmented reality’

DSCC 2011 @ Caesars Palace in Vegas

We would like to thank everyone we met at DSCC – the Dassault Systemes Customer Conference – in Vegas last week, and we hope they all enjoyed our demos.

At the heart of the DS ecosystem, 3D California presented itself as a specialist in interactive 3D and augmented reality applications and devices.

Attendees could discover 3di6, our brand-new solution designed in collaboration with Immersion, our friends from Bordeaux. 3di6 is a new category of immersive experience room for collaborative and lifelike 3D content. We will communicate more about this new product very soon.

 

At DSCC 2011, 3D California was proud to present, for the first time in the US, a demonstration of LASTER Technologies’ Optical See-Through Glasses – a device ready for a new kind of augmented reality application that gives the end user a hands-free experience while their real vision – not a video feed, their own natural field of view – is enhanced with information and interactive 3D content. Thank you, Zile, for your key participation!

Special thanks to our client Blu Homes, who presented during the DSCC keynotes and mentioned our collaboration on some of their initiatives leveraging their CATIA V6 assets. Blu Homes’ online 3D Configurator is a great interactive application that lets the sales team and consumers virtually build and discover their dream home before purchasing it. www.bluhomes.com

Thank you, Dennis!

The fun thing is: thanks to the picture, a couple of people asked us if CATIA V6 was running on a Mac … but the Blu Homes 3D Configurator does ;-)

Being a sponsor at DSCC 2011 was a great experience, allowing many interactions with DS clients and the DS ecosystem of partners.

We are pleased to share with you these few things that happened in Vegas and that were not supposed to stay in Vegas :-)

Have a great week

3D CALIFORNIA team

Siggraph 2011 – Huge success at demo time!

Dear friends of 3D CALIFORNIA,

Demos of the brand-new AR see-through glasses from Laster Technologies were a huge success at Siggraph 2011 in Vancouver. Some 400 people got to experience a couple of scenarios, and they all enjoyed them a lot.

Professionals’ and students’ reactions were extremely promising and confirm that business uses can range from maintenance operations to entertainment …

Please click on the picture to see the video:

Send us your comments and questions!

Thank you,

3D CALIFORNIA Team

 

Augmented reality see-through glasses at Siggraph 2011

Dear Friends of 3D CALIFORNIA,
We are proud to invite you to discover Laster Technologies’ latest augmented reality products at Siggraph 2011, in Vancouver from Aug 9th to 11th.
Visit us at Cap Digital’s booth #351!
If you are not in Vancouver this week, have a look at www.laster.fr and send a request for information,
or reply to this message with your questions.
We will be pleased to provide you with more information and help you leverage this amazing optical technology for augmented reality solutions.

Cheers,
3D CALIFORNIA Team …

QR Codes and AR markers

We have had several questions lately about the use of QR Codes and how they are similar to, or different from, the usual augmented reality markers. A few points here may help you understand the topic better.

QR Code

QR Codes are used to spread strings, like URLs

What is a QR Code?

Characters in a text can be coded as bits – zeros and ones – that can then be printed in black and white. Following a specific pattern, we can encode a full string of characters as a set of small black and white squares. This is a QR Code (see the example in the picture).

If you want to read a QR Code, some mobile applications will help you do that. Your cell phone camera captures the QR Code and the app outputs a string (usually the URL of a website you may visit). Such an app performs 2D image analysis: it finds the fixed finder patterns in the corners of the QR Code, which are always the same, and deduces the position of all the squares in the grid. There is no 3D computation, only 2D image analysis.
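To make that concrete, here is a minimal Python sketch of that 2D analysis using OpenCV's built-in QR detector; the file name is just a placeholder for any photo of a QR Code.

```python
# A minimal sketch of reading a QR Code with OpenCV's built-in detector.
# "qr_photo.jpg" is a placeholder for any photo containing a QR Code.
import cv2

image = cv2.imread("qr_photo.jpg")
detector = cv2.QRCodeDetector()

# detectAndDecode does pure 2D image analysis: it locates the finder
# patterns, rectifies the grid of squares and decodes the bits back
# into a string (typically a URL). No 3D pose is computed.
text, corners, _ = detector.detectAndDecode(image)

if text:
    print("Decoded string:", text)
else:
    print("No QR Code found")
```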

QR Code or AR Marker?

A QR Code is not an augmented reality (AR) marker. They can look quite similar, but AR markers usually have fewer black and white squares, and those squares are bigger. The aim of an AR marker is not to convey a string. An AR application analyzes, from the camera image, the position and orientation of the marker in 3D. The computation is very different. With a QR Code, we read the values of the black and white squares but we do not assess its position, and we want it to hold still during the computation. With an AR marker, we recognize a known marker among a set of previously learned ones, and we also get its position and orientation in real time as it moves. We then usually play interactive 3D animations in real time according to the marker's exact position.
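To illustrate the 3D side, here is a hedged Python sketch of how a pose could be recovered once the four corners of a known square marker have been found in the image. The corner coordinates, marker size and camera calibration below are made-up placeholders, and the sketch uses a generic perspective-n-point solver rather than any specific AR product's own algorithm.

```python
# A rough sketch of the 3D side of marker tracking: given the four corners
# of a known square marker detected in the camera image, a PnP solver
# recovers the marker's position and orientation relative to the camera.
# All numeric values below are made-up placeholders.
import cv2
import numpy as np

marker_size = 0.10  # marker edge length in meters (assumption)

# 3D corners of the marker in its own coordinate frame (Z = 0 plane)
object_points = np.array([
    [-marker_size / 2,  marker_size / 2, 0],
    [ marker_size / 2,  marker_size / 2, 0],
    [ marker_size / 2, -marker_size / 2, 0],
    [-marker_size / 2, -marker_size / 2, 0],
], dtype=np.float32)

# 2D corners as detected in the current video frame (placeholder pixels)
image_points = np.array([
    [320, 180], [420, 185], [415, 290], [315, 285]
], dtype=np.float32)

# Intrinsic camera parameters (placeholder calibration)
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0,   0,   1]], dtype=np.float32)
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
print("marker rotation:", rvec.ravel(), "translation:", tvec.ravel())
```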

AR Marker

This is an AR Marker with its position outlined in red

Why would a QR Code not be a good AR marker?

A QR Code is usually small, so the camera needs to be close if we want to read the coded string. The QR Code reading algorithm is also very sensitive to movement. Once you have held still for a moment and the coded string is recognized, the QR Code has done its job. You do not want to track it while it moves.

AR markers are usually bigger and can be easily tracked with good augmented reality solutions. You do not have to read or decode them. All you need is to recognize the marker and then track its movements to render interactive 3D animations accordingly, in real time. Recent advanced augmented reality solutions have enabled Markerless Tracking, which does not exactly mean there is no marker at all; rather, it lets us use any image as a marker – the logo of a company, or a picture – instead of a black and white AR marker.

What if I want to use a QR Code?

There are plenty of solutions. For example, you could make your mobile demo downloadable online and use a QR Code to spread its URL, as in the sketch below. The actual demo could then use any other marker to work with.
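Generating such a QR Code only takes a few lines; this sketch assumes the third-party Python `qrcode` package, and the URL is a placeholder, not a real demo address.

```python
# A tiny sketch of the "QR Code as a launcher" idea: encode the download
# URL of the demo in a QR Code image. Uses the third-party `qrcode`
# package; the URL below is a placeholder.
import qrcode

img = qrcode.make("https://example.com/my-ar-demo")
img.save("demo_qr.png")
```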

Thank you again, and we hope you will have plenty of ideas for projects using computer vision and natural interface technologies. And don't forget … Zest your ideas with 3D!

A quick peek behind the curtain: Position detection, “Where are you?” (Part 3)

Hi everyone, and Happy New Year to you from 3D California!

First of all, we would like to thank you all for the tremendous year 2010 we had, and wish you the best for 2011! And because we love 3D, we made a small demo for you using our partner's D'Fusion technology; it is available here. Feel free to try it and tell us what you think of it!

Let’s open our eyes

So, we were in the middle of this series of articles about position detection, and this episode shows how computer vision can be used for that. Here we go!

First things first: what is computer vision? We briefly explained in one of the previous episodes what light and colors are. Our eyes perceive light and colors, and that is mainly what they do. They then send the information to the brain, where light and colors (low-level information) become distances, objects, faces (known or unknown), words, etc. (high-level information).

Computer vision is the field that studies the algorithms a computer needs in order to see, and to see high-level information: to recognize that two objects in two different images are identical, to recognize a bunch of pixels as a tree or a bike, to recognize a face in an image and to know whose face it is in particular, and – as this is the subject of this article – to determine the position of an object inside an image.

Two birds with one stone

So, why should we use computer vision for position detection? Because in most cases we already have all the hardware up and running: we are trying to do augmented reality, and for that we need to add virtual objects to a live video stream, so in most cases we already have a camera. This single piece of hardware allows us to do both reality sampling and position detection. No need for expensive magnetic devices, no need for infrared lights. Just a camera and a computer.

ARToolkit marker

First generation AR marker

A computer vision algorithm can be more or less complicated, but it will usually rely on one thing: the values of the pixels. We have a pixel. Its color, or brightness, can be quantified. So if we replace every pixel by its numerical value, we now have a grid of numbers – a matrix – and mathematics is really good at taking information out of matrices. That's all there is to it: we throw some maths at our digital image, and we get the information out.
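As a tiny illustration of that idea, here is a Python sketch (assuming OpenCV is installed and using a placeholder image file) that turns a frame into exactly such a grid of numbers.

```python
# A minimal sketch of "pixels as a grid of numbers": load a frame,
# convert it to grayscale and treat it as a plain matrix.
# "frame.jpg" is a placeholder for any captured camera image.
import cv2

frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

print(gray.shape)      # e.g. (480, 640): one number per pixel
print(gray[100, 200])  # brightness of a single pixel, 0..255
print(gray.mean())     # the maths can now operate on the whole matrix
```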

So what’s the battle plan? We have an incoming video stream and we want to output the target's coordinates, as fast as possible. There are multiple solutions, and most of them differ in three aspects: Assumptions, Learning and Running. In order to find your favorite teapot, you first have to know what a teapot is and what your favorite teapot looks like, and then you need to look everywhere and figure out whether you can see it. The steps are the same here.

“The least questioned assumptions are often the most questionable”

building facade

Horizontal and vertical lines

The quote is from Paul Broca. The question is: "What do we know about the object we want to search for?" For example, if it is a building, we know that we will probably see a lot of horizontal and vertical lines. If we are looking for an old augmented reality marker, we will see a thick black square with black and white squares inside. With Total Immersion's technology, Markerless Tracking (MLT), only a few assumptions are necessary: we only assume that we will be seeing an image that has no symmetry and that has some contrast in it. So first, we decide which assumptions to keep. The stronger the assumptions, the easier the computation will be, but the more constraints they will create.

“The moment you stop learning, you stop leading”

This quote from Rick Warren surely makes a good introduction. Learning is the phase where we give the algorithm a way to differentiate between any image that fits the assumptions and the image we are specifically looking for. With MLT, for example, it is the step where we give the algorithm a clear view of the target we will be tracking. In this phase, which is usually not real time, we extract features (mid-level information) from images, such as edges, corners, interest points or keypoints.

Keypoints

Keypoints

Keypoints are points that have a special property (usually a mathematical one), and this property is chosen to be stable, which means that when the object moves within your video stream, these properties stay with the object and remain visible. During the learning phase, you learn how the keypoints are positioned relative to one another. And now you are ready for the race.
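As a rough illustration of this learning step, here is a Python sketch using an off-the-shelf feature detector (OpenCV's ORB); this is not Total Immersion's own MLT algorithm, and the target image name is a placeholder.

```python
# A rough sketch of the offline learning phase with a generic detector
# (OpenCV's ORB), standing in for whatever features a real AR engine uses.
# "target.jpg" is a placeholder for the image we want to track later.
import cv2

target = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(target, None)

# Each keypoint has a stable image position; storing these positions and
# their descriptors is, in effect, "learning" the target offline.
for kp in keypoints[:5]:
    print("keypoint at", kp.pt, "size", kp.size)
```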

Running… For president?

So this is it. Now we have a user in front of the camera, and we need to get the position of a target they hold in their hands. So what do we do? We use the algorithm we prepared. If we learned the positions of the corners, we will analyze the image searching for corners. If we learned keypoints, we will look for the keypoints. It is easy to determine that the image we are looking for is indeed in the video stream. The hard part is to determine where. That is where some clever filtering and modeling algorithms like RANSAC come in. RANSAC (RANdom SAmple Consensus) takes some data as input (let's say the points in image A) and a model (let's say a line) whose parameters we want to find (for a line, the parameters would be the intercept and the slope, for example), then looks for the points that fit the model best and completely ignores the points that don't fit it. It then outputs the good points and their model (the blue line in the picture below).

RANSAC

RANSAC for a line model
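Here is a minimal, hand-rolled Python sketch of RANSAC for that line model; the synthetic data, iteration count and inlier threshold are arbitrary assumptions chosen for illustration.

```python
# A minimal, hand-rolled RANSAC sketch for a line model.
# Data, iteration count and threshold are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
# Noisy points on the line y = 0.5 x + 2, plus a few outliers
x = np.linspace(0, 10, 50)
y = 0.5 * x + 2 + rng.normal(0, 0.1, 50)
y[::7] += rng.uniform(3, 6, len(y[::7]))        # the "bad" points

best_inliers, best_model = 0, None
for _ in range(200):
    i, j = rng.choice(len(x), 2, replace=False)  # minimal sample: 2 points
    if x[i] == x[j]:
        continue
    slope = (y[j] - y[i]) / (x[j] - x[i])
    intercept = y[i] - slope * x[i]
    errors = np.abs(y - (slope * x + intercept))  # distance to candidate line
    inliers = np.sum(errors < 0.3)
    if inliers > best_inliers:                    # keep the consensus winner
        best_inliers, best_model = inliers, (slope, intercept)

print("best line (slope, intercept):", best_model, "with", best_inliers, "inliers")
```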

In exactly the same way, given all the keypoints in the image and a model (the keypoints from the learning phase, with their position and orientation as the unknown parameters), RANSAC gives us the proper model (position and orientation) that fits our object's keypoints, while ignoring the background keypoints.
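And here is a hedged end-to-end Python sketch of that idea with standard OpenCV building blocks: detect keypoints in the learned target and in the current frame, match them, and let a built-in RANSAC (inside a homography estimation) keep only the matches that agree on one consistent position. File names and the pixel threshold are assumptions, and a real AR pipeline would go further and recover a full 3D pose.

```python
# A sketch of the running phase with generic OpenCV building blocks:
# ORB keypoints + matching + RANSAC inside findHomography, which keeps
# the matches that agree on one transform and discards background matches.
# File names and the 5.0 px threshold are illustrative assumptions.
import cv2
import numpy as np

target = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)  # learned offline
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)    # current video frame

orb = cv2.ORB_create()
kp_t, des_t = orb.detectAndCompute(target, None)
kp_f, des_f = orb.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_t, des_f)

src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC estimates the transform mapping the target into the frame and
# flags which matches are inliers (object) vs. outliers (background).
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
if H is not None:
    print("inlier matches:", int(inlier_mask.sum()), "of", len(matches))
else:
    print("target not found in this frame")
```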

Conclusion

Phew, that was quite a trip, wasn't it? We did it! We now have the position and orientation of our object in the video stream. And even better: now that we know the position in this image, it will be even easier to find it in the next image, because we know it cannot have moved that much. We can make new assumptions, and new assumptions mean the computation is easier, which means it runs faster!

So those were a few selected ideas of how mathematics and its applications in computer vision can really make your life easier when you are trying to augment reality. And this concludes this three-part "Quick peek behind the curtain" series. I hope you liked it! If you want me to tackle a specific subject on augmented and virtual reality technologies next time, feel free to drop a comment, an email, or anything else.

Curtain

So what's it like behind the curtain?