What is a Depth Map? Explore 3D Space Encoding (2024)

Depth maps contain information about the distance of objects from a specific perspective or reference point (like a camera lens). Each pixel is assigned a value representing its distance from that reference point, which creates a 3D representation of the scene corresponding to its RGB image or virtual scene.

[Image: grayscale depth map of the Stanford Bunny]

Take a look at the image of the Stanford Bunny above. The white pixels represent the parts of the scene that are closest to the camera lens, and the black pixels represent the parts that are furthest away. In this case, the closest parts of the scene are the ears of the bunny. The grayscale gradient in between shows that the head, neck, and body are a bit further from the camera, the legs further still, and the tail of the bunny the furthest before the background, the most distant point of the image.

This is a depth map.

Depth maps don’t have a set standard: in some depth maps the closest point to the camera lens is black rather than white, and others use an RGB color scale instead of grayscale. So keep an open mind here.
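The grayscale convention described above can be sketched in a few lines of NumPy. This is a toy illustration, not any particular tool's format: `depth_m` is a made-up array of distances, and the `near_is_white` flag flips between the two conventions just mentioned.

```python
import numpy as np

# Hypothetical depth values in meters for a tiny 2x3 scene
# (smaller value = closer to the camera).
depth_m = np.array([[0.5, 0.8, 1.2],
                    [2.0, 3.5, 5.0]])

def depth_to_grayscale(depth, near_is_white=True):
    """Normalize raw depth values into 0-255 grayscale pixels."""
    # Scale depths to the 0..1 range.
    norm = (depth - depth.min()) / (depth.max() - depth.min())
    if near_is_white:
        # Bunny-style convention: closest point -> white (255).
        norm = 1.0 - norm
    return (norm * 255).astype(np.uint8)

pixels = depth_to_grayscale(depth_m)
# Closest pixel (0.5 m) becomes 255; farthest (5.0 m) becomes 0.
```

Swapping `near_is_white=False` gives you the inverted convention, where the closest point is black instead.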

If you’re like me, then a simple definition of what something is doesn’t cut it. I need to know why it’s needed and how it’s applied in order to digest a concept fully and integrate it into my understanding of the real and virtual world. So without further ado:

Why do we need depth maps?

There are a few reasons we rely on depth maps when we represent objects in 3D space. For the sake of comprehension, I’ll focus on what I believe is the most important one: telling software where objects are in Z space. Being able to see depth and understand how close or far objects are comes from being a human with stereoscopic vision.

When we look at a photo like this one below, we intuitively understand a number of things:

[Image: photo of hikers on a path at the Grand Canyon]

The path at the bottom is the closest to the lens, followed by my friends ahead of me, with the landscape of the Grand Canyon stretched out to the back of the frame.

We agree upon this because you and I live in the real world and are accustomed to viewing depth with the same stereoscopic vision. To a computer, however, this is in essence just a bunch of differently colored pixels arranged on the XY axis in an image of a specific size on a screen.

So for the computer to see the depth that we see, this is created:

[Image: depth map of the same Grand Canyon photo]

This is depth information for the same image above. The pixels closest to the camera are assigned values that translate to white in the depth map, and the pixels furthest from the camera are assigned values that translate to black, with the grayscale in between depicting the distance from the objects and path at the front of the scene to the back of the scene.

I guess the joke I like to tell is that this is probably how Agent Smith’s great-great-grandfather saw the Matrix when it first booted up.

I’m hilarious.

Okay, but still, why do we need this?

We need the computer to understand this for a few applications.

Realism in Virtual Scenes

One is realistic lighting in 3D renderings. Remember Doom?

[Image: Doom gameplay screenshot]

Great game. But I can also point to it as an example of a 3D game that didn’t have a lot of Z space data to work with in order to render realistic lighting.

[Image: Call of Duty: Modern Warfare gameplay still]

This gameplay still from Call of Duty: Modern Warfare is a good comparison of what we were working with in the 90s versus what we’re working with today.

The way light and shadow cast on an object also depends on its distance from the reference point, the (virtual) camera lens. As objects get further away, they are obscured by more fog and shift in color. This translates to a more realistic-feeling render of a 3D game, because we live in a world of light and shadow (not flat colors, though that could be cool).
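Distance-based fog like this is often modeled with an exponential falloff in the depth value. Here's a minimal sketch of the idea; the function name, the tiny two-pixel "image", and the fog density are all made up for illustration:

```python
import numpy as np

def apply_fog(color, depth, fog_color, density=0.3):
    """Blend each pixel toward the fog color based on its depth.

    Uses the classic exponential fog factor: visibility = exp(-density * depth).
    Nearby pixels keep their own color; distant ones fade into the fog.
    """
    visibility = np.exp(-density * depth)[..., np.newaxis]  # shape (H, W, 1)
    return visibility * color + (1.0 - visibility) * fog_color

# A 1x2 "image": one near pixel and one far pixel, both pure red.
color = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])
depth = np.array([[0.1, 20.0]])      # meters from the camera
fog = np.array([0.6, 0.6, 0.7])      # bluish-gray fog color

out = apply_fog(color, depth, fog)
# The near pixel stays almost fully red; the far pixel ends up
# almost entirely the fog color.
```

A real renderer does this per fragment on the GPU, but the math is the same: the depth value decides how much of the scene color survives.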

Medical Imaging

Depth maps are also used in medical imaging to create 3D models of the human body. A depth map is rendered by taking multiple scans of the body (like CT or MRI slices) and computing the distance of each pixel from the reference point in software. The model generated from this depth map can be used for diagnosis and for planning a patient’s surgery.

Car and Robot Vision

Did you ever think about how cars can drive automatically? How can they see the road? It turns out it’s through a combination of LiDAR and depth imaging technology. They use stereo cameras, LiDAR depth sensors, or a combination of both to generate depth maps and detect other objects and their distance from the car (like other cars, people, poles, etc.).
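Under the hood, stereo cameras recover depth by triangulation: a point that shifts by d pixels between the left and right images lies at depth Z = f · B / d, where f is the focal length in pixels and B is the baseline between the two cameras. A sketch of that formula with made-up numbers (not any real sensor's calibration):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Triangulate depth from stereo disparity: Z = f * B / d.

    disparity_px: per-pixel horizontal shift between left/right images.
    focal_px:     camera focal length in pixels.
    baseline_m:   distance between the two camera centers in meters.
    """
    disparity = np.asarray(disparity_px, dtype=float)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0              # zero disparity = infinitely far
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Illustrative numbers: 700 px focal length, 12 cm baseline.
disparity = np.array([[70.0, 7.0, 0.0]])
depth = disparity_to_depth(disparity, focal_px=700, baseline_m=0.12)
# A 70 px shift works out to 1.2 m; a 7 px shift to 12 m.
# Bigger shift between the two views means a closer object.
```

Real systems first have to find the disparity by matching pixels between the two images (the hard part); the depth conversion itself is just this formula.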

Robots can also see thanks to depth imaging (remember Grandpa Agent Smith?), using depth maps to generate 3D models of their environment and support the recognition of specific objects. If you’re interested in teaching your robot to see, check out this tutorial that uses the OAK-D Lite, an AI-powered depth camera!

How does this apply to me?

Smartphone depth maps

Well, if you use a smartphone, chances are high that you’re already interacting with depth maps.

iPhones and Androids with Portrait features use the depth map to isolate the subject from the background and change the depth of field or lighting on the subject.
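The core idea behind a Portrait-style effect can be sketched from a depth map alone: threshold the depth to get a subject mask, then blur everything outside it. This toy version (made-up names and a naive box blur, nothing like a phone's actual pipeline) shows the principle:

```python
import numpy as np

def portrait_blur(image, depth, threshold, blur_radius=1):
    """Simulate a Portrait-mode effect: pixels closer than `threshold`
    stay sharp; everything further away is replaced by a box-blurred copy."""
    h, w = image.shape[:2]
    blurred = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - blur_radius), min(h, y + blur_radius + 1)
            x0, x1 = max(0, x - blur_radius), min(w, x + blur_radius + 1)
            blurred[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    foreground = depth < threshold        # subject mask from the depth map
    return np.where(foreground[..., np.newaxis], image, blurred)

# Toy 3x3 RGB image: a bright "subject" pixel on a dark background.
image = np.zeros((3, 3, 3))
image[1, 1] = 1.0
depth = np.full((3, 3), 5.0)   # background is 5 m away...
depth[1, 1] = 1.0              # ...the subject is 1 m away
out = portrait_blur(image, depth, threshold=2.0)
# The subject pixel keeps its color; background pixels get smeared.
```

Phones do something far more sophisticated (variable blur strength by depth, edge refinement around hair, and so on), but the depth map is what makes any of it possible.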

There are also apps like Focos, DepthCam, and Record3D, which let you work with photos you’ve already captured, capture live photos/video with depth maps, and view the images in 3D on your phone. Record3D actually has a Looking Glass export option that lets you pull RGB-D video taken in the app straight into Looking Glass Studio to view in 3D.

A couple of years back I wrote a blog post on extracting depth maps from iPhone Portrait Mode photos using ExifTool (it’s a little out of date but still interesting to check out). Back then, I would extract the depth map to clean it up and combine it into an RGB-D image to pull into the Looking Glass display. But we’ll get to more of that below.
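One common way to package an RGB-D image is simply to place the color image and its depth map side by side in a single frame. Here's a minimal sketch of that packing, assuming a left-color, right-depth layout (actual tools may expect a different arrangement):

```python
import numpy as np

def make_rgbd(rgb, depth_gray):
    """Pack a color image and its grayscale depth map into one
    side-by-side RGB-D image (color on the left, depth on the right)."""
    assert rgb.shape[:2] == depth_gray.shape, "images must match in size"
    # Replicate the single-channel depth map into 3 channels so both
    # halves have the same shape, then stitch them horizontally.
    depth_rgb = np.repeat(depth_gray[..., np.newaxis], 3, axis=2)
    return np.hstack([rgb, depth_rgb])

rgb = np.zeros((2, 2, 3), dtype=np.uint8)        # tiny placeholder photo
depth = np.array([[255, 128], [64, 0]], dtype=np.uint8)
rgbd = make_rgbd(rgb, depth)
# The result is twice as wide as the input: shape (2, 4, 3).
```

The nice thing about this format is that it's still just an ordinary image file, so any viewer that understands the convention can reconstruct the 3D scene from it.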

Facebook 3D Photos

In 2018, Facebook rolled out a cool new feature that allows users to upload Portrait Mode photos (Android or iPhone) or RGB-D photos as 3D. The photo is responsive to the gyroscope in your phone and tilts in 3D as you move it.

Looking Glass display

With a Looking Glass, you can pull RGB-D images in and view them as real 3D at a glance.

Even though depth maps are super useful for computers and software applications to understand 3D space, it’s still hard for humans to view this information. As you saw above with the iOS apps that deal with 3D data, you can at most use your finger to rotate the resulting 3D image and view it at different angles.

There are ways to render out a depth map into a point cloud and pull it into 3D software like Blender or a game engine like Unity, but you’re still limited to experiencing your 3D image by panning around and rotating the scene view with your mouse.
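Turning a depth map into a point cloud uses the pinhole camera model: each pixel (u, v) with depth Z back-projects to X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy, where fx, fy, cx, cy are the camera intrinsics. A sketch with illustrative made-up intrinsics:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into an (N, 3) point cloud using the
    pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Illustrative intrinsics for a 4x4 depth map of a flat wall 2 m away.
depth = np.full((4, 4), 2.0)
points = depth_to_point_cloud(depth, fx=500, fy=500, cx=2.0, cy=2.0)
# 16 points, all with Z = 2.0; the pixel at the principal point
# lands on the camera's optical axis at (0, 0, 2).
```

A result like this can be saved as a PLY file and opened in Blender, which is exactly the workflow described above: full 3D data, but still viewed by dragging a mouse around a flat screen.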

In a Looking Glass, you’re actually able to use the 3D data in a visual way. Its software takes the depth map, creates a mesh for the RGB-D image, and stretches that mesh to be viewable in real 3D.

In fact, Looking Glass has a 2D to 3D Converter, which allows you to take any image or photo in PNG or JPG format and convert it into an RGB-D image. Photos your grandfather took, photos on your iPhone, works of art you love, 3D renders, or AI-generated images all become 3D holograms with this simple conversion tool.

The 2D to 3D Converter was made in partnership with Owl3D, an AI-assisted platform that specializes in converting images into depth maps. You can convert images now even if you don’t have a Looking Glass. If you’re into software development, you can convert the result into a point cloud. If you love Facebook, you can use your RGB-D pair to make a Facebook 3D post.

Purchase your first Looking Glass here. If you want to convert images you’ve already collected, generated, or captured, you can purchase conversion credits here and get started making 3D content from the 2D images and photos you already have.

To learn how, follow the tutorial linked below.
