Single-photon LiDAR delivers detailed 3D images at distances up to 1 kilometer
-
[email protected] replied to [email protected]
Cool, but less cool when I remember what dark shit this kind of technology can be used for
-
If humans can do it, why not cameras?
-
[email protected] replied to [email protected]
Reason 1: humans can blink – a dirty camera cannot.
-
[email protected] replied to [email protected]
Hey YouTube, today we're testing this photon detector I found browsing eBay that was sold by some friendly Russians. It's been collecting dust for 10 years, but due to some recent developments it's as relevant as ever! Now I'll be able to see the feds coming from miles away!
-
[email protected] replied to [email protected]
*will be used for
-
[email protected] replied to [email protected]
Can humans actually do it, though? Are humans actually capable of driving a car reasonably well using only visual data, or are we actually using an entire suite of sensors in our heads and bodies to understand our speed and orientation, road conditions, and our surroundings? Driving a car by video link is considerably harder than just driving a car normally, from within a car.
And even so, computers have a long way to go before they catch up with our visual processing. Our visual cortex does a lot of error correction of visual data, using proprioceptive sensors in our heads that silently and seamlessly delete the visual smudges and smears of motion as our heads move. The error correction adjusts quickly to recalibrate things when looking at stuff under water or anything with a different refractive index, or when looking at reflections in a mirror.
And we maintain that flow of visual data by correcting for motion and stabilizing the movement of our eyes to compensate for external motion. Maybe not as good as chickens, but we're pretty good at it. We recognize faulty sensor data and correct for it by moving our heads around obstructions, silently ignoring something that is blocking just one eye, or blinking or rubbing our eyes when tears or water make it hard to focus. We also know when not to trust our eyes (in the dark, in fog, when temporarily blinded by lights), and fall back on other methods of understanding the world around us.
Throw in our sense of balance in our inner ears, our ability to direction find on sounds, and the ability to process vibrations in our seat and tactile feedback on a steering wheel, the proprioception of feeling forces on our body or specific limbs, and we have an entire system that uses much more than visual data to make decisions and model the world around us.
There's no reason why an artificial system needs to use exactly the same type of sensors as humans or other mammals do. And we have preexisting models and memories of what is or was around us, like when we walk around our own homes in the dark. But my point is that we rely on much more than our eyes, processed through an image processing system far more complex than the current state of AI vision. Why hold back on using as much sensor data as possible, to build a system that has good, reliable sensor data of what is on the road?
-
[email protected] replied to [email protected]
Brother it HAD LiDAR, they took it away. Tesla customers now pay more for a worse car
-
And in the sun, strong headlights, rain, dust
-
[email protected] replied to [email protected]
Only.
...1 kilometer only.
Or 1 km so far.
Yeah this is crazy stuff.
-
[email protected] replied to [email protected]
Hello sir, I'm not from the government and would like to show you something off in the distance for about 3 seconds, if you will. Step forward to where I've carefully marked the street with blue painter's tape. Do not smile. Did you see it? No? Good! Well, what it is, it's ah... don't worry about it. Good day, sir. What are you talking about? I asked you to step where? I did no such thing. I'm just a normal person living in the city.
-
[email protected] replied to [email protected]
Imagine what they could do with two photons.
-
[email protected] replied to [email protected]
That level of detail at 1 km is insane. We are so fucked in the machine war.
-
[email protected] replied to [email protected]
No, Tesla never had LiDAR in any of their cars; they had the same regular radar all other cars use, but yes, it has since been disabled in favour of a camera-only solution.
-
[email protected] replied to [email protected]
Do they compete with the 1500 megawatt Aperture Science Heavy Duty Super-Colliding Super Button?
-
[email protected] replied to [email protected]
Umm, y'all, this is serious.
-
[email protected] replied to [email protected]
These scientists gotta chill with the AI and surveillance tech and work on making a refillable toothpaste tube ffs.
-
You don't live west of where you work, I take it?
-
[email protected] replied to [email protected]
Or even phivetons.
-
[email protected] replied to [email protected]
So, keep in mind that single-photon sensors have been around for a while, in the form of avalanche photodiodes and photomultiplier tubes. And avalanche photodiodes are pretty commonly used in LiDAR systems already.
The ones talked about in that article collect about 50 points per square meter at a horizontal resolution of about 23 cm. Obviously that's way worse than what's presented in the phys.org article, but that's also measuring from 3 km away while covering an area of 700 square km per hour (because these systems are used for wide-area terrain scanning from airplanes). With the way LiDAR works, the system in the phys.org article could be scanning with a very narrow beam to get way more data points per square meter.
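For a rough sense of that tradeoff, here's a back-of-the-envelope sketch. The coverage and density figures come from the airborne-survey numbers above; the 1 m² target area is just an illustrative assumption:

```python
# Back-of-the-envelope: airborne wide-area scan vs. a narrow beam on one target.
# Coverage/density figures are the airborne-survey numbers above; the rest is illustrative.

coverage_m2_per_s = 700e6 / 3600   # 700 km^2/h converted to m^2/s
density_pts_per_m2 = 50            # ~50 points per square meter

point_rate = coverage_m2_per_s * density_pts_per_m2
print(f"airborne system: ~{point_rate:.2e} points/s")   # ~9.7e6 points/s

# Point the same detection rate at a single ~1 m^2 target instead of sweeping
# terrain, and you get millions of points per square meter per second --
# which is how a narrow beam produces dense images of a small scene at 1 km.
target_area_m2 = 1.0
print(f"narrow beam: ~{point_rate / target_area_m2:.2e} points/m^2/s")
```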
Now, this doesn't mean that the system is useless crap or whatever. It could be that the superconducting nanowire sensor they're using lets them measure the arrival time much more precisely than normal LiDAR systems, which would give them much better depth resolution. Or it could be that the sensor has much less noise (false photon detections) than the commonly used avalanche diodes. I didn't read the actual paper, and honestly I don't know enough about LiDAR and photon detectors to really be able to compare them.
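To make the timing point concrete: in a time-of-flight system, depth resolution is set by how precisely you can timestamp the photon's arrival, dz ≈ c·Δt/2. A minimal sketch, with ballpark jitter numbers assumed for illustration (not taken from the paper):

```python
# Range resolution from detector timing jitter: dz = c * dt / 2
# (factor of 2 because the pulse travels out and back).
# The jitter values below are ballpark assumptions, not from the paper.

C = 299_792_458  # speed of light, m/s

def range_resolution_m(jitter_s: float) -> float:
    return C * jitter_s / 2

for name, jitter in [("avalanche photodiode (~300 ps)", 300e-12),
                     ("superconducting nanowire (~20 ps)", 20e-12)]:
    print(f"{name}: ~{range_resolution_m(jitter) * 1000:.1f} mm depth resolution")
# ~45 mm vs. ~3 mm: an order of magnitude finer depth slicing from timing alone.
```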
But I do know enough to say that the range and single-photon capability of this system aren't really the special parts of it, if it's special at all.
-
[email protected] replied to [email protected]
I love LiDAR. About a decade ago there was very limited LiDAR terrain data available on USGS, just a couple of counties was all, but it was so detailed you could see the shapes of cars on the streets with it.