How does its obstacle avoidance system work?

Attached: SpotMini Autonomous Navigation.webm (640x360, 2.91M)

how do your eyes work, anon

man, fuck off with this, I want mathematical responses

Localization and mapping with Lidar data.
Path planning algorithm for avoiding obstacles.
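The path-planning half can be as simple as a graph search over the map once you have it. A minimal BFS sketch on a toy 0/1 occupancy grid (a stand-in for whatever planner the real robot uses):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over a 0/1 occupancy grid (1 = obstacle).
    Returns the number of steps to the goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # goal is walled off

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (0, 2)))  # 6
```

Real planners (A*, D*, etc.) add a heuristic and replanning, but the map-then-search structure is the same.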

It uses some sort of LIDAR or the like to send out pulses that bounce back to its sensor. It measures the time each reflection takes to return and uses that to calculate distance, sweeps this 360° to build a map, and moves toward the area with the greatest clear distance in front of it, i.e. where it won't crash into an obstacle.

Measure the time it takes for light or radio to bounce off an object and you've got the distance from it to you.
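That time-of-flight calculation is one line of arithmetic. A minimal sketch (the pulse travels out and back, so halve the round trip):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to an object from a pulse's round-trip time.
    The pulse travels to the target and back, so halve the path."""
    return C * round_trip_seconds / 2.0

# A reflection that returns after ~66.7 ns puts the object ~10 m away.
print(round(tof_distance(66.7e-9), 2))  # 10.0
```

Do this for every angle in the sweep and you have the 360° range map the posts above describe.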

probably coupled with some form of pattern recognition (e.g. stairs)

Proximity sensors measure the height of the surrounding area. This data gets built into a 3D graph. A green bar means it's low enough to be scaled by the dog's legs; red means too high. It will follow the path that has the most even footing (the blue line). It's almost entirely based on hardware sensors.
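The green/red height-grid idea sketches out like this. A toy version, assuming a maximum step height the legs can clear (the real threshold is obviously tuned to the actual hardware):

```python
MAX_STEP = 0.15  # metres the legs can clear -- an assumed figure

def classify_cells(height_grid):
    """Mark each measured cell 'green' (walkable) or 'red' (too high)."""
    return [["green" if h <= MAX_STEP else "red" for h in row]
            for row in height_grid]

grid = [[0.02, 0.05, 0.40],
        [0.03, 0.10, 0.12]]
print(classify_cells(grid))
# [['green', 'green', 'red'], ['green', 'green', 'green']]
```

The planner then just routes through the green cells, preferring the flattest run of them.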

There are bits and pieces of code related to Boston Dynamics out there. Apparently they are using lidar with OpenCV and PCL.
Everybody knows about OpenCV so I won't go into that, but the interesting thing is PCL:
it seems they are generating a point cloud from lidar data and running an analysis on those points to categorize them as either walkable or not.
github.com/openhumanoids/oh-distro
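A crude version of that walkable/not-walkable split, assuming the point cloud is already a list of (x, y, z) tuples. Real PCL pipelines fit a ground plane and estimate surface normals; this stand-in just thresholds height above an assumed flat ground:

```python
def split_walkable(points, ground_z=0.0, max_rise=0.15):
    """Crudely split lidar points into walkable vs obstacle by height
    above an assumed flat ground plane. (PCL does this far more robustly
    with plane fitting and normal estimation.)"""
    walkable, obstacles = [], []
    for x, y, z in points:
        (walkable if z - ground_z <= max_rise else obstacles).append((x, y, z))
    return walkable, obstacles

cloud = [(1.0, 0.0, 0.02), (1.5, 0.2, 0.90), (2.0, -0.1, 0.10)]
w, o = split_walkable(cloud)
print(len(w), len(o))  # 2 1
```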

>It's almost entirely based on hardware sensors.
Sensors are always hardware.

yes, getting up stairs requires another algorithm that allows the robot to traverse it while retaining full stability. thats the cool part with this robot, especially if youve seen the videos where people kick it and it manages to correct its footing to still remain upright and walking

-Use 2 cameras with overlapping FoV
-Detect features in the images (blobs, Harris corners, whatever)
-Use cross-correlation on the features you found to locate each one in both images
-Given the camera focal length and baseline, similar triangles (Thales' theorem) give you how far away it is
-Do this until everything is 3D

Or just time-of-flight ranging with LiDAR, or projecting patterns like the Kinect does.

wrong site idiot

no way you're right

youtube.com/watch?v=aFuA50H9uek

If you build a virtual-reality simulation you can have 'software' sensors, you pedant.

Probably just calculates a threshold height of what is traversable and what isn't.

Plotting out the world like that is quite simple: you probe the world with laser, radar, or ultrasound, then determine with simple tests which surfaces can be walked on or stepped onto/over, and which surfaces are simply obstacles. Probably the hardest bit of the robot's programming is the kinematics required to walk on all sorts of terrain.

>About Spot Mini

>SpotMini is a small four-legged robot that comfortably fits in an office or home. It weighs 25 kg (30 kg if you include the arm). SpotMini is all-electric and can go for about 90 minutes on a charge,
>depending on what it is doing. SpotMini is the quietest robot we have built.

>SpotMini inherits all of the mobility of its bigger brother, Spot, while adding the ability to pick up and handle objects using its 5 degree-of-freedom arm and beefed up perception sensors.
>The sensor suite includes stereo cameras, depth cameras, an IMU, and position/force sensors in the limbs. These sensors help with navigation and mobile manipulation.

Yes way I'm right. Stereo cameras are basically what I said the thing did.


I want a robo doggo now

lidar?

That's a noun, not an adjective.

en.wikipedia.org/wiki/Soft_sensor

Anyone else find those robots' walking style somehow incredibly comfy?

It's a gerund.

Simultaneous localization and mapping, look it up.

It's a combination of classic AI and a neural net.
They use specialised cameras and infrared sensors in tandem.

And how do you "calculate a threshold height of what is traversable and what isn't"?

are you fucking stupid, user?

yeah, I'm kinda stupid, explain
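To spell it out: one simple approach is to compare each candidate step's height against what the leg geometry can physically clear, with some margin held back for balance. A toy sketch, with made-up leg numbers:

```python
LEG_REACH = 0.30  # assumed max vertical leg travel, metres
MARGIN = 0.5      # keep half the travel in reserve for balance

def traversable(step_height: float) -> bool:
    """Can the robot step onto something this much higher than its feet?
    Threshold = leg travel scaled by a safety margin."""
    return step_height <= LEG_REACH * MARGIN

print(traversable(0.10), traversable(0.25))  # True False
```

The threshold itself falls straight out of the kinematics: measure how far a leg can lift while the other three keep the body stable, and anything under that (minus margin) is a "green" cell.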