Channel: video – Robohub

The Year of CoCoRo Video #13/52: Lilycam in nature



The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we will be uploading a new weekly video detailing the latest stage in its development. In this video, the Lily robot gives us an underwater view of a fishing pond.

Lily is an autonomously diving robot. After we attached an underwater camera to it, it became an autonomously diving camera agent. We used this “Lily-Cam” to look into the small fishing ponds at our Zoological Department. The view below the surface offered a fascinating glimpse of the underwater world, including algae forests, fish and other aquatic organisms.

To learn more about the project, see this introductory post, or check out all the videos from the Year of CoCoRo on Robohub.


Quadrotor automatically recovers from failure or aggressive launch, without GPS

Photo credit: Robotics & Perception Group, University of Zurich.

When a drone flies close to a building, it can temporarily lose its GPS signal and position information, possibly leading to a crash. To ensure safety, a fall-back system is needed to help the quadrotor regain stable flight as soon as possible. We developed a new technology that allows a quadrotor to automatically recover and stabilize from any initial condition, without relying on external infrastructure like GPS. The technology allows the quadrotor to be used safely both indoors and out, recovering stable flight after a GPS loss or system failure. And because the recovery is so quick, it even works after an aggressive throw, allowing you to launch a quadrotor simply by tossing it into the air like a baseball.

How it works

Photo credit: Robotics & Perception Group, University of Zurich.

Our quadrotor is equipped with a single camera, an inertial measurement unit, and a distance sensor (TeraRanger One). The stabilization system emulates the human visual system and sense of balance. As soon as a toss or a failure situation is detected, our computer-vision software analyses the images for distinctive landmarks in the environment and uses these to restore balance.

All the image processing and control runs on a smartphone processor on board the drone. The onboard sensing and computation renders the drone safe and able to fly unaided. This allows the drone to fulfil its mission without any communication or interaction with the operator.

The recovery procedure consists of multiple stages. First, the quadrotor stabilizes its attitude and altitude, and then it re-initializes its visual state-estimation pipeline before stabilizing fully autonomously. To experimentally demonstrate the performance of our system, in the video we aggressively throw the quadrotor in the air by hand and have it recover and stabilize all by itself. We chose this example as it simulates conditions similar to failure recovery during aggressive flight. Our system was able to recover successfully in several hundred throws in both indoor and outdoor environments.
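
The staged procedure can be pictured as a small state machine. The following is a minimal sketch, assuming hypothetical sensor inputs and thresholds; the authors' actual controller (Faessler et al., ICRA 2015) is considerably more involved.

```python
# Sketch of the staged recovery logic: attitude first, then altitude,
# then re-initialization of the visual pipeline. All thresholds and
# input names are illustrative assumptions, not the authors' API.
from enum import Enum, auto

class Stage(Enum):
    STABILIZE_ATTITUDE = auto()   # level the vehicle using the IMU only
    STABILIZE_ALTITUDE = auto()   # hold height using the distance sensor
    REINIT_VISION = auto()        # re-initialize the monocular pipeline
    POSITION_HOLD = auto()        # fully autonomous stabilization

def next_stage(stage, attitude_err_rad, climb_rate_ms, vision_ready):
    """Advance the recovery state machine by one control cycle."""
    if stage is Stage.STABILIZE_ATTITUDE and attitude_err_rad < 0.05:
        return Stage.STABILIZE_ALTITUDE
    if stage is Stage.STABILIZE_ALTITUDE and abs(climb_rate_ms) < 0.1:
        return Stage.REINIT_VISION
    if stage is Stage.REINIT_VISION and vision_ready:
        return Stage.POSITION_HOLD
    return stage

# Example: a throw is detected, attitude levels out, stages advance in order.
s = Stage.STABILIZE_ATTITUDE
s = next_stage(s, attitude_err_rad=0.02, climb_rate_ms=1.4, vision_ready=False)
print(s)   # Stage.STABILIZE_ALTITUDE
```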

More info: Robotics and Perception Group, University of Zurich.

References

M. Faessler, F. Fontana, C. Forster, D. Scaramuzza. Automatic Re-Initialization and Failure Recovery for Aggressive Flight with a Monocular Vision-Based Quadrotor. IEEE International Conference on Robotics and Automation (ICRA), Seattle, 2015.

M. Faessler, F. Fontana, C. Forster, E. Mueggler, M. Pizzoli, D. Scaramuzza. Autonomous, Vision-based Flight and Live Dense 3D Mapping with a Quadrotor Micro Aerial Vehicle. Journal of Field Robotics, 2015.



Deep Learning Primer

3D brain scan analysis by applying deep learning algorithms – photo: Scyfer

The technology that unlocks intelligence from big data – deep learning – is explained in this video by Max Welling, a professor at the University of Amsterdam, and a founder of the Dutch deep learning startup Scyfer.

Even though my software and systems background is in demographic data, I found this short video a much-needed and easily understood primer on the emerging science of deep learning.

Peter Asaro: Challenges and approaches to developing policy for robots


As part of the Center for Information Technology Policy (CITP) Luncheon speaker series, Peter Asaro gives a talk on developing policy for robots.

Robotics stands on the cusp of an explosion of applications and widespread adoption. The development and popular use of small UAV drones is already gaining momentum, self-driving cars could be market-ready in a few short years, and the next generation of fully autonomous military drones is in development. Yet the regulatory policies necessary to ensure the social and economic benefits of these technologies are not yet in place. The FAA has struggled to devise operational regulations for small UAV drones, and has not yet addressed the privacy concerns they raise. Google has influenced state legislatures to pass laws permitting self-driving cars, yet the liability issues and insurance regulations remain open questions, as do the safety requirements for these cars to interact with human drivers. And while the United Nations has begun discussions over the possible need to regulate fully autonomous weapons, the development of such systems continues at a rapid pace. I will present my work on some of these issues, and ask whether a more comprehensive regulatory framework might address the questions of ensuring public safety and privacy in the coming robotics revolution.

The Year of CoCoRo Video #14/52: LilyCam in a Lake



The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we will be uploading a new weekly video detailing the latest stage in its development. In this video, the Lily robot takes us beneath the surface of an Austrian lake.

After successfully deploying the “Lily-Cam” in our small ponds, we went further afield. At several lovely spots in Styria, Austria, we looked below the water surface and encountered beautiful, picturesque landscapes, fish and even diving ducks.

To learn more about the project, see this introductory post, or check out all the videos from the Year of CoCoRo on Robohub.

When is an ice cube not an ice cube?


Japanese advertising agency wins award for Suntory Whisky ad campaign using CNC-milled ice cubes and a 3D printing app from Autodesk.

In 2014, Tokyo-based advertising agency TBWAHAKUHODO created an ad for Suntory Whisky featuring the most gorgeous ice cubes you'll ever see. The ad won six awards at this year's Asia Pacific Ad Festival (AdFest) in Thailand.

The ice cubes were carved from blocks of ice using a CNC router, essentially the inverse of a 3D printing technique. The router had to be chilled to -7 degrees Celsius to keep the ice from melting. The creation of these ice sculptures is thus more like 3D milling than 3D printing: rather than building up an object additively, the 3D mill shaves ice away to create the desired shape. Like a 3D printer, though, the mill is connected to a computer and driven by a 3D-printing app from Autodesk, Autodesk 123D.

Is nothing sacred?

Read more

The Year of CoCoRo Video #15/52: TRAILER LilyCam in a wild river



The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we will be uploading a new weekly video detailing the latest stage in its development. In this video, we take Lily-Cam into rough water.

Austria is home not only to beautiful lakes; it also has wild rivers and creeks. After “Lily-Cam” did its job in the lakes, we also threw it into a fast whitewater river. We were lucky to catch it downriver after a few minutes, and the robot survived the adventure. However, our engineers, who also have to constantly maintain and repair the robots, didn’t like the movie. ;-)

To learn more about the project, see this introductory post, or check out all the videos from the Year of CoCoRo on Robohub.

The Year of CoCoRo Video #16/52: JeffCam in a lake



The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we will be uploading a new weekly video detailing the latest stage in its development. In this video a Jeff-Cam takes us on a trip below an Italian lake.

Jeff is much more agile and powerful than Lily, so mounting a camera on top of an autonomous Jeff robot produced an even better autonomous camera agent. After some preliminary tests and tuning in a pool, we went to an Italian lake to see what it looks like down there below the surface.

To learn more about the project, see this introductory post, or check out all the videos from the Year of CoCoRo on Robohub.


CoCoRo: New video series tracks development of collective behaviour in autonomous underwater swarm



The EU-funded Collective Cognitive Robotics (CoCoRo) project comprises the largest autonomous underwater swarm in the world. Following three-and-a-half years of intensive research, CoCoRo’s interdisciplinary team of scientists from universities in Austria, Italy, Germany, Belgium, and the UK built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Swarm members are not only aware of their own surroundings, but also of their own and other vehicles’ experiences. Throughout 2015 – The Year of CoCoRo – the research team will be uploading a new weekly video detailing a new stage in its development and reporting updates here on Robohub. Check out the first ten videos below!

Overview of the CoCoRo system

The swarm consists of 20 Jeff robots (the highlight of the project), 20 smaller (and slower) Lily robots, and a base station at the surface. The setup is shown in a short overview video.

In this simple form of collective self-awareness, the swarm processes information collectively, such that, as a whole, it knows more than any single swarm member. The swarm collects information not only about the environment but also about its own state. As well as staying together as a swarm, it is capable of navigating and diving, searching for sunken targets, and communicating and processing information as a group.

Research highlights

The swarm communicates findings via a self-established bridge to the “world above”

Not only can the swarm members interact with each other, the swarm can also communicate its findings and inner state to the world above. It does this by establishing, self-maintaining and even repairing a bridge between the swarm, located on the sea bed, and a human-controlled floating station at the water’s surface.

During the project, many algorithms were developed and tested with the CoCoRo prototype swarm. Future applications include environmental monitoring and oceanic search missions.

Three-layer decentralized swarm design

When designing the swarm, the scientists followed the KISS principle: “keep it simple, stupid.” A decentralized approach was chosen for the CoCoRo project because a swarm has no single point of failure. Swarms are robust, flexible and scalable systems that adapt easily to changing environments. Additionally, the technical and cost requirements for a single robot are lower than in non-swarm systems.

Three different layers were implemented for the cognition: individual, group and global. Single AUVs collect information, local groups share and compare it and, finally, the whole swarm makes collective decisions.
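
As a rough illustration of how the three layers stack (a sketch under assumed data structures, not CoCoRo's actual algorithms), consider a binary target-detection decision:

```python
# Illustrative three-layer decision flow: individual observations,
# group-level agreement, swarm-level decision. The detection threshold
# and majority rule are assumptions for illustration only.

def individual_layer(observation):
    """An AUV reduces raw sensing to one local bit: target seen or not."""
    return observation > 0.5          # assumed detection threshold

def group_layer(group_observations):
    """A local group shares members' bits and computes its agreement ratio."""
    bits = [individual_layer(o) for o in group_observations]
    return sum(bits) / len(bits)

def global_layer(groups, quorum=0.5):
    """The whole swarm decides collectively from the group-level ratios."""
    ratios = [group_layer(g) for g in groups]
    return sum(ratios) / len(ratios) > quorum

groups = [[0.9, 0.7, 0.2], [0.6, 0.4, 0.8], [0.1, 0.3, 0.2]]
print(global_layer(groups))   # collective decision: is the target present?
```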

Bio-inspired algorithms build core functions

While designing the software, the scientists focused on bio-inspired algorithms because they are known to be very flexible and robust. The focal organisms were slime molds, fireflies, honeybees, cockroaches and fish. The algorithms were programmed in a modular way: self-organizing cognition algorithms were merged with self-organizing motion algorithms inspired by various animals. In this way, entirely new algorithms emerged.

Three types of AUVs communicate via a novel multichannel underwater network

A swarm of 41 prototype AUVs was built for the CoCoRo project, consisting of three different types that range in size from a human hand to a man’s foot. Depending on their activity, the robots run for two to six hours without requiring a charge. A unique feature of the CoCoRo project is that sensors were combined in ways never used before in underwater robotics. These combinations were necessary to build an autonomous underwater swarm that coordinates mainly through simple signalling between nearest neighbours. In this way, a totally new heterogeneous underwater swarm system was created, consisting of three different types of AUVs: “Jeff” robots that search the sea bed; a base station on the water’s surface (connected to humans); and “Lily” robots that bridge and relay information between the Jeff robots and the base station.

Check out our other videos below!

Introducing Jeff

The Jeff robots are extremely agile and can resist water currents of 1 m/s. They have autonomous buoyancy control, lateral and vertical motion with optimized propellers, and a rich set of sensors. For all these functions, novel energy-saving methods were implemented to guarantee autonomy and durability.

Jeff in turbulent waters

Jeff’s body shape and actuation also allow it to operate in turbulent waters, as tested in the following video with a remote-controlled Jeff, purposely driven into the most turbulent areas to see how it would be affected by the water currents.

Feeding Jeff with magnets

Even in turbulent water, Jeff can be controlled with enough precision to reach a specific point in 3D space. This is demonstrated by holding a small magnet in the water for Jeff to pick up.

The Jeff swarm explores its habitat

The researchers have produced 20 Jeff robots, able to swarm out and search complex habitats in parallel, as shown in the following video.

Collective search by Jeff and Lily Robots

An important aspect of the project is that all three types of robots help each other in the performance of their collective task. To achieve this, the scientists took inspiration from nature and combined several algorithms to generate a new collective program for the whole swarm. An example of this is their collective search for a magnetic (metallic) target on the sea bed, shown in a pool-based scenario. In this setting, the target is marked by a metal plate and some magnets, and several blocking objects, surrogates for debris and rocks, produce a structured habitat. The Jeff robots on the ground first have to spot the target by searching while monitoring their magnetic sense. After the first robot finds the target, it attracts others with blue-light signals. The aggregation of Jeff robots then summons a swarm of Lily robots at higher water levels, which serve to pinpoint the site to humans.

Swarm-size awareness exhibited by Lily robots

It’s important for a swarm of robots to know its own size. In CoCoRo, this was achieved by a novel algorithm inspired by slime mold amoebas; in this case, however, the chemical signals the amoebas exchange were replaced with blue-light blinks. These are transmitted by the robots to their neighbours and propagated to other robots in a wave-like manner. This behaviour was first tested on the smaller Lily robots.

Swarm-size awareness exhibited by Jeff robots

After the algorithm was successfully tested on Lily robots, it was further polished to improve its reaction speed and prediction quality before it was implemented in Jeff robots, as shown by the following video.

Flocking by slime mold

The slime-mold-inspired algorithm doesn’t just allow the robots to know the size of the swarm. Combined with directed motion, it can also be used to generate slime-mold-like flocks of underwater robots.

Emergent taxis of robots

After adding an environmental light sensor, this bio-mimicking collective behaviour turns into something the scientists call “emergent taxis”: the swarm as a whole moves up a light gradient even though individual members can’t perceive the gradient, each having only a very rough sensor impression of its local area. The swarm becomes a kind of moving compound eye, with all the robots observing their local environment and influencing each other. Finally, an algorithm designed for counting swarm members transforms it into an acting organ, what might even be called a “super-organism”.

Outlook

The CoCoRo project developed and tested a whole set of “enabling technologies”, mostly in pools but sometimes in out-of-lab conditions. When the project ended in late 2014, it was clear to the research team that these technologies should be further developed and brought out of the lab. As a next step in their long-term roadmap, the research team (with additional partners) started a new project called “subCULTron” to develop the next generation of swarm robots and apply them in areas of high impact: fish and mussel farms and the Venice lagoon.

New footage of CoCoRo available on YouTube each week

Over the course of the project, the CoCoRo team developed an enormous variety of swarm functionalities, which were presented in several demonstrators at the project’s final review. A lot of footage was recorded and many videos were produced. These films not only give a deep insight into the different features of the CoCoRo swarm as it evolved, but also provide a behind-the-scenes glimpse into the scientific work and development that went into the project. The CoCoRo team will be uploading a new video to YouTube each week until the end of the year to celebrate “The Year of CoCoRo” – stay tuned to Robohub for regular updates.

Links

CoCoRo Homepage
CoCoRo on Youtube
CoCoRo on Facebook



The Year of CoCoRo Video #17/52: Lily confinement by blue light



The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we will be uploading a new weekly video detailing the latest stage in its development. In this video we use blue-light blinks to keep the swarm together and in the vicinity of the moving base station.

Thanks to this “confinement”, the radio-controlled base station can pull a whole swarm of Lily robots behind itself like a tail. Confining the robots to specific areas is important in larger bodies of water because the swarm normally requires a minimum level of connectivity among agents to work efficiently, which is only achieved above a critical minimum swarm density. Without keeping the robots in a controlled area, robots could get lost and the swarm density could fall below this critical value. Confinement was therefore identified as a critical functionality.
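
A back-of-the-envelope sketch shows why density matters (all numbers below are hypothetical, not from the project): in a 2D approximation, a robot's expected neighbour count is the swarm density times the communication footprint, and once that count drops well below one, the swarm fragments.

```python
# Expected neighbour count vs. operating area, for a fixed swarm size
# and signalling range. Swarm size, range, and areas are illustrative
# assumptions chosen only to show the trend.
import math

def expected_neighbors(num_robots, area_m2, comm_range_m):
    """Mean neighbour count for robots scattered uniformly over an area
    (2D approximation of the communication footprint)."""
    density = (num_robots - 1) / area_m2        # potential partners per m^2
    return density * math.pi * comm_range_m ** 2

# Hypothetical numbers: 20 Lily robots, 0.5 m signalling range.
for area in (10.0, 50.0, 200.0):                # m^2 of water
    print(f"{area:6.0f} m^2 -> {expected_neighbors(20, area, 0.5):.2f} neighbours")
```

Without confinement the effective area grows as robots drift apart, the neighbour count collapses, and the swarm loses the connectivity it needs.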

To learn more about the project, see this introductory post, or check out all the videos from the Year of CoCoRo on Robohub.

 

The Year of CoCoRo Video #18/52: Confining Jeff robots with an electric field


The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we will be uploading a new weekly video detailing the latest stage in its development. This video shows how we used an underwater electric field to confine the robots to a specific area around the base station so that they don’t get lost.

We use a submerged electrode below the CoCoRo surface station to generate a pulsing electric field in the water around the station. The Jeff robots have electrodes on their outer hulls that allow them to sense such fields. This way we can confine the robots to a specific area (volume) around the base station, which is important for keeping the swarm together in the water; otherwise robots can get lost. We first tested this system in a pool, as shown in this video.

To learn more about the project, see this introductory post, or check out all the videos from the Year of CoCoRo on Robohub.

Multiple tethered quadrotors performing high-g, high-speed formation maneuvers 


This video shows tethered quadrocopters flying steadily together at high speeds exceeding 50 km/h in a confined space. With the tether exerting more than 13 g of centripetal force, multiple quadrocopters are able to fly 1.7 m radius circular trajectories in formation across different orientations in space, and then successfully perform a coordinated braking maneuver.
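
As a quick consistency check on the quoted figures (our arithmetic, not from the article): with 13 g of centripetal acceleration on a 1.7 m radius circle, the implied speed is about 53 km/h, which matches the "exceeding 50 km/h" claim.

```python
# Centripetal acceleration: a = v^2 / r  =>  v = sqrt(a * r)
import math

G = 9.81                     # m/s^2
radius = 1.7                 # m, circle radius from the article
accel = 13 * G               # centripetal acceleration from the article

v = math.sqrt(accel * radius)
print(f"{v:.1f} m/s = {3.6 * v:.0f} km/h")   # ~14.7 m/s, ~53 km/h
```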

The testbed allows the quadrocopter’s high speed flight behavior to be characterized in order to determine drag characteristics, propeller efficiency, and the physical limits of the machine. It is also being used to safely develop high-speed maneuvers such as emergency braking.

Note that it is possible to remove the central pole by balancing the forces acting on the strings; this could be then used in performance settings, possibly enhanced by light and sound effects.

This research was conducted at the Flying Machine Arena at ETH Zurich.

Reference:

M. Schulz, F. Augugliaro, R. Ritz, D’Andrea. High-speed, Steady Flight with a Quadrocopter in a Confined Environment Using a Tether. IROS 2015, submitted.


The Year of CoCoRo Video #19/52: Electric confinement in the harbour



The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we will be uploading a new weekly video detailing the latest stage in its development. This video shows how we used an underwater electric field in Livorno harbour to field-test confining the robots to a specific area around the base station so that they don’t get lost.

After testing the electric-field confinement of the Jeff robots around the base station in our pool, we went out to Livorno harbour to test it under out-of-the-lab conditions. Although our CoCoRo prototype robots were not designed to operate in salty ocean water, which has significantly different electrical conductivity from freshwater, the electrical confinement worked quite well there.

To learn more about the project, see this introductory post, or check out all the videos from the Year of CoCoRo on Robohub.

The Year of CoCoRo Video #20/52: Autonomous docking with Jeff robots



The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we will be uploading a new weekly video detailing the latest stage in its development. This video shows the Jeff robots docking and undocking autonomously with the surface station.

To achieve long-term energy autonomy with our CoCoRo system, we constructed a docking/undocking mechanism for the Jeff robots on our surface station. This functionality was first tested with a fixed-mounted docking device, and then with a docking device floating freely in our pool. The autonomous docking worked exceptionally well under all conditions.

To learn more about the project, see this introductory post, or check out all the videos from the Year of CoCoRo on Robohub.

The Year of CoCoRo Video #35/52: Relay chain communication


The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we’ll be uploading a new weekly video detailing the latest stage in its development. This video shows a set of experiments investigating the capability of the relay chain (formed by Lily robots) to transmit (relay) information between two spatially separated places.

At one of these places we trigger a special RF (radio frequency) pulse to be emitted by a robot. Neighbouring robots that receive this pulse send out a similar pulse, relaying the signal along the chain. To control the direction in which the signal spreads, each robot observes a refractory period after relaying, during which it is unreceptive to the relayed signal. The system is inspired by slime mold amoebas and giant honeybees, and serves the underwater communication purpose very well.
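
A toy simulation of this relay rule, under assumed parameters (a chain of ten robots, a two-tick refractory period), shows why the refractory period matters: each robot re-emits a received pulse once, and while refractory it ignores the echo, so the pulse travels away from its origin instead of bouncing back.

```python
# Space-time diagram of a pulse travelling along a relay chain.
# Chain length, refractory length, and tick count are illustrative
# assumptions, not CoCoRo's actual protocol parameters.
def relay_chain(num_robots=10, refractory=2, ticks=12):
    cooldown = [0] * num_robots
    active = {0}                          # robot 0 emits the initial pulse
    for t in range(ticks):
        print(f"t={t:2d}  " + "".join("*" if i in active else "."
                                      for i in range(num_robots)))
        nxt = set()
        for i in active:
            cooldown[i] = refractory      # just relayed: ignore echoes
            for j in (i - 1, i + 1):      # nearest neighbours hear the pulse
                if 0 <= j < num_robots and cooldown[j] == 0:
                    nxt.add(j)
        cooldown = [max(0, c - 1) for c in cooldown]
        active = nxt

relay_chain()   # the '*' marches left to right and never bounces back
```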


The Year of CoCoRo Video #36/52: Relay swarm


The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we’ll be uploading a new weekly video detailing the latest stage in its development. Over the last four posts we demonstrated how the robots use a relay chain to communicate between the sea bed and the surface station. The following two videos show an alternative communication principle: the “relay swarm” scenario uses a swarm of Lily robots performing random walks in 3D to transmit information about the status of the search swarm of Jeff robots on the ground.

This first video explains the scenario in a computer animation:

The second video shows the real-world experiments performed in the “relay swarm” scenario. First, Jeff robots search the ground of a fragmented habitat for a magnetic target. As soon as a Jeff robot finds the target, it signals this locally with blue-light LEDs. Lily robots roaming the habitat can pick up the signal from this Jeff robot, and the information can also spread from Lily robot to Lily robot as they meet, like an infectious process. Finally, Lily robots inform the surface station that the Jeff robot on the ground has found an interesting target. Future extensions foresee a second phase after the surface station is informed: a second signal spreads from the surface station through the Lily robots back to the Jeff robot on the ground, ultimately making the Jeff robot rise to the surface above the found target.
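
The infectious spread of the message can be pictured with a toy gossip model (an illustrative sketch, not CoCoRo's actual protocol): robots meet at random, and any meeting with an informed robot transfers the single-bit message.

```python
# Gossip-style spread of one bit of information through randomly
# meeting Lily robots. Swarm size and meeting rate are assumptions.
import random

def spread(num_lilies=20, meetings_per_tick=3, seed=1):
    random.seed(seed)
    informed = {0}                      # robot 0 saw the Jeff robot's signal
    ticks = 0
    while len(informed) < num_lilies:
        for _ in range(meetings_per_tick):
            a, b = random.sample(range(num_lilies), 2)
            if a in informed or b in informed:
                informed |= {a, b}      # the meeting transfers the message
        ticks += 1
    return ticks

print(spread())   # ticks until every Lily robot carries the news
```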

 

Interview with Danica Kragic


In this wide-ranging interview, Danica Kragic, professor at the Royal Institute of Technology (KTH), Sweden, and General Chair of ICRA 2016, discusses the nature of collaborative research, shares her opinions on the robotics projects financed by Horizon 2020, speculates on the reasons behind the different robotics research agendas in the US, the EU and Asia, and tells us why she chooses to publish her team’s research in open-access publications.

Kragic underlines the importance of good organisation when it comes to promoting interdisciplinarity among scientific fields, and of incentives to attract students to robotics, and shares her views on why robotics is so often portrayed negatively in the media. Finally, will robots ever be as intelligent as humans? Watch the latest IJARS video to find out.

 

Danica Kragic is a Professor at the School of Computer Science and Communication at the Royal Institute of Technology, KTH. She received her MSc in Mechanical Engineering from the Technical University of Rijeka, Croatia, in 1995, and her PhD in Computer Science from KTH in 2001. She has been a visiting researcher at Columbia University, Johns Hopkins University and INRIA Rennes, and is the Director of the Centre for Autonomous Systems. She received the 2007 IEEE Robotics and Automation Society Early Academic Career Award, is a member of the Royal Swedish Academy of Sciences and the Young Academy of Sweden, and holds an Honorary Doctorate from Lappeenranta University of Technology. She chaired the IEEE RAS Technical Committee on Computer and Robot Vision and served as an IEEE RAS AdCom member. Her research is in the areas of robotics, computer vision and machine learning. In 2012, she received an ERC Starting Grant. Her research is supported by the EU, the Swedish Foundation for Strategic Research and the Swedish Research Council.

Kragic D. IJARS Video Series: Danica Kragic Interview with the International Journal of Advanced Robotic Systems [online video]. International Journal of Advanced Robotic Systems, 2015, 12:V7. DOI: 10.5772/61490

Robots in Depth: Gregory Dudek on field robotics


Robots in Depth is a new video series featuring interviews with researchers, entrepreneurs, VC investors, and policy makers in robotics, hosted by Per Sjöborg. In this first episode, Per speaks to Gregory Dudek, Research Director of the McGill Mobile Robotics Lab, about field robotics. They discuss air, surface and underwater vehicles, and review challenges and best practices for using field robots, both individually and as a collaborative team.

 

Robots in Depth is recorded at different robotics events around the world. You can support Robots in Depth on Patreon.

Air and ground robot collaborate to map and safely navigate unknown, changing terrain


This video shows how a robot team can work together to map and navigate toward a goal in unknown terrain that may change over time. Using an onboard monocular camera, a flying robot first scouts the area, creating both a map of visual features for simultaneous localization and a dense elevation map of the environment. A legged ground robot then localizes itself against the global map and uses the elevation map to plan a traversable path to the goal.

While the legged robot follows the planned path, absolute pose corrections are fused with its onboard state estimation, and the elevation map is continuously updated with distance measurements from an onboard laser range sensor. This allows the legged robot to navigate safely towards its goal while taking into account any changes in the environment.
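
The division of labour can be pictured with a toy planner (an illustrative sketch, not the authors' implementation): treat the aerial robot's elevation map as a grid, and let the ground robot search for a path whose per-step height change stays within what its legs can handle.

```python
# Breadth-first path search over an elevation grid, rejecting moves whose
# height jump exceeds a (hypothetical) maximum step height for the walker.
import numpy as np
from collections import deque

def plan(elev, start, goal, max_step=0.15):
    """Return a list of grid cells from start to goal, or None."""
    rows, cols = elev.shape
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:        # walk the parent links back
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in prev
                    and abs(elev[nr, nc] - elev[r, c]) <= max_step):
                prev[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None                            # no traversable path

elev = np.zeros((20, 20))
elev[8:12, 5:15] = 1.0                     # an untraversable wall to skirt
print(plan(elev, (0, 0), (19, 19)))
```

In the real system the map is re-planned continuously as the laser updates the elevation grid, so the path adapts to changes the aerial scout never saw.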

More info: http://leggedrobotics.ethz.ch

This work was published as:
P. Fankhauser, M. Bloesch, P. Krüsi, R. Diethelm, M. Wermelinger, T. Schneider, M. Dymczyk, M. Hutter, and R. Siegwart, “Collaborative Navigation for Flying and Walking Robots,” in IEEE International Conference on Intelligent Robots and Systems (IROS), 2016.

Robots in Depth: Melonee Wise on building robots, and robot companies


Robots in Depth is a new video series featuring interviews with researchers, entrepreneurs, VC investors, and policy makers in robotics, hosted by Per Sjöborg. In this interview, Per talks to Melonee Wise, lifelong robot builder and developer, and CEO of Fetch Robotics.

Melonee shares how she first got into building things at a young age, and how that led to studying mechanical engineering and leaving her PhD project behind to become the second employee of Willow Garage. She shares some personal anecdotes from the first few years at Willow Garage, including successes like the PR2 as well as some less successful moments.

Melonee also gives her perspective on the development phase robotics is in now and what the remaining challenges are. Related to that, she discusses what is feasible to deliver in the next five years vs. what her dream robot would be.

You can support Robots in Depth on Patreon. Check out all the Robots in Depth videos here.

 
