Channel: video – Robohub

ShanghAI Lectures 2012: Lecture 9 “Ontogenetic development”


In the 9th part of the ShanghAI Lecture series, we look at ontogenetic development as Rolf Pfeifer talks about the path from locomotion to cognition. This is followed by two guest lectures: the first by Ning Lan (Shanghai Jiao Tong University, China) on cortico-muscular communication in the nervous system, the second by Roland Siegwart (ETH Zurich) on the design and navigation of robots with various modes of locomotion.

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence run by Rolf Pfeifer and organized by me and partners around the world.

 

Ning Lan: Cortico-Muscular Communication of Movement Information by Central Regulation of Spindle Sensitivity

Ning Lan is a professor at Shanghai Jiao Tong University in China. He tells us about cortico-muscular communication in the nervous system.

 

Roland Siegwart: Design and Navigation of Wheeled, Running, Swimming and Flying Robots

Roland Siegwart is a professor at ETH Zurich and the director of the Autonomous Systems Laboratory.

Robots are rapidly evolving from factory workhorses, which are physically bound to their work-cells, into increasingly complex machines capable of performing challenging tasks such as search and rescue, surveillance and inspection, planetary exploration, and autonomous transportation of goods. This requires robots to operate in unstructured, unpredictable environments and on varied terrain. This talk will focus on design and navigation aspects of wheeled, legged, swimming and aerial robots operating in complex environments.

Siegwart presents wheeled inspection robots designed to crawl into machines and take various measurements, quadruped walkers that exploit natural dynamics and series elastic actuation, swimming robots, and autonomous micro-helicopters used to inspect cluttered, GPS-denied cities or narrow indoor environments. He also presents a small fixed-wing airplane capable of staying in the air indefinitely thanks to its solar power system.

Related links:


ShanghAI Lectures 2012: Lecture 10 “How the body shapes the way we think”


This concludes the ShanghAI Lecture series of 2012. After a wrap-up of the class, we announce the winners of the EmbedIT and NAO competitions and end with an outlook on the future of the ShanghAI Lectures.

Then there are three guest lectures: Tamás Haidegger (Budapest University of Technology and Economics) on surgical robots, Aude Billard (EPFL) on how the body shapes the way we move (and how humans can shape the way robots move), and Jamie Paik (EPFL) on soft robotics.

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence run by Rolf Pfeifer and organized by me and partners around the world.

 

Tamás Haidegger: Human Skills for Robots: Transferring Human Knowledge and Capabilities to Robotic Task Execution in Surgery

Almost 90 years ago, the idea of telesurgery was born, along with the initial concept of robots. From the early 1970s on, researchers focused on robotic telepresence to empower surgeons to treat patients at a distance. The first systems appeared over 20 years ago, and robotic surgery has quickly become a standard of care for certain procedures—at least in the USA. Over the decades, the control concept has remained the same: a human surgeon guides the robotic tools based on real-time sensory feedback. However, from the beginning of this development, the more exciting (and sometimes frightening) questions have been linked to machine learning, AI and automated surgery. In the true sense of automation, there has so far been only a single, unconfirmed report of a robotically planned and executed surgery, despite the fact that many research groups are working on the problem. This talk introduces the major efforts currently undertaken in centers of excellence around the globe to transfer the incredibly diverse and versatile human cognition into the domain of surgical robotics.

References

  • P. Kazanzides, G. Fichtinger, G. D. Hager, A. M. Okamura, L. L. Whitcomb, and R. H. Taylor, “Surgical and Interventional Robotics: part I,” IEEE Robotics and Automation Magazine (RAM), vol. 15, no. 2, pp. 122–130, 2008.
  • G. Fichtinger, P. Kazanzides, G. D. Hager, A. M. Okamura, L. L. Whitcomb, and R. H. Taylor, “Surgical and Interventional Robotics: part II,” IEEE Robotics and Automation Magazine (RAM), vol. 15, no. 3, pp. 94–102, 2008.
  • G. Hager, A. Okamura, P. Kazanzides, L. Whitcomb, G. Fichtinger, and R. Taylor, “Surgical and Interventional Robotics: part III,” IEEE Robotics and Automation Magazine (RAM), vol. 15, no. 4, pp. 84–93, 2008.
  • C. E. Reiley, H. C. Lin, D. D. Yuh, G. D. Hager. “A Review of Methods for Objective Surgical Skill Evaluation,” Surgical Endoscopy, vol. 25, no. 2, pp. 356–366, 2011.

 

Aude Billard: How the body shapes the way we move and how humans can shape the way robots move

In this lecture Aude Billard argues that it is advantageous for robots to move with dynamics that resemble those of natural bodies, even if the robots do not resemble humans in their physical appearance (e.g. industrial robots). This makes their motion more predictable for humans and hence makes interaction safer. She then briefly presents current approaches to modeling the dynamics of human motion in robots.

A survey of issues on robot learning from human demonstration can be found at:
http://www.scholarpedia.org/article/Robot_learning_by_demonstration
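To give a flavor of what such dynamics-based motion models look like, here is a minimal sketch of a point-attractor dynamical system in Python. It is an illustration under our own assumptions (a hand-coded linear velocity field and gain), not Billard's actual approach, which learns the velocity field from human demonstrations:

```python
# Toy point-attractor dynamical system for generating robot motion.
# Illustrative sketch only: real learning-from-demonstration systems fit
# the velocity field f(x) to recorded human motions instead of hard-coding it.
import numpy as np

def velocity_field(x, goal, k=4.0):
    """Velocity command that always points toward the goal, so the motion
    converges smoothly even if the robot is perturbed mid-execution."""
    return k * (goal - x)

def simulate(start, goal, dt=0.01, steps=400):
    x = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    trajectory = [x.copy()]
    for _ in range(steps):
        x = x + dt * velocity_field(x, goal)
        trajectory.append(x.copy())
    return np.array(trajectory)

# End-effector position converges from rest to the goal along a smooth path.
traj = simulate(start=[0.0, 0.0, 0.0], goal=[0.5, 0.2, 0.3])
print(traj[-1])  # ~ [0.5, 0.2, 0.3]
```

Because the generated motion is a state-dependent velocity field rather than a fixed time-indexed trajectory, a perturbed robot simply flows back toward the goal, which is part of what makes such motion predictable to humans.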

 

Jamie Paik: SOFT Robot Challenge and Robogamis

Making a WALL-E robot


We take you now to sunny Southern California, where a small group of enthusiasts has constructed a very realistic, Arduino-based replica of Pixar’s WALL-E, entirely from custom-fabricated parts.

The beloved WALL-E robot was just computer-generated graphics in the Pixar movie, but fans have spent years trying to bring him to life. We visit Mike McMaster’s workshop to see his incredible life-size WALL-E, a remote-controlled robot that lives among an R2-D2 droid and other pets on Mike’s orange farm.

Ryan Calo on spyware for your brain


Ryan Calo discusses how researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today’s gamers.

Call for robot holiday videos!


Our robot holiday videos for 2012 were such a hit that we’ve decided to do it again this year. Send us a link (info[at]robohub.org) to your new holiday- or New-Year-themed robot video between now and December 30 – we will feature the best ones over the holiday season.

VIDEO Quadrocopter failsafe algorithm: Recovery after propeller loss


 

The video in this article shows an automatic failsafe algorithm that allows a quadrocopter to gracefully cope with the loss of a propeller. The propeller was mounted without a nut and eventually vibrated itself loose. The failure is detected automatically by the system, after which the vehicle recovers and returns to its original position. Finally, the vehicle executes a controlled, soft landing on the user’s command.

The failsafe controller uses only hardware that is readily available on a standard quadrocopter, and could thus be implemented as an algorithm-only upgrade to existing systems. Until now, the only way a multicopter could survive the loss of a propeller (or motor) was to have redundancy (e.g. hexacopters, octocopters). However, this redundancy comes at the cost of additional structural weight, reducing the vehicle’s useful payload. Using this technology, (more efficient) quadrocopters can be used in safety-critical applications, because they retain the ability to gracefully recover from a motor/propeller failure.


(A) shows the quadrocopter in normal operation. In (B) the propeller detaches due to vibrations, and the quadrocopter starts pitching over in (C) – (E). In (F) the vehicle has regained control, and is flying stably.

The key functionality of the failsafe controller is a novel algorithm that I developed as part of my doctoral research at the Institute for Dynamic Systems and Control at ETH Zurich. This new approach allows such a vehicle to remain in flight despite the loss of one, two, or even three propellers. Having lost one (or more) propellers, the vehicle enters a continuous rotation — we then control the direction of this axis of rotation, and the total thrust that the vehicle produces, allowing us to control the vehicle’s acceleration and thus position.
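To make the idea concrete, here is a minimal sketch of such a reduced-attitude position controller in Python. The gains, mass and structure are our own illustrative assumptions, not the patent-pending algorithm itself:

```python
# Simplified sketch of position control with a spinning, propeller-damaged
# quadrocopter: yaw is given up, and only the direction of the spin axis
# plus the total thrust are commanded. All numbers are illustrative.
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, world frame, z up

def desired_force(pos, vel, pos_ref, mass=0.5, kp=6.0, kd=4.0):
    """PD position controller: total force vector the vehicle must produce."""
    a_cmd = kp * (pos_ref - pos) - kd * vel  # commanded acceleration
    return mass * (a_cmd - GRAVITY)          # thrust must also cancel gravity

def spin_axis_and_thrust(pos, vel, pos_ref):
    """Reduce the command to the two quantities that remain controllable:
    the direction of the axis the vehicle spins about, and total thrust."""
    f = desired_force(pos, vel, pos_ref)
    thrust = float(np.linalg.norm(f))
    axis = f / thrust  # unit vector the spin axis should be steered onto
    return axis, thrust

# Hovering 1 m below the setpoint: the controller asks for extra upward thrust.
axis, thrust = spin_axis_and_thrust(
    pos=np.array([0.0, 0.0, 1.0]),
    vel=np.zeros(3),
    pos_ref=np.array([0.0, 0.0, 2.0]),
)
print(axis, thrust)
```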

Even if the vehicle can no longer produce sufficient thrust to support its own weight, this technology would still be useful: one could, for example, try to minimize the multicopter’s velocity when it hits the ground, or steer the multicopter away from dangerous areas such as water or people on the ground.

This control approach can also be applied to design novel flying vehicles — we will be releasing some related results soon.

This technology is patent pending.

For more information, have a look at the Flying Machine Arena website, the IDSC research page, or just post your question in the comments below.

Robot Holiday Video 2013: Autonomous Systems Lab, ETH Zurich

The Autonomous Systems Lab at ETH Zurich proudly presents this year’s Robotics Christmas Video:

Movement at night and a stolen tree, who the heck let the robots free? They gather and cheer, could Christmas be here?

 

 

DARPA Robotics Challenge Trials live broadcast


The time has come for the robots competing in the DARPA Robotics Challenge (DRC) to make their first public appearance. The trials for the 2014 Finals will take place on December 20-21, 2013, and you can watch them live (or up close, if you are lucky!). As stated on DARPA’s website:
“The Trials will provide a baseline on the current state of robotics and determine which teams will continue on to the DRC Finals in 2014 with continued DARPA funding. Competing in the 2014 Finals will lead to one team winning a $2 million prize.”
Click after the jump for the live video and Twitter stream, or go directly to http://www.theroboticschallenge.org/.

Live Stream

Day One


Click here for the trials event agenda (pdf)

This is a general overview of the arena and the tasks the robots need to perform; you can find more on the official website.


UPDATE: coverage from Automaton blog:
http://spectrum.ieee.org/automaton/robotics/humanoids/darpa-robotics-challenge-watch-it-live


Watch Zero Robotics SPHERES Challenge live

NASA, MIT and DARPA will host the Fifth Annual Zero Robotics SPHERES Student Challenge today at 7:30am (EST). The event will take place at MIT’s campus in Cambridge, Mass., where student teams from the US and other countries will join NASA, ESA, MIT, DARPA, the Center for the Advancement of Science in Space, and IT consulting firm Appirio. As stated in the official press release:

“For the competition, NASA will upload software developed by high school students onto bowling ball-sized spherical satellites called Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES, which are currently aboard the International Space Station. From there, space station Expedition 38 Commander Oleg Kotov and Flight Engineer Richard Mastracchio will command the satellites to execute the teams’ flight program.”

Below you can find the link for the live broadcast:

http://webcast.mit.edu/i/institute/2013-2014/zero_robotics/17jan/index.html

You can also watch it on NASA TV:






SPHERES consists of three free-flying satellites on board the ISS that are able to test a diverse range of software and hardware. You can find more info at NASA’s website, and you can also listen to our earlier Robots Podcast interview with Dr. Alvar Saenz-Otero from MIT, lead scientist of the SPHERES project.

(photos by NASA)

No drone experts were harmed in the making of this video … Or were they?


What’s with all the quadrotors in auto advertising these days? And what do quadrotor swarms have to do with cars? Probably not much at all, but apparently associating your auto brand with high-performance quads is de rigueur. Subaru is following the lead of Lexus (which launched its quadrotor ad last November), upping the ante by having the driver of the new WRX STI engage in a pas de deux (or should we say, ‘pas de plusieurs’?) with a swarm of 300 LED-lit quadrotors. It makes for some pretty stunning footage, but before you get too excited: unlike the original Lexus ad (which had at least a decent portion of real footage from KMel’s impressive quads), almost all of the quadrotor eye-candy in the new Subaru ad is CGI. The automaker’s desire to associate itself with cutting-edge technology may be a sign of just how popular quadrotors have become, but is hyper-realistic CGI enhancement inflating consumers’ expectations of what quadrotors can actually do? (See the video below.)

Quadrotors are famous for performing amazing stunts at the frontier of what we think is possible for a machine. DDB Canada/Tribal Worldwide, the ad agency behind this project, wanted to associate the performance, and especially the maneuverability, of the new Subaru WRX STI with the stunts performed in the various quad videos gone viral.


The production used live action captured in the historic Hughes hangar in Los Angeles, California (which was purchased by YouTube and converted into a 41,000-square-foot mega studio), along with Big Block’s Drive-a-tron system, in which a car is accurately modeled both geometrically (from the manufacturer’s CAD models) and dynamically (its physics and performance envelope). The quadrotors, however (although modeled on real drones), are all computer generated; note that none of the behind-the-scenes photos contain a quadrotor. The final footage is almost completely CGI, and the process took eight weeks.


By using models that emulate real life, the production ensures that what you see in the video could have been performed in reality. That is true for the car, which is accurately modeled, but it’s speculative for the quads, even if it’s not totally far-fetched given what has already been done (though not at that scale) by several labs like the Flying Machine Arena, or by KMel, the robotics company behind the Lexus drone ad.

Just how far away are we from quadrotors actually being able to perform stunts like these? Raffaello D’Andrea, the lead researcher behind the Flying Machine Arena, says “With enough of a budget, this could be done by a few groups of people now. But it is much, much cheaper to do this with CGI.”

“It definitely changes people’s expectations and is no different than the portrayal of any advanced technology in movies, television, and videos. But seeing it live is completely different; even though people may be partially ‘desensitized’ by CGI, this immediately disappears when they see it live,” added D’Andrea, who is no stranger to giving live demos that feature his quadrotor research. “As someone who thrives on doing live demonstrations, I don’t see this as being harmful in any way.”

We also asked Alan Winfield, Hewlett-Packard Professor of Electronic Engineering at UWE Bristol and an expert on the portrayal of robotics in the media, for his thoughts on whether CGI will inflate people’s expectations of what quads can do, and he agreed that it’s a problem. But, he adds, “What I also find curious and interesting (in a geeky kind of way) is that the CGI in the Subaru ad faithfully reproduces the grey reflective spheres needed by the tracking system in labs – especially at ~44s. It’s ironic that the eye-candy drones include the very tracking tech that many people don’t realise is needed to make them do precision formation acrobatics.”

We can’t blame Subaru for wanting to jump on the quadrotor bandwagon … quadrotors are popular, cutting edge and hip (they are already making an entry into concept cars). But while car enthusiasts will already have their opinions about the new Subaru WRX STI (it’s an evolution of a well-proven and popular model), we have to wonder how people’s expectations of drone technology will be shaped when they’re exposed mostly to CGI videos and advertisements rather than real footage.

In fact, there is some astounding work being done with high-performance quads and aerial swarms. If you want to get a sense of what’s really doable, check out the following 2012 video, which was developed for the Ars Electronica Futurelab with input from ETH Zurich, the University of Pennsylvania’s GRASP Lab, and the MIT Media Lab. You can find the full behind-the-scenes story about this project here.

P.S. The Lexus ad mentioned above:

Amazing in Motion – SWARM (Lexus Ad)

The making of “SWARM”

 


 

Live Coverage of IROS Workshop on Science Communication


Don’t miss our live coverage of “Understanding Robotics and Public Opinion: Best Practices in Public Science Communication and Online Dissemination” at the IROS conference in Tokyo. LIVE NOW: November 7 from 9am to 12:30pm (JST).

Check back to this post during the workshop for the latest live stream. Do you have questions for the panel? Use hashtag #irosSciCom or post in the discussion below.

 

Program

Science communication in robotics
9am-10am

Online tools to stay informed and expand your reach (Sabine Hauert, Massachusetts Institute of Technology)

Robots for people who know nothing about robots (Evan Ackerman, IEEE Spectrum)

How to pitch your project to investors (Andra Keay, Robot LaunchPad & Silicon Valley Robotics)


Robots in the public discourse
10am-10:30am

A swarm on every desktop: Lessons on crowdsourcing swarm manipulation
Aaron Becker and Chris Ertel, Rice University

Public attitudes towards robots and outreach to the public through the European Union-funded programme on Cognitive Systems and Robotics
Cécile Huet, European Commission

10:30am – 11am
Break

11am-11:30am
Overview of the #robotsandyou conversation

Communicating the history of robotics through the voices of roboticists: the oral history of robotics project
Peter Asaro, School of Media Studies, The New School
Selma Sabanovic, School of Informatics and Computing, Indiana University Bloomington
Staša Milojevic, School of Library and Information Science, Indiana University Bloomington
Sepand Ansari, School of Media Studies, The New School


Driving the discussion around robotics
11:30am-12:30pm

Panelists:
Peter Asaro, The New School
Ryan Calo*, University of Washington
Travis Deyle*, Hizook
Dario Floreano, EPFL
Cécile Huet, European Commission
Chris Mailey, Association for Unmanned Vehicle Systems International
AJung Moon*, University of British Columbia & Roboethics Info Database
Bruno Siciliano*, Università degli Studi di Napoli Federico II
Alan Winfield*, University of the West of England

* online participants

Organizers:
Sabine Hauert, MIT, USA
Bruno Siciliano, UNINA, Italy
Markus Waibel, ETH Zurich, Switzerland

New video shows range and versatility of professional service robots


A new video promoting this year’s AUTOMATICA event in Munich offers an excellent primer on the state of professional service robotics in Europe. Check out the video here:

Surrounded by his quadrocopter drones on stage, Raffaello D’Andrea explains feedback control and talks about the coming Machine Revolution


During the 20-minute presentation, Raffaello D’Andrea revealed some of the key concepts behind his group’s impressive demonstrations of quadrocopters juggling, throwing and catching balls, dancing, and building structures – and illustrated them live, with quadrocopters flying on stage.

To watch him hurtle quadrocopters towards his audience, see them juggle balls and balance poles, and to find out what happens when control fails, check out the video:

Other speakers at the Zurich.Minds event included:

 

Full disclosure: My colleagues from the Flying Machine Arena and I work in Raffaello D’Andrea’s group at the Institute for Dynamic Systems and Control at ETH Zurich.


Photos: Zurich.Minds 2012

Video: Throwing and catching an inverted pendulum – with quadrocopters


Two of the most challenging problems tackled with quadrocopters so far are balancing an inverted pendulum and juggling balls. My colleagues at ETH Zurich’s Flying Machine Arena have now combined the two.

As part of his Master’s thesis, Dario Brescianini, a student at ETH Zurich’s Institute for Dynamic Systems and Control, has developed algorithms that allow quadrocopters to juggle an inverted pendulum. If you are not sure what that means (or how that is even possible), have a look at his video “Quadrocopter Pole Acrobatics”:

(Don’t miss the shock absorber blowing up in smoke at 1:34!)

The Math

A quadrocopter with a 12cm plate for balancing the pole. The cross-shaped cut-outs are used for easy attachment to the vehicle and have no influence on the pendulum’s stability.

To achieve this feat, Dario and his supervisors Markus Hehn and Raffaello D’Andrea started with a 2D mathematical model. The goal of the model was to understand what motion a quadrocopter would need to perform to throw the pendulum. In other words, what is required for the pendulum to lift off from the quadrocopter and become airborne?

This first step allowed the researchers to determine (theoretical) feasibility. It also showed the ideal trajectory, in terms of positions, speeds, and angles, that the quadrocopter needed to follow to throw a pendulum, and it offered insight into the throwing process, including identification of its key design parameters.
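As a back-of-the-envelope illustration (a 1D vertical simplification of ours, not the thesis’ actual 2D model): since the rotors can only push upward, the contact force on the pendulum can only drop to zero once the quadrocopter cuts its thrust and enters free fall, so a throw amounts to accelerating upward and then letting go:

```python
# 1D sketch of the throwing phase; all numbers are illustrative.
G = 9.81  # m/s^2

def contact_force(m_pendulum, a_quad):
    """Force the plate exerts on the pendulum (upward positive).

    While in contact, the pendulum shares the quad's acceleration:
        m * a_quad = N - m * G   =>   N = m * (a_quad + G)
    Lift-off happens exactly when N reaches zero, i.e. when the vehicle
    is in free fall (a_quad = -G), since rotors can only push upward.
    """
    return m_pendulum * (a_quad + G)

def launch(a_up=15.0, t_accel=0.2):
    """Accelerate upward for t_accel seconds, then cut thrust."""
    v0 = a_up * t_accel       # speed at the moment thrust is cut
    rise = v0**2 / (2 * G)    # extra height the pendulum gains ballistically
    return v0, rise

print(contact_force(0.1, -G))  # 0.0 N: the pendulum is released
v0, rise = launch()
print(f"launch speed {v0:.1f} m/s, ballistic rise {rise:.2f} m")
# -> launch speed 3.0 m/s, ballistic rise 0.46 m
```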

Reality Checks
The main goal of the next step was to determine how well the theoretical model described reality: How well does the thrown pendulum’s motion match the mathematical prediction? Does the pendulum really leave the quadrocopter at the pre-computed time? How does the pendulum behave while airborne? How well do the assumptions made for catching the pendulum (e.g., completely inelastic collisions, a completely rigid pendulum, infinite friction between quadrocopter and pendulum when balancing) hold?

This second step involved multiple tests with the physical system, including throwing the pendulum by hand to study its aerodynamic properties and precisely timing the quadrocopters’ and pendulum’s motions during the maneuver.

Analyze, Experiment, Repeat

The shock absorber at the end of the pendulum is a balloon filled with flour and attached to a sliding metal cap with zip ties.

Armed with a good theoretical model and knowledge of its strengths and limitations, the researchers set out on a process of engineering the complete system of balancing, throwing, catching, and re-balancing the pendulum. This involved leveraging the theoretical insights on the problem’s key design parameters to adapt the physical system. For example, they equipped both quadrocopters with a 12cm plate that could hold the pendulum while balancing, and developed shock absorbers to attach to the pendulum’s tips.

This also involved bringing the insights gained from their initial and many subsequent experiments to bear on their overall system design. For example, a learning algorithm was added to account for model inaccuracies.

Dario writes:

This project was very interesting because it combined various areas of current research and many complex questions had to be answered: How can the pole be launched off the quadrocopter? Where should it be caught and – more importantly – when? What happens at impact?

The biggest challenge to get the system running was the catching part. We tried various catching maneuvers, but none of them worked until we introduced a learning algorithm, which adapts parameters of the catching trajectory to eliminate systematic errors.

The long and iterative process of this third step resulted in the final successful architecture to repeatedly throw and catch the pendulum on the real system, including three key components:

First, a state estimator was used to accurately predict the pendulum’s motion while in flight. Unlike the ball used in the group’s earlier demonstration of quadrocopter juggling, the pendulum’s drag properties depend on its orientation. This means, among other things, that a pendulum in free fall will move sideways if oriented at an angle. Since experiments showed that this effect was quite large for the pendulum used, an estimator including a drag model of the pendulum was developed. This was important to accurately estimate the pendulum’s catching position (a toy sketch of such a drag-aware prediction follows after this list).

Another task of the estimator was to determine when the pendulum was in free flight and when it was in contact with a quadrocopter. This was important to switch the quadrocopter’s behavior from hovering to balancing the pendulum.

Second, a fast trajectory generator was needed to quickly move the catching quadrocopter to the estimated catching position.

Third, a learning algorithm was implemented to correct for deviations from the theoretical models at two key events: A first correction term was learnt for the desired catching point of the pendulum, capturing systematic model errors in the throwing quadrocopter’s trajectory and the pendulum’s flight. A second correction term was learnt for the catching quadrocopter’s position, capturing systematic model errors in the catching quadrocopter’s rapid movement to the catching position.
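A toy version of the first component’s drag-aware prediction could look like the following sketch. The drag coefficients are invented and the pole’s tumbling during flight is ignored; the actual estimator is considerably more sophisticated:

```python
# Illustrative free-flight prediction with orientation-dependent drag:
# a pole broadside to the airflow sees far more drag than one aligned with
# it, so a tilted pendulum in free fall drifts sideways.
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def drag_accel(vel, pole_axis, c_perp=0.6, c_par=0.05):
    """Quadratic drag, split along and across the pole (made-up coefficients)."""
    v_par = np.dot(vel, pole_axis) * pole_axis  # velocity along the pole
    v_perp = vel - v_par                        # velocity across the pole
    return (-c_par * np.linalg.norm(v_par) * v_par
            - c_perp * np.linalg.norm(v_perp) * v_perp)

def predict_catch_point(pos, vel, pole_axis, t_flight=0.65, dt=0.001):
    """Forward-integrate gravity plus drag to predict where to catch."""
    for _ in range(int(t_flight / dt)):
        vel = vel + (GRAVITY + drag_accel(vel, pole_axis)) * dt
        pos = pos + vel * dt
    return pos

# A pendulum thrown up at 3 m/s while tilted drifts away from the vertical.
axis = np.array([0.2, 0.0, 0.98]) / np.linalg.norm([0.2, 0.0, 0.98])
print(predict_catch_point(np.zeros(3), np.array([0.0, 0.0, 3.0]), axis))
```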
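And the third component’s iteration-to-iteration correction might be sketched like this (the gain and structure are our assumptions, not the thesis’ algorithm):

```python
# Sketch of simple bias learning: after each attempt, shift a stored offset
# by a fraction of the observed error, so repeatable (systematic) errors
# shrink from iteration to iteration while random noise averages out.
import numpy as np

class CatchCorrection:
    def __init__(self, gain=0.5):
        self.offset = np.zeros(3)  # learned correction, starts at zero
        self.gain = gain           # how much to trust each new sample

    def corrected(self, predicted):
        """Target actually sent to the catching vehicle."""
        return predicted + self.offset

    def update(self, predicted, observed):
        """Called after each throw with where the pendulum actually arrived."""
        residual = (observed - predicted) - self.offset  # unexplained error
        self.offset += self.gain * residual

learner = CatchCorrection()
for _ in range(10):  # a constant 5 cm bias in x is learned away
    learner.update(predicted=np.zeros(3), observed=np.array([0.05, 0.0, 0.0]))
print(learner.corrected(np.zeros(3)))  # ~ [0.05, 0.0, 0.0]
```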

The Result
As you can see in the video embedded above, at the end of Dario’s thesis two quadrocopters could successfully throw and catch a pendulum.

Many of the key challenges of this work were caused by the highly dynamic nature of the demonstration. For example, the total time between a throw and a catch is a mere 0.65 seconds, which is a very short time to move to, and come to full rest at, a catching position.

Another key challenge was the demonstration’s high cost of failure: a failed catch typically resulted in the pendulum hitting a rotor blade, with very little chance for the catching quadrocopter to recover. A crashed quadrocopter not only entailed repairs (e.g., changing a propeller), but also meant recalibrating the vehicle to re-determine its operating parameters (e.g., actual center of mass, actual thrust produced by the propellers) and restarting the learning algorithms.

Says Markus Hehn:

This was a really fun project to work on. We started off with some back-of-the-envelope calculations, wondering whether it would even be physically possible to throw and catch a pendulum. This told us that achieving this maneuver would really push the dynamic capabilities of the system.

As it turned out, it is probably the most challenging task we’ve had our quadrocopters do. With significantly less than one second to measure the pendulum flight and get the catching vehicle in place, it’s the combination of mathematical models with real-time trajectory generation, optimal control, and learning from previous iterations that allowed us to implement this.

Note: The Flying Machine Arena is an experimental lab space equipped with a motion capture system.
Full disclosure: I work with Dario Brescianini, Markus Hehn, and Raffaello D’Andrea at ETH Zurich’s Institute for Dynamic Systems and Control.


Credits: Carolina Flores, ETH Zurich 2012


ShanghAI Lectures 2012: Lecture 1 “Intelligence – An eternal conundrum”


In this first part of the ShanghAI Lecture series, Rolf Pfeifer gives an overview of the content and scope of the project, discusses the meaning of “Intelligence”, the Turing Test, and IQ.

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence run by Rolf Pfeifer and organized by me and partners around the world. The name goes back to the first lectures of the series in 2009, which were held from Shanghai Jiao Tong University in China.

The lectures roughly follow Rolf Pfeifer’s book “How the Body Shapes the Way We Think” (co-authored with Josh Bongard). Additional information, including information on publications and the series’ sponsors can be found at the ShanghAI Lectures website, which also provided a support framework to bring together students and researchers in an interactive setting during the semester.

Starting with this post, recordings of these lectures and guest presentations given as part of the ShanghAI Lectures are now made available on Robohub.

Related links:


ShanghAI Lectures 2012: Lecture 2 “Cognition as computation”


In this 2nd part of the ShanghAI Lectures, Rolf Pfeifer looks at the paradigm “Cognition as Computation”, shows its successes and failures, and justifies the need for an embodied perspective. Following Rolf Pfeifer’s class, there are two guest lectures: one by Christopher Lueg (University of Tasmania) on embodiment and information behavior, and one by Davide Scaramuzza (AI Lab, University of Zurich) on autonomous flying robots.


The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence run by Rolf Pfeifer and organized by me and partners around the world.

 

Christopher Lueg: Embodiment and Scaffolding Perspectives in Human Computer Interaction

In this talk Professor Lueg will discuss how the embodiment and scaffolding perspectives introduced in the ShanghAI Lectures on Natural and Artificial Intelligence can also be used to look at, and re-interpret, research topics in human-computer interaction, ranging from human information behavior in the real world to information interaction in online communities. In his work, Professor Lueg understands human-computer interaction as interaction with pretty much any kind of computer-based system, ranging from desktop computers and mobile phones to microwave ovens and parking meters.

 

Davide Scaramuzza: Vision-Based Navigation: a Ground and a Flying Robot Perspective

Over the past two decades, we have witnessed rapid research progress in driver-assistance systems. Some of these systems have even reached the market and have nowadays become essential tools for driving. GPS navigation systems are probably the most popular ones. They have revolutionized the way we travel and have certainly facilitated research towards fully autonomous navigation in outdoor environments. However, numerous challenges still have to be solved on the way to fully autonomous navigation of cars in cluttered environments. This is especially true in urban environments, where the requirements for an autonomous system are very high.

Another research area that has lately received a lot of interest—especially after the earthquake in Fukushima, Japan—is that of micro aerial vehicles. Flying robots have numerous advantages over ground vehicles: they can access environments that humans cannot and, furthermore, they are far more agile than any ground vehicle. Unfortunately, their dynamics make them extremely difficult to control, and this is particularly true in GPS-denied environments.

In this talk, Davide Scaramuzza will present challenges and results for both ground vehicles and flying robots, from localization in GPS-denied environments to motion estimation. He will show several experiments and real-world applications where these systems perform successfully, as well as those where their application is still limited by current technology.

Related links:

ShanghAI Lectures 2012: Lecture 3 “Towards a theory of intelligence”


In this lecture Rolf Pfeifer presents some first steps toward a “theory of intelligence”, followed by guest lectures by Vincent C. Müller (Anatolia College, Greece) on computers and cognition, and Alex Waibel (Karlsruhe Institute of Technology, Germany / Carnegie Mellon University, USA), who demonstrates a live lecture translation system.

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence run by Rolf Pfeifer and organized by me and partners around the world.

Vincent C. Müller: Computers Can Do Almost Nothing – Except Cognition (Perhaps)

The basic idea of classical cognitive science and classical AI is that if the brain is a computer then we could just reproduce brain function on different hardware. The assumption that this function (cognition) is computing has been much criticized; I propose to assume it is true and to see what would follow.

Let us take it as definitional that computing is ‘multiply realizable’: Strictly the same computing procedure can be realized on different hardware. (This is true if computing is understood as digital algorithmic procedures, in the sense of Church and Turing.) But in multiple realizations only the syntactic computational properties are retained from one realization to the other, while the physical and semantic properties may or may not be. So, even if the brain is indeed a computer, realizing it in different hardware might not have the desired effects, because the hardware-dependent effects are not computational: Mere computing can’t even switch on a red light; a computer model of an apple tree will not produce apples. But perhaps cognition is different. Is cognition one of the properties that are retained across different realizations?

References:

 

Alex Waibel: Bridging the Language Divide

Related links:

ShanghAI Lectures 2012: Lecture 4 “Design principles for intelligent systems (part 1)”


This is the fourth part of the ShanghAI Lecture series, where Rolf Pfeifer starts introducing a set of “Design Principles” for intelligent systems, as outlined in the book “How the Body Shapes the Way We Think”.

In the first guest lecture, Dario Floreano (EPFL) talks about biologically inspired flying robots, and then Pascal Kaufmann (AI Lab, UZH) gives a short overview of the “Roboy” project.

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence run by Rolf Pfeifer and organized by me and partners around the world.

 

Dario Floreano: Bio-inspired Flying Robots
Most autonomous robots operate on the ground, essentially living in two dimensions. Taking robots into the third dimension offers new opportunities, such as exploring rough terrain with small and inexpensive devices and gathering aerial information for monitoring, security, search-and-rescue, and mitigation of catastrophic events.

However, there are several novel scientific and technological challenges in perception, control, materials, and morphologies that need to be addressed. In this talk, Dario Floreano presents the long-term vision, approach, and results obtained so far in letting robots live in the third dimension. Taking inspiration from nature, he starts by describing how robots could take off from the ground by jumping and gliding. He then moves on to autonomous flight in cluttered environments and to the issue of perception and control for small flying systems in indoor environments. This leads to the next step: outdoor flying robots that can autonomously regulate altitude, steering, and landing using only perceptual cues. He then expands the perspective by describing how multiple robots could fly in swarm formation in outdoor environments, and how these achievements could possibly lead to fleets of personal aerial vehicles in the not-so-far future. Finally, he closes the talk by going back indoors, with current work on radically new concepts of flying robots that collaborate with teams of terrestrial and climbing robots, and of flying robots designed to survive and even exploit collisions. Throughout the talk, Dario Floreano also emphasizes the bi-directional links between biology as a source of inspiration and robotics as a novel method to explore biological questions.

  • J.-C. Zufferey, A. Beyeler and D. Floreano. Autonomous flight at low altitude using light sensors and little computational power, in International Journal of Micro Air Vehicles, vol. 2, num. 2, p. 107-117, 2010.
  • M. Kovac, M. Schlegel, J.-C. Zufferey and D. Floreano. Steerable Miniature Jumping Robot, in Autonomous Robots, vol. 28, num. 3, p. 295-306, 2010.
  • S. Hauert, J.-C. Zufferey and D. Floreano. Evolved swarming without positioning information: an application in aerial communication relay, in Autonomous Robots, vol. 26, num. 1, p. 21-32, 2009.

 

Pascal Kaufmann: Roboy

Related links:

ShanghAI Lectures 2012: Lecture 5 “Design principles for intelligent systems (part 2)”


This is the second part of the “Design Principles for Intelligent Systems” ShanghAI Lecture. After Rolf Pfeifer’s class, Barry Trimmer (Tufts University, USA) gives a guest presentation about soft robotics.

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence run by Rolf Pfeifer and organized by me and partners around the world.

 

Barry Trimmer: Living Machines: Soft Animals, Soft Robots and Biohybrids

Related links:

ShanghAI Lectures 2012: Lecture 6 “Evolution: Cognition from scratch”


In this sixth part of the ShanghAI Lecture series, Rolf Pfeifer introduces the topic of “Artificial Evolution” and gives examples of evolutionary processes in artificial intelligence. The first guest lecture, by Francesco Mondada (EPFL), is about the use of robots in daily life; in the second guest lecture, Robert Riener (ETH Zürich) talks about rehabilitation robots.

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence run by Rolf Pfeifer and organized by me and partners around the world.

 

Francesco Mondada: Toward Robots For Daily Life

In a recent survey from the European Commission, 60% of the participants said that robots should be banned from the application area “care of children, the elderly, and the disabled”, and 34% would like to ban robots from “education”. Within this framework, what is the future of robotics in daily-life services? Two research projects that address this question are presented in this talk: education using specific robotic tools, and a new form of embodiment for service robotics. Preliminary results are illustrated, showing very interesting potential, demonstrated by the high acceptance of the proposed approaches.

 

Robert Riener: Design Principles for Intelligent Rehabilitation Robots

Integrating the human into a robotic rehabilitation system can be challenging not only from a biomechanical point of view but also with regard to psycho-physiological aspects. Biomechanical integration involves ensuring that the system is ergonomically acceptable and “user-cooperative”. Psycho-physiological integration involves recording and controlling the patient’s physiological reactions so that the patient receives appropriate stimuli and is challenged in a moderate but engaging way. In this talk, basic design criteria are presented that should be taken into account when developing and applying an intelligent robotic system that is in close interaction with a human subject. One must carefully take into account the constraints given by human biomechanical, physiological and psychological functions in order to optimize device function without causing undue stress or harm to the human user.

References:

Related links:
