Robots in Depth is a new video series featuring interviews with researchers, entrepreneurs, VC investors, and policy makers in robotics, hosted by Per Sjöborg. In this interview, Per talks to Melonee Wise, lifelong robot builder and developer, and CEO of Fetch Robotics.
Melonee shares how she first got into building things at a young age, how that led to studying mechanical engineering, and how she left her PhD project behind to become the second employee of Willow Garage. She recounts personal anecdotes from the first few years at Willow Garage, including successes like the PR2 as well as some less successful moments.
Melonee also gives her perspective on the development phase robotics is in now and what the remaining challenges are. Related to that, she discusses what is feasible to deliver in the next five years vs. what her dream robot would be.
You can support Robots in Depth on Patreon. Check out all the Robots in Depth videos here.
StarlETH is a multi-purpose legged transporter robot developed at ETH Zurich’s Autonomous Systems Lab. Combining versatility, speed, robustness, and efficiency, StarlETH walks, climbs, and runs over varied terrain.
Precisely controlled elastic actuators allow for temporary energy storage; in fact, this robotic system consumes 10 times less power than comparable hydraulic systems. Weighing in at just 26 kg, it can be handled by a single operator, yet it operates autonomously, walking or running at 2 km per hour. Potential applications for such a highly mobile robot include inspection of industrial, construction, or polluted environments; search and rescue operations; security; and even the entertainment industry.
The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we’ll be uploading a new weekly video detailing the latest stage in its development. This week’s video shows an autonomous swarm of underwater robots coordinating their motion to form coherent shoals.
The body shape of the Jeff robots is much closer to that of a fish than that of the Lily robots. With their slim bodies, Jeff robots can flock tightly together and move in one direction as a group. We implemented a simple blue-light-LED-based algorithm that allows neighboring robots to align to each other. This doesn’t work 100% of the time, but it works quite often. And when we filmed the little fish that observed our robot experiments in Livorno harbor (see the beginning of the movie), we saw that the natural fish also did not align 100% of the time. In other words, we came pretty close.
We implemented this code in a very short period of time (hours!) towards the end of the project. With more time and more local neighbor communication, the shoaling can be much improved in the future. We hope to extend this further in our follow-up project, subCULTron.
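As a rough illustration of this kind of rule (a minimal sketch, not the actual on-board code; the function name, sensing model and gain are illustrative assumptions), a robot that can estimate the bearings of its neighbours' blue LEDs might steer itself like this:

```python
import math

def alignment_turn(led_bearings, gain=0.5):
    """Turn-rate command that steers a robot toward the mean
    direction of its neighbours' blue LEDs.

    led_bearings: bearings (radians, in the robot's own frame)
    of all currently visible neighbour LEDs.
    """
    if not led_bearings:
        return 0.0  # no neighbours visible: hold the current heading
    # Average the unit vectors toward each LED; taking atan2 of the
    # summed components gives a mean bearing robust to angle wrap-around.
    x = sum(math.cos(b) for b in led_bearings)
    y = sum(math.sin(b) for b in led_bearings)
    mean_bearing = math.atan2(y, x)
    return gain * mean_bearing  # proportional steering toward the group
```

Run at a few hertz on every robot, a local rule like this pulls neighbours onto a common course without any global coordination, which is also why it occasionally fails, just as the real robots (and the real fish) do.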
In 21 countries across the globe, hundreds of people are preparing for Cybathlon 2016, where cutting edge robotic assistive technologies will help people with disabilities to compete in a series of races. This summer the Cybathlon practice session took place at the Swiss Arena in Kloten so that the teams could test out the courses. Watch the trailer for the rehearsal games!
Cybathlon 2016 is organised by ETH Zurich and will showcase six disciplines:
Brain-Computer Interface (BCI) Race
Functional Electrical Stimulation (FES) Bike Race
Powered Arm Prosthesis Race
Powered Leg Prosthesis Race
Powered Exoskeleton Race
Powered Wheelchair Race
Check out this great little film by drone startup PRENAV, which took home the LOL WTF prize at the Flying Robot International Film Festival last night. According to Nathan Schuett, CEO of PRENAV, the team was looking for a way to demonstrate precision drone flight in a visually appealing way. “We decided to try something that had never been done before – drawing accurate shapes, letters and animations in the sky with a drone – and we’re very pleased with how ‘Hello World’ turned out.” Fun stuff.
According to a recent press release:
PRENAV and partner Hawk Aerial today announced that they have been granted the first Section 333 exemptions from the Federal Aviation Administration to operate the PRENAV precision drone system. PRENAV drones are capable of autonomously navigating in complex, cluttered, or GPS-denied environments. The two companies plan to use the system to perform close proximity visual inspections of cell phone towers, wind turbines, bridges, oil tankers, industrial boilers, and other large structures.
In March 2014, we exhibited CoCoRo at CeBIT in Hannover, Germany — Europe’s largest consumer electronics fair. At first we thought we might be out of place and that our exhibit would be overshadowed by the latest flatscreen TVs, smartphones and gaming consoles. We were very wrong: though we had the smallest booth, we were overrun with thousands of visitors throughout the week, and television and radio crews also stopped by for interviews. By our own estimates, we may just have had the highest rate of visitors per square meter in the whole fair.
It’s not easy to bring a swarm of underwater robots to a consumer electronics show and run live experiments there, but we gained a lot of motivation from the public interest we felt. Thanks to our enduring team members, and to all the people who visited us!
The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we’ll be uploading a new weekly video detailing the latest stage in its development.
Most of the videos from The Year of CoCoRo were shot during workshops we held throughout the project. These workshops, usually focussed on one or several specific demonstrators, were what drove our international team of collaborators to integrate mechanical hardware, electronics and software into working installations. This form of workshop-driven development proved very successful, and by the end of the project we were able to present 17 working final demonstrators that show the versatility of robot swarms.
The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we’ll be uploading a new weekly video detailing the latest stage in its development.
During the past year we have shown many swarm algorithms in various experiments. The spotlight was always on the Lily and the Jeff robots. However, there is now another star in the team and this trailer is dedicated to this special agent: the base station!
The base station was only finished towards the end of the project, so we had to develop (i.e. hack) many surrogates and placeholders for it along the way. We got so experienced at this that we could quickly hack together a surrogate base station from almost anything lying around in the lab: styrofoam, cans, boxes, whatever was handy. This video shows some of those creations.
A few months before the final review we had the real thing ready: a typical Italian machine (like Italian cars) made by our partners from SSSA (Pontedera). It was fast as hell, highly manoeuvrable, and elegant. The base station has a docking device and can actively manoeuvre, dock and undock robots, and carry three attached spare robots with it. With this central masterpiece, we were ready for our final review.
The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we’ll be uploading a new weekly video detailing the latest stage in its development.
Our underwater swarm research started in a few cubic centimeters of water with some naked electronics on a table. Over the next three and a half years, our swarm grew by a factor of 40, and the size of our test waters increased by a factor of 40 million as we went from aquariums and pools to ponds, rivers and lakes, finally ending up in the salt-water basin of the Livorno harbour. Quite a stretch for a small project!
Our new project, subCULTron, which extends the work of CoCoRo, will scale up the swarm size to 120+ robots, and will take place in an even larger body of water: the Venice Lagoon.
The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we’ll be uploading a new weekly video detailing the latest stage in its development.
As the worldwide leader in collaborative robotics research and education, Rethink Robotics is excited to announce the winner of the inaugural Rethink Robotics Video Challenge. Launched in the summer of 2015, the Challenge was created to highlight the amazing work being done by the research and education community with the Baxter robot. With more than 90 entries from 19 countries around the globe, the Humans to Robots Lab at Brown University stood out on the criteria of relevance, innovation and breadth of impact.
A significant obstacle to robots achieving their full potential in practical applications is the difficulty in manipulating an array of diverse objects. The Humans to Robots Lab at Brown is driving change in this area by using Baxter to collect and record manipulation experiences for one million real-world objects.
Central to the research at the Humans to Robots Lab is the standardization and distribution of learned experience, where advances on one robot in the network will improve every robot. This research seeks to exponentially accelerate the advancement in capabilities of robots around the world, and establish a framework by which the utility of these types of systems advances at a pace never before seen.
To better collaborate with labs around the world, Professor Stefanie Tellex and the team at Brown use the Baxter robot, an industrial robot platform that has the capability to automatically scan and collect a database of object models, the flexibility of an open source software development kit and an affordable price that makes the platform accessible to researchers all over the world. As a result of the Rethink Robotics Video Challenge, Brown will have an additional Baxter that will accelerate this research, while also providing new opportunities for continued experimentation.
“Our goal in creating the Rethink Robotics Video Challenge was to raise awareness of the tremendous amount of unique, cutting-edge research being conducted using collaborative robots that advances our collective education. The response far exceeded our expectations, and narrowing this down to one winner was an extremely difficult task for our judging panel,” said Rodney Brooks, founder, chairman and CTO of Rethink Robotics. “Brown was ultimately chosen as the best entry because the work being conducted by the Humans to Robots Lab at Brown University is critical to helping robots become more functional in our daily lives. There is a stark contrast between a robot and human in the ability to manipulate and handle a variety of objects, and closing that gap will open up a whole new world of robotic applications.”
Educational institutions, research labs and companies from around the world submitted an abstract and video showcasing their work with Baxter in one of three categories: engineering education, engineering research, or manufacturing skills development. The submissions encompassed a wide range of work, including elementary school STEM education, assistance to the physically and visually impaired, mobility and tele-operation, advanced and distributed machine learning, and cloud-based knowledge-sharing for digital manufacturing and IoT, to name a few. After a detailed vetting process, the field was narrowed to 10 finalists from the following organizations: Brown University, Cornell University, Dataspeed Inc., Glyndwr University, Idiap Research Institute, North Carolina State University, Queen’s University, University of Connecticut, University of Sydney and Virginia Beach City Public Schools.
Finalist entries were reviewed by a premier panel of judges, including Rodney Brooks of Rethink Robotics; Lance Ulanoff, chief correspondent and editor-at-large of Mashable; Erico Guizzo, senior editor at IEEE Spectrum; Steve Taub, senior director, advanced manufacturing at GE Ventures; and Devdutt Yellurkar, general partner at CRV.
Our penultimate video features the initial “Big Vision” trailer we produced at the beginning of this project. The video showcases the basic components of the robotic system we targeted (surface station, relay chain, ground swarm) and how we imagined our collective of underwater robots forming coherent swarms.
Our “Big Vision” helped us stay focussed on the project, ensuring that all project partners shared the same goal for their work. We based the video on a small simulator we wrote ourselves. Little did we know that this simple simulator would develop into a critical tool, eventually including underwater physics with fluid dynamics for simulating underwater swarms. The simulator was also used in evolutionary computation to find good shoaling behavior for our robots, as sketched below.
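The evolutionary loop at the heart of such a search can be surprisingly small. The sketch below (illustrative Python, not the project's actual code; the parameter encoding, population sizes and fitness function are assumptions) shows the basic recipe: mutate controller parameters, score each candidate in the simulator, and keep the best.

```python
import random

def evolve_shoaling(fitness, n_params=4, pop_size=20,
                    generations=50, sigma=0.1):
    """Toy evolutionary search over shoaling-controller parameters
    (e.g., attraction and alignment gains). `fitness` runs one
    simulation and returns a score such as group cohesion."""
    pop = [[random.uniform(-1.0, 1.0) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]      # truncation selection
        children = [[g + random.gauss(0.0, sigma) for g in p]
                    for p in parents]         # Gaussian mutation
        pop = parents + children              # keep parents + offspring
    return max(pop, key=fitness)
```

In practice the expensive part is the fitness call, since each evaluation means running a full underwater-swarm simulation; the search logic itself stays this simple.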
Next week we will be publishing our final video in The Year of CoCoRo. Our finale will show the whole system on real underwater robots in a comparable setting, and how our initial vision finally became reality!
The EU-funded Collective Cognitive Robotics (CoCoRo) project has built a swarm of 41 autonomous underwater vehicles (AUVs) that show collective cognition. Throughout 2015 – The Year of CoCoRo – we’ll be uploading a new weekly video detailing the latest stage in its development.
Last week Raffaello D’Andrea, professor at the Swiss Federal Institute of Technology (ETH Zurich) and founder of Verity Studios, demonstrated a whole series of novel flying machines live on stage at TED2016: from a novel tail-sitter (a small, fixed-wing aircraft that can optimally recover a stable flight position after a disturbance and smoothly transition from hover into forward flight and back), to the “Monospinner” (the world’s mechanically simplest flying machine, with only a single moving part), to the “Omnicopter” (the world’s first flying machine that can move in any direction independently of its orientation and rotation), to a novel fully redundant quadrocopter (the world’s first, consisting of two separate two-propeller flying machines), to a synthetic swarm (33 flying machines swarming above the audience).
Most of D’Andrea’s work shown at this latest demonstration is dedicated to pushing the boundary of what can be achieved with autonomous flight. One key ingredient is localization: to function autonomously, robots need to know where they are in space. Previously, his team relied on the external high-precision motion capture system in its Flying Machine Arena at ETH Zurich for positioning. In his previous TED talk on robot athletes, for example, you can clearly see the reflective markers required by the motion capture system. This meant that most algorithms were difficult to demonstrate outside the lab. It also meant that the system had a single point of failure (SPOF) in the centralized server of the mocap system — a highly problematic point for any safety-critical system.
In another world first, Raff and his team are now showing a newly developed, doubly redundant localization technology from Verity Studios, a spin-off from his lab, which gives flying machines, and robots in general, new levels of autonomy. For the live demonstrations, all flying machines use on-board sensors to determine where they are in space and on-board computation to determine what their actions should be. There are no external cameras. The only remote commands the flying robots receive are high-level ones, such as “take-off” or “land”.
The nature of this performance was also unprecedented in that dozens of vehicles flew above the audience. The demonstrations also included a heavier, high performance quadcopter that is fully redundant and utilizes a state-of-the-art failsafe algorithm (see previous Robohub article) among other onboard and offboard safety features.
Following extensive technical discussions on the various failure modes of the positioning system and of each flying machine as well as on the systems’ numerous redundancies and safety features, the organizers of TED had full faith in the systems’ safety and reliability. So much so that they decided to indemnify the venue against any lawsuits to overcome its ban on drones.
Here is the video:
And here is a more in-depth breakdown of what you are seeing, based on a transcript of the presentation and information from Raff D’Andrea’s team:
Intro video – Institute for Dynamic Systems and Control (IDSC), ETH Zurich
The video shows some of the previous work of D’Andrea’s group on aerial construction: a 6-meter-tall tower built out of 1500 foam bricks by four autonomous quadcopters over a three-day period in front of a live audience, and a rope bridge built by three autonomous quadcopters in the Flying Machine Arena.
Demo 1: Tail-sitter – IDSC, ETH Zurich
“This is a so-called tail-sitter. It’s an aircraft that tries to have its cake and eat it: Like other fixed wing aircraft, it is efficient in forward flight, much more so than helicopters and variations thereof. Unlike most other fixed wing aircraft, however, it can hover, which has huge advantages for take-off, landing, and general versatility. There is no free lunch, unfortunately, and one of the challenges with this type of aircraft is its susceptibility to disturbances, such as wind-gusts, while hovering. We are developing control architectures and algorithms that address this limitation. [Pushes the vehicle, then grabs it and throws it] The idea is to allow the aircraft to autonomously recover no matter what situation it finds itself in. [Second throw] And through practice, improve its performance over time. [Third throw, violently spinning the vehicle]”
Demo 2 (video of demo the night before): Monospinner – IDSC, ETH Zurich
“When doing research we often ask ourselves abstract, fundamental questions that try to get at the heart of a matter. One such question: What is the minimum number of moving parts necessary for controlled flight? This line of exploration may have practical ramifications; take helicopters, for example, which are affectionately known as “machines with over 1000 moving parts all conspiring to cause bodily harm”. Turns out that decades ago skilled pilots were flying RC airplanes with only 2 moving parts: a propeller and a tail rudder. We recently discovered that it could be done with only 1.”
“This is the monospinner, the world’s mechanically simplest, controllable flying machine, invented just a few months ago. It has only one moving part, a propeller. There are no flaps or hinges, no control surfaces or valves, no other actuators. Just one propeller. Even though it is mechanically simple, there is a lot going on in its electronic brain to keep it stable and move through space in a controllable fashion. Even so, it does not yet have the recovery algorithms of the tail-sitter, which is why I have to throw it just right.”
Demo 3: Omnicopter – IDSC, ETH Zurich
“If the monospinner is an exercise in frugality, this machine here, the omnicopter, with its 8 propellers, is an exercise in excess. What can you do with this surplus? The thing to notice is that it is highly symmetric, and as a result it is ambivalent to orientation. This gives it an unprecedented capability: It can move anywhere in space, independently of where it is facing, and even of how it is rotating. It has its own complexities, mainly due to the complex, interacting flows of its 8 propellers. Some of this can be modeled, while the rest can be learned on the fly. [Omnicopter takes off, manoeuvres around the stage, lands]”
Demo 4: Fully redundant quadrocopter – Verity Studios
“If flying machines are going to find their way into our daily lives, they will need to become extremely safe. This machine here is a prototype being developed by Verity Studios. It actually consists of two separate, two propeller flying machines, each capable of controlled flight. One wants to rotate clockwise, the other one counter clockwise. When all systems are operational, it behaves like a high-performance, 4 propeller quadrocopter. [Quadcopter takes off and flies an arc across the stage] If anything fails, however – propeller, motor, electronics, even a battery pack – it is still able to fly, albeit in a degraded way [One half of the quad is de-activated, the quadrocopter starts to spin and fall, the failsafe mode kicks in and the quadrocopter recovers, flies to the back of the stage, and performs a controlled landing at the takeoff spot].”
Demo 5: Synthetic swarm – Verity Studios
“This last demonstration is an exploration of synthetic swarms. The large number of coordinated autonomous entities offers a radically new palette for aesthetic expression. We have taken commercially available micro-quadrocopters, each weighing less than a slice of bread, and equipped them with our localization technology and custom algorithms. Because each unit knows where it is in space and is self-controlled, there really is no limit to their number. [Swarm performance]”
Raffaello D’Andrea wrapped up with: “Hopefully these demonstrations will motivate you to dream up new revolutionary roles for flying machines. For example, the ultra-safe flying machine [points to the fully redundant multicopter] has aspirations to become a flying lampshade on Broadway. The reality is that it is difficult to predict the impact of nascent technology. And for folks like us, the real reward is the journey, and act of creation. It is also a continual reminder of how wonderful and magical the universe we live in is; that it permits clever, creative creatures to sculpt it in such spectacular ways. The fact that this technology has huge commercial potential is just icing on the cake.”
Raheeb Muzaffar, an information technology specialist, has developed an application-layer framework that improves the transmission of videos between moving drones and mobile devices located at ground level. His work within the Interactive and Cognitive Environments (ICE) doctoral programme will be completed soon. Raheeb explains what makes this technology innovative and talks about his plans for the future in a conversation with Romy Mueller.
Raheeb Muzaffar already knew what to expect when he arrived in Klagenfurt as a PhD student in December 2012, travelling from Islamabad, the capital of Pakistan: he had travelled widely before, had experience working with various universities and professional organizations, and was acquainted with both European culture and the research environment. The doctoral programme “Interactive and Cognitive Environments (ICE)” suited him perfectly, allowing him to conduct research in an international environment. Doctoral students accepted into the programme are funded for a period of three years. During this time, they work at two universities. The research teams at the Alpen-Adria-Universität and Queen Mary University of London offered Muzaffar, who wanted to work on communication between drones, an environment that gave him insight into various laboratories and research approaches. By combining the key research areas he experienced in Austria and Great Britain, he ultimately arrived at his very own topic: video transmission in drone networks.
He spotted a problem in this context, which he has studied closely over the past few years. “In a situation where several drones are airborne and transmitting videos to several units located at ground level, we cannot currently guarantee that the footage will be reliably transmitted.” The problem is that the receiving devices have no feedback mechanism for the video packets transmitted from the drones; this missing feedback causes unreliability and distorted video reception. Also, when the transmitting and receiving devices are in motion, the wireless transmission conditions change continuously, introducing an additional factor of uncertainty.
To address this problem, Raheeb Muzaffar has developed a so-called “video multicast streaming framework” that allows feedback from multiple ground devices to the drones, adding reliability and enabling smooth video reception. The framework is even more capable than that: “The drone does not only learn whether the video has arrived; thanks to the feedback mechanism, it can also adjust the transmission rate and the video quality to match the prevailing wireless conditions. The transmitting drones can thus re-send lost video packets following the transmission and video rate adaptation.” Video communication between drones and receiving devices is useful for applications such as search and rescue, surveillance, and disaster management.
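To make the idea concrete, here is a toy sketch of such feedback-driven rate adaptation in Python. The rate ladder, loss thresholds and function name are illustrative assumptions, not the actual framework: the sender aggregates per-receiver loss reports and steps the video bit rate down when the worst receiver is suffering, or up when the channel is clean.

```python
RATES_KBPS = (250, 500, 1000, 2000)  # illustrative video rate ladder

def adapt_rate(loss_reports, current_kbps,
               loss_up=0.10, loss_down=0.02):
    """Choose the next video bit rate from receiver feedback.

    loss_reports: packet-loss ratios reported back by each ground
    device -- the feedback channel that plain multicast lacks.
    """
    worst = max(loss_reports) if loss_reports else 0.0
    i = RATES_KBPS.index(current_kbps)
    if worst > loss_up and i > 0:
        return RATES_KBPS[i - 1]   # heavy loss: step the rate down
    if worst < loss_down and i < len(RATES_KBPS) - 1:
        return RATES_KBPS[i + 1]   # clean channel: probe a higher rate
    return current_kbps            # otherwise hold the current rate
```

The same feedback also tells the sender which packets each receiver missed, so lost video packets can be retransmitted once the rate has been adapted, as described above.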
When we asked him whether he could make profitable use of his technology, Raheeb Muzaffar laughed. At various conferences he has talked to developers from industry, who confirmed that modules with similar functionality are expensive and require modifications to the WLAN protocol. His application, by contrast, is freely available; anyone wishing to commercialise the technology as a product is able to do so. In any case, over the course of numerous simulations and experiments, he has demonstrated that the streaming framework works using the conventional WLAN protocol without modifications.
Muzaffar’s research in Klagenfurt is coming to an end. By November he will complete his doctoral thesis and pull up stakes, hoping to work in research-based industry. His wife accompanied him to Klagenfurt, and in the meantime they have celebrated the birth of their daughter. The Muzaffar family intends to keep travelling: “I am not tied to one location. I would enjoy working anywhere in Europe, but we also like the idea of moving to New Zealand.”
A few words with Raheeb Muzaffar
What career would you have chosen if you had not become a scientist?
If I had not become a scientist, I would have become a social worker. Before turning to research, I worked with non-governmental organizations, including iMMAP and UNICEF, thinking that my technical expertise could help expedite the social work those organizations are involved in. Besides my work, I do plan to involve myself in some sort of social work, be it technical, educational, financial or emotional.
Do your parents understand what it is you are working on?
Yes, especially my father; he is quite interested in technology. He may not know the technical details, but he is aware of research and the work I am involved in.
What is the first thing you do when you arrive at the office in the morning?
The first thing I do in the morning is check and answer my emails. Thereafter, I work to a plan: I always have a plan for the week and month covering the tasks I need to do. Sometimes things don’t go according to plan, though, and I have to readjust it to meet deadlines.
Do you have proper holidays? Without thinking about your work?
Yes, at least once a year.
Why are so many people afraid of technical sciences?
In my opinion, every individual has a different aptitude for his or her work domain, and one should do what one is interested in and can perform best. Having said that, working in the technical sciences is quite challenging. The field has advanced, and it is getting increasingly complex even to gain a basic understanding of the different domains. To work in this field, one has to stay up to date, and developing something new allows only a minimal margin of error. I think people are afraid because of the complex nature of the studies, which demand an extra bit of hard work, understanding, and patience.
Written by: Romy Müller, University of Klagenfurt
Originally published in German in AAU News of the University of Klagenfurt, Austria. Also published at Medium.com.
We compared five adhesives for 3D printing on a Wanhao Duplicator i3, printing a 1×1×1 cm PLA (polylactide) cube with each one.
1. NELLY LACQUER: We spread it on the clean surface of the table and start printing. The printed model sticks very well; a palette knife is needed to separate it from the surface of the table. The adhesion result is very good.
2. 3D GLUE: We apply it to the table with a cloth, wait for the table to warm up, then start printing. The result is similar to the Nelly lacquer: we can’t separate the printed model from the table without tools. The bottom of the cube is smooth.
3. PVA GLUE: We spread a small quantity on the table and let it dry before printing. It is not easy to tear the model from the table. The bottom surface of the cube is rather rough and shows PVA glue stains.
4. GLUE STICK: We try to apply a thin layer to the hot platform but, because the adhesive is very thick, we can’t spread it with a microfiber cloth. To separate the cube from the platform we use a palette knife. The bottom surface of the model and the platform surface are left with white glue stains, so we need to clean the table and apply fresh adhesive before the next print.
5. STICKY TAPE (similar to blue Scotch tape): We paste one layer onto the table, carefully smoothing it out to avoid blistering. This adhesive is no good for big ABS models, as they tend to break away from the table along with the tape.
We try to separate the cube from the table carefully to keep the tape from peeling off, but part of the tape sticks too firmly to the cube and comes off the table along with the model, leaving a bare spot (without the tape layer) on the table for the next print.
Conclusions
The lacquer and 3D glue give the best results; the bottom surface of the printed models is clean and smooth, without any traces of adhesives.
The glue stick stains the models.
The Scotch tape tends to stick to the model and comes off the table partially or fully.
For ABS printing we recommend using a closed 3D printer and a strong adhesive, like NELLY.
That’s right! You better not run, you better not hide, you better watch out for brand new robot holiday videos on Robohub! Drop your submissions down our chimney at editors@robohub.org and share the spirit of the season, like these vids-of-Christmas-past:
We’re beginning to see more and more jobs being performed by machines; even creative tasks like writing music or painting can now be carried out by a computer.
But how and when will machines be able to explain themselves? Should we be worrying about an artificial intelligence taking over our world or are there bigger and more imminent challenges that advances in machine learning are presenting here and now?
Join Professor Brian Cox, the Royal Society Professor of Public Engagement, as he brings together experts on AI and machine learning to discuss key issues that will shape our future. Panelists include:
In this fascinating animation from Oxford Sparks, we take a look at how statistics and computer science can be used to make machines that learn for themselves, without being explicitly programmed.
Machine learning is a burgeoning breed of artificial intelligence (AI), and it’s all around us already; on our phones, powering social networks, helping the police and doctors, scientists and mayors. But how does it work? Enjoy the video below, and visit Oxford Sparks to discover more science and research from Oxford University.
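As a tiny, self-contained taste of the idea (a toy sketch of our own, not anything from the video): the snippet below starts with no knowledge of the rule y = 2x + 1 and recovers it purely by adjusting its parameters to reduce prediction error on data, which is the essence of learning without explicit programming.

```python
# Toy example: "learning" the relationship y = 2x + 1 from data
# by gradient descent, rather than programming the rule explicitly.
data = [(x, 2 * x + 1) for x in range(10)]  # observed (x, y) pairs

w, b = 0.0, 0.0          # model parameters, initially knowing nothing
lr = 0.01                # learning rate
for _ in range(2000):    # repeatedly nudge parameters to reduce error
    for x, y in data:
        err = (w * x + b) - y      # prediction error on one example
        w -= lr * err * x          # gradient step for the slope
        b -= lr * err              # gradient step for the intercept

print(round(w, 2), round(b, 2))    # ~2.0 and ~1.0, learned from data
```

Real machine-learning systems use far richer models and far more data, but the loop is the same: predict, measure the error, adjust.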
The population of the scenic ski resort of Davos, nestled in the Swiss Alps, swelled by nearly 3,000 people between the 17th and 20th of January. World leaders, academics, business tycoons, press and interlopers of all varieties were drawn to the 2017 World Economic Forum (WEF) Annual Meeting. The WEF is the foremost creative force for engaging the world’s top leaders in collaborative activities to shape the global, regional and industry agendas for the coming year and beyond. Perhaps unsurprisingly given recent geopolitical events, the theme of this year’s forum was Responsive and Responsible Leadership.
With the onset of the fourth industrial revolution, increasingly discontented segments of society not experiencing congruous economic and social progress are in danger of existential uncertainty and exclusion. Responsive and Responsible Leadership entails inclusive development and equitable growth, both nationally and globally. It also involves working rapidly to close generational divides by exercising shared stewardship of those systems that are critical to our prosperity.
In the end, leaders from all walks of life at the Annual Meeting 2017 must be ready to react credibly and responsibly to societal and global concerns that have been neglected for too long.
Developing last year’s theme—“The fourth industrial revolution”—this year’s luminaries posited questions, among many others, concerning incipient robotics and artificial intelligence technologies set to have a pronounced impact on the global economy and global consciousness alike. What can we learn from the first wave of AI? How can the humanitarian sector benefit from big data algorithms? How will drone technology change the face of warfare? Can AI and computational tech help foster responsive and responsible leadership? What are the downsides of technology in the fourth industrial revolution?
Enjoy a selection of tech-themed videos below.
And a bit about global science, including big data, open-source science and education.
On the 15th November 2016, the IEEE’s AI and Ethics Summit posed the question: “Who does the thinking?” In a series of keynote speeches and lively panel discussions, leading technologists, legal thinkers, philosophers, social scientists, manufacturers and policy makers considered such issues as:
The social, technological and philosophical questions orbiting AI.
Proposals for programming ethical algorithms and human values into machines.
The social implications of the applications of AI.