Thursday, March 22, 2012

Robots to map environment


The researchers used a PR2 robot, developed by Willow Garage, with Microsoft's Kinect sensor to test their system.
Image: Hordur Johannsson
Robots could one day navigate through constantly changing surroundings with virtually no input from humans, thanks to a system that allows them to build and continuously update a three-dimensional map of their environment using a low-cost camera such as Microsoft’s Kinect.

The system, being developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), could also allow blind people to make their way unaided through crowded buildings such as hospitals and shopping malls.

To explore unknown environments, robots need to be able to map them as they move around — estimating the distance between themselves and nearby walls, for example — and to plan a route around any obstacles, says Maurice Fallon, a research scientist at CSAIL who is developing these systems alongside John J. Leonard, professor of mechanical and ocean engineering, and graduate student Hordur Johannsson.

But while a large amount of research has been devoted to developing one-off maps that robots can use to navigate around an area, these systems cannot adjust to changes in the surroundings over time, Fallon says: “If you see objects that were not there previously, it is difficult for a robot to incorporate that into its map.”

The new approach, based on a technique called Simultaneous Localization and Mapping (SLAM), will allow robots to constantly update a map as they learn new information over time, he says. The team has previously tested the approach on robots equipped with expensive laser-scanners, but in a paper to be presented this May at the International Conference on Robotics and Automation in St. Paul, Minn., they have now shown how a robot can locate itself in such a map with just a low-cost Kinect-like camera.

As the robot travels through an unexplored area, the Kinect sensor’s visible-light video camera and infrared depth sensor scan the surroundings, building up a 3-D model of the walls of the room and the objects within it. Then, when the robot passes through the same area again, the system compares the features of the new image it has created — including details such as the edges of walls, for example — with all the previous images it has taken until it finds a match.

At the same time, the system constantly estimates the robot’s motion, using on-board sensors that measure the distance its wheels have rotated. By combining the visual information with this motion data, it can determine where within the building the robot is positioned. Combining the two sources of information allows the system to eliminate errors that might creep in if it relied on the robot’s on-board sensors alone, Fallon says.
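As a rough illustration of this kind of fusion (not the CSAIL team's actual algorithm), the sketch below dead-reckons a pose from wheel travel and then nudges it toward a camera-based position estimate; the function names and the blending weight are assumptions for illustration only:

```python
import math

def integrate_odometry(pose, d_left, d_right, wheel_base):
    """Dead-reckon a new (x, y, heading) pose from wheel travel distances."""
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0          # forward travel of robot centre
    d_theta = (d_right - d_left) / wheel_base    # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

def fuse_with_visual_fix(odom_pose, visual_pose, visual_weight=0.7):
    """Blend drifting odometry with a camera-based position estimate."""
    w = visual_weight
    return tuple(w * v + (1.0 - w) * o for o, v in zip(odom_pose, visual_pose))
```

A real system would weight the two sources by their estimated uncertainty (e.g. with a Kalman filter) rather than a fixed constant, but the principle is the same: odometry drifts smoothly, while visual matches correct the accumulated error.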

Once the system is certain of its location, any new features that have appeared since the previous picture was taken can be incorporated into the map by combining the old and new images of the scene, Fallon says.

The team tested the system on a robotic wheelchair, a PR2 robot developed by Willow Garage in Menlo Park, Calif., and in a portable sensor suit worn by a human volunteer. They found it could locate itself within a 3-D map of its surroundings while traveling at up to 1.5 meters per second.

Ultimately, the algorithm could allow robots to travel around office or hospital buildings, planning their own routes with little or no input from humans, Fallon says.

It could also be used as a wearable visual aid for blind people, allowing them to move around even large and crowded buildings independently, says Seth Teller, head of the Robotics, Vision and Sensor Networks group at CSAIL and principal investigator of the human-portable mapping project. “There are also a lot of military applications, like mapping a bunker or cave network to enable a quick exit or re-entry when needed,” he says. “Or a HazMat team could enter a biological or chemical weapons site and quickly map it on foot, while marking any hazardous spots or objects for handling by a remediation team coming later. These teams wear so much equipment that time is of the essence, making efficient mapping and navigation critical.”

While a great deal of research is focused on developing algorithms to allow robots to create maps of places they have visited, the work of Fallon and his colleagues takes these efforts to a new level, says Radu Rusu, a research scientist at Willow Garage who was not involved in this project. That is because the team is using the Microsoft Kinect sensor to map the entire 3-D space, not just viewing everything in two dimensions.

“This opens up exciting new possibilities in robot research and engineering, as the old-school ‘flatland’ assumption that the scientific community has been using for many years is fundamentally flawed,” he says. “Robots that fly or navigate in environments with stairs, ramps and all sorts of other indoor architectural elements are getting one step closer to actually doing something useful. And it all starts with being able to navigate.”

Wednesday, March 21, 2012

SeqSLAM: a visual-based algorithm for navigation

Dr Michael Milford from Queensland University of Technology's (QUT) Science and Engineering Faculty said his research into making more reliable Global Positioning Systems (GPS) using camera technology and mathematical algorithms would make navigating a far cheaper and simpler task.

"At the moment you need three satellites in order to get a decent GPS signal and even then it can take a minute or more to get a lock on your location," he said.

"There are some places geographically where you just can't get satellite signals and even in big cities we have issues with signals being scrambled because of tall buildings or losing them altogether in tunnels."

The world-first approach to visual navigation algorithms, which has been dubbed SeqSLAM (Sequence Simultaneous Localisation and Mapping), uses local best match and sequence recognition components to lock in locations.

"SeqSLAM uses the assumption that you are already in a specific location and tests that assumption over and over again.

"For example if I am in a kitchen in an office block, the algorithm makes the assumption I'm in the office block, looks around and identifies signs that match a kitchen. Then if I stepped out into the corridor it would test to see if the corridor matches the corridor in the existing data of the office block layout.

"If you keep moving around and repeat the sequence for long enough you are able to uniquely identify where in the world you are using those images and simple mathematical algorithms."
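A toy sketch of this sequence-testing idea (not Dr Milford's published implementation): frames are flat lists of pixel intensities from a low-resolution camera, and the recently seen sequence is slid along a stored route to find the offset with the smallest summed difference. All names here are illustrative assumptions:

```python
def frame_difference(a, b):
    """Sum of absolute pixel differences between two low-resolution frames."""
    return sum(abs(pa - pb) for pa, pb in zip(a, b))

def best_route_match(stored_route, recent_frames):
    """Slide the recent sequence along the stored route and return the offset
    whose summed frame-by-frame difference is smallest."""
    n = len(recent_frames)
    best_offset, best_score = None, float("inf")
    for offset in range(len(stored_route) - n + 1):
        score = sum(frame_difference(stored_route[offset + i], recent_frames[i])
                    for i in range(n))
        if score < best_score:
            best_offset, best_score = offset, score
    return best_offset, best_score
```

The key point is that no single frame needs to match well: a sequence of mediocre matches in the right order can still identify the location uniquely, which is what makes the approach robust to lighting and weather changes.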

However, the challenge was making streets recognisable in a variety of different conditions and differentiating between streets that were visually similar.

The research, which utilises low resolution cameras, was inspired by Dr Milford's background in the navigational patterns of small mammals such as rats.

"My core background is based on how small mammals manage incredible feats of navigation despite their eyesight being quite poor," he said.

"As we develop more and more sophisticated navigation systems they depend on more and more maths and more powerful computers.

"But no one's actually stepped back and thought 'do we actually need all this stuff or can we use a very simple set of algorithms which don't require expensive cameras or satellites or big computers to achieve the same outcome?'" 

 Dr Milford will present his paper SeqSLAM: Visual Route-Based Navigation for Sunny Summer Days and Stormy Winter Nights at the International Conference on Robotics and Automation in America later this year. 

The research has been funded for three years by an Australian Research Council $375,000 Discovery Early Career Researcher Award (DECRA) fellowship.


Tuesday, March 20, 2012

Mapping the Anthropocene

The 5th of March 2012 marks the 500th birthday of Gerardus Mercator, the creator of the world map that profoundly changed our views of the world. He was not the only one who worked on a conformal map projection in the 16th century, which was still an age of exploration and discovery. But he was the first to get the maths right and complete a world map that allowed ships to navigate around the planet, thanks to its ability to represent lines of constant course as straight lines. That makes the Mercator projection a milestone in the history of cartography, and it remains one of the central map projections to the present day.

The Mercator projection, however, is not always the most appropriate projection. It is useful for nautical purposes, but far less suitable for maps in which distances or areas are of central interest. When misunderstood, a Mercator projection can even lead to some awkward misinterpretations: an infamous example is a map drawn by The Economist showing North Korean missile ranges as circles on a Mercator map. A vast number of projections have been developed since Mercator released his iconic map in 1569, most trying to find an optimal way to “preserve some properties of the sphere-like body” (see a comprehensive overview of map projections at Wikipedia).

Far less consideration has so far been given to the question of different spaces. The spatial turn has been widely discussed, and not only in the circles of human geography. Yet far less thought has been spent on adequate visual representations of new understandings of space that result from processes of globalisation and global change.
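The mathematics Mercator got right can be stated in a few lines: longitude maps linearly to x, while latitude is stretched so that rhumb lines (lines of constant compass bearing) become straight, at the price of inflating areas toward the poles. A minimal sketch of the standard formulas (function names are illustrative, not from any source discussed here):

```python
import math

def mercator(lat_deg, lon_deg, radius=6371.0):
    """Project latitude/longitude (degrees) onto Mercator x/y (km).

    x is proportional to longitude; y = R * ln(tan(pi/4 + lat/2)),
    which diverges as latitude approaches the poles.
    """
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * lon
    y = radius * math.log(math.tan(math.pi / 4.0 + lat / 2.0))
    return x, y

def area_scale(lat_deg):
    """Area inflation relative to the equator: 1 / cos^2(latitude)."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2
```

At 60° latitude the projection already exaggerates areas fourfold, which is exactly the distortion that makes Greenland look the size of Africa and motivates the alternative projections discussed below.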

Geologists and environmental scientists have coined the term Anthropocene for the impact that humanity has on the physical environment. Crutzen speaks of the geology of mankind, which highlights the relevance that our species has in the transformation of nature. The concept has also found wider attention in the media recently (see e.g. these articles from the BBC and the NYT), showing that the issues related to the idea are becoming ever more pressing for the future of humanity. As stated in the New York Times, “Humans were inevitably going to be part of the fossil record. But the true meaning of the Anthropocene is that we have affected nearly every aspect of our environment — from a warming atmosphere to the bottom of an acidifying ocean.”

Cartography appears predestined to show these issues in visual form. The educational Globaia project is one interesting example that has produced some stunning imagery of human activity. Like other maps, it uses conventional map depictions in its approach, which may help in understanding the underlying issues but is not particularly novel. The claim I make as a result of my doctoral research is that we also need new cartographic concepts to grasp the full extent of the human-environment relationship and to comprehend the age of humankind. Mercator's map helped lead us into a globalised world, but we are no longer in an age of exploring unknown places; rather, we are in an age of discovering alternative pathways into our own future.

In my plenary speech to the Population Specialty Group at this year’s AAG conference in New York City I showed a map made in collaboration with Globaia, showing some key indicators of human activity on the planet projected onto a gridded population cartogram. The following map shows one such attempt to redraw the impact of humanity on those spaces where people live. The map gives equal space to every person living on the planet, while preserving the geographical reference of the additional layers shown on the map. The issues depicted include night lights, major roads, railways, power lines, pipelines, overseas cables, air routes and shipping lanes (see a full account of the data on the Globaia website). Many of these have been shown as individual maps on this website or in my PhD thesis, but this map brings some of the key aspects of human (inter)activity on earth together and shows them on an equal-population projection:


My talk concluded in a slightly bold manner: Is it too much of an aspiration to take the chance of celebrating Mercator’s 500th birthday by changing our mental map of the world from one that guides ships to one that guides our journey into a more sustainable future for humanity?

Mercator’s map was a great achievement, but we should not forget to move on and find new ways of thinking about our world. A map cannot change the world, but maps can change the way we view the world. It is about time to change our views, to see how we can live our lives a bit less stupidly, make our impact a bit more sustainable, and create a more equal world for every person living on this planet. Gridded cartograms are not the only way to draw new maps, but they represent one possibility to rethink our view of the world. A gridded population cartogram can therefore be one new basemap for a cartography of the Anthropocene.

The map on this page has been created by Benjamin D. Hennig of the SASI Research Group (University of Sheffield) in collaboration with Globaia. Feel free to use the maps on this page under Creative Commons conditions (CC BY-NC-ND 3.0); please contact me for further details – I also appreciate a notification if you use my maps. High resolution and customized maps are available on request.

Written By :-  Benjamin D. Hennig at the University of Sheffield. (Blog of the author)

Thursday, March 15, 2012

Build a simple GIS web application using GeoDjango and Google Maps

Django is a high-level Python web framework that encourages rapid development and clean, pragmatic design.
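The post stops at that one-line introduction. Since a full GeoDjango project needs a configured database and settings module, here is a self-contained sketch of the kind of GeoJSON payload a GeoDjango view would typically serve to a Google Maps front end. The function name and the station data are invented for illustration:

```python
import json

def monitoring_points_geojson(stations):
    """Build a GeoJSON FeatureCollection from (name, lon, lat) tuples --
    the payload shape a web-mapping front end such as the Google Maps
    JavaScript API can load directly as a data layer."""
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"name": name},
        }
        for name, lon, lat in stations
    ]
    return json.dumps({"type": "FeatureCollection", "features": features})
```

In a real GeoDjango application the points would come from a model with a `PointField` and the view would serialize the queryset instead of a hand-built list, but the JSON shape handed to the map stays the same.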


Wednesday, March 7, 2012

The India-WRIS project

The India-WRIS project is a joint venture of the Indian Space Research Organisation and the Central Water Commission. It aims to develop an online, user-friendly application that brings all information on the nation's water resources together in a single window, from which interested users can obtain and make use of it. The vision of the project is to make all water-resources data available to the user community in standardised GIS form, by creating and collecting such data from various sources. Its scope includes the generation and collection of data for 30 geospatial layers, whose comprehensive information will be provided to the end user. This information will be disseminated through a web-based geospatial application, which forms an integral part of the deliverables. The proposed web application will be released with both a 2D view and a 3D view.

To access the portal, click here.