I have been awarded a grant from Microsoft as part of its AI for Earth program. The grant will be used to develop high-resolution spatialized population projections, which will take population projections from the Shared Socioeconomic Pathways and use a geosimulation approach to distribute the projected populations on a map. The resulting maps can then be used to assess the number of people who will be directly affected by climate change.
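The core of the geosimulation step can be sketched as weighted spatial allocation: a projected regional population total is distributed over grid cells in proportion to a suitability weight. The sketch below is a deliberately minimal illustration with made-up weights; the actual model is far more involved.

```python
# Minimal sketch of weighted spatial allocation: distribute a projected
# regional population total over grid cells proportional to suitability
# weights (e.g., derived from current density and land use).

def allocate_population(total, weights):
    """Distribute `total` people over cells proportional to `weights`."""
    weight_sum = sum(weights)
    return [total * w / weight_sum for w in weights]

# Four hypothetical grid cells: an urban core, two suburbs, and farmland.
weights = [60, 20, 15, 5]
cells = allocate_population(1_000_000, weights)
print(cells)  # [600000.0, 200000.0, 150000.0, 50000.0]
```

In the real workflow, the weights themselves would come from the geosimulation rather than being fixed, so that urbanization dynamics under each pathway shape where the projected population ends up.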
AI for Earth is a Microsoft program aimed at empowering people and organizations to solve global environmental challenges by increasing access to AI tools and educational opportunities, while accelerating innovation. Via the Azure for Research AI for Earth award program, Microsoft provides selected researchers and organizations access to its cloud and AI computing resources to accelerate, improve and expand work on climate change, agriculture, biodiversity and/or water challenges.
I am among the first grant recipients of AI for Earth, which launched in July 2017. The grant process was competitive and selective, and the award recognizes the potential of the work and the power of AI to accelerate progress. To date, Microsoft has distributed more than 35 grants to qualifying researchers and organizations around the world. Microsoft has just announced its intent to put $50 million over five years into the program, enabling grant-making and educational trainings at a much larger scale.
I have a new paper out in Transactions in GIS, together with Grant McKenzie. A Geoprivacy Manifesto took us quite a while to write; the initial idea came up after our workshop on Geoprivacy at ACM SIGSPATIAL 2014 (!), so I’m really glad this one is finally out. Here’s the abstract:
As location-enabled technologies are becoming ubiquitous, our location is being shared with an ever-growing number of external services. Issues revolving around location privacy—or geoprivacy—therefore concern the vast majority of the population, largely without knowing how the underlying technologies work and what can be inferred from an individual’s location (especially if recorded over longer periods of time). Research, on the other hand, has largely treated this topic from isolated standpoints, most prominently from the technological and ethical points of view. This article therefore reflects upon the current state of geoprivacy from a broader perspective. It integrates technological, ethical, legal, and educational aspects and clarifies how they interact and shape how we deal with the corresponding technology, both individually and as a society. It does so in the form of a manifesto, consisting of 21 theses that summarize the main arguments made in the article. These theses argue that location information is different from other kinds of personal information and, in combination, show why geoprivacy (and privacy in general) needs to be protected and should not become a mere illusion. The fictional couple of Jane and Tom is used as a running example to illustrate how common it has become to share our location information, and how it can be used—both for good and for worse.
[DOI:10.1111/tgis.12305 / Preprint PDF]
If you’ve always wondered how this whole blockchain thing works, but didn’t dare to ask: Here’s an excellent high-level introduction that explains the basic principles.
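The central principle such introductions explain is that blocks are chained by cryptographic hashes, so tampering with any block invalidates all blocks after it. That idea fits in a few lines of Python; this is a toy sketch of the hash-chaining alone, not a real implementation (no consensus, no proof of work).

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    """Append a new block linked to the previous one by its hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")

# Tampering with block 0 breaks the hash link stored in block 1:
chain[0]["data"] = "Alice pays Bob 500"
print(chain[1]["prev_hash"] == block_hash(chain[0]))  # False
```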
GeoNotebook is an application that provides a client/server environment with interactive visualization and analysis capabilities, using Jupyter, GeoJS and other open source tools.
I use Jupyter notebooks all the time when I write Python code, so I definitely need to give GeoNotebook a shot.
- Carsten Keßler (2017) Extracting Central Places from the Link Structure in Wikipedia. Transactions in GIS 21(3):488–502.
Abstract: Explicit information about places is captured in an increasing number of geospatial datasets. This article presents evidence that relationships between places can also be captured implicitly. It demonstrates that the hierarchy of central places in Germany is reflected in the link structure of the German language edition of Wikipedia. The official upper and middle centers declared, based on German spatial laws, are used as a reference dataset. The characteristics of the link structure around their Wikipedia pages, which link to each other or mention each other, and how often, are used to develop a bottom-up method for extracting central places from Wikipedia. The method relies solely on the structure and number of links and mentions between the corresponding Wikipedia pages; no spatial information is used in the extraction process. The output of this method shows significant overlap with the official central place structure, especially for the upper centers. The results indicate that real-world relationships are in fact reflected in the link structure on the web in the case of Wikipedia.
The published version is available from the TGIS website, a preprint PDF is available right here. I’ll also present this at the ESRI User Conference in San Diego next month.
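The basic intuition behind the method — ranking places by how strongly other places’ pages link to them — can be sketched in a few lines. The link data below is a hypothetical toy example; the paper’s actual method is considerably more refined and also distinguishes links from mentions.

```python
from collections import Counter

# Toy link structure between Wikipedia pages of places:
# (source page, target page) means the source page links to the target.
links = [
    ("Bonn", "Cologne"), ("Leverkusen", "Cologne"), ("Aachen", "Cologne"),
    ("Cologne", "Bonn"), ("Siegburg", "Bonn"),
    ("Bonn", "Aachen"),
]

# Count incoming links per place; places above a threshold become
# candidate central places. Note that no spatial information is used.
in_links = Counter(target for _, target in links)
threshold = 2
central = [place for place, n in in_links.items() if n >= threshold]
print(central)  # ['Cologne', 'Bonn']
```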
While we’re at it: IJGIS has also published a brief book review online that I wrote about Glen Hart and Catherine Dolbear’s Linked Data: A Geographic Perspective.
The results of our evaluation of the RG Score were rather discouraging: while there are some innovative ideas in the way ResearchGate approached the measure, we also found that the RG Score ignores a number of fundamental bibliometric guidelines and that ResearchGate makes basic mistakes in the way the score is calculated. We deem these shortcomings to be so problematic that the RG Score should not be considered as a measure of scientific reputation in its current form.
Interesting read about reverse engineering the blackbox ResearchGate score. I have considered that score useless for a long time and think about closing my account every time they send me one of those annoying emails. But unfortunately RG has become so widely used that they drive a considerable number of readers to my papers, so I guess I’ll just keep on putting up with these annoyances. I just hope people don’t start taking that score seriously.
Handy tool if you want to cite a book, but are too lazy to put together the BibTeX entry yourself. Because it uses the Amazon API to generate the BibTeX code, the entry includes the link to the book on Amazon, but that’s easy enough to remove.
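For reference, a generated entry looks roughly like this (a hypothetical example I put together by hand); the `url` field pointing to Amazon is the one to delete:

```bibtex
@book{hart2013linked,
  author    = {Glen Hart and Catherine Dolbear},
  title     = {Linked Data: A Geographic Perspective},
  publisher = {CRC Press},
  year      = {2013},
  url       = {https://www.amazon.com/...}
}
```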
I almost forgot to mention that our group finally has a proper website.
I’ll be presenting a short paper at AGILE in Wageningen next week that outlines some of the stuff I’ve been working on with Peter Marcotullio:
The presentation is scheduled for Wednesday at 12:00PM in the SOCIETAL-1 session in room 4.
This should be a fun workshop:
Knowledge graphs, i.e., making semantically annotated and interlinked raw data available on the Web, have taken information technologies by storm. Today such knowledge graphs power search engines, intelligent personal assistants, and cyber-infrastructures. For instance, the publicly available part of the Semantic Web-based Linked Data cloud contains more than 150 billion triples distributed over 10,000 datasets and connected to one another by millions of links. Geographic data play a significant role in this cloud and in knowledge graphs in general, as places function as central nexuses that connect people, events, and physical objects. Consequently, geo-data sources are among the most central and densely interlinked hubs. Beyond their sheer size, the diversity of these data and their inter-linkage are of major value as they enable a more holistic perspective on complex scientific and social questions that cannot be answered from a single domain’s perspective. Hence, knowledge graphs such as those implemented using the Linked Data paradigm bear potential to address many fundamental challenges of geoinformatics.
In this workshop we will discuss various aspects of geo-knowledge graphs, ranging from their extraction and construction from unstructured or semi-structured data, through issues of data fusion, conflation, and summarization, and geo-ontologies, to query paradigms and user interfaces. By focusing explicitly on geo-knowledge graphs in general, we aim to broaden the focus beyond the Semantic Web technology stack and thus also beyond RDF-based Linked Data.
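To make the “semantically annotated and interlinked” part concrete, a knowledge graph is at its core a set of subject–predicate–object triples that can be queried by pattern matching. The sketch below uses plain Python with shortened, made-up identifiers; a real geo-knowledge graph would use full IRIs and standard vocabularies (e.g., GeoSPARQL for spatial relations) and a proper triple store.

```python
# A tiny knowledge graph as a set of (subject, predicate, object) triples.
triples = {
    ("Wageningen", "type", "City"),
    ("Wageningen", "locatedIn", "Netherlands"),
    ("AGILE2017", "type", "Conference"),
    ("AGILE2017", "takesPlaceIn", "Wageningen"),
}

def query(triples, s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Which statements point at Wageningen as their object?
print(query(triples, o="Wageningen"))
# [('AGILE2017', 'takesPlaceIn', 'Wageningen')]
```

The interlinking happens when identifiers are shared across datasets: another dataset that also uses the identifier for Wageningen automatically connects its statements to these, which is exactly what makes geo-data sources such densely interlinked hubs.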