GIScience and GISystems have been successful in tackling many geographical problems over the last 30 years. But technologies and associated theory can become limiting if they end up defining how we see the world and what we believe are worthy and tractable research problems. This paper explores some of the limitations currently impacting GISystems and GIScience from the perspective of technology and community, contrasting GIScience with other informatics communities and their practices. It explores several themes: (i) GIScience and the informatics revolution; (ii) the lack of a community-owned innovation platform for GIScience research; (iii) the computational limitations imposed by desktop computing and the inability to scale up analysis; (iv) the continued failure to support the temporal dimension, and especially dynamic processes and models with feedbacks; (v) the challenge of embracing a wider and more heterogeneous view of geographical representation and analysis; and (vi) the urgent need to foster an active software development community to redress some of these shortcomings. A brief discussion then summarizes the issues and suggests that GIScience needs to work harder as a community to become more relevant to the broader geographic field and meet a bigger set of representation, analysis, and modelling needs.
Good food for thought at the beginning of the year, even though I do not agree with all of his points. There is currently a lot of progress being made on some of the problems he mentions (such as GeoMesa addressing scalability, or the NSF funding the Geospatial Software Institute, to name just two examples). I also don't think (or at least hope) that the prevalent position in our field is that if a problem doesn't fit on a desktop computer, some other community should deal with it.
One point he raises about software platforms really resonates with me, though, since this is something I have been thinking about a lot recently:
Personally, this has driven me to use R, Python, and PostGIS for almost any kind of work, but I wonder whether that is a viable solution for everyone. Or are the GISystems he talks about more like classical, GUI-driven GIS that can be used without programming skills?
For the fifth day of Cards Against Humanity Saves America, we used your money to fund one year of monthly public opinion polls. We’ll ask the American people about their social and political views, what they think of the president, and their pee-pee habits.
In fact, we secretly started polling three months ago. What a delightful surprise!
To conduct our polls in a scientifically rigorous manner, we’ve partnered with Survey Sampling International — a professional research firm — to contact a nationally representative sample of the American public. For the first three polls, we interrupted people’s dinners on both their cell phones and landlines, and a total of about 3,000 adults didn’t hang up immediately. We examined the data for statistically significant correlations, and boy did we find some stuff.
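As a rough illustration of what checking for "statistically significant correlations" in survey data can look like, here is a minimal sketch of a chi-squared test of independence on a 2×2 contingency table. All counts are invented for the example; the actual poll questions and numbers are in the published datasets.

```python
# Hypothetical example: a chi-squared test of independence on a 2x2
# contingency table, the kind of check behind finding "statistically
# significant correlations" in survey responses. All counts are made up.

def chi_squared_2x2(table):
    """Return the chi-squared statistic for a 2x2 table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Made-up counts: presidential approval vs. a yes/no lifestyle question.
table = [[420, 380],   # approve:    yes / no
         [310, 590]]   # disapprove: yes / no

chi2 = chi_squared_2x2(table)
# With 1 degree of freedom, chi2 > 3.84 means p < 0.05.
verdict = "significant" if chi2 > 3.84 else "not significant"
print(f"chi2 = {chi2:.1f} -> {verdict} at the 5% level")
```

With a sample of around 3,000 respondents, even modest associations like this one clear the significance threshold easily, which is part of why such polls turn up so much "stuff."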
Hilarious. I think I'm going to use their data in class some time. Too bad it doesn't include respondents' locations.
I recently came across O'Reilly's R for Data Science by Hadley Wickham and Garrett Grolemund. From skimming some of the chapters, it is a very easily digestible introduction to R that also covers topics such as cleaning up data (something most books assume happens automagically). It doesn't go very deep into the statistical capabilities of R, though.
The privilege that techies have enjoyed for years is starting to erode. It’s taking them some time to see what other people are seeing, but if VCs, media critics, and people adjacent to the industry are starting to get it, then it’s time to make a change. Right?
I have been awarded a grant from Microsoft as part of its AI for Earth program. The grant will be used to develop high-resolution spatialized population projections, which will take population projections from the shared socioeconomic pathways and use a geosimulation approach to distribute the projected populations on a map. The resulting maps can then be used to assess the number of people who will be directly affected by climate change.
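The core idea behind this kind of spatial downscaling can be sketched in a few lines: a projected regional population total is distributed across grid cells in proportion to a weight surface (for instance, current population density or land suitability). This is only a minimal illustration of the principle, not the project's actual geosimulation method, and all numbers below are made up.

```python
# A minimal sketch (not the project's actual method) of gridded population
# downscaling: allocate a projected regional total across raster cells in
# proportion to a weight surface. All numbers are made up.

def downscale(total_population, weights):
    """Allocate a regional total to cells proportionally to their weights."""
    weight_sum = sum(sum(row) for row in weights)
    return [[total_population * w / weight_sum for w in row]
            for row in weights]

# Hypothetical 3x3 weight surface (e.g., normalized current population).
weights = [[0.0, 2.0, 1.0],
           [4.0, 8.0, 2.0],
           [1.0, 2.0, 0.0]]

# An SSP-style projected total for the region, distributed over the grid.
grid = downscale(100_000, weights)
for row in grid:
    print([round(v) for v in row])
```

The allocation preserves the regional total by construction, which is what makes the resulting maps usable for counting the people exposed to a projected hazard.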
AI for Earth is a Microsoft program aimed at empowering people and organizations to solve global environmental challenges by increasing access to AI tools and educational opportunities, while accelerating innovation. Via the Azure for Research AI for Earth award program, Microsoft provides selected researchers and organizations access to its cloud and AI computing resources to accelerate, improve and expand work on climate change, agriculture, biodiversity and/or water challenges.
I am among the first recipients of AI for Earth grants, which launched in July 2017. The grant process was competitive and selective, and the award recognizes the potential of the work and the power of AI to accelerate progress. To date, Microsoft has distributed more than 35 grants to qualifying researchers and organizations around the world. Microsoft has just announced its intent to put $50 million into the program over 5 years, making grants and educational training possible at a much larger scale.
As location-enabled technologies are becoming ubiquitous, our location is being shared with an ever-growing number of external services. Issues revolving around location privacy—or geoprivacy—therefore concern the vast majority of the population, largely without knowing how the underlying technologies work and what can be inferred from an individual's location (especially if recorded over longer periods of time). Research, on the other hand, has largely treated this topic from isolated standpoints, most prominently from the technological and ethical points of view. This article therefore reflects upon the current state of geoprivacy from a broader perspective. It integrates technological, ethical, legal, and educational aspects and clarifies how they interact and shape how we deal with the corresponding technology, both individually and as a society. It does so in the form of a manifesto, consisting of 21 theses that summarize the main arguments made in the article. These theses argue that location information is different from other kinds of personal information and, in combination, show why geoprivacy (and privacy in general) needs to be protected and should not become a mere illusion. The fictional couple of Jane and Tom is used as a running example to illustrate how common it has become to share our location information, and how it can be used—both for better and for worse.
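To make the kind of inference the abstract alludes to a bit more concrete, here is a toy illustration (not from the article itself) of one classic example: guessing someone's home location from a timestamped trace by taking the most frequent location during nighttime hours. The trace below is entirely invented.

```python
# A toy illustration of inference from a location trace: the most frequent
# location during nighttime hours is a strong guess for "home". Invented data.
from collections import Counter
from datetime import datetime

# (timestamp, rounded lat/lon) - a made-up few days of location fixes
trace = [
    (datetime(2018, 1, 1, 2, 0),   (55.66, 12.59)),
    (datetime(2018, 1, 1, 9, 0),   (55.69, 12.55)),  # office hours
    (datetime(2018, 1, 2, 1, 30),  (55.66, 12.59)),
    (datetime(2018, 1, 2, 23, 45), (55.66, 12.59)),
    (datetime(2018, 1, 3, 12, 0),  (55.68, 12.57)),  # lunch break
]

# Keep only fixes between 22:00 and 06:00 and count locations.
night = Counter(loc for t, loc in trace if t.hour >= 22 or t.hour < 6)
home, fixes = night.most_common(1)[0]
print(f"Likely home location: {home} ({fixes} nighttime fixes)")
```

Even this crude heuristic works alarmingly well on real traces recorded over longer periods, which is precisely why location information deserves special treatment among personal data.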
Carsten Keßler (2017) Extracting Central Places from the Link Structure in Wikipedia. Transactions in GIS 21(3):488–502.
Abstract: Explicit information about places is captured in an increasing number of geospatial datasets. This article presents evidence that relationships between places can also be captured implicitly. It demonstrates that the hierarchy of central places in Germany is reflected in the link structure of the German-language edition of Wikipedia. The upper and middle centers officially declared under German spatial planning law are used as a reference dataset. The characteristics of the link structure around their Wikipedia pages (which pages link to or mention each other, and how often) are used to develop a bottom-up method for extracting central places from Wikipedia. The method relies solely on the structure and number of links and mentions between the corresponding Wikipedia pages; no spatial information is used in the extraction process. The output of this method shows significant overlap with the official central place structure, especially for the upper centers. The results indicate that, in the case of Wikipedia, real-world relationships are in fact reflected in the link structure on the web.
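The general idea can be sketched in a few lines: score each place page by how often other place pages link to or mention it, then rank places by that score. This is only a toy version of the bottom-up principle, not the paper's exact method, and the link counts below are invented.

```python
# A toy sketch (not the paper's exact method) of ranking places by the
# weighted incoming links/mentions from other place pages. Counts are invented.
from collections import defaultdict

# (source page, target page, number of links/mentions) - made-up data
links = [
    ("Bonn", "Köln", 12), ("Leverkusen", "Köln", 9), ("Köln", "Bonn", 7),
    ("Köln", "Düsseldorf", 15), ("Neuss", "Düsseldorf", 11),
    ("Düsseldorf", "Köln", 14), ("Bonn", "Düsseldorf", 5),
]

score = defaultdict(int)
for source, target, count in links:
    score[target] += count  # incoming links/mentions raise a place's score

ranking = sorted(score.items(), key=lambda kv: kv[1], reverse=True)
for place, s in ranking:
    print(place, s)
```

Note that no coordinates appear anywhere in this sketch: as in the paper, the ranking emerges purely from the link structure between the pages.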
The published version is available from the TGIS website; a preprint PDF is available right here. I'll also present this at the ESRI User Conference in San Diego next month.
While we’re at it: IJGIS has also published a brief book review online that I wrote about Glen Hart and Catherine Dolbear’s Linked data: a geographic perspective.
The results of our evaluation of the RG Score were rather discouraging: while there are some innovative ideas in the way ResearchGate approached the measure, we also found that the RG Score ignores a number of fundamental bibliometric guidelines and that ResearchGate makes basic mistakes in the way the score is calculated. We deem these shortcomings so problematic that the RG Score should not be considered a measure of scientific reputation in its current form.
Interesting read about reverse engineering the black-box ResearchGate score. I have considered that score useless for a long time and think about closing my account every time they send me one of those annoying emails. Unfortunately, though, RG has become so widely used that it drives a considerable number of readers to my papers, so I guess I'll keep putting up with these annoyances. I just hope people don't start taking that score seriously.