I’m probably late to the game, but since I’ve been working a lot with R lately, I finally got around to giving the Simple Features for R package a proper shot. And boy, should I have done that earlier. If you use R with spatial data and haven’t checked it out yet, please do. Here’s a brief list of my favorite features (pun intended):
- Much faster reading and writing of data
- No more clumsy handling of attribute data: instead of mylayer@data$attribute, just go straight to mylayer$attribute
- If you work with PostGIS a lot, you’ll feel right at home with the spatial operators (and they automatically use spatial indexes!)
- Mapping with ggplot2 is much more intuitive than before, using the geom_sf function.
- The automatic faceted plots by attribute when you use the plain plot function are also pretty cool.
- If you need to run functions that don’t work on simple feature collections, it is super easy to convert them to a data frame (my_df <- as.data.frame(my_sf)), run the function, and convert them back (my_sf <- st_as_sf(my_df)) – geometries, CRS, etc. are picked up automatically.
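Pulling the points above together, here is a minimal sketch using the North Carolina demo data that ships with sf (it assumes the sf and ggplot2 packages are installed; column names like NAME and AREA come from that demo dataset):

```r
# Minimal sketch of the sf workflow described above, using the demo
# shapefile bundled with the sf package (assumes sf and ggplot2 are installed).
library(sf)
library(ggplot2)

# Fast reading via GDAL
nc <- st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE)

# Attribute access without the @data slot
head(nc$NAME)

# PostGIS-style spatial operators, e.g. which counties intersect the first one
st_intersects(nc[1, ], nc)

# Mapping with ggplot2's geom_sf()
p <- ggplot(nc) + geom_sf(aes(fill = AREA))

# Round-trip to a plain data frame and back; geometry and CRS are preserved
nc_df <- as.data.frame(nc)
nc_sf <- st_as_sf(nc_df)
```

The round trip at the end is the pattern from the last bullet: st_as_sf() picks the geometry column back up without any extra bookkeeping.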
I’m sure I’m missing more great stuff, but this is just a first impression after a day of work with sf. Overall, working with spatial data in R feels much more natural with sf, with less extra code and fewer special cases than before. Kudos to Edzer and the other contributors for this one!
Very nice paper by Mark Gahegan. Here’s the abstract:
GIScience and GISystems have been successful in tackling many geographical problems over the last 30 years. But technologies and associated theory can become limiting if they end up defining how we see the world and what we believe are worthy and tractable research problems. This paper explores some of the limitations currently impacting GISystems and GIScience from the perspective of technology and community, contrasting GIScience with other informatics communities and their practices. It explores several themes: (i) GIScience and the informatics revolution; (ii) the lack of a community-owned innovation platform for GIScience research; (iii) the computational limitations imposed by desktop computing and the inability to scale up analysis; (iv) the continued failure to support the temporal dimension, and especially dynamic processes and models with feedbacks; (v) the challenge of embracing a wider and more heterogeneous view of geographical representation and analysis; and (vi) the urgent need to foster an active software development community to redress some of these shortcomings. A brief discussion then summarizes the issues and suggests that GIScience needs to work harder as a community to become more relevant to the broader geographic field and meet a bigger set of representation, analysis, and modelling needs.
Good food for thought at the beginning of the year, even though I do not agree with all of his points. There is currently a lot of progress being made on some of the problems he mentions (such as GeoMesa addressing scalability, or the NSF funding the Geospatial Software Institute, to name just two examples). I also don’t think (or at least hope) that it is a prevalent position in our field that anything that doesn’t fit on a desktop computer is some other community’s problem.
One point he raises about software platforms really resonates with me, though, since this is something I have been thinking about a lot recently:
Personally, this has driven me to use R, Python, and PostGIS for almost any kind of work, but I wonder whether that is a viable solution for everyone. Or are the GISystems he talks about more like classical, GUI-driven GIS packages that can be used without programming skills?
Here’s the submission system of a computer science (!) journal that thinks it is more accurate to have me manually type in my email address rather than fill it in automatically. Probably because of all the typos introduced by copying and pasting. D’oh.
Tag clouds are getting a bit old, and they are definitely not the most scientific way to visually summarize text contents. They are still a quick-and-dirty way of getting an overview of the language used in a text, though. In this case, I created tag clouds from the titles of accepted papers and posters at the most recent iterations of five big conferences in the field (GIScience 2014, AGILE 2014, COSIT 2013, ACM SIGSPATIAL GIS 2013, and Geocomputation 2013). In the case of AGILE and GIScience, the meetings are yet to be held, but the accepted papers have already been listed (only full papers so far in the case of GIScience).
I made these for a lecture about current hot research topics in GIScience, to give the students a quick overview of what the different sub-communities are working on. What I did not quite expect is how well the tag clouds reflect my perception of the different conferences – e.g., AGILE and GIScience are a bit more on the applied side, COSIT is heavy on theory, and SIGSPATIAL focuses on computational issues. Also, the only big topic they all have in common is spatial data. I would have expected a few more terms to pop out prominently at all five conferences, but apparently they all cater to distinct sub-communities.
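For illustration, here is one hypothetical way such a tag cloud could be built in R (the post does not say which tool was actually used): tokenize the titles, drop very short tokens, count word frequencies, and hand them to the wordcloud package. The sample titles here are made up.

```r
# Hypothetical sketch: build a tag cloud from paper titles in R.
# The two sample titles below are invented for illustration.
titles <- c(
  "Spatial data quality in volunteered geographic information",
  "A framework for spatial data analysis on the web"
)

# Tokenize on non-letter characters and drop very short tokens
words <- unlist(strsplit(tolower(titles), "[^a-z]+"))
words <- words[nchar(words) > 3]            # crude stop-word filter
freq  <- sort(table(words), decreasing = TRUE)

# Render the cloud if the wordcloud package is available
if (requireNamespace("wordcloud", quietly = TRUE)) {
  wordcloud::wordcloud(names(freq), as.numeric(freq), max.words = 100)
}
```

Word size in the resulting cloud is proportional to frequency, so terms shared across many titles (like "spatial data" in the conference clouds above) dominate visually.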
Update May 19, 2014: Thanks to Paul for noticing that the COSIT tag cloud had some author names in it. I have replaced the image and also uploaded the text files I have used to create the tag clouds.