I’m currently running some analyses on a virtual machine in the cloud, and it turns out that there is a really neat way to access Jupyter Notebooks remotely without installing JupyterHub. So if you (like me) just run the notebooks for yourself and don’t need multi-user support and the like, you can simply SSH into your remote machine (replacing username and host with your own):
ssh -L 9999:localhost:9999 username@host
Enter your password for the remote machine when prompted and, once logged in, start a notebook server on port 9999, which the SSH tunnel forwards to port 9999 on your local machine:
jupyter notebook --port 9999 --no-browser
Starting the notebook server this way will make it show a URL that you simply paste into your local browser, et voilà – a Jupyter Notebook with code executed in the cloud.
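As a sketch, the two steps above can also be combined into a single command that opens the tunnel and starts the server on the remote machine in one go (username and host are placeholders here, and this assumes Jupyter is installed on the remote machine):

```shell
# Forward local port 9999 to port 9999 on the remote machine and
# start the notebook server there in a single step.
# Replace 'username' and 'host' with your own credentials.
ssh -L 9999:localhost:9999 username@host \
    "jupyter notebook --port 9999 --no-browser"

# The server prints a URL (possibly including an authentication
# token) that you paste into your local browser, e.g.:
#   http://localhost:9999/...
```

When the SSH session ends, the tunnel and the notebook server both shut down, so keep the terminal open while you work.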
I’ll co-organize a workshop on Location Privacy and Security at GIScience 2018 in Melbourne this coming August. Details below – feel free to share widely and submit a paper, of course!
More Information: https://ptal-io.github.io/lopas2018/
Description: Location privacy has been a topic of research for many years but has recently seen a resurgence in interest. This renewed interest is driven by recent advances in location-enabled devices, sensors, context-aware technology, and the broader Internet of Things (IoT). The data generated via these devices are being collected, analyzed, and synthesized at an unprecedented rate. While much of these data are used in the advancement of products or services, many individuals are unaware of the information that is being collected or how it is being used. The information extracted from these personal data has contributed to significant advances in domains such as location recommendation or fitness/health services, but these advances often come at the cost of location privacy. This workshop aims to facilitate a discussion of current methods and techniques related to location privacy, as well as the social and political implications of sharing or preserving location privacy. Further, this workshop invites contributions and discussions related to methods and techniques for securing location information and preserving the privacy of geospatial data.
We invite two types of submissions for this workshop:
- Novel research contributions (6-8 pages)
- Vision / work-in-progress papers (3-6 pages)
Also note that all registered workshop participants will be invited to give a 5-minute, ignite-style lightning presentation on a subject related to the workshop topic.
Topics of Interest include, but are not limited to:
- Context-aware mobile applications
- Obfuscation techniques
- Educational approaches to location privacy
- Policy implications of personal location information
- Role of location in personal relationship development
- Geosocial media implications
- Credibility, trust, and expertise related to location information
- Tools and systems for preserving or securing private information
- Techniques for sharing private location information
- Methods for securing location information
- Place-based data privacy
- Individual vs. group privacy preservation
- Gamification techniques
- Next-generation location-based services
- Marketplaces for location data
Important dates:
- Submissions Due: 4 June, 2018
- Acceptance Notification: 2 July, 2018
- Camera-ready Copies Due: 16 July, 2018
- Workshop: 28 August, 2018
Organizers:
- Grant McKenzie, University of Maryland, College Park, USA
- Carsten Keßler, Aalborg University, Copenhagen, Denmark
- Clio Andris, Penn State University, State College, USA
Very cool interactive demo of Myriahedral Projections:
Myriahedral Projections combine map projection and origami techniques to provide maps without area or angle distortion (at the expense of many interruptions).
The technique was first proposed by Jarke J. van Wijk in a 2008 article in The Cartographic Journal.
We have a full paper accepted for AGILE 2018, which summarizes the results of our little online experiment about visualizations of uncertainty:
Here’s the preprint, and the abstract: Effectively communicating the uncertainty that is inherent in any kind of geographic information remains a challenge. This paper investigates the efficacy of animation as a visual variable to represent positional uncertainty in a web mapping context. More specifically, two different kinds of animation (a ‘bouncing’ and a ‘rubberband’ effect) have been compared to two static visual variables (symbol size and transparency), as well as different combinations of those variables in an online experiment with 163 participants. The participants’ task was to identify the most and least uncertain point objects in a series of web maps. The results indicate that the use of animation to represent uncertainty imposes a learning step on the participants, which is reflected in longer response times. However, once the participants got used to the animations, they were both more consistent and slightly faster in solving the tasks, especially when the animation was combined with a second visual variable. According to the test results, animation is also particularly well suited to represent positional uncertainty, as more participants interpreted the animated visualizations correctly, compared to the static visualizations using symbol size and transparency. Somewhat contradictory to those results, the participants showed a clear preference for those static visualizations.
I’m probably late to the game, but since I’ve been working a lot with R lately, I finally got around to giving the Simple Features for R package a proper shot. And boy, should I have done that earlier. If you use R with spatial data and haven’t checked it out yet, please do. Here’s a brief list of my favorite features (pun intended):
- Much faster reading and writing of data
- No more clumsy handling of attribute data – instead of mylayer@data$attribute, just go straight to mylayer$attribute
- If you work with PostGIS a lot, you’ll feel right at home with the spatial operators (and they automatically use spatial indexes!)
- Mapping with ggplot2 is much more intuitive than before, using the geom_sf function.
- The automatic faceted plots by attribute when you use the plain plot function are also pretty cool.
- If you need to run functions that don’t work on simple feature collections, it is super easy to convert them to a data frame (my_df <- as.data.frame(my_sf)), run the function, and convert them back (my_sf <- st_as_sf(my_df)) – geometries, CRS, etc. are picked up automatically.
I’m sure I’m missing more great stuff, but this is just a first impression after a day of work with sf. Overall, working with spatial data in R feels much more natural with sf, with less extra code and fewer special cases than before. Kudos to Edzer and the other contributors for this one!
Adam Wilson at SUNY Buffalo has the whole material for his course R for Spatial Data Science online. The whole course is really well done, but I found the Introduction to Parallel Computing with R particularly useful. Bookmarked.
Very nice paper by Mark Gahegan. Here’s the abstract:
GIScience and GISystems have been successful in tackling many geographical problems over the last 30 years. But technologies and associated theory can become limiting if they end up defining how we see the world and what we believe are worthy and tractable research problems. This paper explores some of the limitations currently impacting GISystems and GIScience from the perspective of technology and community, contrasting GIScience with other informatics communities and their practices. It explores several themes: (i) GIScience and the informatics revolution; (ii) the lack of a community-owned innovation platform for GIScience research; (iii) the computational limitations imposed by desktop computing and the inability to scale up analysis; (iv) the continued failure to support the temporal dimension, and especially dynamic processes and models with feedbacks; (v) the challenge of embracing a wider and more heterogeneous view of geographical representation and analysis; and (vi) the urgent need to foster an active software development community to redress some of these shortcomings. A brief discussion then summarizes the issues and suggests that GIScience needs to work harder as a community to become more relevant to the broader geographic field and meet a bigger set of representation, analysis, and modelling needs.
Good food for thought at the beginning of the year, even though I do not agree with all of his points. There is currently a lot of progress being made concerning some of the problems he mentions (such as GeoMesa addressing scalability, or the NSF funding the Geospatial Software Institute, to name just two examples). I also don’t think (or at least hope?) that it is a prevalent position in our field that if it doesn’t fit on a desktop computer, it is some other community that should deal with it.
One point he raises about software platforms really resonates with me, though, since this is something I have been thinking about a lot recently.
Personally, this has driven me to use R, Python, and PostGIS for almost any kind of work, but I’m wondering if that is a viable solution for everyone? Or are the GISystems he talks about more like classical, GUI-driven GIS that can be used without programming skills?
For the fifth day of Cards Against Humanity Saves America, we used your money to fund one year of monthly public opinion polls. We’ll ask the American people about their social and political views, what they think of the president, and their pee-pee habits.
In fact, we secretly started polling three months ago. What a delightful surprise!
To conduct our polls in a scientifically rigorous manner, we’ve partnered with Survey Sampling International — a professional research firm — to contact a nationally representative sample of the American public. For the first three polls, we interrupted people’s dinners on both their cell phones and landlines, and a total of about 3,000 adults didn’t hang up immediately. We examined the data for statistically significant correlations, and boy did we find some stuff.
Hilarious. I think I’m going to use their data in class some time. Too bad it doesn’t include respondents’ location.
I recently came across O’Reilly’s R for Data Science by Hadley Wickham and Garrett Grolemund. From skimming some of the chapters, it is a very easily digestible intro to R, and it also covers topics such as cleaning up data (something most books assume happens automagically). It doesn’t go very deep into R’s statistical capabilities, though.
Anyway, it turns out they actually have the full book online for free at http://r4ds.had.co.nz.
The privilege that techies have enjoyed for years is starting to erode. It’s taking them some time to see what other people are seeing, but if VCs, media critics, and people adjacent to the industry are starting to get it, then it’s time to make a change. Right?
Very good read over at Wired.