CAGIS 2014 Keynote

I’ll be giving a keynote at the First International Workshop on Context-Awareness in Geographic Information Services (CAGIS 2014) at GIScience in Vienna next month (on September 23rd, to be more specific). The title of the talk will be Research in the Age of the Context Machine; here’s the abstract:

One of the major challenges in the development of context-aware applications has always been the initial step of collecting enough information about a user’s context. With the increasing prevalence of smartphones equipped with a plethora of sensors, more and more users have a context machine on them that constantly collects, uses, and transmits different kinds of passively collected contextual information. Additionally, many users actively provide contextual information by participating in online social networks. This talk will shed some light on the implications of these developments for research on context awareness. Starting with a brief review of the history of research in context awareness, it will discuss the role of research conducted in industry in this field, upcoming research challenges, and implications for user privacy.

Looking forward to seeing everyone in Vienna in a few weeks!

NYC Hurricane Evacuation Zones Map

[Screenshot: NYC hurricane evacuation zones map]

This Leaflet map of the NYC hurricane evacuation zones is one of the small projects we completed during our training course on Free and Open Source GIS at Hunter College last week. Carson (who prepared this example – credit where credit is due!) and I had an awesome crowd and a really great time teaching this class.

Note that I have simplified the actual zone shapes a little bit to make the GeoJSON file more digestible, so please use the NYC Evacuation Zone Finder in case of an actual storm.

The Kardashian index: a measure of discrepant social media profile for scientists →

In the era of social media there are now many different ways that a scientist can build their public profile; the publication of high-quality scientific papers being just one. While social media is a valuable tool for outreach and the sharing of ideas, there is a danger that this form of communication is gaining too high a value and that we are losing sight of key metrics of scientific value, such as citation indices. To help quantify this, I propose the ‘Kardashian Index’, a measure of discrepancy between a scientist’s social media profile and publication record based on the direct comparison of numbers of citations and Twitter followers.

The K-Index is hilarious and potentially very useful at the same time.

Virtuoso Open Source on Mac OS

I’ve played around with different triple stores over the past few years, but somehow never got around to trying Virtuoso. So I thought I’d give it a shot and, while I’m at it, document the installation steps. The commercial edition of Virtuoso comes with a simple app that does all the dirty work for you; the open source edition does not – but this is not a huge issue unless a terminal window makes you want to hide under the table and cry.

Here we go.

Installation

  1. The Virtuoso Open Source GitHub page has a tutorial for building the application from scratch, but we are going to take the easy path. If you don’t have Homebrew installed yet, go ahead and install it: open a terminal window and run
    ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
  2. Homebrew is a package manager for Mac OS and provides builds for many popular (and not so popular) open source software packages. Homebrew makes installing Virtuoso as easy as
    brew install virtuoso
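In case you want to double-check which version Homebrew installed and where it put the files (we will need that location in a minute), Homebrew can tell you. A quick sketch, assuming a default Homebrew setup:

# Show the installed Virtuoso version and where it lives under the Homebrew prefix
brew info virtuoso

# List the installed files and pick out the sample configuration file
brew list virtuoso | grep virtuoso.ini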

Starting Virtuoso

  1. After Homebrew has installed all dependencies and Virtuoso itself, we are ready to go. Virtuoso needs a virtuoso.ini file that contains all settings for the server. Our installation comes with a sample .ini file, located in /usr/local/Cellar/virtuoso/7.1.0/var/lib/virtuoso/db/ (this path is for Virtuoso 7.1.0; the location will most likely differ for other versions – you can find that directory by running locate virtuoso.ini). Change into that directory and start Virtuoso by running the following two commands:
    cd /usr/local/Cellar/virtuoso/7.1.0/var/lib/virtuoso/db/
    virtuoso-t -f
  2. Voilà. You should now be able to access the Virtuoso frontend at http://localhost:8890.
  3. You’ll find a link to the Conductor, the Virtuoso admin interface, at the top left of the page. The standard installation comes with a bunch of preset usernames and passwords, so we can simply log in with
    • User: dba
    • Password: dba
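If you want a quick sanity check from the terminal: by default, the web interface listens on port 8890 and the SQL port used by isql on port 1111. A small sketch to verify the server is actually up:

# The web interface should answer with an HTTP header
curl -I http://localhost:8890/

# The SQL port (used by isql below) should show up as LISTEN
lsof -iTCP:1111 -sTCP:LISTEN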

Loading data into Virtuoso

An empty triple store is not much fun, so let’s load some data into Virtuoso. There are several ways to do that; I’ll describe one that works well for large files, such as the GeoNames dump in N-Triples format I’m loading here.

  1. In order to load a local RDF file into Virtuoso, it needs access to the directory holding that file. The clean solution is to add that directory to the DirsAllowed setting in virtuoso.ini and restart Virtuoso (see the sketch after this list). I’ll take the quick and dirty approach here and simply move the file from my Downloads folder to the folder holding our virtuoso.ini by running
    mv ~/Downloads/geonames.nt /usr/local/Cellar/virtuoso/7.1.0/var/lib/virtuoso/db/geonames.nt
  2. Next, we’ll start an SQL prompt by running
    isql

    You should see a new prompt now: SQL>

  3. At this SQL prompt, enter the following command to load the data into Virtuoso:
    DB.DBA.TTLP_MT (file_to_string_output ('./geonames.nt'), '', 'http://mytest.com');

    The last parameter is the URI of the graph we load the data into. It does not really matter what you put there, as long as it’s a valid URI.
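For reference, the clean solution mentioned in step 1 is a one-line change in virtuoso.ini. A sketch of the relevant entry, assuming the file you want to load lives in your Downloads folder (/Users/yourname/Downloads is a placeholder, and the directories already listed in your DirsAllowed line will look different – keep them and just append your own):

[Parameters]
; append the directory holding your RDF file, then restart Virtuoso
DirsAllowed = ., /Users/yourname/Downloads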

If you’re trying to load a big file, the bulk loader functions are much faster (see the sketch below). You should also increase the NumberOfBuffers and MaxDirtyBuffers settings in virtuoso.ini to allow Virtuoso to use more RAM.
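As a sketch of what the bulk loader looks like, run the following at the SQL> prompt from above (the directory and graph URI are the ones used in this example; ld_dir registers all matching files for loading, rdf_loader_run does the actual work):

-- register all N-Triples files in the (allowed) directory for loading into our graph
ld_dir ('/usr/local/Cellar/virtuoso/7.1.0/var/lib/virtuoso/db', '*.nt', 'http://mytest.com');

-- run the loader; for very large dumps it can be started in several isql sessions in parallel
rdf_loader_run ();

-- write the loaded data to disk
checkpoint;

As for the buffer settings, the sample virtuoso.ini typically ships with commented-out NumberOfBuffers and MaxDirtyBuffers presets for different amounts of RAM, so the easiest option is to uncomment the pair that matches your machine and restart Virtuoso.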

Querying Virtuoso

While we are waiting for Virtuoso to finish loading our dataset (which may take a while if you are loading a big dump like GeoNames), we can already run our first queries on the triples loaded so far. Go to http://localhost:8890/sparql and run a test query against the graph you are loading the data into (http://mytest.com in my example above):

SELECT * FROM <http://mytest.com> WHERE {
  ?a ?b ?c
} LIMIT 10

This should return ten triples from the dataset you are loading.
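If you want to watch the load progress, a simple count over the target graph works as well; a small sketch using the same graph URI as above:

SELECT (COUNT(*) AS ?triples)
FROM <http://mytest.com>
WHERE { ?s ?p ?o }

Re-running this query while the loader is busy should show the number of triples growing.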

Shutting down Virtuoso

Shutting down Virtuoso correctly (rather than just killing the process) is important, because otherwise you may have trouble starting it next time. For a clean shutdown, open an SQL prompt (see above) and run

SHUTDOWN;
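If you prefer a one-liner from the shell over an interactive prompt, isql can also take the command directly; a sketch, assuming the default SQL port and the dba/dba credentials from above:

# flush pending changes to disk and shut the server down cleanly
isql 1111 dba dba exec="checkpoint; shutdown;"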

AGILE 2014: Best paper award and some slides

Our paper on Encoding and querying historic map content won the best paper award at AGILE 2014 in Castellón, Spain. Thanks to Simon, Jim and Alber for the great work!

These are the slides for the talks presenting our three papers.


  • Encoding and querying historic map content (thanks Simon Scheider!)
  • Making the Web of Data Available via Web Feature Services (thanks Jim Jones!)
  • Geo-Information Visualizations of Linked Data (thanks Rob Lemmens!)

GeoPrivacy Workshop at ACM SIGSPATIAL

I will be jointly organizing a workshop on GeoPrivacy at ACM SIGSPATIAL in Dallas this fall, together with Grant McKenzie (UC Santa Barbara) and Lars Kulik (University of Melbourne).

CALL FOR PAPERS

GeoPrivacy: 1st Workshop on Privacy in Geographic Information Collection and Analysis

In conjunction with ACM SIGSPATIAL 2014

November 4, 2014, Dallas, Texas, USA

Website: http://stko.geog.ucsb.edu/geoprivacy/

Workshop scope

Developments in mobile and surveying technologies over the past decade have enabled the collection of individual-level geographic information at an unprecedented scale. While this large pool of information is extremely valuable for answering scientific questions about human behavior and interaction, privacy intrusion is an imminent risk when detailed individual travel patterns are used for commercial purposes such as customer profiling, or even for political persecution. The GeoPrivacy workshop will hence focus on discussing methods to protect individuals’ privacy in geographic information collection and analysis.

Topics of interest for the workshop include, but are not limited to:

  • Awareness
  • Perception of privacy
  • Obfuscation
  • Methods of privacy-preserving anonymization
  • Geo-credibility, trust and expertise
  • The role of geoprivacy in policy decisions
  • Location Based Services
  • Online Geosocial Networks
  • Geofencing
  • Privacy implications of Big Data
  • Sample, training and test datasets
  • Privacy in near-field communication
  • Abstraction of geo data for privacy preservation
  • Analysis of anonymized datasets
  • Privacy implications of public displays and signage
  • Gamification and geogames

Workshop format

The workshop will be kicked off with an invited keynote (to be announced), followed by presentations of full papers (30 minutes) and extended abstracts (20 minutes). Each session will include plenty of time for questions and discussions to enable an interactive workshop. The afternoon will be dedicated to small breakout groups to work on focused topics that emerge from the presentations in the morning sessions. Such a highly interactive workshop format has great potential to spark a significant number of new ideas for research and future collaborations in the realm of GeoPrivacy.

Submissions

We call for full papers (up to 8 pages) and short papers presenting work in progress and raising discussion points for the workshop (up to 4 pages). Submissions must be original and must not be under review elsewhere. Papers must be formatted using the ACM camera-ready templates available at http://www.acm.org/sigs/pubs/proceed/template.html. All papers must be submitted in PDF format via the online system (the submission link will be added to the website soon).

Acceptance will be based on relevance to the workshop, technical quality, originality, and potential impact, as well as clarity of presentation. All submitted papers will be reviewed by at least 3 referees.

The proceedings of the workshop will appear in the ACM Digital Library. One author per accepted paper is required to register for the workshop and the conference, as well as present the accepted submission to ensure inclusion in the workshop proceedings.

Important dates

  • Paper submission deadline: August 29, 2014
  • Author notification: September 19, 2014
  • Camera-ready papers due: October 10, 2014
  • Workshop date: November 4, 2014

Organizers

Program committee

  • Benjamin Adams, Center for eResearch, University of Auckland, New Zealand
  • Sen Xu Alex, Twitter, San Francisco, USA
  • Matt Duckham, University of Melbourne, Australia
  • Carson Farmer, Hunter College, City University of New York
  • Gabriel Ghinita, University of Massachusetts at Boston, USA
  • Tanzima Hashem, Bangladesh University of Engineering and Technology (BUET), Bangladesh
  • Peter Kiefer, ETH Zurich, Switzerland
  • Marc-Olivier Killijian, LAAS, Centre national de la recherche scientifique, France
  • Edzer Pebesma, Institute for Geoinformatics, University of Münster, Germany
  • Albert Remke, 52°North, Germany
  • Colin Robertson, Wilfrid Laurier University, Waterloo, Canada
  • Erik Wilde, UC Berkeley, USA
  • John Wilson, University of Southern California, USA