Measuring continental drift with your phone →

So instead of leaving your phone to do nothing overnight, you could leave it to record 8 hours of drift data. We’d anonymize it and record drift information just for the nearest 100-mile square or something, so we don’t know where your house is. Then we could aggregate that data with other phones across the world and see if we get something that looks accurate out of it.
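
A quick back-of-the-envelope check (my numbers, not the article’s): tectonic plates move a few centimeters per year, while a single phone GPS fix is only good to roughly 5 meters. With one fix per second over 8 hours, that is N = 28,800 samples, and if the errors were independent, averaging would shrink the uncertainty to

\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{N}} \approx \frac{5\,\mathrm{m}}{\sqrt{28{,}800}} \approx 3\,\mathrm{cm}

which is already in the ballpark of a year’s worth of drift. In practice GPS errors are strongly correlated (atmosphere, multipath), so the real question is whether aggregating across many phones and many nights can beat that correlation.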

I have no idea whether that works, but it sure is a damn cool idea. Go to m.opendrift.org on your phone to participate.

Bayes’ Theorem explained with Lego →

What’s a good blog on probability without a post on Bayes’ Theorem? Bayes’ Theorem is one of those mathematical ideas that is simultaneously simple and demanding. Its fundamental aim is to formalize how information about one event can give us understanding of another. Let’s start with the formula and some Lego, then see where it takes us.
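
For reference, the formula the post builds up to is the standard statement of Bayes’ Theorem:

P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}

That is, what we should believe about A after observing B, expressed through how likely B is if A holds and how plausible A was to begin with.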

There should be more explanations of mathematical ideas that involve Lego.

Developing and testing SPARQL queries with cURL

There are tons of online SPARQL editors out there, but they often lack some specific functionality. The most common shortcoming is that they cannot query an endpoint you may have running on your own machine during development. The SPARQL forms that come with most triple stores, however, are very bare bones, to say the least. Plus, I don’t really like switching back and forth to a web browser when I’m working on a piece of code.

What I do instead is write my queries in a text editor with syntax highlighting and then shoot them over to the endpoint via cURL on the command line:

curl -i -H "Accept: text/csv" --data-urlencode query@query.sparql http://example.com/sparql

This will take the file query.sparql, send its contents to http://example.com/sparql (with query as the parameter name), and show the results as comma-separated values. Obviously, this is no magic; I just keep forgetting the exact parameters, so I thought I might as well document them here.
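
For completeness, here is a minimal query.sparql to test the round trip; it is a generic query that should run against any endpoint, with nothing specific to a real dataset:

# query.sparql: fetch ten arbitrary triples to verify the endpoint responds
SELECT ?s ?p ?o
WHERE {
  ?s ?p ?o .
}
LIMIT 10

Since the Accept header asks for text/csv, a SELECT query like this comes back as a plain CSV table with one column per variable, which also works nicely against an endpoint on localhost.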

If you are using Sublime Text as your text editor, there is also the Sublime SPARQL Runner package. It does exactly the same thing and opens the results in a new text file right in Sublime. I’ve only tested the package briefly, but it seems to do what it says on the tin.

Papers accepted for AGILE and AAAI Spring Symposium

I have two new papers accepted, one for AGILE 2015 in Lisbon, and one for the AAAI Spring Symposium 2015 on Structured Data for Humanitarian Technologies in Stanford. The latter was a collaboration with Tim Clark and Hemant Purohit. Find the preliminary citations below; click the title for a preprint PDF:

  • Carsten Keßler (forthcoming 2015) Central Places in Wikipedia. Accepted for the 18th AGILE Conference on Geographic Information Science: Geographic Information Science as an enabler of smarter cities and communities. June 9–12, 2015, Lisbon, Portugal

Abstract Central Place Theory explains the number and locations of cities, towns, and villages based on principles of market areas, transportation, and socio-political interactions between settlements. It assumes a hexagonal segmentation of space, where every central place is surrounded by six lower-order settlements in its range, to which it caters its goods and services. In reality, this ideal hexagonal model is often skewed based on varying population densities, locations of natural features and resources, and other factors. In this paper, we propose an approach that extracts the structure around a central place and its range from the link structure on the Web. Using a corpus of georeferenced documents from the English language edition of Wikipedia, we combine weighted links between places and semantic annotations to compute the convex hull of a central place, marking its range. We compare the results obtained to the structures predicted by Central Place Theory, demonstrating that the Web and its hyperlink structure can indeed be used to infer spatial structures in the real world. We demonstrate our approach for the four largest metropolitan areas in the United States, namely New York City, Los Angeles, Chicago, and Houston.

Abstract Given the rise of humanitarian crises in recent years, and the adoption of multiple data sharing platforms in offline and online environments, it is increasingly challenging to collect, organize, clean, integrate, and analyze data in the humanitarian domain. On the other hand, computer science has built efficient technologies to store, integrate, and analyze structured data; however, their role in the humanitarian domain is yet to be shown. We present a case of how structured data technology, specifically Linked Open Data from the Semantic Web area, can be applied for information interoperability in the humanitarian domain. We present the domain-specific challenges and a description of the technology adoption via an example of real-world adoption of the Humanitarian Exchange Language (HXL) ontology, and describe the lessons learned from that to build the case of why, how, and which components of these technologies can be effective for information organization and interoperability in the humanitarian domain.

CFP: Workshop on Semantics and Analytics for Emergency Response (SAFE2015)

When: May 24, 2015
Where: Kristiansand, Norway
Co-located with the 12th International Conference on Information Systems for Crisis Response and Management (ISCRAM2015)
Workshop URI: http://linkedscience.org/events/safe2015/
Submission Deadline: February 9, 2015, 23:59 Hawaii Time
Submissions via: http://www.easychair.org/conferences/?conf=safe2015
Notifications: March 2, 2015

Emergencies require massive coordinated efforts from various departments, government organizations, and public bodies to assist affected communities. Responders must rapidly gather information, determine where to deploy resources, and make prioritization decisions regarding how best to deal with an evolving situation. Sharing accurate, real-time, and contextual information between different agencies, organizations, and individuals is therefore crucial to developing good situation awareness for emergency responders. However, with the involvement of multiple organizations and agencies, each with their own response protocols, knowledge practices, and knowledge representations, sharing critical information is considerably more difficult.

Applying semantic technologies to represent information can provide excellent means for effectively sharing and using data across different organizations. Using highly structured, self-descriptive pieces of information interlinked with multiple data resources can help develop a unified and accurate understanding of an evolving scenario. This provides an excellent framework for developing applications and technologies that are highly generic, reproducible, and extensible to different regions, conditions, and scenarios.

In addition, semantic descriptions of data can enable new forms of analyses, such as checking for inconsistencies, verifying developments against planned scenarios, or discovering semantically meaningful patterns in the data. Such analytics can be performed either in real time as the scenario unfolds, e.g., through semantic stream processing and event detection techniques, or as an after-action analysis to learn from past events.

SAFE2015 targets the intersection of the Semantic Web and Linked Data with the field of information systems for Emergency Response. The focus is on the use of semantic technologies to gather, share, and integrate knowledge, as well as on supporting novel methods for analyzing such information, in order to provide better situation awareness, decision support, and potential for after-action reviews. This full-day workshop will be highly interactive, including presentations, demos, poster discussions, group work sessions, and road-mapping activities. We invite submissions in the form of research papers, demonstrations, and poster papers related to the workshop topics listed below.

Workshop topics include, but are not limited to:

  • Semantic Annotation and Mining, for understanding the content and context of both static sources and streaming data, such as social media streams.
  • Integration of unstructured or semi-structured data with Linked Data.
  • Interactive Interfaces and visual analytics methodologies for managing multiple large-scale, dynamic, evolving datasets, while exploiting their underlying semantics.
  • Vocabularies, ontologies and ontology design patterns for modelling, managing, sharing and analysing information in the Security and Emergency Response domains.
  • Stream reasoning and event detection over RDF streams.
  • Collaborative tools and services for citizens, organisations, communities, which exploit semantic technologies, and/or produce semantically well-specified information, such as Linked Data.
  • Privacy, ethics, trustworthiness and legal issues in the social Semantic Web and the use of semantic technologies, such as Linked Data.
  • Use case analysis, with specific interest for use cases that involve the application of semantic technologies and Linked Data methodologies in real-life scenarios.

Submissions

The workshop welcomes submissions describing novel research, both verified results and work in progress, as well as system demonstrations.

Submission categories:

  • Full research papers, up to 10 pages.
  • Position papers, up to 5 pages.
  • Demos & Posters, up to 4 pages.

Paper submissions must be formatted in the Springer LNCS style. Submissions are made via EasyChair (http://www.easychair.org/conferences/?conf=safe2015). Papers will be published as online proceedings, e.g. via CEUR-WS.

Workshop organizers

Eva Blomqvist, Linköping University, Sweden
Tomi Kauppinen, Aalto University, Finland
Vita Lanfranchi, University of Sheffield, United Kingdom
Carsten Keßler, Hunter College–CUNY, USA
Suvodeep Mazumdar, University of Sheffield, United Kingdom