Handy tool if you want to cite a book but are too lazy to put together the BibTeX entry yourself. To comply with the Amazon API that it uses to generate the BibTeX code, the entry includes the link to the book on Amazon, but that’s easy enough to remove.
I almost forgot to mention that our group finally has a proper website.
- Carsten Keßler and Peter J. Marcotullio (2017) A Geosimulation for the Future Spatial Distribution of the Global Population. Short paper, AGILE 2017, Wageningen, The Netherlands.
The presentation is scheduled for Wednesday at 12:00 PM in the SOCIETAL-1 session in room 4.
This should be a fun workshop:
Knowledge graphs, i.e., semantically annotated and interlinked raw data made available on the Web, have taken information technologies by storm. Today such knowledge graphs power search engines, intelligent personal assistants, and cyber-infrastructures. For instance, the publicly available part of the Semantic Web-based Linked Data cloud contains more than 150 billion triples distributed over 10,000 datasets and connected to one another by millions of links. Geographic data play a significant role in this cloud and in knowledge graphs in general, as places function as central nexuses that connect people, events, and physical objects. Consequently, geo-data sources are among the most central and densely interlinked hubs. Beyond their sheer size, the diversity of these data and their inter-linkage are of major value, as they enable a more holistic perspective on complex scientific and social questions that cannot be answered from a single domain’s perspective. Hence, knowledge graphs such as those implemented using the Linked Data paradigm bear the potential to address many fundamental challenges of geoinformatics.
In this workshop we will discuss various aspects of geo-knowledge graphs ranging from their extraction and construction from unstructured or semi-structured data, issues of data fusion, conflation, and summarization, geo-ontologies, to query paradigms and user interfaces. By focusing explicitly on geo-knowledge graphs in general, we aim at broadening the focus beyond the Semantic Web technology stack and thus also beyond RDF-based Linked Data.
I am currently working a lot with large GeoTIFFs in Python and use Pillow to read them in, then convert them to NumPy arrays for processing. Every now and then, Pillow throws the following error, which I’ve seen on several computers running OS X now:

```
TIFFReadDirectory: Warning, Unknown field with tag 42113 (0xa481) encountered.
Segmentation fault: 11
```
Since it always takes me a while to figure out how to fix this, here’s a short note to self, maybe also useful to someone else out there:
- Uninstall Pillow:

```
$ pip uninstall pillow
```

- Install the dependencies for building Pillow from source:

```
$ brew install libtiff libjpeg webp little-cms2
```

- Download the Pillow source from PyPI.
- Unpack and change into the folder with the source code, then build and install via:

```
$ python setup.py install
```
This has always fixed the problem for me so far. I don’t know whether building from source rather than simply running `pip install pillow` will also fix this problem on other operating systems, but it’s worth a shot if you hit that error.
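For reference, the read-and-convert workflow mentioned above can be sketched as follows (a minimal example; it writes a tiny sample TIFF as a stand-in for a large GeoTIFF, since the actual file paths depend on your data):

```python
from PIL import Image
import numpy as np

# Create a small sample TIFF as a stand-in for a large GeoTIFF
Image.fromarray(np.zeros((3, 4), dtype=np.uint8)).save("sample.tif")

# The workflow from the post: read with Pillow, then convert to a NumPy array
img = Image.open("sample.tif")
arr = np.asarray(img)
print(arr.shape, arr.dtype)  # (3, 4) uint8
```

Note that `np.asarray` gives you a read-only view in recent Pillow versions; use `np.array(img)` if you need a writable copy for processing.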
The world is awash in bullshit. Politicians are unconstrained by facts. Science is conducted by press release. So-called higher education often rewards bullshit over analytic thought. Startup culture has elevated bullshit to high art. Advertisers wink conspiratorially and invite us to join them in seeing through all the bullshit, then take advantage of our lowered guard to bombard us with second-order bullshit. The majority of administrative activity, whether in private business or the public sphere, often seems to be little more than a sophisticated exercise in the combinatorial reassembly of bullshit.
We’re sick of it. It’s time to do something, and as educators, one constructive thing we know how to do is to teach people. So, the aim of this course is to help students navigate the bullshit-rich modern environment by identifying bullshit, seeing through it, and combatting it with effective analysis and argument.
I would definitely have taken a course that has Harry G. Frankfurt’s On Bullshit as its first reading. Let’s hope they get the university administration to approve it.
Interesting new workshop to take place at AGILE 2017. I like how they break from the usual submission workflow: In order to submit, you should fork their GitHub repo, add your submission file to the fork, and then send a pull request. If that’s too much hassle, you can also email your submission to Daniel Nüst, who is chairing the workshop.
Papers should be under 1,000 words, and the deadline is March 19.
If you have a few minutes to spare (or need some cheering up after one of your papers has been rejected), here’s a nice read:
Guillaume Cabanac (2015) Unconventional academic writing.
Cabanac wrote this as an addendum to Hartley’s Academic writing and publishing: A practical handbook (2008), and as a present for Hartley’s 75th birthday. It contains lots of unusual – and very funny – titles, papers, and figures, all of which have been published in academic journals. My favorite may be this one-page paper on writer’s block:
Hat tip to Viola Voß for the pointer.
Getting the highest (or lowest) value from a database column is a bit tricky if you cannot use `GROUP BY`, because it requires you to aggregate across all columns that you want in the result. Say you have a table of employees with the columns `name`, `salary`, and `department`, and you want to know the highest-paid employee per department. Then `GROUP BY` is not an option, because you would also need to aggregate by `name` to have the `name` in the output, which doesn’t make sense. PostgreSQL’s `DISTINCT ON` does the trick instead:

```sql
SELECT DISTINCT ON (department) name, salary, department
FROM salaries_table
ORDER BY department, salary DESC;
```
So we’ll only get one entry per department, and `ORDER BY department, salary DESC` makes sure it is the one with the highest salary. The only bummer is that `DISTINCT ON` is a PostgreSQL-specific clause, so it won’t work on other DBMSs.
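On other DBMSs, a window function usually gets you the same result. Here is a small sketch using SQLite from Python (table name and sample rows are made up for illustration; `ROW_NUMBER()` requires SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE salaries_table (name TEXT, salary INTEGER, department TEXT);
INSERT INTO salaries_table VALUES
  ('Alice', 90000, 'IT'),  ('Bob', 70000, 'IT'),
  ('Carol', 80000, 'HR'),  ('Dave', 60000, 'HR');
""")

# Rank employees within each department by salary, then keep the top one
rows = conn.execute("""
SELECT name, salary, department FROM (
  SELECT name, salary, department,
         ROW_NUMBER() OVER (PARTITION BY department
                            ORDER BY salary DESC) AS rn
  FROM salaries_table
)
WHERE rn = 1
ORDER BY department;
""").fetchall()
print(rows)  # [('Carol', 80000, 'HR'), ('Alice', 90000, 'IT')]
```

The same `ROW_NUMBER() OVER (PARTITION BY …)` pattern works on PostgreSQL, MySQL 8+, SQL Server, and Oracle.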
In this great article, Lisa Charlotte Rost gives you a crash course on the use of color in data visualisation (and mapping, for that matter). It covers some theory, lots of useful links to classics such as ColorBrewer, and less-known tools such as this awesome R library that provides color schemes based on Wes Anderson movies (yes, seriously).