After building my first pipeline with StreamSets Data Collector, I wanted to give the framework more of a workout. I’ve spent a lot of time working with JSON data over the past few years, and the biggest, baddest JSON data set I can easily get hold of is a 181MB file containing the address and coordinates of all 206,560 city lots in San Francisco: not just each lot’s location, but the vertices of the polygon(s) that define it. SF OpenData, the City and County of San Francisco’s official open data portal, makes this data set available in shapefile format, which Mirco Zeiss, a consultant at CGI in Germany, has helpfully converted to JSON and pushed to a GitHub project.
Now, it turns out that there is a standard algorithm for calculating the area of a polygon on the Earth’s surface from its coordinates, so I set myself a challenge: could I use StreamSets and the Cloudera Distribution of Hadoop (CDH) QuickStart VM on this data set to answer the question, “What’s the biggest lot in the City of San Francisco?”
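The post doesn’t spell the algorithm out here, but one widely used approximation is the spherical-excess formula from Chamberlain and Duquette at JPL: sum (λ₂ − λ₁)(2 + sin φ₁ + sin φ₂) over the polygon’s edges and scale by R²/2. A minimal JavaScript sketch along those lines (the function names and the choice of Earth radius are mine, not necessarily what the tutorial uses):

```javascript
// Approximate area (in square meters) of a polygon on the Earth's surface,
// given its vertices as GeoJSON-style [longitude, latitude] pairs in degrees.
// Uses the spherical-excess approximation (Chamberlain & Duquette, JPL).
var EARTH_RADIUS_M = 6378137; // WGS84 equatorial radius

function toRadians(deg) {
  return deg * Math.PI / 180;
}

function ringArea(coords) {
  var total = 0;
  for (var i = 0; i < coords.length; i++) {
    var p1 = coords[i];
    var p2 = coords[(i + 1) % coords.length]; // wrap back to the first vertex
    total += (toRadians(p2[0]) - toRadians(p1[0])) *
             (2 + Math.sin(toRadians(p1[1])) + Math.sin(toRadians(p2[1])));
  }
  return Math.abs(total * EARTH_RADIUS_M * EARTH_RADIUS_M / 2);
}
```

Feeding it the outer ring of a lot’s polygon yields the footprint in square meters; a 0.001° square near the equator comes out at roughly 12,400 m², matching the expected ~111 m per side.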
In about half a day of work, I answered that question and, in the process, learned a whole lot more about StreamSets and the Hadoop ecosystem. You can follow my new tutorial, which walks through parsing the JSON data, processing it with a JavaScript Evaluator, and writing the results to Apache Hive, or you can simply skip to the end and discover the answer: what’s the biggest lot in the City of San Francisco?
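For context, a StreamSets Data Collector JavaScript Evaluator script receives each batch as a `records` array and passes records on via `output.write()` (or `error.write()` on failure). Here is a rough sketch of how the area calculation might be wired into that scaffold; the field paths (`geometry.coordinates`, `area_m2`) are assumptions for illustration, not the tutorial’s actual names:

```javascript
for (var i = 0; i < records.length; i++) {
  try {
    // Assumed layout: each record carries a GeoJSON-style polygon,
    // with coordinates[0] holding the outer ring of the lot.
    var ring = records[i].value.geometry.coordinates[0];

    // Attach the computed area (ringArea from the sketch above).
    records[i].value.area_m2 = ringArea(ring);

    output.write(records[i]);
  } catch (e) {
    // Route records that fail processing to the error stream.
    error.write(records[i], e);
  }
}
```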