May 06 by Mike Reich

This month's - Unlocking jobs data for good

We’re really excited to give America’s jobs report a much-needed upgrade, and today we’re rolling one out.

Before now, there were a few tried and true ways of figuring out the details in each month’s jobs and employment numbers from the Bureau of Labor Statistics.

1. Use an FTP site, download some files, and crunch. You could take a few hours to download all the files from the BLS FTP site, build some graphs of the changes since last month’s report, and hopefully figure out what’s going on with American jobs.

Typically, that looks like this:

Screenshot of the BLS FTP page: the raw way to get data on American jobs.

2. Read the news and hope for good analysis. The headlines give just a piece of the picture. If you’re looking for the percentage of national expenditure on non-food products, though, you either have to find some in-depth analysis or head back to step 1 to do it yourself.

3. Try your luck with the BLS site. The agency’s site offers up a series of forms that allow a user with a lot of background knowledge to poke around and eventually find and compare the data sets they want.
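Option 1 above boils down to parsing the BLS flat files yourself. The time-series files are tab-delimited, with `series_id`, `year`, `period`, and `value` columns. A minimal Python sketch of the "download and crunch" step might look like the following (the inline sample stands in for a real download, and its figures are illustrative, not actual BLS numbers):

```python
# Sketch of the "download some files and crunch" workflow (option 1).
# BLS time-series flat files are tab-delimited: series_id, year, period, value.
# A real run would first fetch a file from the BLS site; here we parse a
# small inline sample so the crunching step is self-contained.

SAMPLE = """series_id\tyear\tperiod\tvalue
CES0000000001\t2013\tM03\t135490
CES0000000001\t2013\tM04\t135655
"""

def parse_series(text):
    """Parse BLS-style flat-file text into (series_id, year, period, value) rows."""
    rows = []
    for line in text.strip().splitlines()[1:]:  # skip the header row
        series_id, year, period, value = [f.strip() for f in line.split("\t")]
        rows.append((series_id, int(year), period, float(value)))
    return rows

def month_over_month_change(rows):
    """Change between the last two observations of a single series."""
    values = [r[3] for r in rows]
    return values[-1] - values[-2]

rows = parse_series(SAMPLE)
print(month_over_month_change(rows))  # change from the first sample month to the second
```

Multiply that by thousands of files and series, and "a few hours" starts to look optimistic.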

We got tired of these options, so we put the Seabourne team on a 24-hour task: build the jobs report dashboard we wanted to see, and prove that while creating new jobs might take hard work, understanding each month’s jobs report shouldn’t have to.

Fixing BLS data and launching with DelRay

Our team decided we could do better, so we gave ourselves a day-long challenge. The result is a simple, digestible dashboard that puts all of the monthly jobs data within reach.

April jobs numbers on the dashboard.

The dashboard includes more than 60,000 data sets representing over 12,000,000 data points. Typically, this information is available only in raw form as a collection of flat files. This presentation makes the data simple to understand by eliminating the heavy lifting that usually comes with trying to grok unwieldy data sets.

All of the data cleansing, standardization, and processing from inputs into site-ready outputs was powered by DelRay, an information management platform that lets anyone with a little tech know-how take big, extremely valuable datasets and turn them into something that just works. DelRay allowed our team to quickly eliminate problematic discrepancies across disparate inputs, turning a bunch of non-uniform feeds into cleaned-up outputs, then organize those outputs into the dashboard you see on the site. This kind of presentation would be unthinkable without heavy data lifting on the back end. DelRay made it happen.
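We can't show DelRay's internals here, but the kind of cleanup it handled, turning non-uniform feeds into standardized records, looks roughly like this hypothetical sketch (the field names and normalization rules are illustrative, not DelRay's actual API):

```python
# Hypothetical sketch of the cleanup step described above: raw records with
# inconsistent field names and formats are normalized into one uniform shape.
# Field names and rules are illustrative, not DelRay's API.

def normalize(record):
    """Map one raw record, whatever its shape, onto a standard schema."""
    out = {}
    # Different feeds name the series identifier differently.
    out["series_id"] = record.get("series_id") or record.get("seriesID") or record.get("id")
    # Values may arrive as strings with thousands separators.
    raw_value = str(record.get("value", record.get("val", "")))
    out["value"] = float(raw_value.replace(",", ""))
    # Periods may be "M04" or a bare month number.
    period = str(record.get("period", record.get("month", "")))
    out["period"] = period if period.startswith("M") else "M%02d" % int(period)
    return out

# Two records from two imaginary feeds with different conventions.
feeds = [
    {"series_id": "CES0000000001", "value": "135,655", "period": "M04"},
    {"seriesID": "LNS14000000", "val": "7.5", "month": "4"},
]
cleaned = [normalize(r) for r in feeds]
print(cleaned)
```

Every feed you add means another pile of special cases like these, which is exactly the drudgery the platform is meant to absorb.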

The 24-hour development window proved a few things to us from an internal perspective. First, it takes all types to build a product like this. Austin, our account manager, doesn’t have as deep a technical background as our development leads, so we had him go through the motions with DelRay to test our hypothesis about the platform’s ease of use. While Austin cleaned data and made inputs, Luke on our development team put the outputs together, re-classifying data sets so that they were intuitive, standardized, and -- most importantly -- readable by humans.

Another great lesson was that big data sets don’t have to be big nightmares. Every month, we read the jobs numbers and analysis and wonder how many of these writers have deep stats backgrounds, or tech-savvy assistants to help out. With newsrooms shrinking and every writer doing more with less, this just seemed unsustainable and unnecessary.

We hope the dashboard is a handy tool for writers and readers alike. And even though we’re proud of what we put together in a day, we’ll be rolling out new features and functionality quickly.
