Next Big Thing: New Tools for Digital Digging [VIDEO]

Nicola Hughes from ScraperWiki shared this video on Twitter recently and we thought it would be a shame not to share it with you too.

Experts in data mining gathered at the Paley Center for Media on 10 November 2011 to discuss the future of journalism and how to sustain a journalism watchdog in the digital age. This session is about data mining and the new tools available online.

Watch the video and let us know what you think. If you’ve used some of them, tell us how good – or how bad – you think they are…

Next Big Thing: New Tools for Digital Digging from The Paley Center For Media on FORA.tv

Presenters include:

Bill Allison

Bill Allison is the Editorial Director at the Sunlight Foundation. A veteran investigative journalist and editor for nonprofit media, Bill worked for the Center for Public Integrity for nine years, where he co-authored The Cheating of America with Charles Lewis, was senior editor of The Buying of the President 2000 and co-editor of the New York Times bestseller The Buying of the President 2004.

He edited projects on topics ranging from the role of international arms smugglers and private military companies in failing states around the world to the rise of section 527 organizations in American politics. Prior to joining the Center, Bill worked for eight years for The Philadelphia Inquirer — the last two as researcher for Pulitzer Prize winning reporters Donald L. Barlett and James B. Steele.

 

David Donald

David Donald, United States, is data editor at the Center for Public Integrity, where he oversees data analysis and computer-assisted reporting at the Washington-based investigative journalism nonprofit.

 

Sheila Krumholz

Sheila Krumholz is the Center for Responsive Politics’ executive director, serving as the organization’s chief administrator, the liaison to its board and major funders and its primary spokesperson.

Sheila became executive director in 2006, having served for eight years as the Center’s research director, supervising data analysis for OpenSecrets.org and the Center’s clients. She first joined the Center in 1989, serving as assistant editor of the very first edition of Open Secrets, the Center’s flagship publication.

In 2010, Fast Company magazine named Sheila to its “Most Influential Women in Technology” list. Sheila has a degree in international relations and political science from the University of Minnesota.

Jennifer 8. Lee

Jennifer 8. Lee is a New York Times reporter and the author of The Fortune Cookie Chronicles.

 

Nadi Penjarla

Nadi Penjarla is the chief architect and designer of the Ujima Project. The Ujima Project (www.ujima-project.org) is a collection of databases, documents and other resources that aims to bring transparency to the workings of governments, multinational non-governmental organizations and business enterprises.

Nadi’s work demonstrates that data analysis provides unique insights into international and local political controversies and brings the facts of the world into sharper focus. He has spoken and conducted workshops on computer-assisted reporting at international forums such as the ABRAJI Conference in São Paulo, Brazil, the GLMC Investigative Journalism Forum in Kigali, Rwanda, and the annual Investigative Reporters & Editors (IRE) Conference.

Nadi possesses a strong background in data analysis and data mining, including work as an investment banker and as a strategy and business analytics consultant. Past projects include consulting for Fortune 500 companies on how to improve strategic decision-making, enhance operations, conduct complementary marketing and transform related business processes by properly analyzing data and its implications. In 2003 Nadi was the founding editor of Global Tryst, an online magazine focusing on international issues from a grassroots perspective.

Nadi holds an MBA from the University of Chicago, an M.S. in Engineering and Computer Science, and a B.S. in Engineering. He can be reached at 202-531-9300 or at nadi.penjarla@gmail.com.

Scraping data from a list of webpages using Google Docs

OJB – By Paul Bradshaw

Quite often when you’re looking for data as part of a story, that data will not be on a single page, but on a series of pages. To manually copy the data from each one – or even scrape the data individually – would take time. Here I explain a way to use Google Docs to grab the data for you.

Some basic principles

Although Google Docs is a pretty clumsy tool to use to scrape webpages, the method used is much the same as if you were writing a scraper in a programming language like Python or Ruby. For that reason, I think this is a good quick way to introduce the basics of certain types of scrapers.

Here’s how it works:

First, you need a list of links to the pages containing the data.

Quite often that list might be on a webpage which links to them all, but if not you should look at whether the links have any common structure, for example “http://www.country.com/data/australia” or “http://www.country.com/data/country2”. If they do, then you can generate a list by filling in the part of the URL that changes each time (in this case, the country name or number), assuming you have a list to fill it from (i.e. a list of countries, codes or simple addition).
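As a rough illustration, here is how that list-building step might look in Python. The base URL is the made-up example above and the country names are placeholders:

    # Build the list of page URLs by filling in the part that changes each time.
    # The base URL and the country names are placeholders for illustration.
    countries = ["australia", "brazil", "canada"]
    urls = ["http://www.country.com/data/" + country for country in countries]
    print(urls)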

Second, you need the destination pages to have some consistent structure to them. In other words, they should look the same (although looking the same doesn’t mean they have the same structure – more on this below).

The scraper then cycles through each link in your list, grabs particular bits of data from each linked page (because they are always in the same place), and saves them all in one place.
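Outside of Google Docs, that cycle can be written in a few lines of Python. The sketch below is only an outline of the idea, not Paul’s method: it assumes the requests and lxml libraries are installed, and the XPath expression and output filename are placeholders you would adapt to the pages you are scraping.

    import csv
    import requests      # third-party: pip install requests lxml
    import lxml.html

    # The generated list of links (placeholders here).
    urls = ["http://www.country.com/data/australia",
            "http://www.country.com/data/brazil"]

    with open("scraped_data.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["url", "value"])
        for url in urls:
            page = lxml.html.fromstring(requests.get(url).content)
            # Grab the bit of data that sits in the same place on every page;
            # this XPath is only an illustration.
            value = page.xpath("//table//tr[2]/td[2]/text()")
            writer.writerow([url, value[0].strip() if value else ""])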

Scraping with Google Docs using =importXML – a case study

If you’ve not used =importXML before, it’s worth catching up on my previous two posts: How to scrape webpages and ask questions with Google Docs and =importXML, and Asking questions of a webpage – and finding out when those answers change.

This takes things a little bit further. [Read more…]

An Analysis of Steve Jobs Tribute Messages Displayed by Apple

 

Editor’s Note: We found this great example of data mining and thought it would be a shame not to share it with you. Neil Kodner analysed the data from all the tribute messages that were sent to Apple after Steve Jobs passed away and checked for patterns and trends in what people were saying. Here is how he did it…

NeilKodner.com

Two weeks have passed since Apple’s Co-Founder/CEO Steve Jobs passed away. Upon his passing, Apple encouraged people to share their memories, thoughts, and feelings by emailing rememberingsteve@apple.com. Earlier this week, Apple posted a site (http://www.apple.com/stevejobs) in tribute to Steve Jobs. According to the site, over a million people have submitted messages. The site cycles through the submitted messages.

I decided to take a closer look at what people are saying about Steve Jobs, as a whole. Looking at how the site updates, it appears to use Ajax to retrieve and display new messages. Using Chrome’s developer tools, I monitored the requests it was making to get the new messages.


Once I found the location of the individual messages, it was trivial to download all of them. The message endpoint URLs are in the format

and a sample message looks like


The site makes a request to http://www.apple.com/stevejobs/messages/main.json which returns


So it appears that the site cycles through 10,975 messages. I didn’t decompose the JavaScript powering the site to determine this; I just made an assumption. I tried querying values greater than 10,975 and they returned 404s. I wrote a quick Python program to download the messages:
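Neil’s script itself isn’t reproduced in this excerpt. As a stand-in, here is a minimal Python sketch of the same idea; note that the numbered .json URL pattern and the "message" field name are assumptions, not details confirmed from the post:

    import json
    import urllib.request

    OUTPUT = "stevejobs_tribute.txt"   # filename used later in the post
    TOTAL = 10975

    with open(OUTPUT, "w", encoding="utf-8") as out:
        for n in range(1, TOTAL + 1):
            # Assumed URL pattern for the individual message documents.
            url = "http://www.apple.com/stevejobs/messages/%d.json" % n
            try:
                with urllib.request.urlopen(url) as resp:
                    data = json.loads(resp.read().decode("utf-8"))
            except Exception:
                continue   # e.g. a 404 past the last message
            # Assumed field holding the tribute text.
            out.write(data.get("message", "") + "\n")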

So now, we have over ten thousand tribute messages saved to the file stevejobs_tribute.txt. What I was most interested in was seeing how many of these messages contained a reference to a particular Apple product.
I came up with a few search terms based on some legendary Apple product names, including:

  • Newton
  • Macintosh
  • MacBook
  • iBook
  • Mac
  • iPhone
  • iPod
  • iMac
  • iPad
  • Apple II family
  • OSX
  • iMovie
  • Apple TV
  • iTunes
  • LaserWriter (yes, Laserwriter)
Each product received an entry in a Python dictionary. The value is another dictionary containing a regex for the product name and a count for the running totals. Some of the regular expressions are as simple as testing for an optional s at the end of the product name; some are a little more complex – check the Apple II regular expression, which tries to match the entire Apple II product line. As I’m OK but not great with regular expressions, I welcome your corrections.
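Neil’s exact dictionary and patterns aren’t shown in this excerpt; the Python sketch below illustrates the structure he describes, with simplified regular expressions standing in for his and only a handful of the products listed above:

    import re

    # One entry per product: a compiled regex for the name plus a running count.
    products = {
        "Mac":      {"regex": re.compile(r"\bmacs?\b", re.IGNORECASE), "count": 0},
        "iPhone":   {"regex": re.compile(r"\biphones?\b", re.IGNORECASE), "count": 0},
        "iPod":     {"regex": re.compile(r"\bipods?\b", re.IGNORECASE), "count": 0},
        # A slightly more involved pattern for the Apple II line
        # (Apple II, Apple IIe, Apple 2, Apple ][ ...) - illustrative only.
        "Apple II": {"regex": re.compile(r"\bapple\s*(?:ii\w*|2\w*|\]\[)", re.IGNORECASE), "count": 0},
    }

    messages_with_a_mention = 0
    with open("stevejobs_tribute.txt", encoding="utf-8") as f:
        for message in f:
            mentioned = False
            for info in products.values():
                if info["regex"].search(message):
                    info["count"] += 1
                    mentioned = True
            if mentioned:
                messages_with_a_mention += 1

    for name, info in sorted(products.items(), key=lambda kv: -kv[1]["count"]):
        print(name, info["count"])
    print("Messages mentioning at least one product:", messages_with_a_mention)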

Here’s a screenshot of me testing the Apple II regular expression, using the excellent Regexr.

Overall, out of 10,975 messages downloaded (as of now), 2,186, or just under 20%, mentioned an Apple product by name. Here’s the breakdown of the products mentioned:

More than one out of every ten messages included a reference to a Mac! Nearly one in ten mentioned an iPhone – not bad for a device that’s been out a fraction of the time the Mac has been available. [Read more…]