Interested in data-driven journalism? Get your voice heard!

The DJB supports good causes, and when we heard that the European Journalism Centre was doing this survey on data-driven journalism, we couldn't help but blog about it! By getting involved and answering the survey you could not only win 100€ worth of Amazon vouchers, but also make a real contribution to the future of data journalism. What a great feeling… Needless to say, we've all done it. What are YOU waiting for?

by Liliana Bounegru from EJC

The European Journalism Centre (EJC), in collaboration with Mirko Lorenz (Deutsche Welle), created a survey that aims to gather journalists' opinions on the emerging practice of data-driven journalism and to understand their training needs in this field.

Data has always been used as a source for reporting, especially by investigative journalists, and will play an increasingly important role in journalism in the future. In the past, however, data-driven investigative operations required a lot of resources and time; under growing pressure on newsrooms to be more time- and cost-efficient, such operations remained a marginal practice.

Why data-driven journalism?

Data-driven journalism enables journalists and media outlets to produce value and revenue without the large investments of time and resources that data-driven investigative operations required in the past, and so holds the potential to spread the practice more evenly across newsrooms. This is partly due to the increasing availability of open data catalogues, which reduce the time journalists need to get their hands on valuable data, and of free and open tools for data interrogation and visualisation that lend themselves to non-expert use, making data-driven reporting easier to undertake. The most notable data journalism operation in Europe, the Guardian Data Blog, works mainly with Excel or Google spreadsheets and free tools for data interrogation and visualisation, was until not long ago a one-man show, and at times draws on crowdsourcing for data analysis.

How to understand what journalists need?

To enable more journalists and newsrooms across Europe to tap into the potential of data-driven journalism, the European Journalism Centre plans to organize a series of training sessions this year and next. To understand what journalists need in order to practice data journalism, we created a survey. The survey has 16 questions asking journalists for their opinion on data journalism, about aspects of working with data in their newsrooms, and about what they are interested in learning.

Answer the survey and get your voice heard!

We've had a good start: in a little over a week, more than 80 journalists have responded. If you are a journalist, we would be grateful if you took 10 minutes of your time to take the survey and help us understand what is useful for journalists, so we can organize training that fits real needs. To say thank you, one of the entries will win a 100€ Amazon gift voucher.

The insights from this survey will be made freely available. We would also much appreciate help with tweeting, blogging about, or forwarding this to relevant people you may know.

 

DATA VISUALISING THE STORY OF FOOD AND EMOTION

OWNI.eu by EKATERINA YUDIN

How do we even begin to visualize, and draw connections within, the intimately complex relationship between food and emotion? Here is a great article by Ekaterina Yudin that we picked for its compelling data visualisations. You can find the original version on the Masters of Media website, otherwise read on! It is worth it.

Can we discover patterns amongst global food trends and global emotional trends? Could data visualization help us weave a story, and make use of the complex streams of data surrounding food and its consumption, to reveal insights otherwise invisible to the naked eye? And why would we try to do so in the first place?

To begin, let’s just establish that one has an ambitious appetite.

For our group information visualization project we have set out to measure global food sentiment. The main objective of our project matches the very definition of information visualization first put forth by Card et al. (1999): using computer-supported, interactive, visual representations of data to amplify cognition, where the main goal of insight is discovery, decision making (as investigated in my last post), and explanation. Our mission is to gauge and visualize, in real time, the planet's feelings towards particular foods using Twitter data. Does pizza make everyone happy? Do salads make people sad? Does cake comfort us? Will particular foods align with particular nations?
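The post doesn't describe the project's actual code, but the core idea, scoring tweets that mention a food against emotion words, can be sketched in a few lines. Everything below is hypothetical: the word lists, the food keywords, and the sample tweets are invented for illustration, and a real system would need far richer sentiment lexicons and Twitter's API rather than hard-coded strings.

```python
import re

# Invented word lists for illustration only; real sentiment analysis
# would use a proper lexicon or trained classifier.
POSITIVE = {"happy", "love", "delicious", "great", "yum"}
NEGATIVE = {"sad", "hate", "awful", "gross", "soggy"}
FOODS = {"pizza", "salad", "cake"}

def food_sentiment(tweets):
    """Return a net sentiment score per food across a list of tweet strings."""
    scores = {food: 0 for food in sorted(FOODS)}
    for tweet in tweets:
        # Lowercase and keep only alphabetic words, so punctuation
        # ("delicious!") doesn't block matches.
        words = set(re.findall(r"[a-z]+", tweet.lower()))
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        for food in words & FOODS:
            scores[food] += score
    return scores

sample = [
    "I love pizza so much, delicious!",
    "this salad is soggy and gross",
    "cake makes me happy",
]
print(food_sentiment(sample))  # prints {'cake': 1, 'pizza': 2, 'salad': -2}
```

Even a toy like this makes the project's bias caveat concrete: the scores depend entirely on which (English) words made it into the lists.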

Setting the visualization against the backdrop of country GDP and obesity levels, we can begin to ponder how social, political and cultural issues will play out and what reflections of globalization will emerge. Will richer countries be more obese? It should be noted that being restricted to English-language tweets for now creates a huge bias in our visualization, and one should keep in mind that the snapshot of data will obviously not be completely representative of the entire world; in developing countries, for example, it is most probable that only wealthier, more connected people both speak English and use Twitter.

The relationship between all these variables is already an enigmatic one, particularly when each carries its own layers of baggage, so a narrative of complexity emerges even before the visualization can be realized. Incidentally, this is the story the data is already beginning to weave, which makes it a perfect calling for data visualization: to reduce the complexity, present it in a meaningful way we can understand, and use its power of storytelling to illuminate our puzzling relationship with food, a story worth discovering.

WHY FOOD?

Food is at the core of our daily survival, with broad-ranging effects on personal health, and it is a particularly hot topic these days, with everyone having some opinion about it. After all, everyone needs it, which makes food intrinsically emotional. So it is no surprise that a wealth of conversations emerge about food when today's increased citizen interest, health focus and demand for a transparent food industry collide; to top it off, this is all happening amidst concerns of food security, shortages, rising food prices, obesity, hunger, addiction and disease. With data related to food increasingly open, the benefits of using data visualization, as well as the empowerment that access to layers of hidden information produces, are already being explored on the web.

A brief survey of food visualizations reveals: the ten most carnivorous countries, a world hunger visualization, how the U.S.A. was much thinner not that long ago, snacks available in middle and high school vending machines, calories per dollar, driving is why you're fat, where Twinkies come from, and so on.

Health issues related to food run high in the corpus of visualizations and it is no surprise. With improved access to information about food (sources, ingredients, effects, consumption statistics, etc.) presented in a visually engaging way, we can begin to distill the essential changes that could then impact our food-purchasing choices, enable better health, and enhance the design of an open food movement. [An additional reel of 60 food/health infographics can be found here].

Food is not just a lifestyle that is essential and important to the world. It can also be one of the most effective ways to reshape health, poverty issues, and relationships; and because it touches all facets of life, it shouldn’t be treated as just a lifestyle’y sort of thing. –Nicola Twilley (FoodandTechConnect Interview)

What’s the insight worth?

Beyond helping discover new understandings amidst a profoundly complicated world where massive amounts of information create a problem of scaling, a great visualization can help create a shared view of a situation and align people on needed action — it can often make people realize they are more similar than different, and that they agree more than they disagree. And it is precisely via stories — which are compelling and have always been used to convey information, experiences, ideas and cultural values — that we can begin to better understand the world and transform the interdependent factors of food and sentiment discussions into a visual form that makes sense. In this way, food – a naturally social phenomenon — can become our lens that reveals patterns in society.

A multitude of blogs, projects and companies, such as GOOD's Food Studies, Food+Tech Connect, The Foodprint Project, innovation series like the interactive future of food research and, lest we forget, Jamie Oliver's food revolution, to name just a few, propel the exploration, understanding and reshaping of conversation about food, health and technology today and in the future (Food+Tech Connect, 2011). But it is the newest wave of infographics and data visualizations that seeks to draw our attention to epidemics such as food shortages and obesity by illustrating the meaning in the numbers for people to truly see and understand the implications.

 

A WEB OF FEELINGS

We also can't entirely separate feelings from food. People consistently experience varying emotional levels (see Natalie's post on this very subject) and these play key roles in our daily decision-making. Emotions, too, have now begun to be mapped out in visualizations, ranging from a mapping of a nation's well-being to a view of the world's mean happiness.

 

 

Taking food and emotion together, we come to understand that this data of the everyday paints a picture, hyper-digitizing life in such a way that self-portraits and global portraits of food consumption patterns begin to emerge. As psychology researchers have shown us, people are capable of a diverse range of emotions. And because food provides a sense of place, a soothing and comforting feeling, it evokes strong emotions that tie it right back to the people (Resnick, 2009).

Now that we spend a majority of our time online, our feelings and raw emotions, too, find their way to the web. We can visualize this phenomenon with projects like We Feel Fine, which taps into our and other people's emotions by scanning the blogosphere and mapping the entire range of human emotions (thereby essentially painting a picture of international human emotion), I want you to want me, which explores the complex relationship between love and hope amongst people, Lovelines, which illuminates the emotional landscape between love and hate, and The Whale Hunt, which explores death and anxiety.

What all these visualizations have in common is the critical component of an emotional aesthetic: the display of people's bubbling feelings, which is often removed from visualizations but is the very human aspect we tend to remember. This is in line with the philosophy Gert Nielsen shared with the audience at the Wireless Stories conference early last month: you can't take the human being out of the visualization or else you take out the emotion, too. The key, it seems, is that data should 'enrich' the human stuff and the powerful human stories that are waiting to be captured and told.

MAKING DISCOVERIES AND SPREADING AWARENESS IN A SEA OF DATA

Which brings us to our data-deluge world. We're increasingly dependent on data while perpetually creating it at the same time. But creating data isn't the question (at least not for Western and emerging countries; producing relevant data for developing countries is still quite a challenge). The question is whether anyone is paying attention to the data, and whether anyone is using the data usefully is an even larger question (Resnick, 2009).

The age of data accessibility, information sharing, and connectivity allows people, cultures and institutions to share and influence each other daily via a plethora of broadcast platforms available on the web; these function as a public shout box for daily chatter, emotional self-expression, social interaction, and commiseration. Twitter, the social media network, twenty-four-hour news site and conversation platform that connects those with access across the world, is also the chosen data pool for our project. It's a place to share just as much as it is a place to peek into other lives and conversations. And precisely because it is a place where millions of people express feelings and opinions about every issue, distilling knowledge from this huge amount of unstructured data becomes a challenging task. In this case visualization can serve to extend the digital landscape to better understand broadcasts of human interaction. Our digital lives, and the conversations within them, are full of traces we leave behind. But by transcoding and mapping these traces into visual images and representations, we can begin to comprehend their meanings and associations.

Twitter is also a narrative domain, and serves as a platform for Web 2.0 storytelling – the telling of stories using Web 2.0 tools, technologies, and strategies (Alexander & Levine, 2008). Alexander and Levine (2008) distinguish such web 2.0 projects as having features of micro-content (small chunks of content, with each chunk conveying a primary idea or concept) and social media (platforms that are structured around people). With the number of distributed discussions across Twitter, a new environment for storytelling emerges — one we will explore to uncover and analyze global patterns amongst conversations surrounding food sentiment.

SO WHAT’S THE FOOD + EMOTION STORY?

As put forth by Segel & Heer (2010), each data point has a story behind it, in the same way that every character in a book has a past, present, and future, with interactions and relationships existing between the data points themselves. Thus, to reveal the information and stories hiding behind the data, we can turn to the storytelling potential of data visualization, where visualization can serve to create new stories and insights that can ultimately function in place of a written story. These new types of stories, made possible by data visualization, open the door to the free exploration and filtering of visual data, which according to Ben Shneiderman also allows people to become more engaged (NYTimes, 2011).

To date, the storytelling potential of data visualization has been explored and popularized by news organizations such as the NY Times and the Guardian, where visualizations of news data are used to convince us of something (humanize us), compel us to action, enlighten us with new information, or force us to question our own preconceptions (Yau, 2008). There is a growing sense of the importance of making complex data visually comprehensible, and this was the very motivation behind our project: linking food and emotion sentiment with country GDP and obesity to see if insightful patterns emerge using this new visual language. With our visualization still in progress, and data still dispersed, I'm still wondering what the story is and what the story of our visualization could become. Will the visualization of our data streams produce something insightful? What will we be able to say about how people feel towards foods in different countries? At this point it's only a matter of time until we dig deeper into the complexities of our real-world data to understand the (food <–> emotion) <–> (income <–> obesity) paradox.

This post was originally published on Masters of Media

Photo Credits: The New York Times; R. Veenhoven, World Database of Happiness, Trend in Nations, Erasmus University Rotterdam; World Food Program; GOOD and Hyperakt; A Wing, A Prayer, Zut Alors, Inc. and GOOD; and Flickr CC Kokotron

References:

Alexander, B. & Levine, A. (2008). “Web 2.0 Storytelling: Emergence of a New Genre”. Web. Educause. Accessed on 19/04/11

Card, S.K., Mackinlay, J.D., & Shneiderman, B. (1999). “Readings in Information Visualization: Using Vision to Think”. Morgan Kaufmann, Cal. USA.

Resnick, M. (2009). “The Moveable Feast of Memory”. Web. PsychologyToday.com. Accessed on 20/04/11

Segel, E. & Heer, J. (2010). “Narrative Visualization: Telling Stories with Data”. IEEE Transactions on Visualization and Computer Graphics.

Singer, N. (2011). “When the Data Struts Its Stuff”. Web. NYTimes.com. Accessed on 19/04/11

Yau, N. (2008). “Great Data Visualization Tells a Great Story”. Web. FlowingData.com. Accessed on 20/04/11


Breaking Bin Laden: visualizing the power of a single tweet

The shape of rumours on Twitter by Social Flow

 

SOCIAL FLOW

A full hour before the formal announcement of Bin Laden's death, Keith Urbahn posted his speculation about the emergency presidential address. Little did he know that this tweet would trigger an avalanche of reactions, retweets and conversations that would beat mainstream media as well as the White House announcement.

Keith Urbahn wasn't the first to speculate about Bin Laden's death, but he was the one who gained the most trust from the network. Why did this happen?

Before May 1st, not even the smartest of machine learning algorithms could have predicted Keith Urbahn's online relevancy score, or his potential to spark an incredibly viral information flow. While politicos "in the know" certainly knew him, or of him, his previous interactions and the size and nature of his social graph did little to reflect his potential to win the trust of thousands of people within a matter of minutes.

While connections, authority, trust and persuasiveness play a key role in influencing others, they are only part of a complex set of dynamics that affect people’s perception of a person, a piece of information or a product. Timing, initiating a network effect at the right time, and frankly, a dash of pure luck matter equally. [Read more…]

 

Open Data And Emergent Digital Horizons At Future Everything 2011 [Event]

PSFK: by Stephen Fortune

Picture from the PSFK website

Now in its 16th year, the recently renamed FutureEverything Festival will continue to showcase and illuminate creative technologies and digital innovation this coming May in Manchester, UK.

Befitting its role in leading Manchester's recent Open Data revolution, FutureEverything will give centre-stage consideration to Open Data as part of its two-day conference. Open Data is shifting the digital landscape in a manner comparable to the sea changes that followed in the wake of social media, and FutureEverything 2011 offers the means to understand how it will transform the way consumers engage with brands and the ways citizens engage with local government. The topics under consideration range from the enterprise that can be fomented with open data to what shape algorithm-driven journalism will take. [Read more…]

 

10 things every journalist should know about data

NEWS:REWIRED: by SARAH MARSHALL

Picture from News:Rewired website

Every journalist needs to know about data. It is not just the preserve of the investigative journalist but can – and should – be used by reporters writing for local papers, magazines, the consumer and trade press and for online publications.

Think about crime statistics, government spending, bin collections, hospital infections and missing kittens and tell me data journalism is not relevant to your title.

If you think you need to be a hacker as well as a hack, then you are wrong. Although data journalism combines journalism, research, statistics and programming, you may dabble, but you don't need to know much maths or code to get started. It can be as simple as copying and pasting data from an Excel spreadsheet. [Read more…]

 

The New York Times’ Cascade: Data Visualization for Tweets [VIDEO]

[youtube yQBOF7XeCE0]

MASHABLE – by Jolie O’Dell

The research and development department of The New York Times has recently been pondering the life cycle of the paper’s news stories in social media — specifically, on Twitter. Cascade is a project that visually represents what happens when readers tweet about articles.

Even now, however, Cascade is more than just a nifty data visualization. [Read more…]

 

#ijf11: the rise of Open Data

source: Joel Gunter from Journalism.co.uk

Picture: "Where does my money go?" by the Open Knowledge Foundation

The open data movement, with the US and UK governments to the fore, is putting a vast and unprecedented quantity of republishable public data on the web. This information tsunami requires organisation, interpretation and elaboration by the media if anything eye-catching is to be made of it.

Experts gathered at the International Journalism Festival in Perugia last week to discuss what journalistic skills are required for data journalism.

Jonathan Gray, community coordinator for the Open Knowledge Foundation, spoke on an open data panel about the usability of data. “The key term in open data is ‘re-use’,” he told Joel Gunter from Journalism.co.uk.

Government data has been available online for years but locked up under an all rights reserved licence or a confusing mixture of different terms and conditions.

The Open Knowledge Foundation finds beneficial ways to apply that data in projects such as Where Does My Money Go which analyses data about UK public spending. “It is about giving people literacy with public information,” Gray said.

The key is allowing a lot more people to understand complex information quickly.

Along with its visualisation and analysis projects, the Open Knowledge Foundation has established opendefinition.org, which provides criteria for openness in relation to data, content and software services, and opendatasearch.org, which is aggregating open data sets from around the world.

“Tools so good that they are invisible. This is what the open data movement needs”, Gray said.

Some of the Google tools that millions use everyday are simple, effective open tools that we turn to without thinking, that are “so good we don’t even know that they are there”, he added.

Countries such as Italy and France are very enthusiastic about the future of open data. Georgia has launched its own open data portal, opendata.ge.

The US, with data.gov, spends £34 million a year maintaining its various open data sites. Others are cheap by comparison, with the UK's data.gov.uk reportedly costing £250,000 to set up.

 

The 4 Golden rules to Data Journalism from the NYT

 
data visualisation by the New York Times graphics team
 
 
source: Joel Gunter from Journalism.co.uk
 

The New York Times has one of the largest, most advanced graphics teams of any national newspaper in the world. The NYT deputy graphics editor Matthew Ericson led a two-hour workshop at the International Journalism Festival last week about his team's approach to visualising some of the data that flows through the paper's stories every day. Here is a short guide on how to make good data journalism…

The New York Times data team follows four golden rules:

  • Provide context
  • Describe processes
  • Reveal patterns
  • Explain the geography

Provide context

Graphics should bring something new to the story, not just repeat the information in the lead. A graphics team that simply illustrates what the reporter has already told the audience is not doing its job properly. “A graphic can bring together a variety of stories and provide context,” Ericson argued, citing the NYT’s coverage of the Fukushima nuclear crisis.

“We would have reporters with information about the health risks, and some who were working on radiation levels, and then population, and we can bring these things together with graphics and show the context.”

Describe processes

The Fukushima nuclear crisis has spurred a lot of graphics work in news organisations across the world, and Ericson explained how The New York Times went about it.

“As we approach stories, we are not interested in a graphic showing how a standard nuclear reactor works, we want to show what is particular to a situation and what will help a reader understand this particular new story.

Like saying: You’ve been reading about these fuel rods all over the news, this is what they actually look like and how they work.”

Reveal patterns

This is perhaps the most famous guideline in data visualisation: taking a dataset and revealing the patterns that may tell us a story. Crime is going up here, population density down there, immigration changing over time, and so on.

These so-called “narrative graphics” are very close to what we have been seeing for a while in broadcast news bulletins.

Explain geography

The final main objective was to show the audience the geographical element of stories.

Examples for this section included mapping the flooding of New Orleans following Hurricane Katrina, and a feature demonstrating the size and position of the oil slick in the Gulf following the BP Deepwater Horizon accident, comparing it with previous major oil spills.

Some of the tools in use by the NYT team, with examples:

Google Fusion Tables

Tableau Public: Power Hitters

Google Charts from New York State Test Scores – The New York Times

HTML, CSS and Javascript: 2010 World Cup Rankings

jQuery: The Write Less, Do More, JavaScript Library

jQuery UI – Home

Protovis

Raphaël—JavaScript Library

The R Project for Statistical Computing

Processing.org

Joel Gunter from Journalism.co.uk wrote a very interesting article on the workshop led by Ericson. He spoke to Ericson after the session about what kind of people make up his team (it includes cartographers!) and how they go about working on a story. Here is what he had to say.