
About Us

About the Project

Global Study of Collaborative Creative Projects: Data Collection

This longitudinal project tracked common cognitive decision-making shortcuts, along with the factors that supported or inhibited progress, across 300+ professional team-based innovation projects. The full dataset comprises interviews and observations of over 150 creative professionals and includes projects from 176 companies, 23 industries, 8 countries, and 5 continents. This site makes 232 of these projects explorable.


About The Process

Methodology

Ethnographic Data Preparation

Projects were categorized as successful or unsuccessful based on the innovativeness of the idea. Successfully innovative projects were defined in a manner similar to a patent definition, in that they: a) solve a problem, and b) do so in an original way. Success was judged from the perspective of the people who worked on the project, as well as by multiple expert raters.

Metadata tags make it possible to compare teams with similar characteristics and to see which patterns of biases and common pitfalls occur most frequently. We identified 25 metadata categories, including team size, industry, time pressure, budget pressure, team dynamics, and number of ideas generated. These categories can be used, for example, to compare the tags associated with projects in the same industry, or to see which tags show up when budget was an issue for a team.
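To illustrate the kind of comparison this enables, here is a minimal sketch in Python. The project records, field names, and tag labels below are hypothetical placeholders, not the study's actual schema or data.

```python
from collections import Counter

# Hypothetical project records; the fields and tag labels are illustrative,
# not the study's actual schema or data.
projects = [
    {"industry": "software", "budget_pressure": True,
     "bias_tags": ["sunk cost fallacy", "groupthink"]},
    {"industry": "software", "budget_pressure": False,
     "bias_tags": ["anchoring"]},
    {"industry": "consumer goods", "budget_pressure": True,
     "bias_tags": ["sunk cost fallacy"]},
]

def tag_counts(records, **criteria):
    """Count bias tags across projects matching all metadata criteria."""
    matching = [p for p in records
                if all(p.get(k) == v for k, v in criteria.items())]
    return Counter(tag for p in matching for tag in p["bias_tags"])

# Which bias tags show up most often when budget was an issue?
print(tag_counts(projects, budget_pressure=True))
# Counter({'sunk cost fallacy': 2, 'groupthink': 1})
```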

Tags were created using academic research on team-based innovation and creativity. These include individual traits and team-level factors that have been previously linked to innovation success and failure, as well as cognitive decision-making biases that the principal investigator hypothesized would affect the process and outcome of innovation projects.

Ethnographic Data Processing

Data collected from interviews and observations were transcribed, translated, and cleaned in preparation for ethnographic coding (similar to tagging). Tagging qualitative data for meaning is a time-consuming process. All of the tagging was done by human raters using Dovetail, a software application designed to help discover patterns in qualitative data.

When tagging qualitative interview data, each transcript was read in its entirety twice by each reviewer. In the first round, the reviewer tagged anything that clearly matched a tag and marked other text that seemed important but not yet clear to revisit later. In the second round, each reviewer resolved those flagged passages and checked the original tags for errors. After each rater completed their tagging, we used inter-rater reliability to test the degree to which independent raters agreed on the meaning of the text. The raters showed very high (80%) tagging agreement, which gives us confidence that human tagging was objective enough for our results to be valid.
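As a minimal sketch of what such an agreement check looks like, the snippet below computes simple percent agreement between two raters over the same transcript segments. The tags and segments are hypothetical, and the study's actual reliability statistic may differ (e.g., Cohen's kappa rather than raw agreement).

```python
# Hypothetical illustration of the agreement check: simple percent agreement
# between two raters over the same transcript segments. Tags are placeholders.
def percent_agreement(rater_a, rater_b):
    """Fraction of segments for which both raters applied the same tag."""
    assert len(rater_a) == len(rater_b), "raters must tag the same segments"
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# None means the rater applied no tag to that segment.
rater_a = ["sunk cost", "groupthink", "anchoring", None, "groupthink"]
rater_b = ["sunk cost", "groupthink", "anchoring", "framing", "groupthink"]

print(f"{percent_agreement(rater_a, rater_b):.0%}")  # 80%
```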

There are over 180 known cognitive decision-making biases and heuristics. We narrowed that list down to 86 biases to track as highly relevant to team-based creative projects. Only 65 of these showed up with at least one tag.

Who We Are

Project Team

Dr. Beth Altringer, Harvard University
Role: Principal Investigator, Study Design, Data Collection, Quantitative Data Analysis, Qualitative Data Analysis

Federica Fragapane
Role: Collaborator, Data Visualization Lead

Laurie Delaney, Harvard University
Role: Research Assistant, Qualitative Data Preparation, Qualitative Data Analysis

Jared Meyers, Harvard University
Role: Research Assistant, Qualitative Research Preparation
