Crowdsourcing isn’t just a buzzword. It is also—and probably most importantly—an ecosystem of its own. History has witnessed many attempts to leverage the wisdom of the crowd, usually in the form of competition, such as the British government’s 1714 Longitude rewards. But crowdsourcing to solve problems and generate new knowledge comes in many forms, including idea generation, innovation, microtasks, corporate research and development, unpaid competitions, and paid crowds.
Crowdsourcing is often linked to various forms of online communities whose members interact and communicate with each other primarily via the Internet; the 2001 launch of Wikipedia is a good example. Virtual communities come in different types: some are built around interests, relationships, or transactions, while others are organized along geographic or demographic lines around particular topics and activities. Virtual communities of practice (VCoPs), knowledge-based communities and expert networks, are often formed around these areas of interest. These communities are made up of professionals who, while working in different organizations and at times even in different disciplines, come together to discuss topics related to their field of study or expertise. We refer to the practice of leveraging the wisdom of VCoPs for ideation and research purposes as expert-sourcing.
Whether commercial or non-commercial, the essence of any expert network is participation and contribution, as the main goal of these communities is to generate knowledge. However, previous research has demonstrated that participation and the contribution of content are not equally distributed among members of these communities, creating what is termed participation inequality. Participation patterns in VCoPs do indeed seem unbalanced: in most communities, 90 per cent of users are lurkers with only marginal or zero contribution; nine per cent are contributors, who contribute a little and on an irregular basis; and only one per cent of users, called creators, account for most of the actions taken.
RESEARCH APPROACH
We are practitioners and academics who were in charge of Wikistrat’s community of almost 3,000 experts from various fields of expertise who collaborate in real time on a virtual platform with the purpose of creating new knowledge. Responsible for motivating contributions from online communities of experts, we wanted to know what drives experts to participate and contribute under the umbrella of VCoPs, and what prevents them from doing so.
To answer these questions, we analyzed four years of data on the behavioural patterns of Wikistrat's members and conducted semi-structured interviews. This combination of quantitative and qualitative methods allowed us to analyze many facets of (big) data, complemented by informal impressions from the interviews.
The Wikistrat community is diverse. It consists of experts in various fields (e.g., geopolitics, economics, social science, finance, and technology) and at various levels of expertise. We also divided our analysts into three groups based on their quantitative contribution (i.e., number of actions):
- Leaders (more than 50 actions per year)
- Inbound (1–50 actions per year)
- Lurkers (only observing, no actions taken)
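The three-tier segmentation above amounts to a simple threshold rule on annual action counts. The sketch below is purely illustrative: the function name and the sample counts are our own inventions, not data from the study.

```python
def classify_member(actions_per_year: int) -> str:
    """Map a member's annual action count to a participation tier,
    using the thresholds described above (>50, 1-50, 0)."""
    if actions_per_year > 50:
        return "Leader"
    elif actions_per_year >= 1:
        return "Inbound"
    else:
        return "Lurker"

# Hypothetical sample of annual action counts for seven members.
sample_counts = [0, 3, 120, 0, 48, 75, 0]
tiers = [classify_member(n) for n in sample_counts]
print(tiers)
# ['Lurker', 'Inbound', 'Leader', 'Lurker', 'Inbound', 'Leader', 'Lurker']
```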
To eliminate the financial incentives factor, which has already been proven to be the prime driver of participation, we only analyzed participation patterns in unpaid research activities.
“The success of a virtual community of practice is fully dependent on fulfilling the crowd’s wants and needs.”
KEY FINDINGS
LESSON 1—Topics & Types of Interactions Motivate Experts:
We found that experts tend to participate more often when the topic relates to their field of expertise or is of great importance to them. We also found that experts often participate in activities with topics not necessarily related to their field of expertise if the topic is “hot” (i.e., high on the public’s agenda). Topics related to the future of the Middle East and the economy of China were of greatest interest to most of the analysts. Furthermore, it seems that experts are most engaged when an exciting or new methodology is presented to them; conversely, there is a sense of weariness if the same methodology is constantly being used. For example, a series of role-playing games conducted during 2014–15 demonstrated higher participation than other more common and less interactive activities.
LESSON 2—New Ideas are a Major Driving Force:
A common assumption, repeatedly supported by research, is that VCoPs’ members participate because of social motivations; that is, to network with their peers. However, our study demonstrated that participants are looking mainly for new ideas and insights, and not only to network. Furthermore, the seniority of other participants has little effect on willingness to participate, especially when compared to intellectual stimuli. Through the interviews we conducted, a clear message emerged: experts are best engaged in activities in a non-hierarchical, open-for-all environment in which discussions allow for the creation of multifaceted crowd-generated insights. As one of the analysts, a university professor, wrote: “Joining the community granted me numerous opportunities to participate in the simulations. In that process, I gained a lot and augmented my own pedagogy in teaching my students, and enhanced my analytical skillset.”
LESSON 3—Self-sense of Value is Key:
Given these insights, it is not surprising that the main factor preventing people from contributing is the feeling, whether imagined or real, that their voice is not heard and that they have little or no influence over the content-generation process. An expert’s perception that their contribution is lacking, even when peers do not share that view, creates a feedback loop: the less involved the expert, the less meaningful their contribution becomes, which in turn suppresses their desire to contribute further. A similar loop of disengagement arises when a topic or methodology is perceived as irrelevant or uninteresting; in such cases, experts tend to steer away from the exercises and contribute almost no content. Methodological ambiguity creates similar barriers to participation: one experienced participant even stated that the “methodology inexperience” of others was a barrier to his own participation on the platform.
LESSON 4—Time is of the Essence:
Analyzing user participation over a long period taught us that people are most active during their first six months, after which participation declines slowly but steadily. While this phenomenon is widely known, we found that simple exhaustion could not explain it in our community’s case. Experts usually join the platform when they feel they have the time to participate in simulations; over time, as their careers progress and change, they have less free time, and their participation is constrained accordingly. We also observed that the more people experiment with the platform, the better they understand exactly what they appreciate about it, making them more likely to participate in activities around specific topics and methodologies of interest. The result is a more mindful, selective pattern of participation that emphasizes quality over quantity.
LESSON 5—It isn’t Only About Millennials:
Finally, we wanted to examine whether contributors of different ages and levels of expertise have different sets of motivations. We noted many behavioural nuances among different types of experts and expertise, but one key finding was especially interesting: surprisingly, young people are not the most active, despite the general notion that youth correlates strongly with technological savviness. The most active groups of participants are senior analysts and contributing analysts; based on the professional characteristics of these groups (an assumption validated through interviews), they consist mostly of 30- to 50-year-olds. Compared to this age group, younger analysts are less likely to participate.
Given the recent boom in expert crowdsourcing and its increase in use by public- and private-sector entities, including the U.S. government, it is imperative to understand what motivates expert behaviour within expert communities. Organizations and companies that are engaged in such activities must master the art of harnessing the wisdom of the crowd and provide expert communities with the incentives to make the best use of their platform. A successful VCoP must be a win–win proposition; the operator’s success is fully dependent on fulfilling the crowd’s wants and needs.
It is important to keep in mind that, material remuneration aside, members of VCoPs are primarily incentivized by intellectual stimuli. Such stimuli can stem from the research topic and question, the methodology, or other participants’ intellectual contribution to a given activity. It is therefore important for online community managers to pay close attention to how the problem is framed, to note the methodological constructs of such exercises, and to encourage experts to actively participate in such activities. There is nothing that frustrates experts more than lacklustre intellectual activity and boring peers.
Humanity already relies on those who truly know their fields to make progress. We can harness the synergistic power of their collective knowledge—and open new possibilities for advancement—only if we understand what motivates them to create, think, and interact with their peers. Our conclusions are useful not only for existing expert communities; they also provide insight into why experts choose to generate knowledge in the first place.