

Towards an Understanding of the Crowdsourcing Activities

Stuart Madnick1, Sunny Cheung2, Changsu Kim3, Yongju Lee4*
  1. MIT Sloan School of Management, 50 Memorial Drive, Cambridge, MA 02142
  2. MIT System Design and Management, 50 Memorial Drive, Cambridge, MA 02142
  3. School of Business, Yeungnam University, Gyeongsan, Gyeongbuk, Korea
  4. School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea


Abstract

The explosive growth of the Internet has led to a more connected world, and as a consequence, crowdsourcing has become an efficient and inexpensive option worldwide. Crowdsourcing is an increasingly popular trend in which firms solicit assistance from the public for their activities that are typically performed by employees or contractors. This study proposes a crowdsourcing classification model and applies it to various crowdsourcing cases by examining the motivation and implications of some popular websites exemplifying crowdsourcing-based activities ranging from content ideas (the process of generating new ideas) to content services (providing customers with ongoing services). Other activities include designing, generating, subcontracting, and reviewing content and evaluating content solutions. Finally, the study proposes a taxonomy of crowdsourcing activities that has important theoretical and practical implications. This study offers interesting avenues for future research and provides insightful guidelines for crowdsourcing practitioners.

Keywords

crowdsourcing; classification model; crowd motive; taxonomy of crowdsourcing

INTRODUCTION

Crowdsourcing is an increasingly popular trend in which firms obtain assistance from the public for some of their activities that are typically performed by employees or contractors (Bonabeau 2009; Albors et al. 2008; Leimeister et al. 2009; Jouret 2009; Newstead & Lanzerotti 2010; Anthes 2010). Such activities can range from content ideas to content services. Although crowdsourcing has been criticized for being nothing more than a way for firms to increase their profits by paying the crowd (public) much less than what the work is actually worth, it can be argued that the crowd can sometimes be superior to employees in terms of the quality or quantity of outcomes. That is, crowdsourcing can provide multiple benefits if it is effectively leveraged, and therefore it is essential to have a better understanding of this concept by examining those crowdsourcing activities that are likely to lead to favorable outcomes (Leimeister et al. 2009).
This study examines how firms leverage new internet technologies to build websites or other systems that are designed to attract the attention of the crowd, help the crowd identify the most appropriate tasks, facilitate communication among members of the crowd, and communicate the results back to sponsoring firms. In addition, the study explores the motivation behind crowdsourcing from the perspective of firms as well as that of the crowd. Further, the study examines why the crowd can be better or cheaper and what firms can do to more successfully lead the crowd. The rest of this paper is organized as follows: Section 2 provides a review of previous research on crowdsourcing. Section 3 develops a crowdsourcing classification model, and Section 4 applies it to seven cases. Section 5 concludes by presenting a taxonomy of crowdsourcing activities.

LITERATURE REVIEW

A. Overview of Crowdsourcing

The term “crowdsourcing” was coined by Jeff Howe (and Mark Robinson, an editor at Wired) in an article in Wired in June 2006 (Howe 2006). Crowdsourcing is distinct from open source software development, user-generated content (UGC), outsourcing, and collective intelligence in several ways. First, crowdsourcing is different from open-source software development in that the former reflects a deliberate act by an organization to outsource a task or activity, whereas the latter involves individuals who organize themselves toward a particular goal (typically in the area of software development). Second, crowdsourcing is different from UGC in that the latter tends to be personal: users feel free to generate UGC without any solicitation by a firm. Third, before the advent of crowdsourcing, many firms tended to outsource their IT- and software-related operations abroad to countries such as India and China because these countries have a deep pool of skilled and inexpensive labor. However, an outsourcing relationship often requires considerable time and effort to establish and maintain (Howe 2006), and this overhead makes crowdsourcing an ideal alternative for short-term or low-maintenance tasks. Finally, collective intelligence refers to organizing “groups of individuals doing things collectively that seem intelligent” (Malone et al. 2009). Crowdsourcing is concerned with who owns the outputs and with how the tasks have traditionally been performed, whereas collective intelligence pays no attention to these factors.
The advantages of crowdsourcing are briefly summarized as follows: First, crowd size is a critical advantage. Second, crowd members are typically widely dispersed in terms of their geographic distribution. Third, a crowd typically has a diverse range of members, and under certain conditions, a diverse group can outperform a group of experts in solving problems that belong to the realm of those experts. Fourth, tapping a crowd for certain tasks can dramatically reduce development costs. For certain tasks, using employees or experts can lead to higher-quality outcomes than crowdsourcing, but the former generally entails much higher costs (Myers 2010). Although there are other advantages of crowdsourcing, these four are the major reasons why many firms consider crowdsourcing as an attractive alternative. Despite these advantages, however, previous studies have provided no robust model for understanding crowdsourcing applications in the context of the organization’s value chain. In this regard, this study addresses this gap in the literature by proposing a model of the crowdsourcing value chain based on related activities.

B. Previous Research on Crowdsourcing

Howe (2006) introduced the concept of crowdsourcing, and since then, an increasing number of studies have examined this new phenomenon, particularly because of its sustained popularity. In general, there are four major streams of research on crowdsourcing: The first stream focuses on the general characteristics of crowdsourcing. Most studies have addressed the basic concept of crowdsourcing, and some have provided a deeper and more systematic understanding by offering representative examples of crowdsourcing (Greengard 2011), general discussions on crowdsourcing (Doan et al. 2011), guidelines for using crowdsourcing (Schweitzer et al. 2012), various definitions of crowdsourcing (Estellés-Arolas et al. 2012), and some underlying mechanisms (Newstead & Lanzerotti 2010; Felstiner 2011; Zheng et al. 2011). The second stream pays close attention to crowdsourcing-based cooperation for solving specific problems (Albors et al. 2008; Hoffmann 2009; Scott 2010; Anthes 2010; Tang et al. 2011; Savage 2012; Afuah & Tucci 2012).
The third stream addresses the question of how crowdsourcing can facilitate decision making and generate good ideas in various areas such as commerce (Bonabeau 2009; Leimeister et al. 2009; Jouret 2009; Brabham 2012; Gast & Zanini 2012; Poetz & Schreier 2012; Stieger 2012), education, scientific research (Hey 2010), and even government (Hoffmann 2012). Finally, although most studies have highlighted the potential of crowdsourcing as a promising development tool, the fourth stream emphasizes the limitations and drawbacks of crowdsourcing (Roman 2009; Greengard 2011; Sibony 2012). For example, Roman (2009) argued that crowdsourcing has inherent disadvantages and that it is closer to “the mob that rules” than to “the wisdom of the crowd”. Greengard (2011) found that crowdsourcing outcomes may be regarded with suspicion. Sibony (2012) noted the risk of using crowdsourcing outcomes in the strategic decision-making process.
Most studies focusing on the application of crowdsourcing in the context of firms have emphasized only general issues or particular aspects of the overall workflow. Therefore, it is necessary to gain a comprehensive understanding of how crowdsourcing activities can be classified. A clear and in-depth understanding of firms’ crowdsourcing activities should provide important theoretical as well as practical implications.

DEVELOPMENT OF CROWDSOURCING CLASSIFICATION MODEL

To develop the crowdsourcing classification model, this study employs a two-stage research procedure: the construction of a conceptual model and the use of a focus group. That is, the study develops a conceptual research model and then verifies it using a focus group.

A. Conceptual Model

It is useful to classify various types of crowdsourcing applications. In an analysis of specific applications of crowdsourcing activities, it is useful to model a chain of value-adding activities (Porter 1985). Crowdsourcing tends to be used to support two types of activities: primary and support activities. Primary activities are related directly to design, generation, subcontracting, review, and service functions, whereas support activities facilitate primary functions. As shown in Figure 1, the initial classification model of crowdsourcing activities consists of primary and support activities. Primary activities include designing, generating, subcontracting, and reviewing content and providing content services, whereas support activities include generating content ideas and supporting content solutions. Because these activities are related to each other in some ways, it is important to note their differences.
[Figure 1. Initial classification model of crowdsourcing activities]

B. Use of the Focus Group

To verify the types of crowdsourcing activities and gain useful insights into the characteristics of each type, we conducted a focus group. We selected 13 focus group members, all with some crowdsourcing experience, and divided them into developers and users. Before the meeting, we sent documents to the participants by e-mail to explain its major objectives and the pertinent questions for the initial crowdsourcing classification model. In the meeting, we presented the initial conceptual model and explained each type. Afterward, we discussed the issues through informal brainstorming, and the participants provided their opinions on the initial crowdsourcing classification model in an unstructured and natural manner. The recorded meeting lasted about one hour, and afterward, we analyzed and summarized its content. Some participants commented that it would not be logical to classify content ideas as a support activity because such ideas are the first part of any process, not a supporting input. One participant mentioned a circular model of the crowdsourcing process, and another suggested an interaction model of mutual exchanges among crowdsourcing activities. Despite these various views, the group was able to form a consensus. Based on these comments, we constructed the refined classification model of crowdsourcing activities (Figure 2).
[Figure 2. Refined classification model of crowdsourcing activities]
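For readers who prefer a concrete rendering, the refined model in Figure 2 can be encoded as a simple lookup structure. The sketch below (in Python) is our own illustrative encoding, not an artifact of the study: the seven activity labels come from the model, with generating content ideas treated as the first primary activity and supporting content solutions as the support activity, per the focus group's refinement.

    from enum import Enum

    class Role(Enum):
        PRIMARY = "primary"   # directly part of the value chain
        SUPPORT = "support"   # facilitates the primary functions

    # Refined classification model (Figure 2), encoded as activity -> role.
    CROWDSOURCING_ACTIVITIES = {
        "generating content ideas": Role.PRIMARY,   # moved out of support per the focus group
        "designing content": Role.PRIMARY,
        "generating content": Role.PRIMARY,
        "subcontracting content": Role.PRIMARY,
        "reviewing content": Role.PRIMARY,
        "providing content services": Role.PRIMARY,
        "supporting content solutions": Role.SUPPORT,
    }

    def classify(activity: str) -> Role:
        """Look up the role of a crowdsourcing activity in the refined model."""
        return CROWDSOURCING_ACTIVITIES[activity.lower()]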

C. Implications of the Crowdsourcing Classification Model

The classification model of crowdsourcing has several implications. First, the model can provide guidelines for specifying a chain of value-adding activities through crowdsourcing. Second, the model can be a useful tool for devising appropriate business strategies. Some crowdsourcing mechanisms (Newstead & Lanzerotti 2010; Felstiner 2011; Zheng et al. 2011) cannot sufficiently explain particular crowdsourcing applications, and firms may need detailed information on such applications before establishing their business strategies. Third, based on the classification model, firms can identify effective areas of crowdsourcing applications that are worthy of additional attention. However, not all crowdsourcing activities conform to the classification model; for example, hybrids are possible. Nonetheless, this model serves mainly as a basic model of crowdsourcing activities for analytical purposes. The next section provides a case study to show the applicability of the refined classification model of crowdsourcing activities in terms of the motives for and implications of strategic applications.

APPLICATION OF CROWDSOURCING CLASSIFICATION MODEL

A. Generating Content Ideas

Generating content ideas is similar to designing content in that both involve meeting some minimal requirements and selecting the best submission. However, they differ in that designing content is geared toward customers or end users, whereas generating content ideas is geared toward the firms initiating the call for ideas (Hoffmann 2009). Generating content ideas can be divided into two activities. The first, which is quite prevalent, seeks suggestions from the crowd on how to improve products, services, and the like; many websites have a webpage with a text box for users to submit their suggestions. The second gives the crowd an assignment and then selects the best idea; in other words, firms make use of crowdsourcing to brainstorm. This type is less common and sometimes gives the impression that the purpose is business planning, not idea generation.

(1) Case Analysis

IdeaStorm, a website launched by Dell in 2007, allows Dell to determine the most important and relevant ideas from the perspective of its customers. Users can post their ideas for improving Dell products and services, and each idea falls into a certain category (Dell 2007). In addition, users can comment on other users’ ideas and vote to promote or demote them, which makes them more or less visible on the Popular Ideas page and hence to other users and to Dell. IdeaStorm also adopts a so-called “vote half-life” system to take into account the date of each vote. First, each vote for or against an idea is initially worth +10 or -10 points, respectively, and the popularity score is determined by the total number of favorable and unfavorable votes. Second, each vote has a half-life of four days; that is, the value of a vote decreases to half of its previous value every four days. For example, an unfavorable vote is worth -2.5 points after eight days. This decay is internal and does not change the displayed popularity score, which is why some ideas with lower popularity scores can appear above more popular ones.
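The vote half-life amounts to exponential decay with a four-day half-life. A minimal sketch of that arithmetic follows (the function names are ours, not Dell's):

    def vote_value(initial_points: float, days_elapsed: float, half_life_days: float = 4.0) -> float:
        """Internal value of a vote that halves every `half_life_days` days."""
        return initial_points * 0.5 ** (days_elapsed / half_life_days)

    # An unfavorable vote starts at -10 points; after eight days (two half-lives)
    # it is worth -10 * 0.5**2 = -2.5, matching the example in the text.
    assert vote_value(-10, 8) == -2.5

    def internal_ranking_score(votes: list[tuple[float, float]]) -> float:
        """Sum of decayed vote values; `votes` holds (initial_points, days_elapsed) pairs.
        The displayed popularity score, by contrast, is the undecayed total."""
        return sum(vote_value(points, days) for points, days in votes)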

(2) Crowd Motives

As in the case of many websites that crowdsource content creation, IdeaStorm provides no financial rewards for contributing ideas. However, users contribute for several reasons: First, they are loyal to the brand and want to help it succeed (Jouret 2009; Leimeister et al. 2009). Second, they are frustrated with one of Dell’s products or services. Third, they desire some features themselves and want to see Dell implement them. For example, one of the most high-profile actions that Dell took as a result of these ideas was its decision to support Linux. After numerous Linux fans voiced their discontent with Dell for failing to support Linux, Dell started selling several computer systems with a preinstalled Linux operating system. In addition, Dell implemented some other popular requests, such as setting up technical support telephone lines in the U.S. and hiring fluent English speakers to operate them, making preinstalled software optional, and providing reinstallation CDs for free.

(3) Implications

Through the IdeaStorm website, Dell gathers suggestions from customers for use in all areas of its business. In addition, IdeaStorm serves as a brand loyalty platform where customers submit their ideas and the crowd responds to them. The Dell brand attracts a strong user community on the IdeaStorm website. IdeaStorm activities are enjoyable, require little time, and can be performed anytime, anywhere. For these reasons, even in the absence of financial incentives, the crowd remains active in supporting the user community.

B. Designing Content

Designing content aims at meeting all requirements and emphasizes subjective factors. In generating content, crowdsourcing is considered successful if the crowd makes substantial contributions, but the opposite holds in designing content: unpopular content merely wastes some technological resources, whereas a poor design can be much more costly depending on the type of content. In addition, to achieve economies of scale, a content design should appeal to a large number of people. All this means that quality is the most important factor in content design, whereas quantity matters little because firms ultimately select only a few designs.

(1) Case Analysis

Although this strategy is more commonly observed among start-ups whose business plans depend mainly on using crowdsourcing to design content, some large firms also make use of the collective wisdom of the crowd. One of the major characteristics of firms using crowdsourcing to design content is that they tend to sell products to those individuals who helped to design those products. Threadless is a representative example of designing content through crowdsourcing. The idea behind Threadless is simple: crowdsourcing the design of T-shirts. With only 50 employees, Threadless generated $30 million in sales in 2009 (Burkitt 2010). Threadless was founded in 2000 after the founders won an online contest for T-shirt designs (Weingarten 2007). On Threadless, designers upload their T-shirt designs, which are then rated by Threadless visitors and members. Threadless selects the winning designs each week and prints them, and the winners receive cash prizes or store credits. For each design, users can leave comments and suggestions for the designer.

(2) Crowd Motives

This section discusses the main motivation behind the crowd’s active participation based on Brabham (2010), who interviewed 17 Threadless users who had submitted a design, actively rated designs, or routinely shopped there. Based on these interviews, he identified five major reasons for their participation: making money, improving creative skills, freelance opportunities, a love of the community, and addiction. According to his findings, the first motive (making money) is a major driver of design submissions. When the purpose is making profits, the crowd needs to be financially rewarded so that its members do not feel exploited. In contrast to the first motive, the second one (improving creative skills) is intrinsic. The third motive (freelance opportunities) is related to career advancement, and the fourth motive is a love of the community. In terms of the final motive (addiction), although the interviewees could not clearly explain their addiction, they stated that they had fun and felt a sense of belonging to the Threadless community.

(3) Implications

Threadless has a successful business model (Wikipedia 2012). Through its website, it can acquire popular designs at little or no cost while guaranteeing a certain volume of sales before printing each design. Although T-shirt designs are submitted by individuals, the community plays an important role. Threadless is known for its extremely loyal and vibrant user community. Without this community, designers would be less motivated. In addition, designs may be less successful without feedback from the community. This suggests that it is crucial for firms to understand not only how to tap the creativity of the crowd but also how to build a tight and enjoyable community around it.

C. Generating Content

Generating content encompasses all other categories in the sense that the purpose of the other tasks is ultimately to produce some specific content. The difference is that in content generation, the content itself is often the end product, as in the case of Wikipedia, whereas in the other activities, content is merely an input for some other process. In addition, there were approximately 130 million websites worldwide as of June 2011 (Whois Source 2011), so which websites a user decides to visit depends mainly on their relative value. In this regard, an effective way to attract users is to provide valuable content such as news articles, product reviews, and videos, as demonstrated by popular websites such as Wikipedia, Yelp, TripAdvisor, and YouTube (Doan et al. 2011).

(1) Case Analysis

YouTube users can watch a wide range of videos for free, including movie and TV clips, music videos, tutorial videos, blogging videos, and amateur videos that capture exciting moments (Wikipedia 2012), and can post their comments. Here it is important to understand why so many users are willing to upload their videos to YouTube for free and what YouTube does to encourage this (YouTube 2011). Before the birth of YouTube, there was no easy way for people to share videos online. People’s desire to upload videos is so natural that they do not perceive it as a crowdsourcing task from YouTube. This is a true win-win situation not only for YouTube and the users who upload videos but also for those who simply watch them. YouTube does not charge users for uploading or viewing videos, although these users consume YouTube’s IT resources. From YouTube’s perspective, these videos and users represent its most important assets, and therefore, instead of charging users, which might drive them away, YouTube focuses on generating profits through other means such as advertising and affiliate programs (Schonfeld 2011).

(2) Crowd Motives

Although it is clear how YouTube benefits from uploaded videos, it is less obvious what motivates users to upload them. First, some contributors want to share their videos only with their acquaintances and thus use YouTube as a video storage platform, whereas others upload videos to attract attention from others (Huberman et al. 2008). Second, status and recognition represent extremely important motives for contributing within an online community (Lampel & Bhalla 2007). An important implication of these observations is that YouTube and other firms that employ crowdsourcing to create content should leverage people’s desire for attention, particularly when there is no material compensation. Third, some contributors upload videos for marketing purposes; the prominence of YouTube’s view count feature suggests that this count is the most important piece of information on each video. Fourth, some contributors upload videos to promote certain things, such as products or causes that they like or believe in.

(3) Implications

The above example of generating content through crowdsourcing illustrates what motivates users to contribute in the absence of material rewards. Money is not the only motivator: a love of the activity itself, concern for those who would benefit from it, and recognition can also be effective motivators (Scott 2010). This suggests that firms should leverage users’ desire for attention when crowdsourcing content creation in the absence of monetary rewards. In this regard, it is important to feature the top contributors and to ensure that these users are aware of how much attention their content is receiving. In addition, firms should consider monetary rewards whenever possible, particularly when content production may incur some costs. In this way, firms can attract those individuals who are motivated mainly by monetary rewards while further motivating those who focus mainly on receiving attention.

D. Subcontracting Content

Subcontracting content focuses specifically on accomplishing those tasks that firms often outsource to achieve cost savings. Such tasks tend to be routine but difficult to automate. Although the Internet has already made it easier for people and firms to stay connected, firms remain unlikely to engage in direct crowdsourcing. Instead, they post their requirements on popular crowdsourcing sites, and users (i.e., the crowd) view them and choose those tasks that interest them. For such tasks, crowdsourcing has several advantages. The first advantage is that tasks can be accomplished at a much lower cost (Hoffmann 2009; Greengard 2011). The second advantage is that, because of the large size of the crowd and the relative simplicity of tasks, many individuals can work on a task, finishing it much faster than employees. The third advantage is that employees usually work at the office during office hours, whereas a crowd has no such space or time limitations. The fourth advantage is the diversity of crowd members, which can be an important asset in situations requiring activities such as language translation (Leimeister et al. 2009).

(1) Case Analysis

Amazon’s Mechanical Turk service, launched in 2005, is a crowdsourcing marketplace for subcontracting content (Wikipedia 2011). Buyers of this service (the so-called “requesters”) post tasks that require human intelligence to perform but are often simple enough for just about anyone to complete. Requesters can reject any result submitted by a “provider,” which affects his or her approval rating. These approval ratings are important because some requesters set minimum approval ratings as a qualification. On the other hand, after successfully completing a task, the provider receives some payment from the requester. These tasks typically take very little time and thus involve low pay, often ranging from less than a dollar to a few dollars. Ultimately, requesters have to pay Amazon a commission of 10% of the total payment to the provider. The most common HITs (human intelligence tasks) involve transcribing and rating podcasts, tagging images, writing articles, and commenting on blogs, all of which require only basic literacy and internet navigation skills.
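As a quick illustration of the economics above, the following sketch computes a requester's total outlay under the 10% commission stated in the text (an assumed flat rate for illustration; Amazon's actual fee schedule has varied over time):

    def requester_cost(reward_per_hit: float, num_hits: int, commission_rate: float = 0.10) -> float:
        """Total requester outlay: provider payments plus the marketplace commission."""
        payments = reward_per_hit * num_hits
        return round(payments * (1 + commission_rate), 2)

    # 1,000 HITs at $0.05 each: $50 to providers plus a $5 commission, $55 in total.
    print(requester_cost(0.05, 1000))  # 55.0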

(2) Crowd Motives

Ipeirotis (2010) conducted a survey in 2010 to understand individuals’ motives for doing work through Mechanical Turk and found that 47% of workers were from the U.S. and 34% from India. Because workers from these two countries accounted for a vast majority, he compared the responses of these two groups and found important differences in motives between them, particularly in terms of household income. Indian workers had a much lower level of household income than their U.S. counterparts, and many more U.S. workers perceived Mechanical Turk as a secondary source of income or simply as a productive way to pass time or apply their skills, whereas many Indian workers saw it as a primary source of income. This suggests that the motivation of the crowd may vary widely according to geographic location and thus that a crowd cannot always be analyzed as a whole.

(3) Implications

Many tasks cannot be automated and thus must be performed by humans. Such tasks are often simple enough for anyone with reasonable intelligence and computer skills to perform (Greengard 2011; Savage 2012). Instead of crowdsourcing such tasks directly, firms may tap crowds through marketplaces such as Mechanical Turk because this method is much less troublesome. Although the pay is often much lower than the U.S. standard (U.S. Census Bureau 2011), it can represent a respectable income for those in less developed countries such as India. Because these marketplaces can serve both parties well, they are quite successful. On the other hand, their usefulness depends on the skill of the workers the marketplace can attract, which indicates a need for specialized marketplaces for more complicated tasks requiring a higher level of skill or knowledge.

E. Reviewing Content

Polarized content reviews are common for movies and books on websites such as IMDb and Amazon. Although reviewing content tends to be subjective, it is different from designing content in that the main goal of the former is to motivate people to express their opinions, not to identify or predict what may satisfy a large number of people. Reviewing content also allows low-quality content to be filtered out. There are many reasons why crowdsourcing can be useful for reviewing content. The first is that the rapid proliferation of content makes it almost impossible for firms to keep track of and catch up with new content on a daily basis. The second is that individuals often value one another’s opinions and reviews more than editorial reviews, partly because the latter are generally perceived as biased toward firms’ best interests. The third is that collective opinions from hundreds or even thousands of individuals may be more persuasive than a single editorial (Levine & Shah 2007).

(1) Case Analysis

Digg is a social news website where the crowd decides which articles deserve more attention; users can “digg” an article to increase its popularity and bring it toward the top of the page. Google once considered acquiring Digg for approximately $200 million when Digg’s revenue was less than $10 million (Wikipedia 2011). Digg employs crowdsourcing to review content because all links to articles are generated by users. Therefore, the main value for Digg lies in users’ opinions on articles, whereas YouTube’s lies in the content itself. From the perspective of the crowd, Digg users’ sustained contributions can be explained in several ways: First, when a user comes across an interesting article, he or she may want to see how his or her view compares with that of other users (Scribd 2011); the user can achieve this by submitting the article to Digg and observing other users’ reactions. Second, the article invites comments and discussions on the Digg website, and therefore the submitter can learn more about the story by considering various perspectives (Scribd 2011). Finally, contributing an article that rises to the top of the Digg homepage can satisfy the submitter’s desire for attention and recognition within the community.

(2) Crowd Motives

It is important to understand why users like Digg more than traditional news sites such as Reuters or Yahoo News. One reason is that users’ comments and discussions often make a story much more interesting and engaging; in addition, the reader can view the story from different perspectives. Another reason is that users do not know what unusual and obscure articles they may come across on Digg. Finally, the stream of news content changes constantly, and this freshness attracts many repeat visits from users who want to stay on top of global news. Digg’s success derives from the crowd that determines which articles should surface to the top. Although this type of democracy is welcomed by users, it can be dangerous for the company itself. Another challenge is to prevent so-called “groupthink,” in which the crowd thoughtlessly follows the majority’s decisions and repeats its actions. The wisdom of the crowd is based on its diversity, and therefore it is important to maintain independent thought and opinions within the crowd (Surowiecki 2005).

(3) Implications

Employing crowdsourcing for content reviews has many benefits. Firms do not have to review thousands (and sometimes millions) of pieces of content and can provide more trustworthy reviews (Savage 2012). One side benefit is fostering the community: each user is influenced by the community’s opinions, and his or her own opinion influences the rest of the community. Therefore, asking users for their opinions can not only help generate reviews but also increase their sense of belonging to the community, which in turn can encourage them to contribute even more. Although there are many benefits in democratizing the voting process, care must be taken to preserve the crowd’s independent opinions (Hoffmann 2012); otherwise, groupthink can result in severely biased outcomes. In addition, firms should have appropriate mechanisms to prevent the abuse of their review systems by individuals who wish to promote or hinder a piece of content. Unfortunately, the rise of crowdsourcing marketplaces has made such manipulation much easier and cheaper than in the past. Therefore, crowdsourcing can be a double-edged sword that brings huge benefits to one firm but wreaks havoc on another.

F. Providing Content Services

Providing content services generally entails tasks requiring a high level of skill, and therefore firms pay closer attention to these tasks than to routine ones and are more selective in terms of the crowd’s qualifications and previous experience. Indeed, workers have to compete based not only on their qualifications and experience but also on their prices. For example, on vWorker (formerly “Rent a Coder”), firms hold virtual auctions to decide which of many freelance programmers they will assign to a particular task. In addition, competition is quite fierce because many programmers come from countries with a lower standard of living and thus are willing to accept lower pay. On the other hand, the level of competition on vWorker may not be as high as that on TopCoder.

(1) Case Analysis

Founded in 2001, TopCoder is a firm specializing in holding programming competitions. Every couple of weeks, talented programmers from around the world compete in TopCoder’s SRM (single round match), which is a two-hour competition that tests the programming and debugging skills of contestants as well as their knowledge of algorithms.
Other types of competitions also take place, and all are related to software development and testing. By holding these competitions, TopCoder has built a community of 300,000 members (TopCoder 2011), and its revenue was $19 million in 2007 (Wikipedia 2011). TopCoder makes money through this community in several ways. One way is by helping firms exploit its pool of talent by crowdsourcing various projects. For example, when a third party wants to crowdsource a software component, TopCoder can hold a design competition for that component; the best submission is selected, and the winner is rewarded. A separate development competition can then be used to implement the chosen design, and again the best submission is selected. TopCoder holds many other types of competitions, covering the specification, architecture, assembly, and testing of software. Here crowdsourcing is made more effective by dividing a task into small pieces, which allows contestants to focus on what they are best at.
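The decomposition just described can be pictured as a chain of phase-specific contests, each judged independently. The sketch below is a generic rendering of that idea under our own assumptions; the phase names come from the text, while the scoring function and data types are illustrative:

    from typing import Callable

    def run_contest(submissions: list[str], score: Callable[[str], float]) -> str:
        """Select the single best submission; in TopCoder's model the winner is rewarded."""
        return max(submissions, key=score)

    # A software task split into phase-specific contests, judged independently,
    # so contestants can focus on the phase they are best at.
    PHASES = ["specification", "architecture", "design", "development", "assembly", "testing"]

    def crowdsource_component(collect: Callable[[str], list[str]],
                              score: Callable[[str], float]) -> dict[str, str]:
        """Run one contest per phase; `collect(phase)` gathers that phase's submissions."""
        return {phase: run_contest(collect(phase), score) for phase in PHASES}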

(2) Crowd Motives

Because only the winners are rewarded, contestants often receive nothing for their hard work. On the other hand, because they treat their work as part of a competition, they often find the activity exciting and thus consider the actual reward a secondary benefit (Tang et al. 2011; Anthes 2010). Instead, winning a competition and being acknowledged are often their primary motives because this achievement can brighten their career prospects and provide recognition in the community. Their motives are similar to those of Threadless users. To satisfy users’ desire for recognition, TopCoder features the top contestants prominently. TopCoder understands that its core asset is its strong community of programmers, and to better understand and serve its members, it frequently polls them on various issues.

(3) Implications

TopCoder has built a community of talented programmers by hosting competitions. As a result, it helps its clients crowdsource their projects by encapsulating their requirements as competitions. Here the clients pay much less, and programmers have fun while making some money. This crowdsourcing model is similar to Dell’s content service (Wikipedia 2011). On the other hand, TopCoder tasks require more time and expertise, and therefore any given submission is much less likely to be selected. TopCoder addresses this issue by treating tasks as contests: contestants perceive that they have lost a contest, not that they have gone unpaid for their work. This suggests that firms should portray their tasks as competitions or other enjoyable activities to encourage the crowd to work based on its intrinsic motivation.

G. Supporting Content Solutions

Supporting content solutions aims at identifying feasible solutions to given problems; it is only about fulfilling the requirements, and therefore it focuses on generating a single valid solution and on evaluating submissions. A crowd is characterized by its large size, its willingness to perform tasks at a low cost, and its occasional creativity, whereas generating content solutions, particularly scientific ones, requires specific skills, knowledge, and certain levels of dedication and patience; this seems to suggest that a crowd is ill-suited to such work. However, this way of thinking is flawed in several ways: First, a crowd can include anyone, and therefore it is simply a matter of targeting the appropriate members. Second, generating content solutions has become increasingly interdisciplinary in nature, and therefore it is beneficial to recruit more people to assess problems because some may offer entirely new perspectives. Third, crowdsourcing can not only motivate expert members of the crowd but also rekindle passion among amateurs who have formal training in a particular scientific discipline but work in a different field (Roman 2009).

(1) Case Analysis

InnoCentive, founded in 2001 by Eli Lilly, is a platform for tapping an external pool of talent for tasks related to drug development. Since the beginning, the platform has been available to other firms (e.g., Procter & Gamble and Boeing) eager to leverage this pool of expertise. These firms, known as “seekers,” post some of their most difficult problems on the InnoCentive website. “Solvers” offering successful solutions then receive rewards that typically range from $10,000 to $100,000, and more than $28 million in reward money has been given to solvers since the founding of InnoCentive. Here the problems (“challenges”) come from a wide range of disciplines, including chemistry, life sciences, and computer science, and can be theoretical or practical. To foster crowd collaboration, InnoCentive provides “team project rooms” on the website, which are basically a space for teams to share notes and discuss results privately. This has multiple benefits. First, it helps to expand the community by incentivizing members to tell others about InnoCentive. Second, in addition to leveraging the crowd’s knowledge, it leverages the crowd’s social networks. Finally, an individual whose expertise is in one field is no longer limited to challenges in that field because he or she may know someone who can solve challenges in other fields (Glance & Huberman 1994).

(2) Crowd Motives

Solvers are driven by many motives. Although monetary compensation is obviously attractive, other concerns such as reputation and recognition are also very important. In addition, solvers are driven by purely intrinsic motives: they often indulge in the sheer joy of problem solving, so the process of tackling a challenge is itself rewarding, and actually solving it is more like a bonus. Some solvers are retired scientists who are simply happy to be able to apply their skills and knowledge again. The case of InnoCentive shows that intrinsic motives have considerable influence on people’s desire to solve problems. On the other hand, participation based on career and social concerns or for the sake of the competition tends to reduce the likelihood of producing solutions (Defense Advanced Research Projects Agency 2010).

(3) Implications

InnoCentive is a marketplace where firms can tap a global pool of scientific talent. This is enabled by the increasing tendency of firms to crowdsource tasks and by the empirical observation that diversity is extremely beneficial for solving difficult problems (Page 2008). Through InnoCentive, firms can solve some of their most difficult R&D problems, and solutions often come at a much lower cost than in-house solutions (InnoCentive 2012). The crowd, in turn, not only receives monetary rewards for successfully solving challenges but also can satisfy its appetite for problem solving. Given the difficulty of the problems, the crowd has been relatively successful in solving these challenges, and its broad range of expertise is something that R&D labs admire. Although the crowd occasionally forms teams, a lack of collaboration among teams generally makes the crowd less effective. This suggests that firms wishing to leverage crowdsourcing to solve their problems should promote the openness of problems by offering appropriate incentives and removing relevant barriers.

CONCLUSION AND IMPLICATIONS

This paper proposes a crowdsourcing classification model and applies it to several internet-based crowdsourcing applications by examining representative websites that make effective use of them. As shown in Table 2, crowdsourcing involves a diverse and complex set of human motives. The major characteristics of crowdsourcing activities can be summarized as follows:
First, creating content solutions is facilitated by monetary compensation. Second, designing content through crowdsourcing is driven by motives such as monetary rewards, the enhancement of creative skills, freelance opportunities, a love of the community, and addiction, among which the principal motivator is a feeling of closeness to the community. Third, generating content is facilitated by a desire to share videos, to achieve status and recognition, and to facilitate marketing; among these, the major motivators are monetary incentives and a desire for recognition. Fourth, subcontracting content is driven by the level of income, and therefore the primary motivator is monetary rewards.
Fifth, reviewing content is expedited by a chain of motives such as users’ comments and discussions, readers’ enjoyment, and fresh content, among which the major motivators are a love of the activity and a feeling of closeness to the community. Sixth, providing content services is motivated by career prospects and recognition in the community, and the main motivator is a feeling of closeness to the community. Seventh, generating content ideas is promoted by brand loyalty and a desire to have one’s ideas featured, and the critical motivator here is a feeling of closeness to the community. Finally, supporting content solutions is motivated by the joy of problem solving, and the essential motivators are a love of the activity and a feeling of closeness to the community.
[Table 2. Summary of crowdsourcing activities, crowd motives, and critical motivators]
The critical motivators of crowdsourcing classification activities generally fall into one of the following categories: monetary incentives, a love of the activity, a desire for recognition, and a feeling of closeness to the community. Their dominance depends on the way the crowdsourcing activity is applied. For example, money tends to be the primary incentive for boring and uninteresting tasks. For tasks that require creativity or are truly engaging, a love of the activity is the major source of motivation. On the other hand, tasks that are challenging or require considerable expertise are likely to attract people with a strong desire for recognition. Finally, a vibrant and supportive community can serve as a type of magnet for sustained contributions. In sum, identifying the major characteristics of a crowdsourced task and the associated crowd is crucial for designing the most relevant incentives.
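Read as a design heuristic, this mapping from task characteristics to dominant motivators can be captured in a few lines. The sketch below is our own illustrative encoding of the paragraph above, not an instrument from the study:

    def likely_motivators(boring: bool = False, creative_or_engaging: bool = False,
                          challenging_or_expert: bool = False, vibrant_community: bool = False) -> list[str]:
        """Map task characteristics to the incentive categories discussed above."""
        motivators = []
        if boring:
            motivators.append("monetary incentives")
        if creative_or_engaging:
            motivators.append("a love of the activity")
        if challenging_or_expert:
            motivators.append("a desire for recognition")
        if vibrant_community:
            motivators.append("a feeling of closeness to the community")
        return motivators

    # A routine transcription HIT: money dominates.
    print(likely_motivators(boring=True))  # ['monetary incentives']
    # An InnoCentive-style challenge: love of the activity plus recognition.
    print(likely_motivators(creative_or_engaging=True, challenging_or_expert=True))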
The results have important theoretical implications. Compared with other internet applications, crowdsourcing is relatively new, and therefore it has attracted little research attention to date. The present study examines the motivation behind crowdsourcing and the critical motivators in terms of crowdsourcing activities. The findings are expected to serve as a springboard for future research on this topic. Because of its exploratory nature, this study presents only descriptive findings. In this regard, future research should employ sophisticated statistical approaches to provide an empirical analysis of the issues. In addition, the proposed taxonomy of crowdsourcing activities can be applied to other types of crowdsourcing projects and is expected to serve as a good theoretical platform for further empirical research.
The results also have important practical implications. The issues delineated in Table 2 can help practitioners better plan their strategies for adopting crowdsourcing. In this regard, the results are expected to be beneficial to managers because they explain the key motivation behind crowdsourcing adoption for each crowdsourcing activity. In addition, the results suggest that managers should be familiar with the latest internet applications such as crowdsourcing to increase the effectiveness and efficiency of their organization’s operations. In sum, given the scarcity of relevant research, the findings provide researchers and practitioners with valuable and novel insights and are expected to motivate further research on the adoption of crowdsourcing.

ACKNOWLEDGMENT

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science, and Technology (No. 2010-0008303).

References

  1. AACS_encryption_key_controversy, Wikipedia, http://en.wikipedia.org/wiki/AACS_encryption_key_controversy (Retrieved on August 15, 2011).
  2. Albors, J., Ramos, J. C., and Hervas, J. L., “New Learning Network Paradigms: Communities of Objectives, Crowdsourcing, Wikis and Open Source”, International Journal of Information and Management, Vol.28, pp.194-202, 2008.
  3. AllTopStartups, Scribd, http://www.scribd.com/doc/54413057/6/Digg-com (Retrieved on August 14, 2011).
  4. Amazon Mechanical Turk, Wikipedia, http://en.wikipedia.org/wiki/Amazon_Mechanical_Turk (Retrieved on August 14, 2011).
  5. Anthes, G., “Mechanism Design Meets Computer Science”, Communications of the ACM, Vol.53, No.8, pp.11-13, 2010.
  6. Bonabeau, E., “Decisions 2.0: The Power of Collective Intelligence”, MIT Sloan Management Review, Vol.50, No.2, pp.45-52, 2009.
  7. Brabham, D. C., “Moving the Crowd at Threadless”, Information, Communication & Society, Vol.13, No.8, pp.1122-1145, 2010.
  8. Burkitt, L., “Need To Build A Community? Learn From Threadless”, Forbes.com, 2010, http://www.forbes.com/2010/01/06/threadless-t-shirt-community-crowdsourcing-cmo-network-threadless.html (Retrieved on March 7, 2012).
  9. ComScore Releases May 2010 U.S. Online Video Rankings, comScore, 2010, http://www.comscore.com/Press_Events/Press_Releases/2010/6/comScore_Releases_May_2010_U.S._Online_Video_Rankings (Retrieved on October 27, 2011).
  10. Defense Advanced Research Projects Agency, DARPA Network Challenge Project Report, 2010, http://www.hsdl.org/?view&did=17522 (Retrieved on October 5, 2011).
  11. Dell IdeaStorm, Wikipedia, http://en.wikipedia.org/wiki/Dell_IdeaStorm (Retrieved on October 15, 2011).
  12. Digg, Wikipedia, http://en.wikipedia.org/wiki/Digg (Retrieved on August 13, 2011).
  13. Doan, A., Ramakrishnan, R., and Halevy, A. Y., “Crowdsourcing Systems on the World-Wide Web”, Communications of the ACM, Vol.54, No.4, pp.86-96, 2011.
  14. Domain Counts & Internet Statistics, Whois Source, http://www.whois.sc/internet-statistics/ (Retrieved on June 25, 2011).
  15. Facts & Stats, InnoCentive, http://www.innocentive.com/about-innocentive/facts-stats (Retrieved on March 11, 2012).
  16. Glance, N., and Huberman, B. A., “Dynamics of Social Dilemmas”, Scientific American, Vol.270, No.3, pp.76-81, 1994.
  17. Google To Acquire YouTube for $1.65 Billion in Stock, News from Google, http://www.google.com/press/pressrel/google_youtube.html (Retrieved on February 20, 2012).
  18. Greengard, S., “Following the Crowd”, Communications of the ACM, Vol.54, No.2, pp.20-22, 2011.
  19. Hey, T., “The Next Scientific Revolution”, Harvard Business Review, November 2010, pp.56-63, 2010.
  20. Hoffmann, L., “Crowd Control”, Communications of the ACM, Vol.52, No.3, pp.16-17, 2009.
  21. Hoffmann, L., “Data Mining Meets City Hall”, Communications of the ACM, Vol.55, No.6, pp.19-21, 2012.
  22. Howe, J., “Crowdsourcing: A Definition”, 2006, http://crowdsourcing.typepad.com/cs/2006/06/crowdsourcing_a.html (Retrieved on June 25, 2011).
  23. Howe, J., “The Rise of Crowdsourcing”, WIRED Magazine, 2006, http://www.wired.com/wired/archive/14.06/crowds.html (Retrieved on June 25, 2011).
  24. Huberman, B. A., Loch, C., and Onculer, A., “Status as a Valued Resource”, Social Psychology Quarterly, Vol.67, No.1, pp.103-114, 2004.
  25. Huberman, B. A., Romero, D. M., and Wu, F., “Crowdsourcing, Attention and Productivity”, Journal of Information Science, Vol.35, No.6, pp.758-765, 2008.
  26. Important Improvement to Idea Storm, 2007, http://www.dell.com/content/topics/global.aspx/ideastorm/moderator?c=us&l=en&s=gen (Retrieved on October 15, 2011).
  27. Ipeirotis, P. G., “Demographics of Mechanical Turk Workers”, New York University Faculty Digital Archive, 2010, http://archive.nyu.edu/handle/2451/29585 (Retrieved on August 15, 2011).
  28. Jouret, G., “Inside Cisco’s Search for the Next Big Idea”, Harvard Business Review, September 2009, pp.43-45, 2009.
  29. Lakhani, K. R., Jeppesen, L. B., Lohse, P. A., and Panetta, J. A., “The Value of Openness in Scientific Problem Solving”, HBS Working Paper No. 07-050, 2007.
  30. Lampel, J., and Bhalla, A., “The Role of Status Seeking in Online Communities: Giving the Gift of Experience”, Journal of Computer-Mediated Communication, Vol.12, No.2, Article 5, 2007.
  31. Leimeister, J. M., Huber, M., Bretschneider, U., and Krcmar, H., “Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition”, Journal of Management Information Systems, Vol.26, No.1, pp.197-224, 2009.
  32. Levine, S. S., and Shah, S., “Cultivating the Digital Commons: A Framework for Collective Open Innovation”, The Annual Meeting of the American Sociological Association, 2004.
  33. Malone, T. W., Laubacher, R., and Dellarocas, C., “Harnessing Crowds: Mapping the Genome of Collective Intelligence”, MIT Working Paper No. 2009-001, 2009.
  34. Myers, C. B., “How to Effectively Crowdsource Product Design”, Social Media, 2010, http://thenextweb.com/socialmedia/2010/11/19/how-to-effectively-crowdsource-product-design/ (Retrieved on April 4, 2012).
  35. Newstead, B., and Lanzerotti, L., “Can You Open-Source Your Strategy?”, Harvard Business Review, October 2010, Vol.32, 2010.
  36. Page, S. E., “The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies”, Princeton University Press, Princeton, 2008.
  37. Porter, M. E., “Competitive Advantage: Creating and Sustaining Superior Performance”, Free Press, New York, 1985.
  38. Pink, D. H., “Drive: The Surprising Truth About What Motivates Us”, Riverhead Hardcover, 2009.
  39. Roman, D., “Crowdsourcing and the Question of Expertise”, Communications of the ACM, Vol.52, No.12, p.12, 2009.
  40. Savage, N., “Gaining Wisdom from Crowds”, Communications of the ACM, Vol.55, No.3, pp.13-15, 2012.
  41. Schonfeld, E., “Citi: Google’s YouTube Revenues Will Pass $1 Billion In 2012 (And So Could Local)”, TechCrunch, 2011, http://techcrunch.com/2011/03/21/citi-google-local-youtube-1-billion/ (Retrieved on February 21, 2012).
  42. Scott, P., “Foldit Research Paper’s 57,000+ Co-authors”, Communications of the ACM, Vol.53, No.10, p.15, 2010.
  43. Statistics about Business Size (including Small Business) from the U.S. Census Bureau, U.S. Census Bureau, http://www.census.gov/econ/smallbus.html (Retrieved on December 3, 2011).
  44. Surowiecki, J., “The Wisdom of Crowds”, Anchor Books, 2005.
  45. Tang, J. C., Cebrian, M., Giacobe, N. A., Kim, Y. W., Kim, T. M., and Wickert, D., “Reflecting on the DARPA Red Balloon Challenge”, Communications of the ACM, Vol.54, No.4, pp.78-85, 2011.
  46. Terms of Use, Digg, http://about.digg.com/terms-use (Retrieved on August 14, 2011).
  47. Threadless, Wikipedia, http://en.wikipedia.org/wiki/Threadless (Retrieved on March 6, 2012).
  48. TopCoder Homepage, http://www.topcoder.com/ (Retrieved on November 7, 2011).
  49. TopCoder, Wikipedia, http://en.wikipedia.org/wiki/TopCoder (Retrieved on November 7, 2011).
  50. Weingarten, M., “Project Runway for the T-shirt Crowd”, CNNMoney, 2007, http://money.cnn.com/magazines/business2/business2_archive/2007/06/01/100050978/index.htm (Retrieved on March 6, 2012).
  51. Whitelaw, B., “Almost all YouTube views come from just 30% of films”, The Daily Telegraph, 2011, http://www.telegraph.co.uk/technology/news/8464418/Almost-all-YouTube-views-come-from-just-30-of-films.html (Retrieved on February 21, 2012).
  52. YouTube Partners Program, 2011, http://www.youtube.com/partners (Retrieved on February 21, 2012).
  53. YouTube, Wikipedia, http://en.wikipedia.org/wiki/YouTube (Retrieved on February 21, 2012).