Making the Crowd Wiser: (Re)combination through Teaming in Crowdsourcing

Firms as solution seekers increasingly adopt crowdsourcing contests to acquire innovative solutions to challenging problems. For complex problems, no individual solver may possess the full range of knowledge required to develop an effective solution. Teaming in crowdsourcing practice has shown promise for developing effective solutions by (re)combining the knowledge of diverse contestants, but theory on the process by which teaming achieves this (re)combination remains scarce. First, given the uncertainty of teaming up with strangers, it is not clear how contestants' decisions to team up with others will impact their eventual performance. Second, because teaming inherently reduces the number of solutions submitted by the crowd, it is not clear whether and how teaming will impact the quality of the solutions acquired by the solution seeker. In this paper, we theorize platform features as different quality signals for (re)combination and systematically explore (a) how the use of different quality signals affects contestants' performance and (b) how contestants' use of signals for (re)combination affects crowdsourcing effectiveness, thereby developing a comprehensive theory from a (re)combination perspective. Using simulation experiments, we find that different signals highlight the immediate returns of solution integration and the potential returns of team learning, conditional on problem complexity and the timing of team formation. More interestingly, we find a potential misalignment between solver-level and seeker-level outcomes. These findings provide new insights into contestant performance and crowdsourcing effectiveness and have implications for the design of crowdsourcing platforms.