5. Open Collaboration and the Global Church

In the pre-digital “paper” era, large, complex projects could only occur in industry (private production) or government (public production). With the advent of the digital era, where content is composed of bits of digital data, a new means of accomplishing such projects has emerged. Social production, using computing devices connected via the Internet, enables a geographically distributed team of self-selecting individuals to accomplish complex objectives by collaborating openly toward a common goal. Compared to traditional models, these objectives can often be achieved in less time, with better results, and at a near-zero marginal cost. Open collaboration is a model that can go the distance and meet the need for adequate discipleship resources in every language of the world.

~ ~ ~

In the middle of the 15th century, Johannes Gutenberg invented the movable type printing press. Printing presses were already in use, but they required carving out each page of text from a block of wood. The movable type printing press made it possible to arrange carvings of individual letters into any words you liked. Books, indulgences of the Roman Catholic Church, and eventually the writings of the Reformers could be printed on Gutenberg’s press in a fraction of the time that it would have taken using the older printing technology.

In addition to the speed of printing, Gutenberg’s press also significantly lowered the cost of printing. By the end of the 15th century, a printer with a movable type press could print over three hundred copies of a book for the same price that a scribe could make a single copy of the same book. This rapid and less expensive means of printing books resulted, not surprisingly, in an abundance of books.

The financial cost of producing books was lessened by Gutenberg’s press, but it remained significant. The risk of printing hundreds of copies of a book that no one wanted to read could be disastrous for a printer. At first, printers alone bore all the risk for the books they printed. Eventually, publishers became the ones who took on the risk of printing an unpopular book and incurring the financial loss.

In the five hundred years since Gutenberg’s era, many new media have been created, including radio, television, records, video tapes, CDs, DVDs, and more. The technologies behind these media differ from each other in many ways, but they are all built on the same core of “Gutenberg economics”: massive investment costs.1

A “Scarcity” Model

Gutenberg economics are rooted in the “paper” era, where industries and businesses largely revolved around the creation and distribution of physical objects. Dealing in physical objects is costly and time-consuming. Since both time and money are scarce, the cost of production is high.

For instance, it costs a lot of money to professionally publish a book. Not only are there massive costs associated with creating and distributing it (editing, formatting, typesetting, designing, printing, and so on), the cost of correcting errors is prohibitively high as well. If an error is found in the book, the publisher has to absorb the cost of recalling the defective copies, correcting the error, reprinting, and redistributing the book. If this happens too often, the publisher could find himself in serious financial trouble.

To minimize the risk of losing money due to poor quality (or poor content), the traditional means of creating media is tightly controlled at every point in the process by a small group of people in positions of power at the top of the industry. In the world of books, the content creation process is controlled by publishers. In the realm of music and film, it is controlled by record labels and movie studios. Regardless of the medium, the pattern is the same: a small group controls every step of the process in order to maximize the revenue stream back to the ones at the top who control it.

An “Abundance” Model

With the rise of the digital age and the ability to encode content as “bits” of information, what used to be scarce became abundant. Creating and distributing content became virtually free and almost instantaneous. Instead of requiring a massive financial investment to create content and distribute it to the general public, anyone could now create whatever they wanted from any computer and instantly distribute it on the Internet for free. Correcting errors in an article published on a weblog was as simple as clicking “edit”, making the needed change, and clicking “save”. The massive cost of creating and distributing content in the “paper” era had been reduced to virtually zero marginal cost in the “bits” era.

Not surprisingly, this reduction in marginal cost to virtually zero had massive implications for industries and people everywhere. Chris Anderson, in Free: The Future of a Radical Price, lists some of the differences that resulted from this massive shift:2

| | Scarcity | Abundance |
| --- | --- | --- |
| Rules | “Everything is forbidden unless it is permitted” | “Everything is permitted unless it is forbidden” |
| Social model | Paternalism (“we know what is best”) | Egalitarianism (“you know what is best”) |
| Profit plan | Business model | We will figure it out |
| Decision process | Top-down | Bottom-up |
| Management style | Command and control | Out of control |

The rise of computers and digital technology paved the way for the monumental shift from a “scarcity” mentality to one of “abundance.” But it also enabled another significant change: the potential for massively distributed, open collaboration among self-selecting individuals. Before looking at open collaboration, we first need to understand the context in which it came into its own.

Social Production

Some tasks are so large and complex that they are best accomplished by teams of people working together rather than by an individual. Building a road, for instance, is a large and complex undertaking, best accomplished by a team of people instead of one person attempting the whole project alone. Building a car is another such undertaking, likewise best accomplished by a team.3

It used to be that accomplishing a large, complex task as a team of people could only be done in one of two sectors: the private sector or the public sector. In the private sector, accomplishing a task is governed by market forces: the task is undertaken if the revenue from the finished product (sales) is expected to exceed the cost of assembling the team and completing the work (expenses).4 Most cars are built in the private sector. In the public sector, a task is accomplished when it is deemed beneficial to society, even though it is not compensated through the sale of a product. Most roads are built in the public sector.

The rise of digital technology (namely, computers and the Internet) has enabled a third means of undertaking large and complex tasks: social production.5 People who accomplish tasks through social production are driven primarily by intrinsic motivations (e.g. “having fun”) rather than extrinsic motivations (e.g. “getting paid”). Most picnics happen through social production, as do neighborhood music recitals and church potlucks.

Social production has always been an important aspect of society, but before the digital era, it was necessarily limited in its scope. The primary constraint on social production in the “paper” era was that it was almost completely dependent on physical proximity. People who shared the same interest or wanted to accomplish the same task had to be in the same geographic vicinity for social production to happen. If they could not meet together in person, they were severely limited in what they could do together. It was still possible to make phone calls and coordinate some elements of a task or event, but the task or event itself could not happen at a distance. What the Internet and personal computers enabled was the possibility for social production to happen virtually instantaneously, for free, among people distributed all over the world.

Some forms of social production (like the music recital and church potluck) still require being physically in the same location as other participants. It is hard to have a potluck over the Internet (something for which we can all be grateful). But other forms of social production have flourished in the online world. Hobbyists of the most esoteric strain can compare notes and interact with anyone else who shares the same interests using web forums. People who enjoy posting humorous captions on pictures of cats (“lolcats”) can do so from all over the world. Citizens can join together in online mailing lists to discuss their concerns and bring about political change.

The ability to work together from anywhere in the world with others who share the same objectives has proven to be extremely compelling. The number of websites and web services that depend on this capability continues to grow rapidly. Today, social production is the foundation on which some of the most popular websites are built, including YouTube (for sharing videos), Wikipedia (for sharing knowledge), OpenStreetMap (for mapping the world), and even some aspects of commercial websites like Amazon (its ratings and comments system) and eBay (establishing the trustworthiness of a buyer or seller).

According to Yochai Benkler, a professor at Harvard Law School, social production is the critical long-term shift caused by the Internet. It is, in some contexts, more efficient than either public production (governments) or private production (firms and markets). Social production in the Internet era is sustainable and is moving fast, but it is a threat to—and threatened by—existing industrial models.6

Open Collaboration and Crowd-Sourcing

Open collaboration is built on the model of social production, as opposed to production in the private or public sectors. This is the definition of “open collaboration” used in this book:

An approach to accomplishing an objective that encourages and depends on contributions from self-selecting individuals or entities who are not formally associated with the particular cause or initiative (as project staff and partners are).

An openly collaborative project, then, is one that anyone can join and to which anyone can make meaningful contributions without first being formally inducted into it. The only prerequisites for involvement in openly collaborative projects are the desire of the individual to join the project and the existence of a means to contribute to it. As we will see later, the possibility for anyone to be involved in an openly collaborative project does not minimize the role of “experts” in that project or result in anarchy.

It should be noted that open collaboration is not synonymous with another frequently encountered term in the online world: crowd-sourcing. Crowd-sourcing is:

The act of outsourcing tasks, traditionally performed by an employee or contractor, to an undefined, large group of people or community (a crowd), through an open call.

Open collaboration includes elements of crowd-sourcing, but crowd-sourcing certain aspects of a task does not make the task openly collaborative. Crowd-sourcing can happen in tasks that are not built on a social model of production. An example of this is the comments and rating system used by Amazon. Amazon is clearly in the private sector, but it uses a crowd-sourcing model very effectively to improve the quality and value of its online store. Because Amazon encourages and enables anyone to submit comments and ratings on a product, potential buyers benefit by finding out in advance what previous purchasers think of it.

The crowd-sourced ratings and comments on Amazon are among the aspects of the online merchant’s website that have made it so successful. But Amazon itself is not an openly collaborative project. For an example of an openly collaborative project, we will look at one of the most well-known software projects of the digital era: the Linux operating system.7

The Story of Linux

In 1991, Linus Torvalds, a Finnish student at the University of Helsinki, posted a message to an online computer newsgroup, informing the community that he was working on a new operating system for personal computers. He specifically invited feedback from other computer users and included a copy of the source code he was writing.8 Five of the first ten people who wrote back included improvements to the code Torvalds had written. The openly collaborative Linux operating system had launched.

At first, Linux was intended only as a hobby. But it gradually attracted more “hackers” who continually improved it. Soon Linux began to be used by commercial companies to power their servers. Eventually, these companies began to assign their own programmers to help improve the open-source operating system. It steadily increased in popularity and soon became one of the most widely used operating systems for the computer servers that power the Internet.

Today, Linux is a massive operating system, composed of over 13 million lines of source code. It powers everything from supercomputers (more than 450 of the 500 fastest supercomputers in the world run Linux9) to mobile phones (Android is Linux-based and is one of the fastest-growing mobile operating systems worldwide). Since 2005, over 6,000 contributors from over 600 different companies have helped to improve the Linux operating system.10 These companies are fierce competitors in the economic arena, but they collaborate in the creation of the operating system that benefits them all. They implicitly agree that “a rising tide lifts all boats.”

Why has Linux been so successful and impervious to attacks by vendors of commercial operating systems who have been severely threatened by it? One reason is that Linux is free of charge, so anyone who wants to can download the entire source code, build it, and install the operating system on any computer without paying for a license. There is no financial barrier preventing access to the operating system. Not only can anyone use it free of charge, they can also give away copies of it to anyone else. Free access to the operating system has been an important factor in the success of Linux. But even more important than “free of charge” access to Linux is the license under which the source code of the operating system is released.

Early on, Torvalds released the source code to his operating system under the GNU General Public License. This license gives anyone the freedom to see the source code, modify it, and redistribute their modifications (with or without financial compensation), as long as those modifications are released under the same license. This makes it possible for computer programmers all over the world to openly and legally collaborate in the creation of the Linux operating system. It also ensures that the work they have done cannot be improved by others without those improvements being freely shared with the rest of the community.

The General Public License was specifically written to ensure that “what was intended to be open, stays open.” Because of this, no commercial entity can buy the rights to Linux and shut it down to prevent competition with their own operating system. The Linux operating system has been legally locked open.

Linux is not the only open-source software project built on the model of open collaboration. Many other highly successful software projects (like the Apache webserver, the Firefox web browser, and the LibreOffice office suite) provide additional examples of how effective open collaboration can be in the development of computer software. But open collaboration is effective for more than the development of software. It is also a highly effective model for the creation of content—massive amounts of content.

The Story of Wikipedia

Almost ten years after the launch of the Linux operating system, an Internet entrepreneur named Jimmy Wales and a philosopher named Larry Sanger started an online encyclopedia, one that would be completely free. They assembled a team of contributors and began writing content. But they quickly ran into a significant obstacle. Before potential authors could write articles, they had to pass an elaborate screening process, which greatly limited the number of contributors. The actual creation of content then involved a tedious, seven-step process before anything could be published online:

  1. Assignment

  2. Find lead reviewer

  3. Lead review

  4. Open review

  5. Lead copyediting

  6. Open copyediting

  7. Final approval and markup

Not surprisingly, their encyclopedia progressed very slowly. Many months after the launch of their encyclopedia, called Nupedia, they realized something needed to change.

About that time, a new technology was invented: the wiki. Wiki software (from the Hawaiian word for “quick”) puts an “edit” button on every page, enabling anyone to quickly edit a web page while also preserving a log of the edits made to each page. This makes it possible for large numbers of people to collaborate together over the Internet to create content, using only the web browser on their computer.
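
To make the mechanics concrete, here is a minimal sketch in Python of the idea at the heart of wiki software, assuming nothing more than an append-only revision log; the class and method names are illustrative, not the API of any actual wiki engine.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    author: str          # who saved this version (possibly anonymous)
    text: str            # the full text of the page after the edit
    saved_at: datetime   # when the edit was saved

@dataclass
class WikiPage:
    title: str
    revisions: list[Revision] = field(default_factory=list)

    def edit(self, author: str, new_text: str) -> None:
        """Clicking 'save' appends a revision; nothing is ever lost."""
        self.revisions.append(
            Revision(author, new_text, datetime.now(timezone.utc)))

    def current_text(self) -> str:
        return self.revisions[-1].text if self.revisions else ""

    def revert(self, author: str) -> None:
        """Undo the latest edit by re-saving the previous version's text."""
        if len(self.revisions) >= 2:
            self.edit(author, self.revisions[-2].text)

page = WikiPage("Movable type")
page.edit("alice", "Gutenberg invented movable type in the mid-1400s.")
page.edit("mallory", "Movable type was invented by aliens.")  # vandalism
page.revert("carol")  # restores alice's text; the log keeps everything
print(page.current_text())
```

Because every version is preserved, restoring a damaged page is a single operation, a property that becomes important later in this chapter.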

In January 2001, Wales and Sanger set up a wiki version of their encyclopedia, called Wikipedia, where anyone could join and edit any article. By the end of January, seventeen articles had been created. By the end of February, Wikipedia had 150 articles, then 572 in March, and over a thousand by the end of May. The trickle of content was turning into a stream and then into a deluge. By the end of the year, 350 “Wikipedians” had joined the project and the site had more than 15,000 articles.11

In little more than 10 years, Wikipedia would come to have more than 20 million articles in over 270 languages, created by more than 15 million contributors. The open collaboration of geographically distributed, self-selecting people continues to create an immense encyclopedia that is the equivalent of more than 1,600 volumes of the Encyclopedia Britannica.12

Open collaboration is an effective means of creating vast quantities of content. But what about the quality of that content? There are many examples from Wikipedia’s history showing that, at times, it has contained significant errors. The fact that errors exist in an encyclopedia built openly by the collaboration of the masses, rather than by small numbers of experts behind closed doors, can give rise to concern.

This potential for error can be especially concerning when considering open collaboration as a model for creating discipleship resources. It is one thing to have misinformation and inaccuracies in an encyclopedia. It is altogether something else when they are included in discipleship resources, where the eternity of those who use those resources may hang in the balance.

Can Open Collaboration Be Trusted?

Wikipedia is, for better or worse, one of the most widely-known examples of open collaboration in the digital age. This can be a good thing, because the success of Wikipedia points out the strengths of the openly collaborative model, especially its potential for engaging massive numbers of people in the creation of vast amounts of content. But there is also a downside. It is all too easy to mistakenly attribute the problems in Wikipedia to the model of open collaboration itself. There can be a knee-jerk reaction against what some have termed “the wiki model” as a whole because of concerns with Wikipedia specifically.

It follows, then, that any discussion of open collaboration as a model for equipping the global church to grow in discipleship must address this misunderstanding. The objective is not to defend Wikipedia, but to uncouple the model (open collaboration) from one of its most visible examples (Wikipedia), allowing the model itself to rise or fall on its own merits. To begin with, we will address one of the most common concerns about Wikipedia: the notion that it is untrustworthy because it is not created exclusively by experts.

When the Experts Are Wrong

The Encyclopædia Britannica was first published in 1768 and is considered by many to be the pinnacle of encyclopedic perfection. Written by over 4,000 experts—including some Nobel laureates—in various fields, it is widely regarded as one of the most authoritative sources of information on a broad number of topics.

In 2005, the science journal Nature published an article entitled “Internet Encyclopaedias Go Head to Head,” comparing the accuracy of Wikipedia and Britannica across a number of articles.13 The question they were seeking to answer was this: if anyone can edit articles in Wikipedia, how do users know whether Wikipedia is as accurate as established sources such as the Encyclopaedia Britannica?

To answer this question, Nature selected entries from both encyclopedias on a broad range of scientific disciplines. They had relevant experts review the articles without being told which article came from which encyclopedia. The results of the reviews were surprising.

Across the 42 entries tested, the difference in accuracy between the two encyclopedias was not as significant as might have been expected. The average science entry in Wikipedia contained about four inaccuracies, while Britannica’s contained about three. Among all the articles reviewed, only eight serious errors were found: four in each encyclopedia.14

When the results of this peer-reviewed comparison were published, some immediately pointed out that the study confirmed what they had suspected all along: an encyclopedia written by volunteers is less trustworthy than one written by experts. More discerning readers, however, made two very significant observations. The first is this: the Encyclopædia Britannica had errors! Many people had been led to believe that it was unassailable “truth” on all topics it addressed. The assumption is often that whatever is written in it is true because it was written by experts. But the evidence suggests that merely being written by experts does not make it free of errors.

Given the evidence that Britannica is not without errors in the 42 articles reviewed, some questions arise: What other errors exist in Britannica that we do not yet know about? What process does Britannica have in place for reviewing the remaining articles in its encyclopedia and providing timely corrections to the errors encountered in them? Will Britannica provide a list of errata for the errors they do find, so that readers can know what the errors were?

It must be noted that Britannica, Inc. has never claimed their encyclopedia is error-free. But this points out a disturbing trend: it is very easy to begin implicitly assuming that if something comes from “the experts” then it is free of error (i.e. “truth”) and need not be further researched. Studies like the one in Nature show that this is an unwise approach to “truth”, because even the experts can be wrong. Sometimes an error is an honest mistake, without bias or ulterior motive. But if error or bias were introduced into content created by “the experts” (which is usually accepted as fully reliable simply because it came from them), it would be much harder to correct. The centuries-long spiritual darkness of the Middle Ages bears witness to this.

A second observation about the results of the comparison between Britannica and Wikipedia is equally significant. Both encyclopedias contained errors in the articles reviewed. But only one of the encyclopedias was able to correct these errors within days of their discovery: Wikipedia.

Wikipedia is built on an “abundance” model: creating and editing content is easy to do and takes minimal time. The end goal of Wikipedia is a web page, which can be published and corrected easily, as needed. Wikipedia’s rapid editing framework (wiki technology) made it possible for volunteer contributors to quickly update the reviewed articles with accurate information from Nature’s study.

Britannica, however, is built on a “scarcity” model. It has a much more involved editing and review process, resulting in much slower error correction. The model on which Britannica is built has a centuries-old goal of producing a printed book, although it is also available online (for a fee). The articles in the online version of Britannica were corrected as a result of the peer review, but there are still some pressing questions: How long will it take for the known errors in these articles to be corrected in the printed volumes of the Encyclopædia Britannica? What do the people who purchased those volumes, believing they were written by experts and so contained only “truth”, do now? Do they need to repurchase the encyclopedia? Do they get their money back?

Contributions, not Contributors

The goal here is not to glorify openly collaborative projects like Wikipedia or denigrate traditional projects like Britannica. Nor is it to suggest that the contributions of “experts” are no longer needed now that the masses can collaborate together—far from it. The point is that open collaboration, as exemplified by Wikipedia, Linux, and other “open” projects like them, levels the playing field by enabling a contribution to a project to rise or fall on the basis of its own merit, rather than on the credentials of the contributor.

In openly collaborative projects, the hierarchy of authority is not determined by the credentials of the participants. Rather, such projects are built on a “meritocratic hierarchy” where what matters is not the degrees a contributor possesses, or the title they hold, but the work they do. Critics of open collaboration often fail to understand that although this change in structure is significant, it is not a shocking slide into “radical egalitarianism.” It is merely living out the Biblical principle of “by their fruits you will know them” (Matthew 7:15-16).

In an openly collaborative project, a contributor who consistently creates content of good quality, treats others graciously, and advances the project’s purpose will rise in credibility and authority, regardless of their age, gender, experience, or education. Conversely, someone who does not contribute quality content to the project and is antagonistic toward other contributors will not be trusted or given authority in the community, even if they have vast education and experience in the topic at hand.

This should not be a threatening situation for “the experts” who contribute to openly collaborative projects. The contributions of experts greatly increase the value and quality of a project. But the value and quality increase because experts tend to contribute content of greater value and quality, not because they hold credentials stating they are experts. The shift is subtle, but crucial: the focus is no longer on who created the content (as a proxy for its quality) but on the content that was created. The proof of the content’s quality is in the content itself, rather than in the identity of its creator. If contributors truly are experts, their contributions can stand on their own merit. But if they are merely masquerading as experts, their concern about this new way of working is not without basis. In meritocratic hierarchies, what matters is what you do, not who you think you are.

Shallow Errors

A significant advantage of creating content using a wiki platform is, to borrow a phrase from the open-source software community, that “with many eyes, all errors are shallow.” That is, not only can errors in the content be spotted by anyone, they can also be easily corrected by anyone. The wiki technology itself makes it easier to create good content than to create bad content. Given enough collaborators, a well-managed wiki tends to progress incrementally toward better, more reliable content.15

This aspect of wikis can seem illogical—it is hard to make the theory of it “work.” Because of this, Wikipedia has been dismissed by many as a joke—an absurd project that could only ever result in unreliable content of inferior quality. But many people, once they understand the technology itself and see the result, have changed their minds. Kevin Kelly, former editor of Wired, was one of these skeptics who found that, over time, his view about Wikipedia changed:

Much of what I believed about human nature, and the nature of knowledge, has been upended by the Wikipedia (sic). I knew that the human propensity for mischief among the young and bored—of which there were many online—would make an encyclopedia editable by anyone an impossibility. I also knew that even among the responsible contributors, the temptation to exaggerate and misremember what we think we know was inescapable, adding to the impossibility of a reliable text. I knew from my own 20-year experience online that you could not rely on what you read in a random posting, and believed that an aggregation of random contributions would be a total mess. Even unedited web pages created by experts failed to impress me, so an entire encyclopedia written by unedited amateurs, not to mention ignoramuses, seemed destined to be junk…

How wrong I was. The success of the Wikipedia keeps surpassing my expectations. Despite the flaws of human nature, it keeps getting better. Both the weakness and virtues of individuals are transformed into common wealth, with a minimum of rules and elites. It turns out that with the right tools it is easier to restore damage text (the revert function on Wikipedia) than to create damage text (vandalism) in the first place, and so the good enough article prospers and continues. With the right tools, it turns out the collaborative community can outpace the same number of ambitious individuals competing…

Wikipedia is impossible, but here it is. It is one of those things impossible in theory, but possible in practice. Once you confront the fact that it works, you have to shift your expectation of what else that is impossible in theory might work in practice.16

What happened to Kevin Kelly continues to happen to many people. The “theory” of a wiki is hard to grasp—it has to be seen and experienced in practice before it can be fully understood.

It is not just the theory of a wiki that is difficult to grasp. One of the most frequently encountered misunderstandings about wikis is the assumption that all wikis function in exactly the same way. Sometimes, those who do not understand the technology make blanket statements about how the “wiki model” is deficient as a means to create reliable content of the highest quality. They have concerns about Wikipedia and assume that all wikis look and work like Wikipedia.

The concept of a single wide-open, free-for-all “wiki model” is inaccurate. There are actually many ways to configure wiki software. A wiki can be completely open for anyone to edit anonymously (like Wikipedia) or locked down so tightly that only a limited number of known contributors can edit the content, and then only over highly secure connections and with full names and datestamps logged on each edit. Want an example? Meet the wiki used by a U.S. intelligence agency: Intellipedia.

A Wiki Is Not a Wiki Is Not a Wiki

For decades, the primary goal of one of the top U.S. intelligence agencies was to find answers to relatively static questions about a relatively static enemy. The kinds of questions needing answers had to do with things like the number of missiles the Soviet Union had in Siberia. But the world changed rapidly after the decline of the Soviet Union. The terrorist networks the agency now faced were far more complicated and decentralized, which required a more efficient means of collecting and processing intelligence on an increasing number of topics. This was a task far more complicated than their traditional, hierarchical model could accomplish.

So, in 2006, Intellipedia was launched. It uses the same open-source software used by Wikipedia, enabling the same ease of creating and editing content as the online encyclopedia. But that is where the similarities end. Intellipedia is, not surprisingly, on a highly secure, private network that is not publicly accessible. The only contributors to it are those who have the necessary security clearances, and all contributions are tagged with the name of the contributor. Because of the strengths of wiki technology, however, vast amounts of information have been rapidly assembled and collectively organized by the members of the agency. Within just a few years of launching the wiki, nearly a million articles had been created in Intellipedia.

This brief comparison of Wikipedia and Intellipedia suggests that the notion of a uniform “wiki model” where all wikis are alike is deficient. The same software can power a wide-open, anonymously-editable wiki (like Wikipedia) or it can power a highly secure, restricted-access wiki where all users are known and all edits are tagged with the author’s name (like Intellipedia). The difference is all in how the wiki is configured.
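
To illustrate how much can hinge on configuration alone, here is a hypothetical settings object, sketched in Python; the names are invented for illustration and are not the actual configuration interface of the software behind Wikipedia and Intellipedia. The engine code is identical in both deployments; only the values differ.

```python
from dataclasses import dataclass

@dataclass
class WikiConfig:
    allow_anonymous_edits: bool   # may unauthenticated users edit?
    require_tls: bool             # encrypt every connection?
    sign_every_edit: bool         # stamp each revision with the author's name?
    editors: set[str] | None      # None = anyone; otherwise a known whitelist

# A Wikipedia-style deployment: wide open, anonymous edits allowed.
public_encyclopedia = WikiConfig(
    allow_anonymous_edits=True,
    require_tls=False,
    sign_every_edit=False,
    editors=None,
)

# An Intellipedia-style deployment: the same software, locked down.
classified_wiki = WikiConfig(
    allow_anonymous_edits=False,
    require_tls=True,
    sign_every_edit=True,
    editors={"analyst.a", "analyst.b"},  # only cleared, known contributors
)

def may_edit(config: WikiConfig, user: str | None) -> bool:
    """A single gatekeeping rule; its behavior depends entirely on config."""
    if user is None:
        return config.allow_anonymous_edits
    return config.editors is None or user in config.editors

print(may_edit(public_encyclopedia, None))        # True: anyone may edit
print(may_edit(classified_wiki, None))            # False: no anonymous edits
print(may_edit(classified_wiki, "analyst.a"))     # True: cleared contributor
```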

So it follows that a wiki should not be considered an inherently scary thing and content on a wiki should not be assumed to be unreliable, just because it is in a wiki. All a wiki does is make creating content on the web much easier. The reliability of the content and usefulness of the wiki for a given purpose is entirely dependent on the processes implemented by the configuration of the wiki software.

Wiki technology provides distinct advantages for creating reliable content that is easily corrected when errors are discovered. The configuration of wiki software is an important key to ensuring the reliability of the content produced by the contributors. In the next section we will step back from addressing wikis specifically and look at what makes open collaboration work. Or, to put it another way: What makes the crowd wiser than the experts?

The Wisdom of Crowds

The research and development (R&D) departments of many companies are in a difficult position. Year after year, they need to invest more in R&D to develop innovative products, but the profits are not there to support it. Worse, some research problems have their R&D departments completely stumped, and continually throwing money at those problems is not making them go away.

InnoCentive is a “knowledge broker” company that addresses this problem. It connects freelance problem-solvers with the R&D departments of major organizations like NASA, Boeing, DuPont, and Procter & Gamble. The R&D departments post their most challenging research problems on InnoCentive’s website, and anyone who wants to can attempt a solution, with a cash prize awarded for success. Over 250,000 “solvers” from nearly 200 countries are in the InnoCentive network and have collectively solved more than 50 percent of the problems posted on the website, problems that had already bested the brightest minds in the R&D departments that posted them.17 And this is where things get interesting.

The people who make up the InnoCentive network of problem-solvers come from a wide variety of backgrounds and fields of expertise. This diversity is the single greatest factor contributing to the successful solution of problems posted on the website. A study conducted by Karim Lakhani made some interesting discoveries about the successful solutions and the people who submitted them. It found that the odds of a solver’s success increased significantly when the problem was in a field in which the solver had no formal expertise. For instance, successful solvers of problems in chemistry or biology often had a background in physics or electrical engineering. The farther the problem was from their specialized knowledge, the more likely they were to solve it.18

A second finding is equally intriguing: nearly 75 percent of successful solutions were submitted by solvers who already knew the solution to the problem. The solution already existed; it just needed to be connected to the problem. Making that connection simply required broadcasting the problem to a large enough group of people (“crowd-casting”) that the pre-existing solution known somewhere in the crowd could be identified. The key was not in acquiring new knowledge, but in aggregating and using the knowledge already available in the crowd.
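
The arithmetic behind crowd-casting is straightforward. If each solver independently has only a tiny chance p of already knowing the answer, the probability that at least one person in a crowd of n knows it is 1 - (1 - p)^n, which climbs rapidly as n grows. The short sketch below is my own illustration with an invented value of p, not a figure from Lakhani’s study.

```python
# Probability that at least one of n independent solvers already knows
# the answer, if each has a small chance p of knowing it.
def someone_knows(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Suppose only 1 in 100,000 people happens to hold the needed knowledge:
for n in (1, 100, 10_000, 250_000):
    print(f"{n:>7,} solvers -> {someone_knows(1e-5, n):6.1%}")
# Roughly 0.0%, 0.1%, 9.5%, and 91.8% respectively: broadcasting widely
# makes it very likely that the pre-existing solution surfaces.
```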

Diversity

InnoCentive illustrates a crucial aspect of the wisdom of crowds: diversity at the cognitive level is one of the most significant advantages of the crowd. The people who make up the R&D departments of most companies tend to be homogeneous in their training and expertise. A pharmaceutical company tends to have chemists in its R&D department, while an aerospace company tends to have physicists, and a technology company tends to have electrical engineers.

Because the members of each R&D department share a largely identical set of skills and training, they are limited in their ability to “think outside the box.” The individual abilities and training of each member of the group may be extremely high, but what the group lacks is the diversity that would enable it to see solutions to problems outside its area of expertise.

James Surowiecki, in The Wisdom of Crowds, explains it this way:

Diversity helps because it actually adds perspectives that would otherwise be absent and because it takes away, or at least weakens, some of the destructive characteristics of group decision making… Adding in a few people who know less, but have different skills, actually improves the group’s performance.19

When faced with a complex and involved task, the tendency may be to assemble a small team of the brightest experts with the skills and training needed to accomplish it. As heretical as it may sound, the best way to accomplish such tasks is actually to assemble a diverse group of people with varying skills and differing degrees of knowledge, rather than a smaller team with greater expertise but less diversity. Surowiecki explains why:

Groups that are too much alike find it harder to keep learning, because each member is bringing less and less new information to the table. Homogeneous groups are great at doing what they do well, but they become progressively less able to investigate alternatives… Bringing new members into the organization, even if they are less experienced and less capable, actually makes the group smarter simply because what little the new members do know is not redundant with what everyone else knows.20

To better understand why diversity is so crucial in achieving the best solution to a problem, consider the MATLAB programming contest, started in 1999.21 Contestants attempt to solve a classic “traveling salesman problem,” submitting a solution in the form of an algorithm (computer code) that directs the salesman to accomplish the objectives of the problem in the fewest number of steps. The algorithms are graded in real time and the results are posted on the contest website, with the leaders ranked by the efficiency of their algorithms.

But there is a twist in the contest: contestants can steal each other’s code. Not only are leaders ranked on the leader board, but the algorithm they use to solve the problem is available for anyone else to see and reuse, either completely or in part. If a contestant can improve the efficiency of the algorithm, it could vault them into first place, where others can see and improve on their algorithm.

Rather than being threatened by this “plagiarism” of their algorithms, contestants are inspired by the challenge. The ultimate goal is not to win so much as it is to be the one who develops a brilliant tweak to a good algorithm that makes it a great algorithm and impresses the other contestants. There is a good deal of prestige associated with being the one who develops the key algorithm that everyone else copies.

The MATLAB competition illustrates the importance of diversity as one of the key factors that make the crowd “wiser” than the experts. Jeff Howe, in Crowdsourcing, observes:

The best coders have generally all learned the same tricks and shortcuts from years of using the MATLAB computer language. It is the inexperienced coders—the outsiders who have to come up with their own shortcuts—that make possible the giant cognitive leaps that allow the winning solution to improve on the initial solution by so many degrees of magnitude… A diverse group of solvers results in many different approaches to a problem.22

Shared Information

The MATLAB competition illustrates another important aspect of enabling the crowd to collectively create the best solution to a problem: being able to reuse the content created by others in the crowd. In the contest, the usual rules against copying are thrown out and anyone can reuse anything without legal implications. This results in a tremendous increase in the speed of the problem-solving process and a dramatic improvement in the quality of the resulting solution. Howe explains:

The extraordinary aspect of MATLAB isn’t the fervor it inspires, but the fact that the ten-day hurly-burly—in which all intellectual property is thrown into the public square to be used and reused at will—turns out to be an insanely efficient method of problem solving… On average… the best algorithm at the end of the contest period exceeds the best algorithm from day one by a magnitude of one thousand.23
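
A toy simulation can suggest why this open reuse is so efficient. The sketch below is my own construction, not the contest’s actual mechanics or scoring: a “crowd” of solvers repeatedly makes small random tweaks to candidate solutions; in the shared variant each solver builds on the crowd’s current best entry, as the contest permits, while in the isolated variant each solver may only refine their own work.

```python
import random

HIDDEN = 200  # a hidden 200-bit "ideal solution"; score = matching bits

def run(shared: bool, n_solvers: int = 25, rounds: int = 100) -> int:
    rng = random.Random(0)  # same seed, so both variants face the same task
    target = [rng.randint(0, 1) for _ in range(HIDDEN)]
    score = lambda bits: sum(b == t for b, t in zip(bits, target))
    pop = [[rng.randint(0, 1) for _ in range(HIDDEN)]
           for _ in range(n_solvers)]
    for _ in range(rounds):
        best = max(pop, key=score)
        for i in range(n_solvers):
            base = best if shared else pop[i]  # "steal" the leader's code, or not
            cand = base.copy()
            cand[rng.randrange(HIDDEN)] ^= 1   # one small tweak
            if score(cand) > score(pop[i]):    # keep it only if it helps
                pop[i] = cand
    return score(max(pop, key=score))

print("building on the leader:", run(shared=True), "/", HIDDEN)
print("working in isolation  :", run(shared=False), "/", HIDDEN)
```

On this toy problem, the sharing variant consistently finishes far closer to the ideal solution, because every solver starts each round from the best work of the whole crowd instead of only their own, echoing Howe’s observation about the contest.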

This idea of specifically allowing anyone to reuse the work done by others is one of the most crucial aspects of openly collaborative projects. It is a common factor in every successful open project. In Linux, the source code of the operating system is legally reusable. In Wikipedia, all the content is released under an open license that enables anyone else to use and reuse it. In the MATLAB contest, anyone can see and improve on the algorithms used by the leaders.

In openly collaborative projects, the importance of being able to reuse and build on the content created by others cannot be overstated. Without this freedom, open collaboration cannot happen. When a diverse crowd of people works together toward a common goal and is able to build on and reuse the work done by others, it is capable of accomplishing incredible feats that would otherwise never have been possible.

When the Global Church Collaborates Together

God is raising up His Church from thousands of people groups all over the planet. In people groups that were completely unreached with the Gospel as recently as a month ago, there are now believers and young churches. These believers differ from each other in many ways: they live in different parts of the world, speak different languages, and come from different cultures. But they are alike in their fervent desire to grow spiritually. They are highly motivated, and many are starting to translate discipleship resources into their own languages.

It used to be that translation of a discipleship resource could only be done by a small team of experts. They would work together to create a translated draft of the content (like a passage of Scripture), then present it to a subsection of the community for review. This approach to translation was constrained by the fact that the technology to collaborate openly on a large scale had not yet been invented. The traditional translation process is firmly grounded in the “paper” era, with all its requisite challenges and limitations. The only way for people to collaborate in a traditional translation process was to be in the same physical vicinity, which necessarily limited how many people could work together on a project.

In the “bits” era of the 21st century, large numbers of self-selecting people can work together at any point in a translation project, using computer technology. All the strengths of the openly collaborative model can be employed in the creation and translation of discipleship resources in any language. We are in the earliest stages of what may prove to be one of the most pivotal eras in the spiritual growth of the global church.

Open collaboration is a model that is able to “go the distance” and produce translated discipleship resources in every language of the world. Open collaboration is the future of the global church. Pioneering mission organizations are already developing and testing software platforms that enable the global church to work together to translate discipleship resources into their own languages.

One of these organizations is The Seed Company. In 2011, they published an article introducing the Ganbi translation project in South Asia. This Bible translation project uses custom web software to enable anyone who speaks the Ganbi language to join in the process of drafting and checking the translation. The results, according to Gilles Gravelle of The Seed Company, were astounding:

Within months, over 3,000 people participated via their own custom-designed Web site where the translation work resides. About 78 people were confirmed by the community as quality drafters. Over 100,000 votes were cast, answering essentially the same questions: Is the translation clear? Does it accurately convey the meaning of the original texts? And does it sound natural?

All segments of the community participated. Significantly, women and youth were able to participate, adding their perspectives which are typically missing because of cultural constraints. Non-literate people were able to participate because the people chose to work in groups. People from seven regions, across denominational boundaries, worked together with surprising unity and harmony. And most importantly of all, they view the translation work as their own from the very start, and it is already making an impact in their community in ways we could not have guessed.24

Given the deep spiritual motivation experienced by Ganbi believers, results like these are not surprising. And the Ganbi are not alone. Believers in thousands of other people groups feel the same urgent need for discipleship resources in their own languages, and they are ready to work together to help make it happen.

The Future is Bright! (Or Is It?)

Think of the vast numbers of discipleship resources that could be translated for effective ministry in every language of the world! The Word of God, leadership training materials, Bible study guides, commentaries, children’s ministry resources, evangelistic materials—the list is massive. Think of the hundreds of millions of believers, in people groups all over the planet. Many of these brothers and sisters in Christ, who are desperate to grow in spiritual maturity, are ready to start today to equip themselves with discipleship resources in their own languages.

The technology to openly collaborate in the translation of these resources is spreading all over the world, even to the most remote villages of the least-developed countries. As the global church openly collaborates, from experts in Biblical languages to speakers of minority languages, with each participant using their God-given gifts and abilities, we could see an incredible surge forward in equipping believers in even the smallest languages with what they need to grow spiritually. The rapid development of adequate discipleship resources for the spiritual growth of every believer in each of the nearly 7,000 languages of the world is possible.

But there is a problem, and it is a significant one.

If we consider discipleship resources as a garden, the vast majority of that garden is surrounded by a wall and the gate is locked shut. Copyright law, in this analogy, functions as a padlock that enables rights holders to maintain legal restrictions that effectively lock the global church in thousands of languages out of this “walled garden.”

Copyright law is not inherently a problem, nor should it be abolished or declared immoral. Copyright serves a good purpose and its use is both legal (government-sanctioned) and ethical (Biblically-sanctioned). The reality, however, is that the laws governing copyright were not designed to facilitate the openly collaborative translation of a large corpus of discipleship resources into every language of the world.

Many discipleship resources exist in some major languages of the world. There are, however, thousands of languages into which discipleship resources have not been translated, and the speakers of those languages do not have the legal freedom to translate, adapt, build on, redistribute, and use existing discipleship resources for themselves. This is the problem: this lack of legal freedom perpetuates the spiritual famine of the global church. Copyright law is merely the framework that makes it possible to restrict the global church in this way. Before proposing an alternative, we will seek to better understand what copyright is, why it was invented, and how it works.

~ ~ ~

Conclusion of Part 2: The global church is already acquiring the tools and developing the capability to translate, adapt, build on, revise, redistribute, and use discipleship resources in their own languages.


1 The concept of Gutenberg economics used here is borrowed from Clay Shirky, Cognitive Surplus (New York, NY: Penguin Press, 2010), 42–45.

2 Chris Anderson, Free: The Future of a Radical Price (Hyperion, 2009). This table, while generally helpful, can be easily misconstrued. For example, the management style in the abundance model (listed as “Out of control”) could seem alarming to some, but it is not really out of control. Social production (discussed in the next section) is not anarchy. Successful projects that are built on an “abundance” model have management, leadership, organization, and control. But they are quite different from their counterparts in the “scarcity” world. For those who only understand the “scarcity” model, projects that are built on an “abundance” model often look as though they are out of control.

3 The analogy of roads, cars, and picnics is borrowed from Clay Shirky in Cognitive Surplus.

4 As we will see, the same pattern holds true for “free of charge” resources that are restricted by licenses in order to preserve revenue from donations.

5 Social production is also called “commons-based peer production” or just “peer production”.

6 Yochai Benkler, “Yochai Benkler on the new open-source economics” (Oxford, England, July 2005), http://www.ted.com/talks/yochai_benkler_on_the_new_open_source_economics.html

7 Technophiles will rightly point out that Linux is actually only the kernel of an operating system and is incomplete without other utilities and software programs that run on it. In the interest of simplicity, we will not attempt to nuance the definition but refer to it as the Linux operating system.

8 The “source code” of a computer program or operating system is the set of instructions that tells the computer what to do. This code is written in plain text files and then compiled into a “binary” (0s and 1s) computer file that is used directly by the computer. It may be helpful to think of the source code being to the computer what a recipe is to a finished cake. If you know what the recipe is, you can tweak and improve it (“a little less salt, a little more vanilla”). But if all you have is the finished cake, you either like it or you don’t—you do not know how it was put together. In the same way, if computer programmers have access to the source code of a program, it is possible for them to make improvements to the software. But if all they have is the finished program, they either use it or don’t use it—improving it is not an option (apart from reverse-engineering the software, which we will not address here).

9 “TOP500 - Statistics,” November 2011, http://i.top500.org/stats

10 Jonathan Corbet, Greg Kroah-Hartman, and Amanda McPherson, “Linux Kernel Development: How Fast it is Going, Who is Doing It, What They are Doing, and Who is Sponsoring It” (The Linux Foundation, November 2010), https://www.linuxfoundation.org/sites/main/files/lf_linux_kernel_development_2010.pdf

11 Marshall Poe, “The Hive,” The Atlantic (September 2006), http://www.theatlantic.com/magazine/archive/2006/09/the-hive/5118/

12 Wikipedia Contributors, “Wikipedia:Size in volumes” (Wikimedia Foundation, Inc., June 2012), http://en.wikipedia.org/w/index.php?title=Wikipedia:Size_in_volumes&oldid=462112380

13 Jim Giles, “Internet Encyclopaedias Go Head to Head,” Nature 438 (December 2005): 900–901, http://www.nature.com/nature/journal/v438/n7070/full/438900a.html

14 A few months after the comparison was published in Nature, Britannica published a blistering criticism of it and suggested Nature should retract it. They said it was “so poorly carried out and its findings so error-laden that it was completely without merit” (“Fatally Flawed: Refuting the recent study on encyclopedic accuracy by the journal Nature” (Encyclopædia Britannica, Inc., March 2006), http://corporate.britannica.com/britannica_nature_response.pdf, 14). They published their rebuttal in order to set the record straight, and “to reassure Britannica’s readers about the quality of our content.”

At this point in history, Britannica’s business model was being decimated. In the “paper” era when printed encyclopedias were the only option, Britannica enjoyed significant profit margins and a healthy business model. With the rise of significantly less expensive digital encyclopedias (like Microsoft’s Encarta) and free online encyclopedias (like Wikipedia), Britannica was experiencing significant economic turmoil at the time the article was published. This does not mean their criticism of the comparison is without merit. But it does mean that Britannica, Inc. was not immune to strong financial motivations for attempting to refute the article.

Nature responded to their criticism (“Nature’s Responses to Encyclopaedia Britannica,” March 2006, http://www.nature.com/nature/britannica/index.html), asserting that their process for reviewing the encyclopedias was open, honest, and unbiased, and that they did not intend to retract their article. They pointed out that some of the allegations made by Britannica were unfounded, and that others applied equally to Wikipedia as well as Britannica. They also noted that Britannica took issue with less than half the points that were raised by reviewers of the articles.

15 I use the term “well-managed” to refer to a wiki that is sufficiently open and permissive to provide its contributors the freedom to join the project easily, contribute directly to the content, and correct errors that arise in the content. The tendency can be to stifle the inherent advantages of the openly collaborative model by creating too many obstacles in the configuration of the software. As we will see, the definition of a “well-managed wiki” is entirely dependent on the wiki’s purpose and its pool of contributors.

16 Kevin Kelly, “The World Question Center 2008,” 2008, http://www.edge.org/q2008/q08_6.html#kelly

17 “Facts & Stats,” n.d., https://www.innocentive.com/about-innocentive/facts-stats

18 Karim R. Lakhani et al., “The Value of Openness in Scientific Problem Solving,” 2007.

19 James Surowiecki, The Wisdom of Crowds (New York: Anchor Books, 2005), 29-30.

20 Ibid., 31.

21 MATLAB is a programming language used by mathematicians and engineers to solve massively complex problems. Both the MATLAB programming language and the MATLAB programming contest were created by a company called MathWorks.

22 Jeff Howe, Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business (Crown Business, 2008), 145.

23 Ibid., 137-138.

24 Gilles Gravelle, “What Happens When A Crowd Translates the Bible?,” December 2011, http://blog.theseedcompany.org/bible-translation-2/what-happens-when-a-crowd-translates-the-bible/