by Evgeny Glariantov, founder of FREED
In 2010, Time magazine listed the sharing economy among the 10 ideas that would change the world. Eight years later, it is clear that the influential publication's prediction has, to some extent, come true: it is hard to imagine modern life in developed or developing countries without collaborative consumption. Renting short-term lodgings while travelling, ride-hailing services with drivers using their own cars, coworking spaces — all these services have transformed our everyday lives and habits. And Airbnb, Uber and WeWork, respectively, have firmly established themselves among the top ten ‘unicorns’ — companies valued at more than $1 billion.
Renting apartments, offices and cars is a relatively easy and straightforward business, but setting up a computing project that temporarily harnesses the resources of many PCs is not so simple. Unless you have access to a high-end supercomputer with the capacity of 10,000 regular PCs, any substantial computing task will require more than one computer. Regular computers must be united into a sophisticated computing network; big tasks must be broken into smaller ones; and all possible contingencies — a slow internet connection, a sudden computer shutdown — must be taken into account. Inevitably, some subtasks never reach the processing centre on time and have to be duplicated and rechecked. It is a complex IT project in and of itself. This method of solving time-consuming computing tasks on multiple computers, usually united into a parallel computing system, is commonly referred to as grid computing or distributed computing.
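The two core ideas — splitting a job into independent chunks and re-dispatching chunks whose results never arrive — can be sketched in a few lines of Python. This is only a toy coordinator: the chunk size, the thread pool, and the simulated 20% "dropout" standing in for a flaky volunteer machine are all illustrative assumptions, not how any real grid framework works internally.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def worker(chunk):
    """Sum one chunk of numbers; occasionally 'lose' the result,
    the way an unreliable volunteer machine might go offline."""
    if random.random() < 0.2:          # simulated failure/timeout
        return None
    return sum(chunk)

def grid_sum(numbers, chunk_size=1000):
    """Split the job into chunks, farm them out, and keep
    re-dispatching any chunk whose result never came back."""
    chunks = [numbers[i:i + chunk_size]
              for i in range(0, len(numbers), chunk_size)]
    total = 0
    with ThreadPoolExecutor(max_workers=8) as pool:
        pending = chunks
        while pending:                 # retry until every chunk reports in
            results = list(pool.map(worker, pending))
            total += sum(r for r in results if r is not None)
            pending = [c for c, r in zip(pending, results) if r is None]
    return total

print(grid_sum(list(range(10_000))))   # same answer as sum(range(10_000))
```

Despite the random failures, the retry loop guarantees the final answer matches a single-machine computation — which is exactly the duplication-and-recheck overhead described above.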
And even these difficulties notwithstanding, it is far from obvious how to pay the people whose computing resources we rent. The payment per participant would be really low: if, for instance, we divide the monthly cost of renting the CompecTA supercomputer by 10,000, we get no more than 6 cents. That is why many distributed computing projects rely on volunteers who are willing to work for free in order to contribute to a worthwhile cause and to be part of a prestigious project.
These so-called volunteer computing projects have traditionally dealt with scientific research. The first project of this kind was GIMPS, a large-scale program launched in 1996 to search for Mersenne primes — prime numbers of the form 2^p − 1, where p is itself a prime. The majority of the largest known prime numbers are Mersenne primes. As of December 2017, the GIMPS project had reached an aggregate computing power of approximately 320 TeraFLOPS (320 trillion floating-point operations per second). Couldn’t the same thing be done on a single high-performance computer? For comparison, the most powerful iMac Pro, released in 2017, delivers only about 22 TFLOPS.
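What makes Mersenne numbers so attractive to hunt is the Lucas-Lehmer test, a simple deterministic primality test that works only for numbers of this special form — it is the test at the heart of GIMPS, though real GIMPS clients run it with heavily optimized FFT-based arithmetic rather than the naive Python below:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for odd prime p, 2**p - 1 is prime
    iff s == 0 after p-2 steps of s -> s*s - 2 (mod 2**p - 1),
    starting from s = 4."""
    if p == 2:
        return True            # 2**2 - 1 = 3 is prime
    m = (1 << p) - 1           # the Mersenne number 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in [2, 3, 5, 7, 11, 13, 17, 19] if lucas_lehmer(p)])
# -> [2, 3, 5, 7, 13, 17, 19]
```

Note that p = 11 drops out: 2^11 − 1 = 2047 = 23 × 89, so a prime exponent is necessary but not sufficient — which is why the search needs so much raw computing power.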
It’s worth noting that large prime numbers are used in cryptography, in particular in building pseudorandom number generators. That means the results of the project can find both peaceful and military applications.
Another important project, launched in 1999 at UC Berkeley, has arguably become the most famous example of the volunteer computing model — the SETI@home program (the “@home” suffix in a project’s name means it runs exclusively on home computers). It analyzes radio signals from outer space in search of extraterrestrial civilizations. The aggregate computing power of the project is more than 960 TFLOPS.
Searching for extraterrestrial intelligence.
In 2005, SETI@home changed its software: instead of the old SETI@home Classic they started using the Berkeley Open Infrastructure for Network Computing, or BOINC for short — a software framework that enables volunteers to participate in projects other than SETI@home.
Here’s the list of the most notable projects powered by BOINC. ClimatePrediction.net, launched in 2003 for computer modelling of climate change, had by the end of 2017 reached a performance of 75 TFLOPS thanks to its 180,000 participants from all over the world. The LHC@home project started its work in 2004 in order to process data received from the Large Hadron Collider and to run simulations that help improve the accelerator. Rosetta@home is a project for protein structure prediction and protein design that was launched in 2005; by 2016 its computing power was estimated at 210 TFLOPS. The Einstein@home project, created in 2005, searches for pulsars by analyzing radio signals and gravitational-wave data; it has discovered more than 50 radio pulsars and about 20 previously unknown gamma-ray pulsars. Thanks to its volunteers, Einstein@home runs at an average computing speed of more than 3 PetaFLOPS.
In 2007, the BOINC framework became the home of World Community Grid, a distributed computing project created by IBM. It was started in 2004 and has since supported many research programs devoted to studying diseases and searching for cures (malaria, Ebola, tuberculosis, Zika, cancer, muscular dystrophy, AIDS), as well as to humanitarian problems such as access to clean water and clean energy, developing more nutritious rice, and climate modelling in Africa.
In total, BOINC supports about 60 distributed computing projects in astrophysics, medicine, mathematics, cognitive sciences, seismology, etc. However, important volunteer computing projects can be found not only on the BOINC framework.
In 2000, Stanford University launched Folding@home, a program whose aim is to run computer simulations of protein folding and to study diseases caused by protein misfolding: Alzheimer’s disease, Huntington’s disease, cancer, and many others. Folding@home is one of the most advanced and powerful projects of its kind: it harnesses not only PCs and smartphones, as most distributed computing projects do (in 2013, BOINC released an Android-based mobile app), but PlayStation consoles as well (although the collaboration with Sony came to an end in 2012).
Thanks to this, Folding@home became the first distributed computing project ever to cross — within a single decade — the psychological milestones of 1, 2, 3, 4, and 5 PetaFLOPS. As of 2016, its aggregate computing throughput was about 100 PetaFLOPS, making it more powerful than the most powerful supercomputer in the world. Overall, the project has involved almost 2 million people and more than 8.5 million CPUs. Folding@home has managed to attract such a wide audience because it studies a very important problem, it is easy to take part (like other similar projects, it uses the resources of idle devices), and it is gamified: users can unite into teams and “compete” with other groups.
Distributed computing is used not only in science but also for more practical tasks: neural network training, rendering 3D films and visual effects (VFX), cryptography, calculating the aerodynamic parameters of aircraft, etc. In any case, it is extremely easy to make one’s modest contribution to the development of science and technology — all you have to do is make the computing resources of your home PC or any other device available at night, while you are asleep, or during the day, while you are at work.
In commercial projects, participants are compensated — and not just in fiat or cryptocurrencies. As mentioned earlier, the financial remuneration in such projects is as yet quite modest, but video game fans could instead receive compensation in the form of in-game valuables — “gold,” weapons, or other in-game objects — in return for making the computing power of their PCs available in those rare moments when they are not playing.
Evgeny Glariantov, the founder of the blockchain project FREED, is an expert on the monetization of game projects. He has been professionally involved in the online game market for 15 years. In the early 2000s, he launched several games whose combined audience exceeds 30 million active users.