Enough background; let's attack the thought experiment.
Synthetic Scores

One of the requirements in the article is a synthetic score that can be used to rank teams. In the sales organization they were able to use metrics on calls, conversions, etc... Everyone was effectively doing close to the same thing, and so everyone's efforts were both measurable and comparable.
Using online games as an example of similar synthetic scores, a guild (team) might have a level, some valuation of its assets, renown for doing things, etc... Generally, higher values mean a guild is more successful, so you can use them as metrics to compare and pick the "top teams". These synthetic scores can create negative behaviors as well. For instance, there may be game-provided ways to increase renown or reputation that are more efficient (and thus more lucrative to the players/guilds) but less valuable to a yet larger roll-up (guilds are likely in some faction or alliance against some other similar grouping of guilds). Guarding a castle on the frontier for little renown while other guilds are away doing instances instead of helping can be quite an interesting and frustrating night on the server ;-)
So let's apply the synthetic scores to the proposed areas:
Coding - Software engineering more generally has a ton of metrics we can track, but NOBODY agrees on what a developer should be measured on. Positive measures, like bug fixes, lines of code, or defects found in code review, can be hard to agree on: more code is not necessarily better code, and not all bug fixes are the same difficulty. Negative measures, like defects introduced per k-loc or over/under on costed work items, can be even worse, leading to "overly safe costing" or lobbying for less complicated tasks.
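As a minimal sketch of what such a synthetic coding score might look like, here is a weighted combination of positive and negative measures. The metric names and weights are my own invented examples, not anything from the article - which is exactly the problem: every team would argue for different ones.

```python
# Hypothetical synthetic score for a team: weighted positive metrics minus
# weighted negative metrics. Metric names and weights are invented examples.

POSITIVE_WEIGHTS = {"bug_fixes": 3.0, "review_defects_found": 2.0}
NEGATIVE_WEIGHTS = {"defects_per_kloc": 5.0, "estimate_overrun_days": 1.0}

def synthetic_score(metrics: dict) -> float:
    """Collapse raw metrics into a single comparable number."""
    score = sum(w * metrics.get(name, 0) for name, w in POSITIVE_WEIGHTS.items())
    score -= sum(w * metrics.get(name, 0) for name, w in NEGATIVE_WEIGHTS.items())
    return score

team_a = {"bug_fixes": 10, "review_defects_found": 4, "defects_per_kloc": 1.5}
team_b = {"bug_fixes": 6, "review_defects_found": 9, "estimate_overrun_days": 2}
print(synthetic_score(team_a))  # 30 + 8 - 7.5 = 30.5
print(synthetic_score(team_b))  # 18 + 18 - 2 = 34.0
```

Note how sensitive the ranking is to the weights: nudge the bug-fix weight up and team A wins instead, which is why lack of consensus on the weights sinks the whole scheme.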
Recruiting - This has strong synthetic scoring potential. A hire is a hire, and while there will be some argument about the quality of a hire, that will generally be vetted during the hiring process. There is a built-in validation phase (by the team accepting the incoming hire) that can be used to stabilize the synthetic score. In this case the simplest score is number of hires. The team spin is that acquiring a hire has many phases, which enables multiple team members to each contribute at their best, whether that is technical screening, talent spotting, selling the candidate on the package, etc...

Building the synthetic score will be more effective in environments where all "players" can agree on what a correct measurement for the area is. Using a synthetic score is likely to fail when the metrics can be gamed too easily, there is no validation-of-quality function, or there is no general consensus that the metric being measured is even valuable.
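The recruiting score with its built-in validation phase can be sketched in a few lines (field names here are assumptions for illustration): only hires that the receiving team signs off on count toward a recruiting team's total, which is what stabilizes the score against gaming.

```python
# Hypothetical recruiting score: count only hires the receiving team has
# validated, so the built-in quality check stabilizes the synthetic score.

hires = [
    {"recruiter_team": "A", "validated_by_receiving_team": True},
    {"recruiter_team": "A", "validated_by_receiving_team": False},
    {"recruiter_team": "B", "validated_by_receiving_team": True},
    {"recruiter_team": "B", "validated_by_receiving_team": True},
]

def hire_scores(hires):
    scores = {}
    for h in hires:
        if h["validated_by_receiving_team"]:
            scores[h["recruiter_team"]] = scores.get(h["recruiter_team"], 0) + 1
    return scores

print(hire_scores(hires))  # {'A': 1, 'B': 2}
```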
Seasons

In the article there is a period of measurement that helps build on this idea of using fantasy teams. In sales there are well-organized sales seasons. They might be built around the introduction of a new product, a flash sale on a discontinued product to reduce inventory, or even a universally observed time period such as a holiday. Not to be overlooked, this is a critical aspect of gamification at scale, so we have to apply it to our proposed areas.
Nearly every major competitive eSport has seasons (I'd say all, but some appear organized more around specific yearly tournaments than a true season). LoL, or League of Legends, is a poster child for showing how seasons work. First, they establish a fixed period of engagement to which a player has to commit. Not lasting through the entire period and quitting early results in degraded rankings. Likewise, staying on top of the leaderboards, or getting to the top, is not something one can do simply by popping in at the end; you'll simply lose to the better, more experienced players who trained throughout the entire period. Second, they organize seasons into general performance play that ends in a final record. Once the general season is over, you play in brackets, and the emphasis shifts from consistent play game to game, reducing injuries (burnout), etc... into a more highly skilled, risky, burst style of play for the end. This latter part gives everyone who makes the finals a chance to win regardless of their past play and entices them to work harder at the end. For LoL, this obviously means higher quality games, more viewers, more excitement and more money. To the tune of billions of dollars. For the sales organization this means a really good month ;-)
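The regular-season-then-brackets shape can be sketched with toy data (team names and point totals are invented): season points determine seeding, then a small single-elimination bracket where past record no longer matters and any finalist can still win.

```python
import random

# Toy regular season: accumulated points decide playoff seeding.
season_points = {"Wolves": 42, "Drakes": 55, "Owls": 48, "Crabs": 31}

# Seed top teams into a single-elimination bracket: 1 vs 4, 2 vs 3.
seeds = sorted(season_points, key=season_points.get, reverse=True)
bracket = [(seeds[0], seeds[3]), (seeds[1], seeds[2])]

def play(match, rng):
    """Burst-style playoff game: seeding no longer matters, anyone can win."""
    return match[rng.randrange(2)]

rng = random.Random(7)
finalists = [play(m, rng) for m in bracket]
champion = play(tuple(finalists), rng)
print("Bracket:", bracket)
print("Champion:", champion)
```

The coin-flip game is deliberately crude; the point is the structure: consistent accumulation decides *who gets in*, and a short high-variance phase decides *who wins*.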
Now let's apply seasons to our proposed areas:
Coding - Software does have seasons. In fact, you could argue that Agile, Extreme Programming, and the other planning and team execution processes that stepped in to replace the older waterfall models were a form of gamification of the software process. They even shortened the seasons to reasonable periods of time that a software engineer might want to dedicate their full attention to. We can also break down our work into coding, testing, bug fixing and planning periods, which together could represent different quality metrics within a given season. What we lack is the final brackets, which means we can't get the added benefit and burst from the playoffs and grand master finals. At least I've never seen a manager willing to play their teams against one another in the end game of a software product. If you have, feel free to let me know how it went!
Recruiting - Hiring certainly has seasons. While you can get some free agents mid-season as people change companies, most hires come from college, and there are fixed periods for college recruiting and internships - at least in fields whose primary injection of talent is from university. You can even arrange your seasons to maximize for playoffs and finals. 2 out of 2 for recruiting; they should probably start building fantasy teams immediately ;-)
Data Analysis - I didn't talk about data analysis in synthetic scores because it seemed obvious: pretty much everything in data analysis is scorable. It is also easy to organize seasons, because you can manage when and how datasets are provided to the teams, how long they have to provide insights, etc... I would say that of the proposed areas, data analysis is already a game waiting to happen. Note: I'm kind of cheating here; if you frequent PAXDev or GDC or some other game development conference, you'll find several enterprising game designers talking about data analysis insights. Coincidence? I think not!

Seasons are going to be the easiest bit of organization you can do, since they fall into your standard scheduling practices. It might require some clever insight to create playoff and grand master finals experiences (yeah, I'm going to keep calling it grand master finals, because this is a gamification article and the winner MUST get a super cool name).
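To make "everything in data analysis is scorable" concrete, here is a hedged sketch of a season round in the style of a prediction contest - all team names and data are invented. Each team submits predictions against a shared ground truth; accuracy is the synthetic score, and the sorted results are the standings.

```python
# Toy data-analysis season round: teams submit predictions on a shared
# dataset; accuracy is the synthetic score. Names and data are invented.

ground_truth = [1, 0, 1, 1, 0, 1]

submissions = {
    "TeamInsight": [1, 0, 1, 0, 0, 1],
    "DataDrakes":  [1, 1, 1, 1, 0, 1],
    "NullResult":  [0, 0, 0, 0, 0, 0],
}

def accuracy(pred, truth):
    """Fraction of predictions matching ground truth."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

standings = sorted(
    ((accuracy(p, ground_truth), team) for team, p in submissions.items()),
    reverse=True,
)
for score, team in standings:
    print(f"{team}: {score:.2f}")
```

Because the organizer controls the dataset release and the scoring function, both the season schedule and the validation-of-quality function come essentially for free - which is why data analysis is the easiest of the three areas to gamify.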
Teamification

So the final bit of this story is really whether or not you can structure your event to promote this teamification concept. Per the article, teamification is supposed to create an environment in which the transparency of revealing everyone's performance numbers doesn't negatively impact the low-performing team members. By rolling up to the team level, there is a friendly and social opportunity for those within the team to help the lower-performing individuals.
In practice, you will have to do quite a bit of work to ensure this is true. In most cases the statistics which play into the scores are easily available to everyone in the organization. In software we have the ultimate transparency. I know how often you check in, how many bugs you fix, how many k-locs you add/delete. It is all stored in the source history and there isn't much you can do to hide it.
In a sales organization, you can potentially hide some of this information from everyone except for the managers and direct team members. Similarly for recruiting, the hiring records can also be kept within teams until the end.
One thing I like about the promise of teamification is that we form our world views based on our interactions within our team. We very rarely come up with new principles and practices in a vacuum; it takes many people trying them out to validate them, and often it takes quite a bit of support. For example, in coding, test-driven development is a very strong practice. But if you are not good at testing, and you don't have anyone near you who is, you aren't going to be very efficient at TDD. In fact, you might shy away from writing tests. You might even be the top coder for a bit. And then you are going to be at the top of the bugs-introduced list and the owner of the buggiest features. The places where we need teamification can often be the places where it is hardest to introduce the concept.