Winter 2019 / Issue 97

Best Business Books 2019: Tech & innovation

Speaking of code.

Safi Bahcall
Loonshots: How to Nurture the Crazy Ideas That Win Wars, Cure Diseases, and Transform Industries (St. Martin’s Press, 2019)

Kartik Hosanagar
A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control (Viking, 2019)

Clive Thompson
Coders: The Making of a New Tribe and the Remaking of the World (Penguin Press, 2019)
*A TOP SHELF PICK


This is an odd moment in the history of technology and innovation. Technology companies have never been more powerful or influential. The five most valuable corporations in the world are all American tech giants, and the products they make and the services they provide continue to colonize an ever-larger chunk of our daily lives. Yet that very power has occasioned a serious anti-tech backlash, driven in part by a sense that these companies have too often exercised their might in a cavalier and careless fashion, and in part by anxieties about how their dominance may be hindering innovation. So it’s only fitting that this year’s best business books on technology and innovation grapple with the fundamental challenges facing the tech world today — how to continue to drive radical innovation, how to manage the rise of ubiquitous machine intelligence, and how to make software that’s socially useful and beneficial as well as lucrative.

Silicon Valley has always prided itself on its capacity for innovation. And yet in recent years there’s been a nagging concern that for all the money being poured into startups and all the money invested by the tech giants themselves, the payoff has been disappointing. As the contrarian venture capitalist Peter Thiel famously griped, “We wanted flying cars, instead we got 140 characters.” Other industries have been wrestling with similar issues. The rate of drug discovery by big pharma, for instance, has slowed while the cost of developing new drugs has skyrocketed; the movie industry is increasingly dominated by big franchise productions.

Safi Bahcall’s Loonshots offers both an implicit explanation for this phenomenon and a recipe for how even big, established companies can nurture the kind of crazy ideas that ultimately turn into world-changing innovations. What Bahcall terms a loonshot is an attempt to take on a problem that is, as Polaroid founder Edwin Land put it, “manifestly important and nearly impossible” to achieve. These are the kinds of problems that we want companies to go after. But they’re also the kinds of problems that are difficult for companies, particularly established ones, to go after, because the risks are high and the payoffs are hard to measure.

How can a company become what Bahcall, the founder of a successful biotech company, calls a “loonshot nursery”? The models he points to were spearheaded by Vannevar Bush, who ran the Office of Scientific Research and Development for the U.S. military during World War II and who later was instrumental in getting the National Science Foundation off the ground, and Theodore Vail, the AT&T president who, more than 100 years ago, built the research organization that became Bell Labs. These leaders gave innovators — Bahcall calls them “artists” — the time and space they needed to develop ideas. They recognized that the task of coming up with new innovations is different from the task of turning innovations into concrete products and services, so they created separate groups for each function.

At the same time, Bush and Vail didn’t denigrate the work of the “soldiers” whose job it was to execute the artists’ ideas. On the contrary, they recognized that any healthy, successful organization needs both functions to thrive, and they worked to ensure what Bahcall calls a dynamic equilibrium between the two groups. Crucially, neither Bush nor Vail saw himself as a creator leading the organization, like Moses, to the promised land. Instead, they acted like gardeners, creating processes that allowed ideas to move from the nursery to the field, and allowed useful feedback to come from the field to the nursery. Because they were not personally invested in any one idea, they lowered the risk that they would bet too heavily on an ultimately doomed concept.

In other words, it isn’t enough to just set up a skunkworks. It’s also necessary to carefully tend to and manage the relationship between the skunkworks and the organization as a whole, lest you end up like the fabled Xerox PARC, churning out great idea after great idea that the company doesn’t do much with.

Bahcall also argues, in a message that seems particularly germane to the tech giants, that as companies grow bigger, incentives change, particularly for those middle managers who play a huge role in allocating time and resources in any organization. Specifically, the rewards for investing time in office politics increase, because it becomes harder for any one project, let alone any one manager, to make an obvious difference to the bottom line.

Finally, companies need to watch out for the “Moses trap,” when early success makes a company’s leader supremely powerful and convinced of his or her own genius. In evaluating loonshots, Bahcall argues, companies need to focus on process rather than outcome. They also must ensure that they have a rigorous system for evaluating ideas and making decisions, one that allows them to be comfortable with failure as long as the process was the correct one. Having a leader who’s seen as a visionary creator makes it harder to do this, because it’s difficult to challenge a Moses. Steve Jobs’s greatest successes at Apple, in fact, came after he failed multiple times, and he became less of a Moses and more of a gardener (even if he was a really tough, obsessive gardener).

Machine intelligence

Coming up with groundbreaking innovations may be enormously challenging. But it has become clear that the challenge doesn’t end there, because companies are increasingly being held responsible for managing the consequences of innovation. And in no field is that more true than in artificial intelligence (AI), or what Wharton School professor Kartik Hosanagar more accurately calls “machine intelligence,” in his extraordinarily lucid A Human’s Guide to Machine Intelligence.

Hosanagar has developed and deployed his own algorithms at a number of companies, including ones that help businesses with advertising and marketing and with testing website design, and has spent many years studying the impact of algorithms on human behavior. Written for laypeople, this book is as much about human behavior and psychology as it is about technology, because it’s human behavior that AI algorithms seek to alter, and human psychology that determines how we respond not only to what algorithms do, but also to the broader concerns they provoke.

Those concerns typically focus either on machines taking all our jobs or, more apocalyptically, on machines becoming self-aware — à la Skynet in the Terminator movies — and then destroying (or trying to destroy) humans. But although Hosanagar touches on these issues, his real focus is on the way algorithms are already having a profound influence on our choices and decisions, remaking us in ways that we oftentimes don’t even notice.

Hosanagar is a firm believer in the long-term benefits of machine learning, which has dramatically improved the diagnosis of disease and the management of money. But he is also keenly aware of the costs and the dangers that may arise as machine learning becomes more ubiquitous. It’s essential, he argues, to pay attention to the negative effects of algorithmic decision making, because if we don’t, they “will become deep-seated and harder to resolve.” And if we don’t engage with how humans respond to algorithms, we risk a backlash against machine learning that could chill innovation in the field.

Algorithms don’t just help us find the products or services we want more quickly. Instead, they “exert a significant influence on precisely what and how much we consume.” One reason for this is that we don’t always know exactly what it is we’re looking for — even if we think we do. Match.com, for instance, asks users to define their ideal dating partners, and its algorithms originally relied heavily on what people said they wanted. Over time, the company migrated its algorithms to rely instead on the profiles people actually visited — in other words, it looked at what customers actually did, rather than what they said.

That shift improved the recommendations that Match provided. But as Hosanagar points out, this isn’t a simple success story. Instead, it’s a story about a company deciding that it understands its customers better than they understand themselves, and that it should give its customers not what they ask for, but what they really want (or, rather, what the company thinks they really want).
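The underlying pattern — favoring revealed behavior over stated preferences — is common across recommender systems, and it is simple enough to sketch in a few lines of Python. The data and scoring below are invented for illustration only; Match.com’s actual algorithms are proprietary and far more sophisticated.

```python
# Invented illustration (not Match.com's actual system): rank candidate
# profiles by the traits of profiles a user actually clicked on, rather
# than by the preferences the user stated when signing up.
from collections import Counter

def revealed_preference_scores(clicked_profiles, candidates):
    """Score each candidate by how often its traits appear among
    the profiles the user actually visited."""
    trait_counts = Counter(
        trait
        for profile in clicked_profiles
        for trait in profile["traits"]
    )
    return {c["id"]: sum(trait_counts[t] for t in c["traits"])
            for c in candidates}

# The user *said* they wanted "outdoorsy" matches, but every profile
# they clicked was tagged "bookish".
clicked = [{"traits": ["bookish", "urban"]}, {"traits": ["bookish"]}]
candidates = [
    {"id": "A", "traits": ["outdoorsy"]},  # fits the stated preference
    {"id": "B", "traits": ["bookish"]},    # fits the revealed behavior
]
print(revealed_preference_scores(clicked, candidates))  # {'A': 0, 'B': 2}
```

Even in this toy version, the candidate who matches what the user does outranks the candidate who matches what the user says, which is exactly the judgment call Hosanagar is flagging.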

That kind of behavioral tinkering is now par for the course in the machine intelligence world. As two well-known, but still resonant, Facebook experiments have shown, simply tweaking users’ news feeds can make them more likely to vote and can have a meaningful impact on their moods. Big social media companies can, then, markedly alter people’s behavior with just a few small alterations of the algorithms that decide what users will see. And as far as we can tell, they can do so without the people whose emotions and actions are being shaped ever noticing.

Such issues are especially resonant today, in part because algorithms themselves are increasingly reliant on machine learning. That is, instead of programmers explicitly issuing a host of strict if-then commands that determine what the algorithm can and cannot do, algorithms are programmed to, in effect, learn from experience (in the form of huge chunks of data) and then teach themselves the best strategies for solving problems. In 2015, for instance, researchers at Mount Sinai Hospital programmed a deep-learning algorithm to study the test reports and doctor diagnoses of 700,000 patients, and then derive its own diagnostic rules. The algorithm eventually became as proficient at diagnosing as an experienced doctor. Even more strikingly, Google’s algorithm AlphaGo taught itself how to play the game Go by learning from a database of 30 million moves made by expert Go players, and then playing millions of games against itself. The algorithm became the best Go player in the world and made moves that struck experienced Go players as completely original.
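The distinction between hand-written rules and learned ones is easier to see in miniature. The sketch below, which assumes scikit-learn and uses invented toy numbers, contrasts a programmer-specified if-then rule with a decision tree that derives its own thresholds from past examples; it illustrates the general approach only, and bears no resemblance in scale to the Mount Sinai or AlphaGo systems.

```python
# Invented toy contrast between explicit rules and learned ones.

# 1) Traditional approach: the programmer states the rule explicitly.
def diagnose_by_rule(temp_c, white_cell_count):
    if temp_c > 38.0 and white_cell_count > 11.0:
        return "flag"
    return "clear"

# 2) Machine learning approach: the model derives its own thresholds
#    from labeled examples instead of being handed them.
from sklearn.tree import DecisionTreeClassifier

X = [[36.5, 7.0], [38.9, 12.5], [37.0, 8.1], [39.4, 13.0]]  # [temp, WBC]
y = ["clear", "flag", "clear", "flag"]                      # past labels

model = DecisionTreeClassifier().fit(X, y)  # the "rule" comes from data
print(diagnose_by_rule(38.6, 12.0))         # flag
print(model.predict([[38.6, 12.0]]))        # likely ['flag'] as well
```

In a four-row toy example the learned rule is easy to inspect; in a deep network trained on hundreds of thousands of records, it is not, which is precisely the opacity problem the AlphaGo story points to.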

What’s interesting about these moves is that there was no real way for AlphaGo’s programmers to explain why the algorithm did what it did. That’s not a big deal when we’re talking about a game. But it might well be a very big deal when it comes to fields where algorithms are increasingly relied upon to make decisions: in financial markets, or medical diagnoses, or decisions about which criminal suspects should get out on bail and which shouldn’t.

Cracking the coders

Hosanagar suggests, as a result, that what we need is an algorithmic bill of rights. The basic idea is that we need some measure of transparency and control, and that those devising algorithms need to acknowledge the way they can create unintended and perverse consequences. But as journalist Clive Thompson shows to great effect in his rigorous and fascinating Coders, the best business book of the year on technology and innovation, the challenge is that the kind of people who write and devise the algorithms that are coming to govern so much of our lives are not, at the moment, necessarily the kind of people who care all that much about their negative effects.

Coders is a book laced with deep affection for the craft of computer coding and for its practitioners: for the clarity and rigor of code, and for the simplicity of a reward system in which a program either works or doesn’t (so different, as Thompson says, from the ambiguity and messiness of writing). Thompson writes as a kind of anthropologist investigating a distinctive and vivid subculture, but he’s an anthropologist who feels a certain kinship with his subjects.

Understanding coders has never been more important. One of the distinctive developments of the past 20 years is that coders are now the people running companies, the people in charge of making really important decisions that shape our politics, our economy, and much of our everyday lives. Those decisions have been enormously lucrative, but they have also bred deep skepticism about the value of the work that coders do. Although there are surely people in Silicon Valley who still see technology as the way to a brighter, freer, more connected future, the double-edged nature of technology, and of the Internet specifically, should be obvious.

The great virtue of Thompson’s book is that it helps us understand, in a deep sense, the world coders inhabit. It’s a world in which efficiency is often seen as a paramount goal. And it’s a world in which the issues that matter most have been practical ones — do these lines of code accomplish the task they’re supposed to? It’s also a relatively homogeneous world: predominantly male, predominantly young, and overwhelmingly white and Asian. And Coders does an excellent job of illuminating how that homogeneity shapes the choices coders make and the innovations they produce.

It isn’t just that so many startup ideas of the past few years seem targeted to solve the problems of young single men who basically want someone to feed them and clean up after them. It’s also, more subtly, that a lot of the coders Thompson writes about seem imbued with a certain naivete about, or perhaps indifference to, social dynamics, and the way that power differentials affect behavior. Convinced that connecting people was, in and of itself, a good thing, companies such as Facebook ignored the dangers of connection: negative feedback loops, abuse and harassment, social contagion, and the potential for exploitation by malicious actors. (Thompson posits that had women or people of color played a bigger role in devising social media protocols, protections against abuse and harassment would have been far more robust from the start.) And, of course, the fact that playing to people’s worst impulses is a reliable traffic driver gave these companies an added incentive to look past the bad behavior they were facilitating.

If the largest tech companies are serious about changing, then, an easy place to start would be to diversify their workforce, both in terms of demographics and in terms of training. Coding, as Thompson describes it, can encourage a certain narrowness of vision, a limited perspective on how the world works and what matters. And what Silicon Valley needs now is a wider range of perspectives that can inform the decisions about what it chooses to build and, just as important, what it chooses not to build. In the absence of real government action, companies need to think much harder about those questions than they have before, because the ubiquity and influence of social media and the rise of machine learning mean that the stakes are incredibly high. These companies have great power. It’s time for them to show great responsibility.
