Monday, February 28, 2011

The programmer’s dilemma: Building a Jeopardy! champion

In 2007, IBM computer scientist David Ferrucci and his team embarked on the challenge of building a computer that could take on—and beat—the two best players of the popular US TV quiz show Jeopardy!, a trivia game in which contestants are given clues in categories ranging from academic subjects to pop culture and must ring in with responses that are in the form of questions. The show, a ratings stalwart, was created in 1964, and its current incarnation has aired for more than 25 years. But this would be the first time the program would pit man against machine.
In some sense, the project was a follow-up to Deep Blue, the IBM computer that defeated chess champion Garry Kasparov in 1997. Although a TV quiz show may seem to lack the gravitas of the classic game of chess, the task was in many ways much harder. It wasn’t just that the computer had to master straightforward language; it also had to master humor, nuance, puns, allusions, and slang—a verbal complexity well beyond the reach of most computer processors. Meeting that challenge was about much more than just a Jeopardy! championship. The work of Ferrucci and his team illuminates both the great potential and the severe limitations of current computer intelligence—as well as the capacities of the human mind. Although the machine they created was ultimately dubbed “Watson” (in honor of IBM’s founder, Thomas J. Watson), to the team that painstakingly constructed it, the game-playing computer was known as Blue J.
The following article is adapted from Final Jeopardy: Man vs. Machine and the Quest to Know Everything (Houghton Mifflin Harcourt, February 2011), by Stephen Baker, an account of Blue J’s creation.

It was possible, Ferrucci thought, that someday a machine would replicate the complexity and nuance of the human mind. In fact, in IBM’s Almaden Research Center, on a hilltop high above Silicon Valley, a scientist named Dharmendra Modha was building a simulated brain equipped with 700 million electronic neurons. Within years, he hoped to map the brain of a cat, and then a monkey, and, eventually, a human. But mapping the human brain, with its 100 billion neurons and trillions or quadrillions of connections among them, was a long-term project. With time, it might result in a bold new architecture for computing, one that could lead to a new level of computer intelligence. Perhaps then, machines would come up with their own ideas, wrestle with concepts, appreciate irony, and think more like humans.
But such machines, if they ever came, would not be ready on Ferrucci’s schedule. As he saw it, his team had to produce a functional Jeopardy!-playing machine in just two years. If Jeopardy!’s executive producer, Harry Friedman, didn’t see a viable machine by 2009, he would never green-light the man–machine match for late 2010 or early 2011. This deadline compelled Ferrucci and his team to build their machine with existing technology—the familiar semiconductors etched in silicon, servers whirring through billions of calculations and following instructions from many software programs that already existed. In its guts, Blue J would not be so different from the battered ThinkPad Ferrucci lugged from one meeting to the next. No, if Blue J was going to compete with the speed and versatility of the human mind, the magic would have to come from its massive scale, inspired design, and carefully tuned algorithms. In other words, if Blue J became a great Jeopardy! player, it would be less a triumph of science than of engineering.
Blue J’s literal-mindedness posed the greatest challenge. Finding suitable data for this gullible machine was only the first job. Once Blue J was equipped with its source material—from James Joyce to the Boing Boing blog—the IBM team would have to teach the machine to make sense of those texts: to place names and facts into context, and to come to grips with how they were related to each other. Hamlet, just to pick one example, was related not only to his mother, Gertrude, but also to Shakespeare, Denmark, Elizabethan literature, a famous soliloquy, and themes ranging from mortality to self-doubt, just for starters. Preparing Blue J to navigate all of these connections for virtually every entity on earth, factual or fictional, would be the machine’s true education. The process would involve creating, testing, and fine-tuning thousands of algorithms. The final challenge would be to prepare the machine to play the game itself. Eventually, Blue J would have to come up with answers it could bet on within three to five seconds. For this, the Jeopardy! team would need to configure the hardware of a champion.
Every computing technology Ferrucci had ever touched had a clueless side to it. The machines he knew could follow orders and carry out surprisingly sophisticated jobs. But they were nowhere close to humans. The same was true of expert systems and neural networks. Smart in one area, clueless elsewhere. Such was the case with the Jeopardy! algorithms that his team was piecing together in IBM’s Hawthorne, New York, labs. These sets of finely honed computer commands each had a specialty, whether it was hunting down synonyms, parsing the syntax of a Jeopardy! clue, or counting the most common words in a document. Outside of these meticulously programmed tasks, though, each was fairly dumb.
So how would Blue J concoct broader intelligence—or at least enough of it to win at Jeopardy!? Ferrucci considered the human brain. “If I ask you what 36 plus 43 is, a part of you goes, ‘Oh, I’ll send that question over to the part of my brain that deals with math,’” he said. “And if I ask you a question about literature, you don’t stay in the math part of your brain. You work on that stuff somewhere else.” Ferrucci didn’t delve into how things work in a real brain; for his purposes, it didn’t matter. He just knew that the brain has different specialties, that people know instinctively how to skip from one to another, and that Blue J would have to do the same thing.
The machine would, however, follow a different model. Unlike a human, Blue J wouldn’t know where to start answering a question. So with its vast computing resources, it would start everywhere. Instead of reading a clue and assigning the sleuthing work to specialist algorithms, Blue J would unleash scores of them on a hunt, and then see which one came up with the best answer. The algorithms inside of Blue J, each following a different set of marching orders, would bring in competing results. This process, a lot less efficient than the human brain, would require an enormous complex of computers. More than 2,000 processors would each handle a different piece of the job. But the team would concern itself later with these electronic issues—Blue J’s body—after they got its thinking straight.
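To make that fan-out concrete, here is a minimal Python sketch of the idea: several “expert” answerers, each a hard-coded stand-in for one of Blue J’s specialist algorithms, are run in parallel on a clue, and their competing candidates are pooled for later scoring. The expert functions, the example clue, and the scores are all invented for illustration; Blue J’s real pipeline ran across thousands of processors.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "expert" answerers. Each takes a clue and returns a list of
# (candidate answer, raw score) pairs; in Blue J these were scores of
# specialised algorithms, here they are hard-coded stubs.
def keyword_search_expert(clue):
    return [("Hamlet", 0.60), ("Ophelia", 0.20)]

def string_match_expert(clue):
    return [("Yorick", 0.15)]

def anagram_expert(clue):
    return [("Lethma", 0.05)]   # most experts return junk for most clues

EXPERTS = [keyword_search_expert, string_match_expert, anagram_expert]

def fan_out(clue):
    """Run every expert on the clue in parallel and pool their candidates."""
    candidates = []
    with ThreadPoolExecutor(max_workers=len(EXPERTS)) as pool:
        for result in pool.map(lambda expert: expert(clue), EXPERTS):
            candidates.extend(result)
    # Later stages re-score and rank the pooled candidates; here we just sort.
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

clue = "His mother was Gertrude; he hated his uncle; he talked to a skull."
print(fan_out(clue))
```

In Blue J, the pooled list then went through the ranking and confidence stages described below.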
To see how these algorithms carried out their hunt, consider one of the thousands of clues the fledgling system grappled with. Under the category Diplomatic Relations, one clue read: “Of the four countries the United States does not have diplomatic relations with, the one that’s farthest north.”
In the first wave of algorithms to handle the clue was a group that specialized in grammar. They diagrammed the sentence, much the way a grade-school teacher would, identifying the nouns, verbs, direct objects, and prepositional phrases. This analysis helped to clear up doubts about specific words. In this clue, “the United States” referred to the country, not the Army, the economy, or the Olympic basketball team. Then the algorithms pieced together interpretations of the clue. Complicated clues, like this one, might lead to different readings—one more complex, the other simpler, perhaps based solely on words in the text. This duplication was wasteful, but waste was at the heart of Blue J’s strategy. Duplicating or quadrupling its effort, or multiplying it by 100, was one way the computer could compensate for its cognitive shortcomings, and also play to its advantage: speed. Unlike humans, who can instantly understand a question and pursue a single answer, the computer might hedge, launching searches for a handful of different possibilities at the same time. In this way and many others, Blue J would battle the efficient human mind with spectacular, flamboyant inefficiency. “Massive redundancy” was how Ferrucci described it. Transistors were cheap and plentiful. Blue J would put them to use.
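Blue J relied on IBM’s own parsers, but the flavour of this sentence diagramming can be shown with the open-source spaCy library (a modern tool, not something Blue J used). The snippet below, which assumes the small English model has been downloaded, prints each word of the clue with its part of speech and its grammatical relation to the rest of the sentence.

```python
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

clue = ("Of the four countries the United States does not have diplomatic "
        "relations with, the one that's farthest north.")

# Print each word with its part of speech and grammatical role --
# roughly the machine equivalent of diagramming the sentence.
for token in nlp(clue):
    print(f"{token.text:12} {token.pos_:6} {token.dep_:10} -> {token.head.text}")
```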
While the machine’s grammar-savvy algorithms were dissecting the clue, one of them searched for its focus, or answer type. In this clue about diplomacy, “the one” evidently referred to a country. If this was the case, the universe of Blue J’s possible answers was reduced to a mere 194, the number of countries in the world. (This, of course, was assuming that “country” didn’t refer to “Marlboro Country” or “wine country” or “country music.” Blue J had to remain flexible, because these types of exceptions often popped up.)
Once the clue was parsed into a question the machine could understand, the hunt commenced. Each expert algorithm went burrowing through Blue J’s trove of data in search of the answer. One algorithm, following instructions developed for decoding the genome, looked to match strings of words in the clue with similar strings elsewhere, maybe in some stored Wikipedia entry or in articles about diplomacy, the United States, or northern climes. One linguistic algorithm focused on rhymes with key words in the clue. Another algorithm used a Google-like approach and focused on documents that matched the greatest number of keywords in the clue, paying special attention to the ones that popped up most often.
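The “Google-like” keyword strategy is simple enough to sketch directly: count how often the clue’s significant words appear in each stored passage and keep the passages that score highest. The passages below are made up for illustration; Blue J worked over a vastly larger corpus.

```python
import re
from collections import Counter

def keywords(text):
    """Lower-case word tokens, ignoring very short (stop-like) words."""
    return [w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3]

def keyword_overlap_score(clue, passage):
    """Crude passage score: how often the clue's keywords appear in it."""
    clue_words = set(keywords(clue))
    passage_counts = Counter(keywords(passage))
    return sum(passage_counts[w] for w in clue_words)

clue = ("Of the four countries the United States does not have diplomatic "
        "relations with, the one that's farthest north.")
passages = [
    "The United States has no diplomatic relations with Bhutan, Cuba, Iran and North Korea.",
    "Country music remains popular across the United States.",
]
# Keep the passage that shares the most keywords with the clue.
print(max(passages, key=lambda p: keyword_overlap_score(clue, p)))
```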
While the algorithms worked, software within Blue J would be comparing the clue to thousands of others it had encountered. What kind was it—a puzzle? A limerick? A historical factoid? Blue J was learning to recognize more than 50 types of questions, and it was constructing the statistical record of each algorithm for each type of question. This would guide it in evaluating the results when they came back. If the clue turned out to be an anagram, for example, the algorithm that rearranged the letters of words or phrases would be the most trusted source. But that same algorithm would produce gibberish for most other clues.
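One way to picture that statistical record is as a learned reliability weight for each pairing of question type and algorithm: a candidate’s raw score is scaled by how trustworthy its source algorithm has historically been on that kind of clue. A toy sketch, with invented weights:

```python
# Hypothetical learned reliability of each algorithm on each question type,
# e.g. derived from how often it produced the right answer on past clues.
RELIABILITY = {
    ("anagram", "letter_rearranger"):    0.90,
    ("anagram", "keyword_search"):       0.05,
    ("factoid", "letter_rearranger"):    0.02,
    ("factoid", "keyword_search"):       0.60,
    ("factoid", "nested_decomposition"): 0.75,
}

def weighted_score(question_type, algorithm, raw_score):
    """Scale an algorithm's raw score by its track record on this clue type."""
    return raw_score * RELIABILITY.get((question_type, algorithm), 0.10)

# A nested-decomposition answer to a factoid clue outranks a higher raw score
# from the letter-rearranging algorithm, which is only trusted on anagrams.
print(weighted_score("factoid", "nested_decomposition", 0.6))  # ~0.45
print(weighted_score("factoid", "letter_rearranger", 0.9))     # ~0.02
```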
What kind of clue was this one on diplomatic relations? It appeared to require two independent analyses. First, the computer had to come up with the four countries with which the United States had no diplomatic ties. Then it had to figure out which of those four was the farthest north. A group of Blue J’s programmers had recently developed an algorithm that focused on these so-called nested clues, in which one answer lay inside another. This may sound obscure, but humans ask these types of questions all the time. If someone wonders about “cheap pizza joints close to campus,” the person answering has to carry out two mental searches, one for cheap pizza joints and another for those nearby. Blue J’s “nested decomposition” led the computer through a similar process. It broke the clues into two questions, pursued two hunts for answers, and then pieced them together. The new algorithm was proving useful in Jeopardy!. One or two of these combination questions came up in nearly every game. They are especially common in the all-important Final Jeopardy, which usually features more complex clues.
It would take almost an hour for Blue J’s algorithms to churn through the data and return with their candidate answers. Most were garbage. There were failed anagrams of country names and laughable attempts to rhyme “north” with “diplomatic.” Some suggested the names of documents or titles of articles that had strings of the same words. But the nested algorithm followed the right approach. It found the four countries on the outs with the United States (Bhutan, Cuba, Iran, and North Korea), checked their geographical coordinates, and came up with the answer: “What is North Korea?”
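That chain of reasoning is compact enough to sketch in a few lines: answer the inner question (the four countries), then apply the outer criterion (farthest north). In the sketch below the country list and capital latitudes are hard-coded purely for illustration; Blue J had to dig those facts out of its own corpus.

```python
# Inner question: countries with no US diplomatic relations (as of 2011).
NO_RELATIONS = ["Bhutan", "Cuba", "Iran", "North Korea"]

# Outer criterion needs a geographic fact per candidate; rough latitudes
# (degrees north) of each country's capital, hard-coded for this sketch.
CAPITAL_LATITUDE = {
    "Bhutan": 27.5,       # Thimphu
    "Cuba": 23.1,         # Havana
    "Iran": 35.7,         # Tehran
    "North Korea": 39.0,  # Pyongyang
}

def answer_nested_clue():
    """Solve the inner list, then apply the outer 'farthest north' test."""
    northernmost = max(NO_RELATIONS, key=CAPITAL_LATITUDE.get)
    return f"What is {northernmost}?"

print(answer_nested_clue())  # What is North Korea?
```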
At this point, Blue J had the right answer. But the machine did not yet know that North Korea was correct, or that it even merited enough confidence for a bet. For this, it needed loads of additional analysis. Since the candidate answer came from an algorithm with a strong record on nested clues, it started out with higher-than-average confidence in that answer. The machine would proceed to check how many of the answers matched the question type: “country.” After ascertaining from various lists that North Korea appeared to be a country, confidence in “What is North Korea?” grew, and the answer rose further up the list. For an additional test, it would place the words “North Korea” into a simple sentence generated from the clue: “North Korea has no diplomatic relations with the United States.” Then it would see if similar sentences showed up in its data trove. If so, confidence climbed higher.
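That evidence-gathering pass can be pictured as a handful of independent checks, each nudging the candidate’s confidence up or down, with the machine willing to bet only once the combined score clears a threshold. The particular checks, weights, and threshold below are invented for illustration:

```python
def estimate_confidence(source_reliability, type_check_passed, supporting_passages_found):
    """Combine a few evidence signals into one confidence score in [0, 1].
    The features and weights are invented purely for illustration."""
    score = source_reliability                          # prior from the answering algorithm
    score += 0.20 if type_check_passed else -0.30       # does the candidate match the answer type?
    score += 0.05 * min(supporting_passages_found, 5)   # corpus sentences backing the answer
    return max(0.0, min(1.0, score))

BUZZ_THRESHOLD = 0.70

# Evidence for "What is North Korea?" on the diplomatic-relations clue.
confidence = estimate_confidence(
    source_reliability=0.55,       # nested-decomposition algorithm, strong record on such clues
    type_check_passed=True,        # "North Korea" appears on lists of countries
    supporting_passages_found=3,   # e.g. "North Korea has no diplomatic relations with the United States"
)
print(round(confidence, 2), confidence >= BUZZ_THRESHOLD)  # 0.9 True
```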
In the end, it chose North Korea as the answer to bet on. In a real game, Blue J would have hit the buzzer. But being a machine, it simply moved on to the next clue.

https://www.mckinseyquarterly.com/High_Tech/Software/The_programmers_dilemma_Building_a_Jeopardy_champion_2752?gp=1

Monday, February 21, 2011

Blogs Wane as the Young Drift to Sites Like Twitter

SAN FRANCISCO — Like any aspiring filmmaker, Michael McDonald, a high school senior, used a blog to show off his videos. But discouraged by how few people bothered to visit, he instead started posting his clips on Facebook, where his friends were sure to see and comment on his editing skills.

“I don’t use my blog anymore,” said Mr. McDonald, who lives in San Francisco. “All the people I’m trying to reach are on Facebook.”
Blogs were once the outlet of choice for people who wanted to express themselves online. But with the rise of sites like Facebook and Twitter, they are losing their allure for many people — particularly the younger generation.
The Internet and American Life Project at the Pew Research Center found that from 2006 to 2009, blogging among children ages 12 to 17 fell by half; now 14 percent of children of those ages who use the Internet have blogs. Among 18-to-33-year-olds, the project said in a report last year, blogging dropped two percentage points in 2010 from two years earlier.
Former bloggers said they were too busy to write lengthy posts and were uninspired by a lack of readers. Others said they had no interest in creating a blog because social networking did a good enough job keeping them in touch with friends and family.
Blogging started its rapid ascension about 10 years ago as services like Blogger and LiveJournal became popular. So many people began blogging — to share dieting stories, rant about politics and celebrate their love of cats — that Merriam-Webster declared “blog” the word of the year in 2004.
Defining a blog is difficult, but most people think it is a Web site on which people publish periodic entries in reverse chronological order and allow readers to leave comments.
Yet for many Internet users, blogging is defined more by a personal and opinionated writing style. A number of news and commentary sites started as blogs before growing into mini-media empires, like The Huffington Post or Silicon Alley Insider, that are virtually indistinguishable from more traditional news sources.
Blogs went largely unchallenged until Facebook reshaped consumer behavior with its all-purpose hub for posting everything social. Twitter, which allows messages of no longer than 140 characters, also contributed to the upheaval.
No longer did Internet users need a blog to connect with the world. They could instead post quick updates to complain about the weather, link to articles that infuriated them, comment on news events, share photos or promote some cause — all the things a blog was intended to do.
Indeed, small talk shifted in large part to social networking, said Elisa Camahort Page, co-founder of BlogHer, a women’s blog network. Still, blogs remain a home of more meaty discussions, she said.
“If you’re looking for substantive conversation, you turn to blogs,” Ms. Camahort Page said. “You aren’t going to find it on Facebook, and you aren’t going to find it in 140 characters on Twitter.”
Lee Rainie, director of the Internet and American Life Project, says that blogging is not so much dying as shifting with the times. Entrepreneurs have taken some of the features popularized by blogging and weaved them into other kinds of services.
“The act of telling your story and sharing part of your life with somebody is alive and well — even more so than at the dawn of blogging,” Mr. Rainie said. “It’s just morphing onto other platforms.”
The blurring of lines is readily apparent among users of Tumblr. Although Tumblr calls itself a blogging service, many of its users are unaware of the description and do not consider themselves bloggers — raising the possibility that the decline in blogging by the younger generation is merely a semantic issue.
Kim Hou, a high school senior in San Francisco, said she quit blogging months ago, but acknowledged that she continued to post fashion photos on Tumblr. “It’s different from blogging because it’s easier to use,” she said. “With blogging you have to write, and this is just images. Some people write some phrases or some quotes, but that’s it.”
The effect is seen on the companies providing the blogging platforms. Blogger, owned by Google, had fewer unique visitors in the United States in December than it had a year earlier — a 2 percent decline, to 58.6 million — although globally, Blogger’s unique visitors rose 9 percent, to 323 million.
LiveJournal, another blogging service, has decided to emphasize communities. Connecting people who share an interest in celebrity gossip, for instance, provides the social interaction that “classic” blogging lacks, said Sue Rosenstock, a spokeswoman for LiveJournal, which is owned by SUP, a Russian online media company. “Blogging can be a very lonely occupation; you write out into the abyss,” she said.
But some blogging services like Tumblr and WordPress seem to have avoided any decline. Toni Schneider, chief executive of Automattic, the company that commercializes the WordPress blogging software, explains that WordPress is mostly for serious bloggers, not the younger novices who are defecting to social networking.
In any case, he said bloggers often use Facebook and Twitter to promote their blog posts to a wider audience. Rather than being competitors, he said, they are complementary.
“There is a lot of fragmentation,” Mr. Schneider said. “But at this point, anyone who is taking blogging seriously — they’re using several mediums to get a large amount of their traffic.”
While the younger generation is losing interest in blogging, people approaching middle age and older are sticking with it. Among 34-to-45-year-olds who use the Internet, the percentage who blog increased six points, to 16 percent, in 2010 from two years earlier, the Pew survey found. Blogging by 46-to-55-year-olds increased five percentage points, to 11 percent, while blogging among 65-to-73-year-olds rose two percentage points, to 8 percent.
Russ Steele, 72, a retired Air Force officer and aerospace worker from Nevada City, Calif., says he spends up to three hours a day seeking interesting topics and writing about them for his blog, NC Media Watch, which covers local issues in Nevada County, northeast of Sacramento. All he wants is to have a voice in the community for his conservative views.
Although he signed up for Facebook this month, Mr. Steele said he did not foresee using it much and said that he remained committed to blogging. “I’d rather spend my time writing up a blog analysis than a whole bunch of short paragraphs and then send them to people,” he said. “I don’t need to tell people I’m going to the grocery store.”

http://www.nytimes.com/2011/02/21/technology/internet/21blog.html

Thursday, February 17, 2011

Business and art: how they work together

ARTISTS routinely deride businesspeople as money-obsessed bores. Or worse. Every time Hollywood depicts an industry, it depicts a conspiracy of knaves. Think of “Wall Street” (which damned finance), “The Constant Gardener” (drug firms), “Super Size Me” (fast food), “The Social Network” (Facebook) or “The Player” (Hollywood itself). Artistic critiques of business are sometimes precise and well-targeted, as in Lucy Prebble’s play “Enron”. But often they are not, as those who endured Michael Moore’s “Capitalism: A Love Story” can attest.
Many businesspeople, for their part, assume that artists are a bunch of pretentious wastrels. Bosses may stick a few modernist daubs on their boardroom walls. They may go on corporate jollies to the opera. They may even write the odd cheque to support their wives’ bearded friends. But they seldom take the arts seriously as a source of inspiration.
The bias starts at business school, where “hard” things such as numbers and case studies rule. It is reinforced by everyday experience. Bosses constantly remind their underlings that if you can’t count it, it doesn’t count. Quarterly results impress the stockmarket; little else does.
Managers’ reading habits often reflect this no-nonsense attitude. Few read deeply about art. “The Art of the Deal” by Donald Trump does not count; nor does Sun Tzu’s “The Art of War”. Some popular business books rejoice in their barbarism: consider Wess Roberts’s “Leadership Secrets of Attila the Hun” (“The principles are timeless,” says Ross Perot) or Rob Adams’s “A Good Hard Kick in the Ass: the Real Rules for Business”.
But lately there are welcome signs of a thaw on the business side of the great cultural divide. Business presses are publishing a series of luvvie-hugging books such as “The Fine Art of Success”, by Jamie Anderson, Jörg Reckhenrich and Martin Kupp, and “Artistry Unleashed” by Hilary Austen. Business schools such as the Rotman School of Management at the University of Toronto are trying to learn from the arts. New consultancies teach businesses how to profit from the arts. Ms Austen, for example, runs one named after her book.
All this unleashing naturally produces some nonsense. Madonna has already received too much attention without being hailed as a prophet of “organisational renewal”. Bosses have enough on their plates without being told that they need to unleash their inner Laurence Oliviers. But businesspeople nevertheless have a lot to learn by taking the arts more seriously.
Mr Anderson & co point out that many artists have also been superb entrepreneurs. Tintoretto upended a Venetian arts establishment that was completely controlled by Titian. He did this by identifying a new set of customers (people who were less grand than the grandees who supported Titian) and by changing the way that art was produced (working much faster than other artists and painting frescoes and furniture as well as portraits). Damien Hirst was even more audacious. He not only realised that nouveau-riche collectors would pay extraordinary sums for dead cows and jewel-encrusted skulls. He also upturned the art world by selling his work directly through Sotheby’s, an auction house. Whatever they think of his work, businesspeople cannot help admiring a man who parted art-lovers from £70.5m ($126.5m) on the day that Lehman Brothers collapsed.
Studying the arts can help businesspeople communicate more eloquently. Most bosses spend a huge amount of time “messaging” and “reaching out”, yet few are much good at it. Their prose is larded with clichés and garbled with gobbledegook. Half an hour with George Orwell’s “Why I Write” would work wonders. Many of the world’s most successful businesses are triumphs of story-telling more than anything else. Marlboro and Jack Daniel’s have tapped into the myth of the frontier. Ben & Jerry’s, an ice-cream maker, wraps itself in the tie-dyed robes of the counter-culture. But business schools devote far more energy to teaching people how to produce and position their products than to how to infuse them with meaning.
Studying the arts can also help companies learn how to manage bright people. Rob Goffee and Gareth Jones of the London Business School point out that today’s most productive companies are dominated by what they call “clevers”, who are the devil to manage. They hate being told what to do by managers, whom they regard as dullards. They refuse to submit to performance reviews. In short, they are prima donnas. The arts world has centuries of experience in managing such difficult people. Publishers coax books out of tardy authors. Directors persuade actresses to lock lips with actors they hate. Their tips might be worth hearing.
Corporations chasing inspiration
Studying the art world might even hold out the biggest prize of all—helping business become more innovative. Companies are scouring the world for new ideas (Procter and Gamble, for example, uses “crowdsourcing” to collect ideas from the general public). They are also trying to encourage their workers to become less risk averse (unless they are banks, of course). In their quest for creativity, they surely have something to learn from the creative industries. Look at how modern artists adapted to the arrival of photography, a technology that could have made them redundant, or how William Golding (the author of “Lord of the Flies”) and J.K. Rowling (the creator of Harry Potter) kept trying even when publishers rejected their novels.
If businesspeople should take art more seriously, artists too should take business more seriously. Commerce is a central part of the human experience. More prosaically, it is what billions of people do all day. As such, it deserves a more subtle examination on the page and the screen than it currently receives.
http://www.economist.com/node/18175675?story_id=18175675&fsrc=rss

The Middle Blingdom

MANY Chinese people still remember the days when luxury meant a short queue for the toilet at the end of the street, or a bus conductor who wasn’t excessively rude. Before the economy opened up, a chic suit meant one with the label of a state-owned factory sewn ostentatiously on the sleeve. How times change.
Sales of luxury goods are exploding, despite a hefty tax on importing them. A new report by CLSA, a broker, forecasts that overall consumption in China (including boring everyday items) will rise by 11% annually over the next five years. That is very fast. But sales of luxury goods will grow more than twice as quickly, reckons CLSA: by 25% a year. No other category comes close. Even spending on education, a Chinese obsession, is projected to grow by “only” 16% annually.
China is already the largest market for Louis Vuitton, a maker of surprisingly expensive handbags, accounting for 15% of its global sales. Within three years, reckons Aaron Fischer, the report’s author, China’s domestic market for bling will be bigger than Japan’s. By 2020 it will account for 19% of global demand for luxuries (see chart). And that is only half the story.

For the most ostentatious Chinese consumers like to shop abroad. CLSA estimates that 55% of the luxury goods bought by Chinese people are bought outside mainland China. This is partly because of those high tariffs, which can top 30%. But it is also because counterfeiting is rife. Ask a well-heeled Chinese lady about her new handbag and she is quite likely to point out that she bought it in Paris. This tells you not only that she is rich enough to travel, but also that the bag is genuine.
If you include the baubles Chinese people buy outside China, the nation’s share of the global luxury market will triple, to 44%, by 2020, predicts CLSA. The wealth of China’s upper-middle class has reached an inflection point, reckons Mr Fischer. They have everything they need. Now they want a load of stuff they don’t need, too.
In Hong Kong’s Tsim Sha Tsui shopping district, queues of bling-hungry mainlanders stretch into the streets outside stores carrying the best-known brands. Sales of jewellery in Hong Kong rose by 29% in the year to December; sales of high-end footwear and clothing shot up by 31%. Companies that cater to show-offs have much to boast about. Richemont, the world’s biggest jeweller, registered a 57% increase in Asian sales in the fourth quarter. Strip out Japan, the region’s sputtering ex-star, and sales probably doubled. Hermès, a maker of fancy accessories, saw its sales in Asia climb by 45%. Burberry China was up by 30%; LVMH Asia soared by 30% outside Japan. Luxury sales in December were “spectacular”, says Mr Fischer, and growth is accelerating.
In some ways the Chinese market is much like everywhere else. The same brands are popular, apart from a few companies that are perceived in China to be Western but are in fact almost entirely geared toward China, such as Ports Design, a seller of posh clothes.
There are, however, substantial differences. The average Chinese millionaire is only 39, which is 15 years younger than the average elsewhere. Prosperous Chinese are less shy about flaunting their wealth than people in other countries. On the contrary, many believe they must show off to be taken seriously.
Whereas the market for luxury goods in other countries is typically dominated by women, in China the men fill the tills with nearly equal abandon. They buy both for themselves and for other men, since gifts lubricate business in China. They are often willing to pay a large premium over the list price for desired items—many believe, for some reason, that the more something costs, the better it is.
China’s growing taste for bling is a good thing not only for makers of luxury goods but also for Chinese consumers. It is a symptom of the fact that they have more to spend, that necessities no longer gobble up every spare yuan and that they can afford to add a little colour to their lives. Mao Zedong would not have approved, but his former serfs ignore his frowns and merrily fritter away the banknotes that still depict his face.

http://www.economist.com/node/18184466?story_id=18184466&fsrc=rss

Tuesday, February 15, 2011

Nokia at the crossroads

APOCALYPTIC language fuels the technology industry as much as venture capital does. But Stephen Elop, Nokia’s new boss, may have set a new standard. “We are standing on a burning [oil] platform,” he wrote in a memo to all 132,000 employees of the world’s biggest handset-maker. If Nokia did not want to be consumed by the flames, it had no choice but to plunge into the “icy waters” below. In plainer words, the company must change its ways radically.
On February 11th, at a “strategy and financial briefing” in London, Mr Elop is due to announce the change he has in mind. The main question is whether Nokia will continue with its own operating system for smartphones, team up with Microsoft or perhaps even make a bet on Android, the fast-growing system developed by Google. There has even been talk that Mr Elop, a Canadian and the Finnish firm’s first foreign chief executive, will fire senior managers and move the firm’s headquarters to Silicon Valley.
This would be an astonishing upheaval for what was one of Europe’s hottest firms. Behind Nokia’s woes lurks a dismal reversal of fortunes, not just for the Finnish company but also for much of Europe’s mobile-phone industry. In the 1990s Europe appeared to have beaten even Silicon Valley in mobile technology. European telecoms firms had settled on a single standard for mobile phones. Handsets became affordable, Europe was the biggest market for them and the old continent’s standard took over the world. “Europe was the cradle for innovation and scale in mobile”, says Ameet Shah of PRTM, a management consultancy.

This changed with the emergence of smartphones, in particular Apple’s iPhone, which appeared in 2007. Nokia still ships a third of all handsets, but Apple astonishingly pulls in more than half of the profits, despite having a market share of barely 4% (see charts, below). More Americans than Europeans now have smartphones. As for standards, Verizon, America’s biggest mobile operator, is leading the world in implementing the next wireless technology, called LTE.


Nokia, along with the rest of Europe’s mobile industry, is also being squeezed in both simple handsets and networking equipment. Cheap mobile phones based on chips from MediaTek, a company based in Taiwan, are increasingly popular in developing countries. By some accounts, such phones now account for more than one-third of those sold globally, Mr Elop wrote in his memo. And at $28 billion in 2010, the revenues of China’s Huawei almost equal those of Sweden’s Ericsson, the world’s leading maker of gear for wireless networks.
At its most fundamental, this shift is the result of Moore’s Law, which holds that microprocessors double in computing power every 18 months. The first generations of modern mobile phones were purely devices for conversation and text messages. The money lay in designing desirable handsets, manufacturing them cheaply and distributing them widely. This played to European strengths. The necessary skills overlapped most of all in Finland, which explains why Nokia, a company that grew up producing rubber boots and paper, could become the world leader in handsets.
As microprocessors become more powerful, mobile phones are changing into hand-held computers. As a result, most of their value is now in software and data services. This is where America, in particular Silicon Valley, is hard to beat. Companies like Apple and Google know how to build overarching technology platforms. And the Valley boasts an unparalleled ecosystem of entrepreneurs, venture capitalists and software developers who regularly spawn innovative services.
Nokia had some additional problems to deal with, though not for lack of foresight. The firm realised its world was changing and was working on a touch-screen phone much like the iPhone as early as 2004. Realising the importance of mobile services, it launched Ovi, an online storefront for such things, in 2007, a year before Apple opened its highly successful App Store.
But turning a Finnish hardware-maker into a provider of software and services is no easy undertaking. Nokia dallied and lost the initiative. Historically, Nokia has been a highly efficient manufacturing and logistics machine capable of churning out a dozen handsets a second and selling them all over the world. Planning was long-term and new devices were developed by separate teams, sometimes competing with each other—the opposite of what is needed in software, where there is a premium on collaborating and doing things quickly.
Olli-Pekka Kallasvuo, Nokia’s boss from 2006 until last September, was keenly aware of the difficulty. To get an infusion of fresh blood Nokia bought several start-ups and was reorganised to strengthen its software and services. And it tried to turn Symbian, its own operating system for smartphones, into a platform in the mould of the iPhone and Android. “But just like Sony, Nokia has not found a way to shift from hardware to software,” says Stéphane Téral of Infonetics Research.
To allow Nokia finally to shed its hardware skin, Mr Elop, a former senior executive at Microsoft, was brought in—and apparently given what Mr Kallasvuo never had: carte blanche. This is why most observers expect him to carry out thorough changes, concerning in particular the operating system on which Nokia intends to bet its future. The firm has to move fast if it wants to have a chance to create a third platform for mobile software and services next to Android and the iPhone.
Nokia is unlikely to throw in its lot with Android. The software may be open-source, but it comes with strings attached—notably Google’s mobile services and advertising. This would reduce Nokia to being a device-maker and render obsolete many of its investments in services. Nokia could go it alone with MeeGo, a technically advanced but still incomplete operating system it is developing jointly with Intel, but some think that is unlikely. Or it could bet on Microsoft’s new mobile operating system, Windows Phone 7.
Investors seem to prefer the Windows option. When rumours began swirling early this month that this was what Mr Elop was planning, Nokia’s share price, which has dropped by two-thirds since early 2008, surged by nearly 7%. Teaming up with Microsoft would indeed have benefits, says Ben Wood of CCS Insight, another market research firm. Given his background, Mr Elop could surely make such a partnership work. And it could help Nokia make a comeback in America, where its market share is in the low single digits. On the other hand, argues Mr Wood, Windows Phone 7 has not been a huge success so far. It would also take at least six months before the first “Windokia” phones hit the shelves—a long time in a fast-moving industry.
Other bits of Europe’s mobile-phone industry are already showing signs of revival. The revenues of ARM, a British firm, may only be in the hundreds of millions, but most microprocessors found in handsets and other mobile devices are based on its designs. Ericsson now generates 40% of its revenues with services, for instance by managing wireless networks around the world. And on February 7th Alcatel-Lucent unveiled technology that reduces the size of a wireless base station from a filing cabinet’s to that of a Rubik’s cube.
But for a full comeback, Europe will have to wait for an entrepreneurial culture like Silicon Valley’s. This may not be as hopeless as it sounds. The beginnings of such a culture have taken root in recent years, and some successful start-ups have sprouted. One of the most popular games for smartphones, for instance, does not hail from the Valley but from Finland. “Angry Birds” has been downloaded more than 50m times since its release in December 2009. It is so addictive that compulsive players have been asking their doctors for help in kicking the habit.


http://www.economist.com/node/18114689?story_id=18114689&fsrc=rss