Sunday, December 30, 2007

Here is a draft of the second part of my article published by Financial Times Press in September 2007. This article will be posted on FTPress.com on January 25, 2008 (in edited form, of course).

Improving the "Stickiness" of Your Website Further:
Part 2: If they like A and B, would they like A+B?


Alex Gofman,

Vice President, Moskowitz Jacobs Inc.


Interactions in consumer research: searching for a needle in a haystack

A few years ago, Heinz introduced the rather weird Funky Fries – chocolate-flavored and blue-colored fries. Heinz's bet was on combining several highly popular ideas. A huge army of consumers loves fries. An even bigger (arguably) crowd is a sucker for chocolate. And kids love color.

As you can guess (or already know), the product failed miserably. The ideas were so divergent that there was no synergy between them in the eyes of the consumers. Quite the opposite: by putting the conflicting ideas together, Heinz lost the appeal to both fries munchers and chocolate connoisseurs, producing a negative effect (Bhatnagar, 2003).

In the marketing lexicon, the situation in which consumers' reaction (liking scores, purchase intent, etc.) to messages (or ideas, elements of a package or a web page, etc.) combined together is not equal to the sum of their individual ratings is called an interaction. A positive interaction (when customers' liking of the combined offer is higher than the sum of the individual items' scores) is called synergism. If customers like the combined idea less than the sum of the individual liking scores of the components, it is called a suppression (a negative interaction).

The problem lies in the sheer number of possible pairs of elements. For example, if we have six placeholders on a webpage with six possible alternatives for each one, there are 540 possible pairs of elements.
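To see where a number like 540 comes from, here is a minimal sketch (assuming, hypothetically, six silos with six alternatives each) that counts every cross-silo pair:

```python
from itertools import combinations

# Hypothetical layout: six placeholders ("silos"), six alternatives in each.
silo_sizes = [6, 6, 6, 6, 6, 6]

# A pair combines one alternative from one silo with one alternative from a
# *different* silo, so we sum the products over all pairs of silos.
pairs = sum(a * b for a, b in combinations(silo_sizes, 2))
print(pairs)  # 540 possible pairs of elements
```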

This should explain why, until very recently, the effect of interactions was either ignored or considered a middle ground between art and heavy statistics. In the latter case, it required an expert guess about the possibly significant pairs. Several such 'alleged' (guessed) interactions were then tested with consumers through a sophisticated statistical method of incorporating these pairs into the survey to confirm or reject the hypothesis.

Market researchers have tried to tackle the issue for many years (e.g., Green, 1973). Yet, many years later, if the expert was right (or lucky?) in foretelling the potential interactions, the results could lead to improved ideas. If not – too bad: some great ideas might have been discarded unnoticed, or bad ideas might have gone into production undetected.

Extending RDE to discover all and any interactions

In the previous article, Improving the "Stickiness" of Your Website, we discussed the Multivariate Landing Page Optimization (MVLPO) approach, which helps identify a winning combination of the elements of a webpage. The Rule Developing Experimentation (RDE) paradigm introduced in that article mixes and matches the elements of the page according to an experimental design and presents the synthesized web pages to consumers for evaluation. The collected data are then used to estimate the individual contribution of every element to the liking of the web pages (for example, to the conditional probability of people buying from this site). This in turn allowed us to construct the most appealing webpage from the set of elements tested.
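For readers who want to peek under the hood, here is a minimal sketch of that estimation step (not the actual RDE tool): the presence or absence of each element on every synthesized page is dummy-coded, and an ordinary least-squares fit of the ratings yields the constant and the individual element utilities. The element names, the design and the ratings below are made up purely for illustration.

```python
import numpy as np

# Each row is one synthesized page shown to a respondent: 1 means the element
# was present on that page, 0 means it was absent (hypothetical design).
elements = ["Banner 1", "Free shipping", "Golfer photo"]
design = np.array([
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
])
# Hypothetical ratings of the pages (e.g., interest in buying, 0-100 scale).
ratings = np.array([55, 70, 60, 48, 78, 52])

# Add an intercept column and fit ordinary least squares.
X = np.column_stack([np.ones(len(design)), design])
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)

print(f"Constant: {coefs[0]:.1f}")
for name, beta in zip(elements, coefs[1:]):
    print(f"{name}: {beta:+.1f}")  # the element's individual utility
```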

In most cases, the results of this approach help you create optimized web pages. On a number of occasions, however, some latent interactions exist between the elements of the page. Using highly trained expert opinion to guess these interactions is not a very viable option in the fast-moving world of website design, even before taking the price implications into account. RDE easily overcomes the limitations of the old methods by automatically testing each and every combination of the page elements multiple times according to built-in, unique permuted experimental designs. Because the complexity of the statistical foundation is usually incorporated inside the tool, no special knowledge is needed on the user's side (if you are still interested, you can find the details in Gofman, 2006; Moskowitz and Gofman, 2004).

Now let's explore how to make sure that the winning individual parts of the pages do not fail when combined. Furthermore, let's see how to find a combination of web page elements that together produce more impact than just the sum of the individual impacts. Putting the basic math formulas to use:

We do not want: 1+1 < 2

We want: 1+1 > 2

Golf Gear Case Study: deeper data mining

Note: All the data in this and previous articles are from the actual project, although the visuals and other marketing materials are representative equivalents and not related to any specific website.

In the previous article, we followed the operator of an online golf store who wanted to optimize the landing page to increase the conversion rate and revenue per visit. As the store catered to affluent golf players, the general traffic was not very heavy. However, the revenue per visitor (RPV) and the customer lifetime value (CLV) were high because the site sold luxury and premium equipment and strived to retain its patrons. The combination of these conditions precluded the operator from experimenting on the live website, to avoid a possible less-than-optimal experience for their valuable customers.

The operator chose to use MVLPO in a simulated environment using an RDE tool. She had several options for the banner, the featured picture, and different promotions, and at the end of the project she discovered the best combination of these components (Figure 1). She found out that by choosing the 'wrong' elements (the lowest scoring vs. the highest) she would lose half of her potential clients. Or, in reverse, by selecting the best possible elements, she could double the number of happy visitors willing to buy from her site.

In virtually any MVLPO case based on traditional methods, this would be the end of the research stage. RDE on the other hand allows for mining the data even deeper.


Figure 1. Optimized webpage for the golf site without taking into account any possible interactions. The conditional probability of visitors being interested in buying from this site was 48%.


In some cases, there are potential interactions between the elements of the page (both positive and negative). Because of the unique permutation algorithm of the experimental design, RDE allows each and every combination of the elements to appear on the test screens multiple times. This means that it is possible to include them as independent variables in the regression model. In our case, we have 90 possible combinations.
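In regression terms, testing a pair simply means adding the product of the two element indicators as one more column of the design matrix; if that cross-term earns a clearly positive or negative coefficient, we have found a synergy or a suppression. Here is a minimal sketch of that step, with hypothetical presence/absence data (the numbers are not from the case study):

```python
import numpy as np

# Hypothetical presence/absence indicators for two elements across eight test
# pages (say, A2 = a banner and C2 = a featured photo), plus the page ratings.
a2 = np.array([1, 0, 1, 0, 1, 0, 1, 0])
c2 = np.array([1, 1, 0, 0, 1, 1, 0, 0])
ratings = np.array([30, 24, 8, 10, 31, 25, 9, 9])

# The cross-term is the elementwise product: 1 only when both elements appear.
cross = a2 * c2

X = np.column_stack([np.ones(len(ratings)), a2, c2, cross])
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
const, u_a2, u_c2, u_cross = coefs
print(f"A2: {u_a2:+.1f}  C2: {u_c2:+.1f}  A2*C2 interaction: {u_cross:+.1f}")
```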

If this sounds to some readers a bit like a less-than-pleasurable lecture in statistics, don't quit reading. The good news is that this is all incorporated inside the RDE approach and available at a virtually 'point-and-click' level. One does not need to know how bits and bytes move inside a processor to use a PC for browsing. The same is true about discovering possible interactions using RDE – you do not need to be a professor of statistics to find them; RDE does it for you.

Not every case produces meaningful interactions. On many occasions, the interactions are not very strong and can be ignored (considered not significant). If the absolute value of the utility (the conditional probability of customers being interested in buying from this site) of a combination is below the empirical threshold of about 5 points, it can be discarded. In that case, the results of MVLPO would look like Table 1 in the previous article.

It should also be noted that including the interactions changes the regression model and somewhat affects the rest of the utilities. In a model without interactions, the values of hidden synergies and suppressions are distributed among the individual elements. In a more detailed regression model that includes interactions, these values are extracted and assigned to the cross-terms (pairs of elements).

Comparing Standard and Interactions Models

Table 1 contains the utilities of the individual elements of the web page, with several discovered meaningful interactions (right column), compared with the Standard Model (middle column) from the previous article. This case does not have very high interaction values (in some cases, an interaction alone can add 20 or more points to the liking score), but it does demonstrate the approach.

Table 1. Performance of the elements with interactions. Notice that the values for the model with interactions are somewhat different from those of the standard model.


                                          Standard    Interactions
                                          Model       Model

Base Size                                 125

Constant                                  10          9

Banners
  A3   Banner 3                           0           -1
  A1   Banner 1                           -1          0
  A2   Banner 2                           -1          -1

Promo 1
  B2   Free shipping                      7           7
  B3   $5.99 shipping                     3           2
  B1   Free $50 card                      3           3

Visuals
  C2   Golfer playing                     16          15
  C3   High-tech club                     8           8
  C1   Golf shoes                         8           7

Promo 2
  D2   Final clearance - up to 65% off    12          13
  D1   Save up to $100                    8           8
  D3   Free personalization               4           4

Promo 3
  E1   St. Andrews Sweepstakes            3           3
  E2   115% price guarantee               3           3
  E3   Golf vacation entry                0           0

Interactions
  A2*C2                                   N/A         6
  D2*E3                                   N/A         7
  C1*E2                                   N/A         -9


The data suggest that the winning web page from the previous article was not the one that generates the highest interest among customers in buying from the site.

The optimal webpage (from the previous article) based on the standard model was:

(Conditional Probability of visitors buying from the site) =

= Const + A3 + B2 + C2 + D2 + E1 = 48%.

We can get a higher purchase intent score if we use a slightly different set of elements:

(Conditional Probability of visitors buying from the site) =

= Const + A2 + B2 + C2 + D2 + E3 + D2*E3 + A2*C2 =

= 9 + (-1) + 7 + 15 + 13 + 0 + 7 + 6 = 56%,

producing the optimal concept presented on Figure 2.

We have replaced two marginally higher scoring elements in two categories with lower scoring ones: in Banners, we switched from A3 (0) to A2 (-1); and in Promo 3, from E1 (+3) to E3 (0). Although with these subtle changes we lost 4% in the individual values, the interactions identified in the case study compensated for the shortfall and added an additional 8% to the purchase intent (note that the utilities for the interaction model are slightly different from those of the standard regression model, and the elements in the case study are representative).
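For the curious, here is a minimal sketch of the search behind that result: enumerate every combination of one element per silo, add the constant, the individual utilities and whichever interaction utilities apply, and keep the top scorer. The utilities below are copied from the interactions model in Table 1.

```python
from itertools import product

constant = 9
utilities = {
    "A1": 0, "A2": -1, "A3": -1,   # Banners
    "B1": 3, "B2": 7, "B3": 2,     # Promo 1
    "C1": 7, "C2": 15, "C3": 8,    # Visuals
    "D1": 8, "D2": 13, "D3": 4,    # Promo 2
    "E1": 3, "E2": 3, "E3": 0,     # Promo 3
}
interactions = {("A2", "C2"): 6, ("D2", "E3"): 7, ("C1", "E2"): -9}
silos = [["A1", "A2", "A3"], ["B1", "B2", "B3"],
         ["C1", "C2", "C3"], ["D1", "D2", "D3"], ["E1", "E2", "E3"]]

def score(combo):
    # Constant plus individual utilities plus any applicable interaction terms.
    total = constant + sum(utilities[e] for e in combo)
    chosen = set(combo)
    total += sum(v for pair, v in interactions.items() if set(pair) <= chosen)
    return total

best = max(product(*silos), key=score)
print(best, score(best))  # ('A2', 'B2', 'C2', 'D2', 'E3') scores 56
```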


Figure 2. The highest scoring webpage created using the Interactions Model. Although the differences are very subtle, the page has an 8% higher conditional probability of customers buying from it compared to the Standard Model optimization (Fig. 1).


Conclusions

This case study does not have the most impressive interactions I have seen in my experience. Sometimes the synergy between elements reaches 15-20 points or even more. In some cases, there are no significant interactions at all. In others, a negative interaction (suppression) is so strong that it negates the high positive contributions of the individual elements (if any).

For many years, researchers have known about the existence of possible interactions and tried to identify them by incorporating several handpicked pairs into surveys, usually chosen by guessing. With the introduction of RDE to MVLPO, the permuted individual designs afford testing of each and every possible combination of the elements multiple times, allowing for more precise models and more precisely targeted optimized pages.

The bottom line: it is difficult not to agree that improving the conversion rate by 10-20% would make a very big difference for virtually any website operator. It is possible to achieve this simply by recombining your existing materials with a slightly deeper level of data mining, available in some tools at the push of a button.

References

Bhatnagar, Parija (2003). Blue food goes down the drain. CNN/Money, June 20, 2003. Retrieved on 11/21/2007.

Gofman, A. (2006). Emergent Scenarios, Synergies, and Suppressions Uncovered within Conjoint Analysis. Journal of Sensory Studies, 21(4): 373-414.

Gofman, A. (2007). Improving the 'Stickiness' of Your Website. Financial Times Press, September 21, 2007. Retrieved on 11/21/2007.

Green, Paul E. (1973). On the Analysis of Interactions in Marketing Research Data. Journal of Marketing Research, 10(4): 410-420.

Moskowitz, H.R. and Gofman, A. (2004). A System and Method for Performing Conjoint Analysis. U.S. Provisional Application No. 60/538,787, Patent Pending.

Moskowitz, H.R. and Gofman, A. (2007). Selling Blue Elephants: How to Make Great Products That People Want Before They Even Know They Want Them. Wharton School Publishing.


Thursday, December 13, 2007

Improving the ‘Stickiness’ of Your Website (Financial Times Press)

Some time ago I published an article with Financial Times Press (although I am still a bit confused by the editorial 'chain' – the article was submitted to Knowledge@Wharton). Here is the 'teaser' of the paper:

Financial Times Press, September 21, 2007:
"For a long time, the only solution to make websites appealing and "sticky" was to rely on gurus (web designers who were just supposed to know the "right" answers). But what if the guru made a mistake or did not take into account all the variables and created less-than-optimal pages? Alex Gofman explores ways to involve consumers in the co-creation process in the form of multivariate landing page optimization as a possible solution for the problem of the ever-increasing bounce rate on many websites."

You can read the full paper at:
http://www.ftpress.com/articles/article.aspx?p=1015178&rl=1.

I have just completed a 'sequel' for this paper and hope to post it shortly.

My columns at Daily News and Analysis: How to Defeat Murphy’s Law in the Stock Markets

Daily News and Analysis, October 4, 2007
How to Defeat Murphy’s Law in the Stock Markets
Alex Gofman

Merck & Co. recently announced that it has agreed to pay $4.85 billion to settle most of the claims that its painkiller Vioxx caused heart attacks and strokes in thousands of users. Although the settlement amount is almost twice the GDP of Mongolia, it is substantially less than many analysts had expected.

In 2004, the news broke that one of the most powerful painkillers on the market, Vioxx, might be implicated in heart attacks. The ensuing lawsuits, adverse publicity, and less-than-optimal corporate responses by Merck and other drug companies in the painkiller business had an inevitable impact on the stock prices of Merck and of "Big Pharma" as a whole. In just a few days, Merck's stock tumbled about 40%, bringing down the whole pharmaceutical sector (to a lesser extent) and wiping out tens of billions of dollars of the sector's market capitalization for shareholders. Investors lost fortunes, although some of the Big Pharma companies fared better than others. If only one could predict the reaction of investors in such a crisis situation on a company-by-company basis…

On the other side of the conflict, if a company knows the likely response of investors and the general public to some of the messages used by its PR in such a crisis situation, that knowledge could have a tremendous impact on the brand image, finances and future performance. But do they always know? Even some venerable corporations have stumbled under the stress of a crisis because they were not prepared. A classic example of such inept communications happened shortly after the launch of the Mercedes-Benz A-Class in 1997, when one of the cars overturned during a test drive conducted by journalists in Sweden, triggering a major crisis for the car manufacturer. The reputation of Mercedes was at stake, as the company was accused of producing unsafe cars. Early, ill-equipped PR responses by Mercedes only succeeded in exacerbating the crisis, as the company fumbled around with what it was going to say and then said the wrong thing at the wrong time.

Is it possible to be prepared to handle a potential crisis when, according to Murphy’s Law, anything that can go wrong, will? Going a bit further, is it possible to try to capitalize on the stock market during such tumult?

This is what Rule Developing Experimentation (RDE), introduced in my previous articles (October 4, November 1), promises to do. I can see some skeptical smiles on the faces of readers saying, "Nobody can predict the stock market." RDE does not predict actual stock market performance. It quantifies the expected emotional reaction of investors to specific news and can even drill down into the data on a brand-specific basis. For example, if the FDA (the Food and Drug Administration, empowered, among other things, to oversee the safety of medications) announced that it had discovered some new side effects of a flu vaccine, what would be the attitude of investors toward buying, holding or selling the stock of that company and of other players in the sector? An astute and prepared investor could use this knowledge to his advantage, with potentially huge profit. The 'defendant' would be anxiously sitting on the edge of the chair, anticipating the answer to how different the attitude of the public would be if the right set of messages were promptly and confidently communicated. Is it possible for the company to 'repair' the damage and 'engineer' public sentiment on the issues? Politicians have manipulated public opinion for ages, so why not?

Chance favors the prepared mind, as Louis Pasteur used to say. To be prepared to answer these questions, we can build a model of the consumer's/investor's mind using the RDE approach. It is not especially difficult, and the majority of businessmen could easily do it themselves.

Here is an example of the insights one could get from a model that was created at the peak of the Vioxx crisis. We searched the Internet for news and announcements about the case from the media, the FDA, the public, experts and Merck itself. The messages were distilled into concise snippets (called elements), grouped by similarity into silos and put into an RDE tool for automatic mixing and matching according to an experimental design. RDE created a set of vignettes, each representing a combination of the messages. A random group of investors was invited to participate in the online project and indicate their proclivity to buy, hold or sell the stock if they saw the specific news (the details of the process can be found in the Selling Blue Elephants book or at http://www.sellingblueelephants.com/).

The resulting regression model was so lucid that some experts called the approach a new behavioral economics sub-discipline. The data suggested that if, for example, investors read that "the medication was pulled off the market after the company found the problem," the message would cause about 6% of them to change their attitude from buy to sell. But if the company quickly communicated that it "is in agreement with the FDA that this medication can be safely used for pain relief; consumers should not exceed the recommended dose or take the product for longer than directed," this would effectively reverse the impact of the former news: according to the model, it would increase the conditional probability of investors buying the stock by 6%.

The messages do not have a universal effect, much like fashionable clothes that are attractive on models but often ludicrous on the majority of us. The messages are time- and brand-specific. The same message used by different companies in the same market environment will cause substantially different reactions. A model built in the midst of the Vioxx crisis showed that the message "The manufacturer will continue to work with the FDA to sponsor a major clinical study to further assess this medication" did not affect investors' proclivity to buy Pfizer's stock, while decreasing it by 10% for Merck. The same message in the same market conditions suggested an increase(!) in intended buying of Bayer and Wyeth shares by 6% and 7%, respectively.

The easy and insightful results - what wins and what loses, the interactions between brands and messaging - give the stock analyst and the shareholder a sense of what people say they are likely to do. The vox populi, the feelings about each particular stock "in current time" in a specific situation, can then be compared against the suggestions of analysts to determine where there are opportunities: where the analysts say one thing but the common voice of the crowd suggests something quite different. The same vox populi gives corporations a fair chance to prepare their PR for different crisis situations with a suggested, measured response.

As universal and resilient as it is, Murphy’s Law can’t be evaded, but its effects can be counteracted, neutralized and even utilized for profit with diligent preparation.

_________________________
Alex Gofman is VP of Moskowitz Jacobs Inc., a NY-based company, and a co-author of the book Selling Blue Elephants: How to Make Great Products That People Want Before They Even Know They Want Them (www.SellingBlueElephants.com), written with Dr. Moskowitz and recently republished in India (it is also currently being translated in twelve countries). He may be contacted at alexgofman@sellingblueelephants.com.

My columns at Daily News and Analysis: Customer Research and the Curse of the Rear View Mirror

Daily News and Analysis, November 1, 2007
Customer Research and the Curse of the Rear View Mirror
Alex Gofman

Would you trust a driver to bring you to your destination if he spent 95% of the driving time looking at the rear view mirror? Even if it were the best and most sophisticated mirror in the world, with all possible bells and whistles to detect any obstacles and dangerous places AFTER you have passed them?

A few days ago, I was a guest lecturer at the Wharton Business School (University of Pennsylvania), which many consider to be the best business school in the world. The class was in marketing research, and approximately half of the students were from Asia (mostly Chinese and Indian). Coincidentally, my presentation was built around a hypothetical group of Asian kids successfully creating new-to-the-world products using advanced marketing research tools. Actually, it was not a coincidence. And here is why.

I work in the marketing research field, which, in the West, is a huge and well-funded industry. Annually, billions of dollars are spent on research that theoretically should help corporations sell more products with more profit to more consumers. On the surface it seems to work just fine – a lion's share of the US economy is based on consumer spending. A dirty little secret of the market research industry is that a huge majority of the money spent on research is wasted or never used.

The 'staples' of market research are tracking studies, consumer satisfaction surveys and the like. Tracking studies report what happened in the past. They are quite easy and straightforward (but not necessarily cheap) to conduct. In many corporations, they are a must (like a white shirt and tie). If you want to succeed in MR and be promoted or moved up one day into a high-paying marketing department, you just have to do them! Tracking studies produce very nice-looking pie charts in thick reports and give you a chance to shine during a presentation without being challenged. How could one challenge something that happened in the past? Billions of dollars go into this type of research. People get promoted because of it. And only about 5% (!) of the data is ever used!

On the other hand, if someone at a corporation tries to experiment with getting innovative consumer insights and finds better new products or invents a revolutionary service – that is another story. Expect to be annihilated by others who did not get the idea first! Any future forecast is easy to challenge. Corporate America, Europe and Japan are entrenched in the most 'important' war, the all-consuming task of … saving their jobs. Forget about the social or even corporate interests! We need to save our jobs! The truth is, nobody was ever fired for playing it safe by the approved rules, even if the rules do not, did not and will not produce results. In marketing research, conducting a tracking study or a customer satisfaction survey is a 'safe' and 'prudent' way to climb the corporate ladder. It looks nice on the shelf and on a resume. If someone tries to step out of the box and does something avant-garde that could bring a fortune to the company, this rebel will most likely be humiliated, attacked and even fired for violating the 'order'.

Do not get me wrong – it is very important to know what happened in the past. But much more imperative is what we do in the future, trying to find new or improved products or services that people need and like. This decision cannot be based entirely on past experience, which, as we know, is not a reliable predictor of the future. Of course, looking into the past can help to define the future. But if you spend most of your resources dwelling on bygone events, you cannot move ahead.

In the 20th century, Americans managed to beat Europe economically because of their risk-taking, 'can do' attitude, inventing and achieving that which had never worked before. Some called them crazy, but the naysayers were ignored, and they kept moving ahead. Yet in most cases, this is no longer true. We are not as 'hungry' anymore. The initiative has now shifted to Asia, where young and energetic entrepreneurs are eager to get their piece of the world's riches. They are not afraid to take risks. They are keen to experiment and to try the new, 'risky' methods and tools available to achieve their goals. There are no 'approved' and 'safe' approaches (at least, not very many) to bind them to the past.

Many of the innovative methods that help companies create better products and services faster and in a more targeted manner, such as Rule Developing Experimentation (RDE), discussed in my previous column (October 4, 2007), are embraced faster and more enthusiastically in Asia than in the West, where they were originally invented. Is it that these methods look forward too much and are thus too risky by Western standards for corporate employees?

And while their American counterparts prepare for self-serving corporate politics, Asian students are looking for everything they can find to win their place at the world's table, regardless of how risky it is from the corporate point of view. They will not be afraid to step out of the box and experiment once they enter that world. They will continue to press forward, and only use the rear view mirror occasionally, just to avoid possible accidents and accumulate experience. I can see it in them.

__________________________
Alex Gofman is VP of Moskowitz Jacobs Inc., a NY-based company, and a co-author of the book Selling Blue Elephants: How to Make Great Products That People Want Before They Even Know They Want Them (www.SellingBlueElephants.com), written with Dr. Howard Moskowitz and recently republished in India (it is currently being translated in twelve countries). He may be contacted at alexgofman@sellingblueelephants.com.

My columns at Daily News and Analysis

After Selling Blue Elephants was republished in India, I received an unexpected invitation from the second largest (and fastest growing) business newspaper in India, Daily News and Analysis (www.DNAIndia.com), to write a few columns for their Marketing and Management section.
Here are the copies.


++++++++++++++++++++++++++++++
Daily News and Analysis, October 4, 2007
Consumers know what they want. Or do they?
Alex Gofman

My daughter knows exactly what she wants. In a restaurant, she can order without even looking at the menu. And she always likes her order. I, on the other hand, regret my choice the moment I see someone else's dish delivered. THIS is what I want! Why didn't I order it?!

It is a truism that to succeed in business you need to know what your customers want. In other words, the route to success appears to be simple: just ask your customers about their needs and desires and try to fulfill them. It sounds like a prudent way, but is it?

If you are, for example, in the banking business and want to create a new credit card offer that will send your bottom line off the charts, you could just ask what kind of card people want. Chances are, you will 'find' that they want 0% APR for the rest of their lives, free airline miles for themselves and everybody they know, and a lot of cash back just for having the card.

Not very insightful results. True, people may want all of that, but how actionable is this knowledge? And these 'insights' are produced by the same consumers who relatively easily and realistically choose between real-life offers and trade-offs. Asking them to explain why they like one or another may not help either. It is like asking a high school boy why he fell in love with a girl from his class. He knows he is deeply and madly in love with her, but can he explain what specifically he likes about her? And would other people agree with him?

Asking customers in direct terms what they need and want will not work in most cases. Companies spend fortunes on focus groups, and 80% to 90% of new product launches based on the input from those groups fail.

As Malcolm Gladwell once said, we cannot always explain what we want deep down (actually, he formulated this idea after interviewing my Selling Blue Elephants co-author, the world-renowned experimental psychologist Dr. Howard Moskowitz, but that is another story).

Does this mean that we should eliminate customers from the process of product creation and contest John Wanamaker's famous adage that the customer is always right? No, and another categorical no. Customers might not be able to explain what they want and need, but they will easily choose the winning offer if they see the options. An astute businessman should experiment with his offering, create multiple prototypes (physical or conceptual) and solicit customer feedback (liking, purchase intent, etc.) to find a potential winner. This is a much easier exercise for consumers – they get to choose among different products on the shelves, various websites, offers, etc.

Businesses (some of them intuitively) understood this long ago. Companies like Seiko go through thousands of designs, tested in real stores (such as those in the Akihabara district of Tokyo), before shipping them around the world.

What is missing in many cases is a disciplined approach to this experimentation, afforded by the new paradigm of Rule Developing Experimentation (RDE), co-developed with the Wharton Business School of the University of Pennsylvania (the best business school in the US and arguably in the world) and introduced in the book Selling Blue Elephants: How to Make Great Products That People Want Before They Even Know They Want Them.

RDE is a systematized, solution-oriented business process of experimentation that designs, tests, and modifies alternative ideas, packages, products, or services in a disciplined way so that the developer and marketer discover what appeals to the customer, even if the customer can’t articulate the need, much less the solution!

The scientific details of RDE, which is based on a unique application of experimental designs (conjoint analysis), might be daunting for the casual reader and are well beyond the scope of this column. Until recently, this was the exclusive domain of statisticians and university professors. Fortunately, recent advances in software development and the proliferation of the Internet have allowed the algorithms to be incorporated into simple-to-use Web-based tools that can be deployed by anybody, anywhere in the world, with virtually no knowledge of statistics. The task is quite simple. First, you need to split your potential proposition into parts (buckets of ideas). In the case of the credit card offer, these could be different APRs, Security Guarantees, Rewards Options, Prestige Messages, etc. Second, you enter several options for each of the 'buckets', such as 2.5% cash back for gas purchases; one airline mile for every 100 rupees spent, etc.

An offer may have 3, 4, 5 or more such 'buckets' with several options in each. An RDE tool automatically mixes and matches the ideas according to an experimental design and presents them to customers via Web interviews, asking them to rate how likely they would be to apply for this card, screen by screen (usually between 20 and 50 screens). This task is very simple for the majority of consumers. The tool accumulates the responses and, at the end of the interview, automatically calculates how much each idea individually adds to or detracts from the purchase intent (a regression model).
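As a toy illustration of the mixing-and-matching step only: the sketch below simply assembles screens by picking one option per bucket. A real RDE tool draws the combinations from a balanced experimental design rather than at random, and the bucket contents here are invented for illustration.

```python
import random

# Invented 'buckets' of ideas for a hypothetical credit card offer.
buckets = {
    "APR": ["0% intro APR for 12 months", "9.9% fixed APR"],
    "Rewards": ["2.5% cash back on gas", "One airline mile per 100 rupees"],
    "Security": ["Zero-liability fraud guarantee", "Free SMS alert on charges"],
}

random.seed(1)

def make_screen():
    # Pick one option per bucket; a real RDE tool draws these combinations
    # from a balanced experimental design rather than sampling at random.
    return {name: random.choice(options) for name, options in buckets.items()}

for i in range(3):  # a real interview shows roughly 20-50 such screens
    print(f"Screen {i + 1}: {make_screen()}")
```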

RDE helps businessmen create better products; marketers, to optimize advertising; web designers, to find the most impactful landing pages; political candidates, to fine-tune their platforms and messages; package designers, to synthesize packages that 'fly' off the shelves; investors, to anticipate the reaction of the stock market to potential news; etc.

Many Fortune 500 companies such as HP, Citibank, Unilever, Microsoft and Pepsi-Cola have benefited from using RDE. Their RDE experience can be summarized as follows: if you want to succeed by knowing what your customers want and need, do not ask them directly – show them prototypes experimentally designed according to the RDE rules and let them rate the prototypes (whether it is a new product, an advertisement, a promotion idea, a mixture of ingredients in a soft drink, etc.). The result is the algebra of the consumer mind, with precise knowledge about what works, what does not, and for whom.

The very first use of RDE for credit cards (similar to the example exercise at the beginning of this column), by HSBC bank in Hong Kong, helped the issuer achieve its annual new-customer-acquisition goals in the first two months. Six banks tried to issue affinity cards linked to the football World Cup at the same time. HSBC's use of RDE helped it win while all the other launches failed. Currently, MasterCard and Discover license this technology worldwide.

In another example, wide cross-divisional use of RDE at Hewlett-Packard helped the computer giant create what it called "an always-on intelligence system". The technology company has brought the consumer to the table in every design initiative and marketing decision, in a way and on a scale that was unprecedented for HP. RDE fit in perfectly with HP's new goals, becoming one of the "evidence-promoting" components of its business and, in HP's own words, producing some spectacular results.

It is easy for businesses to work with customers like my daughter – just asking what they want will do. For the huge majority of the rest of us, one has to use more sophisticated approaches like RDE. For that, and for many other applications of RDE – read the book. It's all there.

__________________________
Alex Gofman is VP of Moskowitz Jacobs Inc., a NY-based company, and a co-author of the book Selling Blue Elephants: How to Make Great Products That People Want Before They Even Know They Want Them (www.SellingBlueElephants.com), written with Dr. Howard Moskowitz and recently republished in India (it is currently being translated in twelve countries). He may be contacted at alexgofman@sellingblueelephants.com.

Wednesday, July 4, 2007

Uncover a gold mine of insights about your website visitors

I just finished reading Avinash Kaushik's new book, Web Analytics. Here is my review of this excellent book (you can see it at Amazon.com as well).
AxG
---

Despite the fact that this book gives us an excellent and detailed description of Web Analytics, it is not about the 'clicks' and software features – it is about the people, and about engaging them in making the Web better.

Avinash believes that 10% of the budget should be spent on software and 90% of it on the people and their training. He also suggests a 20-80 rule: 20% of the time should be allocated to presenting data vs. 80% dedicated to unstructured data analysis and thinking. Web analytics is a tool, but it is people who should make the decisions.

Albert Einstein once said “Everything should be made as simple as possible -- but no simpler!” This is exactly what Avinash has managed to do – demystifying the analytics without making it overly simplistic.

Analytics is an incredibly valuable tool available to every webmaster, marketer, analyst or even amateur webpage maker, no matter how big or small their site is. The reaction of many readers could be compared to the astonishment someone experiences when discovering that his house is built of gold. Website operators are sitting on a "gold mine" of available data, frequently without realizing it. It is not just pageviews – there is so much actionable information that can be uncovered using a few simple steps that it will change the Web landscape for years to come.

As a fellow book author, I understand and appreciate how much work Avinash had to put in to make the material accessible and engaging to us, the readers.

Read his blog (www.kaushik.net/avinash/) – an excellent source of information by itself and a continuation of the book. The fact that it has almost equal amounts of content generated by the author and by the readers speaks for itself – Avinash stimulates your thinking and engages your curiosity. Be ready for many "Wow – I did not know I could do that!" and "I want to try it myself!" moments that will surely interrupt the reading of every chapter.

After reading this book, you will agree that Avinash has indeed earned his title of Analytics Evangelist – at the very least, he has substantiated my "conversion" and reaffirmed my "faith" – I can't imagine doing any meaningful analysis of a website now without using analytics.

Friday, June 29, 2007

Wikipedia Entry

I just created LPO / MVLPO page on Wikipedia. Everybody is welcome to contribute:
http://en.wikipedia.org/wiki/Landing_Page_Optimization

axg

Wednesday, June 13, 2007

Landing Page Optimization / Multivariate Landing Page Optimization (modified draft)

I have modified the draft based on Avinash Kaushik's comments and posts in his blog. Please, comment.
Alex

~~~~~~~~~~~~


See also: [Search Engine Optimization], [Social Media Optimization]

Definition of Term


Landing Page Optimization (LPO, also known as Webpage Optimization) is the process of improving a visitor's perception of a website by optimizing its content and appearance in order to make it more appealing to the target audiences, as measured by target goals such as conversion rate.

Multivariate Landing Page Optimization (MVLPO) is Landing Page Optimization based on an experimental design.


Background

A recent study by researchers in Canada showed that the snap decisions Internet users make about the quality of a web page have a lasting impact on their opinions. They also reported that impressions were made in the first 50 milliseconds of viewing[1]. These findings underscore the importance of creating the most appealing landing pages in order to maximize ROI.


In addition to obvious targets such as home pages, other parts of a website may also be affecting the goals such as conversion rate. According to MarketingSherpa data, the average ecommerce shopping cart has a 59.8% abandonment rate[2]. Many website designers do not consider these pages important. A simple improvement to this infrequently changed area (vs. the front page) could bring a substantial improvement to revenue per visitor (RPV) and ROI in general[3].



Description

LPO can be achieved through targeting and experimentation.

There are three major types of LPO based on targeting:

Associative Content Targeting (also called 'rules-based optimization' or 'passive targeting'). Modifies the content with information relevant to the visitors based on the search criteria, source, geo-information of the source traffic, or other known generic parameters that can be used for explicit, non-research-based consumer segmentation.

Predictive Content Targeting (also called ‘active targeting’). Adjusts the content by correlating any known information about the visitors (e.g., prior purchase behavior, personal demographic information, browsing patterns, etc.) to anticipated (desired) future actions based on predictive analytics.

Consumer Directed Targeting (also called ‘social’). The content of the pages could be created using the relevance of publicly available information through a mechanism based on reviews, ratings, tagging, referrals, etc.

There are two major types of LPO based on experimentation:

Close-Ended Experimentation exposes consumers to various executions of landing pages and observes their behavior. At the end of the test, an optimal page is selected that permanently replaces the experimental pages. This page is usually the most efficient one in achieving target goals such as conversion rate, etc. It may be one of tested pages or a synthesized one from individual elements never tested together. The methods include simple A/B-split test, multivariate (conjoint) based, Taguchi, Total Experience testing, etc.

Open-Ended Experimentation is similar to Close-Ended Experimentation with ongoing dynamic adjustment of the page based on continuing experimentation.
This article covers in detail only the approaches based on experimentation. Experimentation-based LPO can be achieved using the following most frequently used methodologies: the A/B split test, Multivariate LPO and Total Experience Testing. The methodologies are applicable to both close-ended and open-ended types of experimentation.

A/B Testing (also called an 'A/B Split Test'): a generic name for testing a limited set (usually 2 or 3) of pre-created executions of a web page without the use of an experimental design. The typical goal is to try, for example, three versions of the home page, product page or support FAQ page and see which version works better. The outcome of A/B Testing is usually measured as click-through to the next page, conversion, etc. The testing can be conducted sequentially or concurrently. In a sequential execution (the easiest to implement), the page executions are placed online one at a time for a specified period. A parallel execution ('split test') divides the traffic between the executions.

Pros of doing A/B Testing:
- Inexpensive, since you will use your existing resources and tools.
- Simple – no heavy statistics involved.
Cons of doing A/B Testing:
- It is difficult to control all the external factors (campaigns, search traffic, press releases, seasonality) in a sequential execution.
- The approach is very limited and cannot give reliable answers for pages that combine multiple elements.
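As a minimal illustration of how the outcome of a parallel split test is typically compared, here is a sketch using a two-proportion z-test; the visitor and conversion counts are hypothetical.

```python
from math import sqrt

# Hypothetical results of a parallel ("split") A/B test.
visitors_a, conversions_a = 5000, 150   # page version A
visitors_b, conversions_b = 5000, 195   # page version B

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Two-proportion z-test for the difference in conversion rates.
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se

print(f"Conversion A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}")
# |z| greater than about 1.96 indicates significance at the usual 5% level.
```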


MVLPO structurally handles a combination of multiple groups of elements (graphics, text, etc.) on the page. Each group comprises multiple executions (options). For example, a landing page may have n different options of the title, m variations of the featured picture, k options of the company logo, etc.

Pros of doing Multivariate Testing:
- The most reliable, science-based approach to understanding the customers' minds and using that knowledge to optimize their experience.
- It has evolved into a quite easy-to-use approach in which not much IT involvement is needed. In many cases, a few lines of JavaScript on the page allow the vendors' remote servers to control the changes, collect the data and analyze the results.
- It provides a foundation for a continuous learning experience.
Cons of doing Multivariate Testing:
- As with any quantitative consumer research, there is a danger of GIGO ('garbage in, garbage out'). You still need a clean pool of ideas that are sourced from known customer points or strategic business objectives.
- With MVLPO, you are usually optimizing one page at a time. Website experiences for most sites are complex, multi-page affairs. For an e-commerce website, the path from entry to a successful purchase is typically around 12 to 18 pages; for a support site, even more.


Total Experience Testing (also called 'Experience Testing') is a new and evolving type of experiment based testing in which the entire site experience of the visitor is examined using technical capabilities of the site platform (e.g., ATG, Blue Martini, etc.) [5].

Instead of actually creating multiple websites, the methodology uses the site platform to create several persistent experiences and monitors which one is preferred by the customers.

Pros of doing Experience Testing:
- The experiments reflect the total customer experience, not just one page at a time.
Cons of doing Experience Testing:
- You need a website platform that supports experience testing (for example, ATG supports this).
- It takes longer than the other two methodologies.


MVLPO can be executed in a Live (production) Environment (e.g., Google Website Optimizer[4], Optimost.com, etc.) or through a Market Research Survey / Simulation (e.g., StyleMap.NET[5]).

In Live Environment MVLPO Execution, a special tool makes dynamic changes to the website, so that visitors are directed to different executions of landing pages created according to an [experimental design]. The system keeps track of the visitors and their behavior (including their conversion rate, time spent on the page, etc.) and, with sufficient data accumulated, estimates the impact of individual components on the target measurement (e.g., conversion rate).

Pros of Live Environment MVLPO Execution:
- This approach is very reliable because it tests the effect of variations as a real-life experience, generally transparent to the visitors.
- It has evolved into a relatively simple and inexpensive-to-execute approach (e.g., Google Website Optimizer).
Cons of Live Environment MVLPO Execution (applicable mostly to the tools prior to Google Website Optimizer):
- High cost.
- The complexity involved in modifying a production-level website.
- The long time it may take to achieve statistically reliable data, caused by variations in the amount of traffic that generates the data necessary for the decision.
- This approach may not be appropriate for low-traffic / high-importance websites whose administrators do not want to lose any potential customers.


Many of these drawbacks are reduced or eliminated with the introduction of the Google Website Optimizer – a free DIY MVLPO tool that has made the process more democratic and available directly to website administrators.

Simulation (survey) based MVLPO is built on advanced market research techniques. In the research phase, the respondents are directed to a survey, which presents them with a set of experimentally designed combinations of the landing page executions. The respondents rate each execution (screen) on a rating question (e.g., purchase intent). At the end of the study, regression model(s) are created (either individual or for the total panel). The outcome relates the presence/absence of the elements in the different landing page executions to the respondents’ ratings and can be used to synthesize new pages as combinations of the top-scored elements optimized for subgroups, segments, with or without interactions.

Pros of the Simulation approach:
- Much faster and easier to prepare and execute (in many cases) compared to live-environment optimization.
- It works for low-traffic websites.
- It usually produces more robust and richer data because of the higher control of the design.
Cons of the Simulation approach:
- The possible bias of a simulated environment as opposed to a live one.
- The necessity to recruit and optionally incentivise the respondents.

The MVLPO paradigm is based on an [experimental design] (e.g., [conjoint analysis], the [Taguchi method], etc.) which tests structured combinations of elements. Some vendors use a full factorial approach (e.g., Google Website Optimizer, which tests all possible combinations of elements). This approach requires very large sample sizes (typically many thousands) to achieve statistical significance. The fractional designs typically used in simulation environments require testing only small subsets of the possible combinations. Some critics of the approach raise the question of possible interactions between the elements of the web pages and the inability of most fractional designs to address the issue.
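A small back-of-the-envelope sketch of why the two design families differ so much in required sample size (the slot counts and the size of the fractional plan below are purely illustrative):

```python
from math import prod

# Purely illustrative page: four slots with these numbers of alternatives.
options_per_slot = [4, 3, 3, 2]

# A full factorial design exposes every possible page variant to live traffic.
full_factorial = prod(options_per_slot)
print(f"Full factorial: {full_factorial} distinct page variants")  # 72

# A fractional design tests only a structured subset of the variants, which is
# why survey-based executions can get by with far smaller samples.
fractional_runs = 16  # illustrative size of a fractional plan, not a rule
print(f"Fractional design: roughly {fractional_runs} combinations")
```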


To resolve these limitations, an advanced simulation method based on the [Rule Developing Experimentation paradigm] ([RDE])[6] has been introduced. [RDE] creates individual models for each respondent, discovers any and all synergies and suppressions between the elements, uncovers attitudinal segmentation, and allows for databasing across tests and over time.



History

The first application of an experimental design to website optimization was done by Moskowitz Jacobs Inc. in the autumn of 1998 in a simulation demo-project for www.Lego.com site (Denmark). MVLPO did not become a mainstream approach until 2003-2004.



Some of the companies currently providing MVLPO in one form or another:

  • Google
  • Offermatica
  • Optimost
  • SiteSpect
  • Mmetrics
  • Widemile.
  • Moskowitz Jacobs Inc. (RDE based).


References


[1] Lindgaard G., Fernandes G. J., Dudek C. & Brown J. Behav. Inf. Technol., 25. 115 - 126 (2006).

[2] Can Multivariate Tests Reduce Your Shopping Cart Abandons? Real-Life Results... MarketingSherpa, October 3, 2006 (https://www.marketingsherpa.com/barrier.html?ident=29725)

[3] Andy Theekson. Rocket Conversion Rates With Multivariate Testing. (www.ezinearticles.com/?Rocket-Conversion-Rates-With-Multivariate-Testing&id=554332)

[4] Google Website Optimizer ( http://services.google.com/training/websiteoptimizeroverview/#slide=1)

[5] Avinash Kaushik. Experimentation and Testing: A Primer.
(www.kaushik.net/avinash/2006/05/experimentation-and-testing-a-primer.html)

[6] Howard Moskowitz and Alex Gofman. Selling Blue Elephants: How to make great products that people want BEFORE they even know they want them. Wharton School Publishing, 2007.


External Links

http://services.google.com/training/websiteoptimizeroverview/#slide=1

http://www.the-dma.org/cgi/dispnewsstand?article=5275

http://www.stylemap.net/

http://www.websiteoptimization.com/speed/tweak/blink/

http://www.optimizeandprophesize.com/

http://ezinearticles.com/?Rocket-Conversion-Rates-With-Multivariate-Testing&id=554332


Categories

Internet advertising and promotion
Internet terminology
Search engine optimization
Internet marketing by method

~~~~~~~~~~~~~
Draft prepared by Alex Gofman

Tuesday, June 12, 2007

Landing Page Optimization
Multivariate Landing Page Optimization


To my surprise, there is no entry for LPO or MVLPO in Wikipedia. Here is a rough draft. Please, comment.
Alex




See also: [Search Engine Optimization], [Social Media Optimization]

Definition of Term
Landing Page Optimization (LPO, also known as Webpage Optimization) is the process of improving a visitor's perception of a website by optimizing its content and appearance in order to make it more appealing to the target audiences, as measured by target goals such as conversion rate.

Multivariate Landing Page Optimization (MVLPO) is Landing Page Optimization based on an experimental design.

Background
A recent study by researchers in Canada showed that the snap decisions Internet users make about the quality of a web page have a lasting impact on their opinions. They also reported that impressions were made in the first 50 milliseconds of viewing[1]. These findings underscore the importance of creating the most appealing landing pages for ROI.

In addition to obvious targets such as home pages, other parts of a website may also be affecting the goals such as conversion rate. According to MarketingSherpa data, the average ecommerce shopping cart has a 59.8% abandonment rate[2]. Many website designers do not consider these pages important. A simple improvement to this infrequently changed area (vs. the front page) could bring a substantial improvement to revenue per visitor (RPV) and ROI in general[3].

Description
In a wide interpretation, there are five major approaches to LPO:


Associative Content Targeting
(also called "rules-based optimization" or “passive targeting”). Provides relevant information to the visitors based on the search criteria, source, geo-information of source traffic or other known generic parameters that can be used for explicit non-research based consumer segmentation.

Predictive Content Targeting
(also called “active targeting”). Correlates any known information about the visitors (e.g., prior purchase behavior, personal demographic information, browsing patterns, etc.) to anticipated (desired) future actions based on predictive analytics.

Consumer Directed Targeting
(also called “social”) allows the consumers to adjust the relevance of publicly available information through a mechanism based on reviews, ratings, tagging, referrals, etc.

Close-Ended Experimentation
exposes consumers to various executions of landing pages and observes their behavior. At the end of the test, an optimal page is selected that permanently replaces the experimental pages. This page is usually the most efficient one in achieving target goals such as conversion rate, etc. It may be one of tested pages or a synthesized one from individual elements never tested together. The methods may include simple A/B-split test, multivariate (conjoint) based, Taguchi, etc.

Open-Ended Experimentation
is similar to Close-Ended Experimentation with ongoing dynamic adjustment of the page based on continuing experimentation.


This article covers in detail only the last two of these approaches; the first three entries are better classified as targeting methods rather than optimization.

Simple LPO involves a series of one or more disconnected A/B tests, each representing a “slot” on a page template where content can be placed. This approach generally is very limited, and cannot give reliable answers for pages that combine multiple elements.

MVLPO handles a combination of multiple groups of elements (graphics, text, etc.) on the page. Each group comprises multiple executions (options). For example, a landing page may have n different options of the title, m variations of the featured picture, k options of the company logo, etc.

MVLPO can be executed in a live (production) environment (e.g., Google Website Optimizer[4], Optimost.com, etc.) or through a market research survey/simulation (e.g., StyleMap.NET[5]).

In Live Environment MVLPO, a special tool makes dynamic changes to the website, so that visitors are directed to different executions of landing pages created according to an experimental design. The system keeps track of the visitors and their behavior (including their conversion rate, time spent on the page, etc.) and, with sufficient data accumulated, estimates the impact of individual components on the target measurement (e.g., conversion rate). This approach is very reliable because it tests the effect of variations as a real-life experience, generally transparent to the visitors. The drawbacks of the approach are the typically high cost, the complexity involved in modifying a production-level website, and the long time it may take to achieve statistically reliable data, caused by variations in the amount of traffic that generates the data necessary for the decision. This approach may not be appropriate for low-traffic / high-importance websites whose administrators do not want to lose any potential customers. Many of these drawbacks are reduced or eliminated with the introduction of the Google Website Optimizer – a free DIY MVLPO tool that has made the process more democratic and available directly to website administrators.

Simulation (survey) based MVLPO is built on advanced market research techniques. In the research phase, the respondents are directed to a survey, which presents them with a set of experimentally designed combinations of the landing page executions. The respondents rate each execution (screen) on a rating question (e.g., purchase intent). At the end of the study, regression model(s) are created (either individual or for the total panel). The outcome relates the presence/absence of the elements in the different landing page executions to the respondents’ ratings and can be used to synthesize new pages as combinations of the top-scored elements optimized for subgroups, segments, with or without interactions. This survey approach using statistically designed combinations turns out to be much faster and easier to prepare and execute in many cases compared to the live environment optimization. It also addresses the issue of low traffic websites. Furthermore, the survey method may produce more robust and rich data because of a higher control of the design. The drawbacks of the approach include the possible bias of a simulated environment as opposed to a live one, and a necessity to recruit and optionally incentivise the respondents.

The MVLPO paradigm is based on an experimental design (e.g., conjoint analysis, the Taguchi method, etc.) that tests structured combinations of elements. Some vendors use a full factorial approach (e.g., Google Website Optimizer, which tests all possible combinations of elements). This approach requires very large sample sizes (typically many thousands) to achieve statistical significance. Fractional designs, typically used in simulation environments, require the testing of only small subsets of the possible combinations. Some critics raise the question of possible interactions between the elements of the web pages and the inability of most fractional designs to address the issue. To resolve these limitations, an advanced simulation method based on the Rule Developing Experimentation (RDE) paradigm [5] has been introduced. RDE creates individual models for each respondent, discovers any and all synergies and suppressions between the elements, uncovers attitudinal segmentation, and allows for databasing across tests and over time.
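
To illustrate what an interaction term looks like in such a model (this is only a generic regression illustration with made-up numbers, not the proprietary RDE procedure), one can add the product of two elements' dummy codes to the design matrix and inspect its coefficient:

    # Generic illustration of an interaction term; data are made up.
    import numpy as np

    a = np.array([0, 1, 0, 1, 0, 1, 0, 1])          # element A present?
    b = np.array([0, 0, 1, 1, 0, 0, 1, 1])          # element B present?
    ratings = np.array([4, 6, 5, 9, 4, 6, 5, 9])    # pages with A and B together score extra

    # Design matrix: intercept, A, B, and the A*B interaction term.
    M = np.column_stack([np.ones(len(a)), a, b, a * b])
    coef, _, _, _ = np.linalg.lstsq(M, ratings, rcond=None)

    base, ca, cb, cab = coef
    print("A: %+.2f  B: %+.2f  A*B: %+.2f" % (ca, cb, cab))
    # A positive A*B coefficient points to synergy between the elements,
    # a negative one to suppression.

In this toy data set, A contributes about 2 points, B about 1, and the pair an extra 2 on top of that, which is the kind of effect a design that never shows certain pairs together would simply miss.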

History
The first application of an experimental design to website optimization was done by Moskowitz Jacobs Inc. in the autumn of 1998, in a simulation demo-project for the www.Lego.com site (Denmark). MVLPO did not become a mainstream approach until 2003-2004.

Some of the companies currently providing MVLPO in one form or another:
Google
Offermatica
Optimost
SiteSpect
Memetrics
Widemile
Moskowitz Jacobs Inc. (RDE based)

References
[1] Lindgaard G., Fernandes G. J., Dudek C. & Brown J. Behav. Inf. Technol., 25, 115-126 (2006).

[2] Can Multivariate Tests Reduce Your Shopping Cart Abandons? Real-Life Results... MarketingSherpa, October 3, 2006
(https://www.marketingsherpa.com/barrier.html?ident=29725)

[3] Andy Theekson. Rocket Conversion Rates With Multivariate Testing.
(www.ezinearticles.com/?Rocket-Conversion-Rates-With-Multivariate-Testing&id=554332)

[4] Google Website Optimizer (http://services.google.com/training/websiteoptimizeroverview/#slide=1)

[5] Howard Moskowitz and Alex Gofman. Selling Blue Elephants: How to make great products that people want BEFORE they even know they want them. Wharton School Publishing, 2007.


External Links
http://services.google.com/training/websiteoptimizeroverview/#slide=1
http://www.the-dma.org/cgi/dispnewsstand?article=5275
http://www.stylemap.net/
http://www.websiteoptimization.com/speed/tweak/blink/
http://www.optimizeandprophesize.com/
http://ezinearticles.com/?Rocket-Conversion-Rates-With-Multivariate-Testing&id=554332



Monday, June 11, 2007

Essay on the history of the first use of Multivariate Landing Page Optimization

Who is it that deserves more credit for an invention? Is it the actual inventor, spending countless nights thinking, drawing, building, and all too frequently failing before shaping the idea? Or is it the astute businessman who notices someone else’s wild idea, sees its potential, and takes a financial risk to reap the rewards of the invention? Or perhaps the merchandiser who turns it into a commodity available to everyone?

We generally do not appreciate those out-of-nowhere troublemakers crying “I was there first”. Whenever news breaks about a patent infringement suit by an unknown company against an industry leader, seeking untold riches in damages, the first thought that comes to mind is “Oh boy, another vulture”.

But try to look at the other side of the story, at the feelings of someone who did in fact invent something but never patented it or even put real effort into bringing it to fruition. He might have thought, “I am sure someone else has done it before; I can’t be the smartest person in the world. If nobody has done it until now, it might not be such a great idea after all” (or any of many other explanations, just to keep the status quo), only to read a few years later about the phenomenal success story of ACME Corporation (or John Doe) that “got that crazy idea” and built the ingenious new gadget nobody would even have thought about a few years before.

But enough said. The story of multivariate landing page optimization (LPO) is one of the smaller “I-did-that-N-years-ago, I-was-there-first!” opportunities. I will not talk about the larger opportunities I’ve missed in my life; you will not believe me anyway.

So, without further ado, let’s rewind the clock to 1998. The Internet craze was spinning out of control, and sign-on bonuses at startups were so ridiculous that they seemed to come from another planet (perhaps because I did not get any?).

Software and hardware had finally reached the level that made creating dynamic web pages easy and displaying them in a sequence fast enough that users did not feel they could have a cup of coffee between screens. After many years of successfully using conjoint analysis (a form of multivariate testing) on desktops around the world, including with graphical variables, we had started working on a Web version of Ideamap, our flagship software. While the software was still in beta, we suddenly got a call from Copenhagen, from one of the longest-standing licensees of Ideamap, Lene Hansen (GfK Denmark). Lego, a client of hers, was trying to improve its website to make it stickier for visitors. And Lene had an idea: could Ideamap be used to answer Lego’s question about research-based website optimization?

Although our Danish colleagues did not yet know precisely what they were looking for to make the website better, they had realized the need to use advanced consumer research to achieve that goal. This was critical thinking and a breakthrough: Lene’s and Lego’s realization that web pages could be treated the same way as printed copy and optimized based on consumer research. The rest was easy.

For us it was just another application of the approach we used for package optimization (dynamic graphical overlays based on an experimental design and a predefined template). It just had to be done online. In a few days I put a demo online (see the picture below), utilizing brand-new Visual Basic functionality for web applications that allowed for systematic variation of the elements of the front page and their presentation in a sequence to respondents for rating.

I arrived in Copenhagen on a cold, gloomy day right after the New Year of 1999. The cultural experience of that city, from the colorful plumes of the nobility and officers visiting the royal palace reception to the infamous Friday nights (a kind of ‘happy hour’ on steroids that extends into early Saturday morning without the restrictions imposed by the lack of a designated driver), deserves a separate story.

The next day Lene and I were at the airport quite early to catch our flight to Billund, the headquarters of Lego. The only formality to get on board was showing your ticket (no ID, metal detector, or X-ray was needed to get to the jet for the 20-minute flight).

Something was telling me (was it my lavish Danish breakfast?) that the 737 was not specifically designed to fly such short distances at such low altitudes. I was very glad to land at what had once been Lego’s corporate airport, later donated to the city.

Two Lego employees met us at the gate and whisked us in a small Opel to the sprawling campus nearby. We were a few minutes late and the meeting was already running. Nobody seemed to notice, or at least acknowledge, our quiet entrance through the side door, and the meeting continued without a hiccup. The only change was that the presenter switched to flawless English halfway through a sentence, and the remainder of the meeting was as if I had never left New York (except for my heavy accent, which was the only noticeable one in the room).

It was the first (at least, as far as we know now) case of using a conjoint analysis approach for LPO. My searches and interviews with industry leaders have not yet yielded any other contenders for the title.

Unfortunately, we never really capitalized on that early experience. We were swamped with a multitude of ideas waiting to be implemented: multivariate video ad optimization, an ‘innovation machine’, establishing the new science of Mind Genomics, applying our approach to presidential elections, public policy, stock markets, and so on. And so LPO remained the virtually exclusive domain of web designers and webmasters.

A few years later several startups jumped on the idea, but it still lingered as a novelty until recently, when Google made the approach mainstream by entering the field with its Website Optimizer.

So, who in fact deserves the credit for multivariate LPO? Is it the first inventors, who did it mostly unbeknownst to the world? Or is it those startups that made it available to the public, albeit on a very limited basis? I vote for Google: they made it readily available to everybody!


Good job, guys!

Alex Gofman


Figure 1. Two sample screens from the demo project with Lego.

Welcome to my blog!

Although I am not a complete novice in writing, blogging is new to me. For a long time it reminded me of digital photography: a temptation to just keep clicking instead of carefully crafting every shot. But the quality and depth of the blogs I have read lately changed my mind and finally convinced me to jump on the wagon. I hope my potential readers will not regret it.
Alex