Journal of Advertising Research, 36:4 (July/August 1996), 55-64

Effectiveness, Objectives and the EFFIE Awards

The EFFIE awards* began in 1969 as a small, prestigious award program within the advertising research community, sponsored by the New York Chapter of the American Marketing Association. In contrast to creative award shows like the One Show, the Clios, and the Art Directors' Club Awards, which are often characterized as beauty shows, the EFFIE competition focuses on the effectiveness of the advertising, not just the creativity of the copy or the art. As the EFFIES Call for Entries explains, "To win an EFFIE, a brand requires a mix of marketing, media, research and creative, of objective and strategy, of client and agency."

Although some in the industry believe that the EFFIE awards minimize the creative dimension, defenders point out that it is the only award that judges the results of the overall advertising process--the marketing strategy, the media mix, the research, as well as the creative ideas. (Author, 1993) Rather than recognizing creative stars, as so many advertising award programs do, the EFFIE Awards recognize agency-client teams and the complex interrelationships--across a variety of media and complex message strategies--that make marketing communication effective in building brands.

Most importantly, the show attempts to evaluate the effectiveness of the advertising as measured against its own objectives. The Brief of Effectiveness submitted with the entry is a synopsis case history of a campaign. It must specify the product's category, describe the campaign objectives and give specific goals for the campaign, state the target audience definition and rationale, explain and justify the creative strategy, describe other communication programs and activities implemented in conjunction with the campaign, identify the media used and explain the media strategy, and finally give evidence of the results. The evidence must relate directly to the campaign objectives. Judging involves an assessment of the strategy, the achievement of the objectives, and the creativity of the campaign.

*EFFIE is a short form that stands for Effectiveness

Unfortunately, the evaluation of effectiveness is not easy given that the industry has not been able to fully articulate the dimensions of effectiveness. (Flandin, 1992) Wright-Isak, Faber, and Horner (1994) made a significant contribution in a Consumer Psychology Conference paper when they defined effectiveness as:

While effects are clearly concerned with an individual level of measurement aggregated across many consumers, we might best conceive of effectiveness as a societal level concept concerned with how communities of people (based on occupation or organizational affiliation) develop shared meanings regarding what outcome measures and what forms of evidence make a convincing argument that advertising returned a sufficient value for its investment.

They conclude, however, that "this would imply that there is no single objective criterion for determining the 'effectiveness' of an advertising campaign." For that reason case writers who are preparing submissions typically provide a variety of indicators of effectiveness by which the advertising's objectives can be evaluated.

The objective of this study was to deconstruct a set of EFFIE winning entries to identify patterns that might be characteristic of winners. In particular, it looked at the schema of objectives used by these award winning entries to identify the type of impact being emphasized by the winners in order to help the industry and the academy better understand both the logic of the objectives that underlie effective marketing communication and the evidence needed to document effectiveness.

Objectives and Effects

One of the most important steps in developing a marketing communication strategy is the establishment of campaign objectives. Ideally, both objectives and measurements of effectiveness would be built on some grand theory of how advertising works. Unfortunately, advertising is an industry that is still searching for its theoretical roots. Syracuse professor and former Ogilvy executive John Philip Jones observed at an ESOMAR (European Society for Opinion and Marketing Research) seminar in Amsterdam in 1991 that, "Our knowledge of advertising, in particular our knowledge of how it actually works, is imperfect." Simon Broadbent, Vice Chairman of Leo Burnett, London, and one of the founders of the U.K.'s Institute of Practitioners in Advertising (IPA) awards, voiced similar concerns at the same seminar (1991):

Simple, practical assumptions or theories govern most of our decisions in designing, testing and evaluating our marketing activities. But these assumptions are themselves rarely tested and their limitations are almost never spelled out.... if you ask the researchers and the analysts, the media planners, or the hard-headed businessmen, to justify their activities, you get confused and contradictory answers.

Setting objectives, the focus of this paper, is the first step in planning an advertising campaign; however, even such a basic practice is not commonly understood. In most discussions of strategic planning in advertising, experts recommend that objectives be specific and measurable and that the evaluation of the campaign be based on its proven success in meeting these objectives. (Schultz and Barnes, 1995; Heibing, 1990; Murphy and Cunningham, 1993) This is the model on which the EFFIES award program has been built.

But how do planners proceed to establish such objectives? One approach comes from the hierarchy-of-effects literature, which identifies the presumed effects of advertising in terms of a ladder of consumer response and was reviewed in an article by Thomas Barry (1987). A common formula, the AIDA model, hypothesizes that attention, interest, desire, and action are the most important responses consumers might make to advertising, with attention being the initial response and action being the last. Another approach, developed by Lavidge and Steiner (1961), identified advertising impact first in terms of cognitive, affective, and conative categories of response. Embedded within those categories is a hierarchy that includes awareness, knowledge, liking, preference, conviction, intention, and purchase.

A refinement of Lavidge and Steiner's schema, which has been called the "think, feel, do" model, was developed by Rossiter and Percy (1980), who added the concepts of low and high involvement. These and other hierarchies provide a structure on which a set of objectives can be based because they presume that consumers follow some sort of logical process in building product images and making purchase decisions. Further work on the philosophy and practice of copytesting and how it relates to objectives is available in the Rossiter, Percy and Donovan article (1991) and the Journal of Advertising Research Special Issue on Copy Testing (1994).

Moriarty (1983) moved away from the idea of hierarchy in order to accommodate other models of consumer information processing that are not hierarchical. Her approach posited three domains of effects. The idea is that impact may be occurring in different areas simultaneously. The three domains in this model are 1) perception with its subcategories of brand/product awareness, interest and memory; 2) learning with its subcategories of awareness of and communication of copy points; and 3) persuasion, with its subcategories of attitude and behavior change.

Another way to structure a set of objectives is to base them on the copytesting measures used to evaluate advertising effectiveness. The three most common types of copytesting measures traditionally specified by advertising managers are persuasion, communication, and recall. The presumption might be that these measures represent some theory of what makes advertising effective, so one might expect to see arguments in the EFFIES briefs supporting these dimensions.

Behavior and Causal Links The difficult decision in strategic planning for advertising is whether to state a behavioral response as a measurement of advertising effectiveness and, in particular, whether to tie advertising to sales impact. Schultz and Barnes (1995) separate advertising objectives into communication and behavioral responses, implying that purchase behaviors are separate from or different from communication responses like awareness and attitude change.

Heibing and Cooper suggest that advertising objectives should focus primarily on awareness and attitude goals (1990). Murphy and Cunningham (1993) specifically state that it is not appropriate to link advertising with sales impact because 1) other marketing factors impinge on sales and 2) advertising's impact is long-term.

This issue has been raised by the London-based Institute of Practitioners in Advertising (IPA) award show. The IPA judges look for effectiveness that is clearly anchored in proven marketplace impact. The IPA show demands that briefs establish a causal link between the advertising and the desired response. As Simon Broadbent, one of the show's founders, explained in the introduction to the 1992 IPA Awards book, "It is not enough to say, 'We did this and sales or consumer measures did that.'" In other words, IPA applicants have to be able to prove that the sales went up because of the advertising, not just establish some kind of simple association. Channon explained that requirement further in his analysis of the 1986 IPA awards: "Very often in the case of unsuccessful entries, the judges' problem is not one of disbelief, but of accepting that the presumption of effectiveness has been properly supported and demonstrated." (1987) Rupert Brendon, chairman of Canada's recently created Cassies award program, has said that the Cassies were modeled on the IPA awards because the demands for proof are much tougher than those for the EFFIES. (1993)

Wansink and Ray (1992) deconstructed brand-related behaviors, investigated the impact of advertising not just on purchase and purchase intention but also on consumption intention, and found that objectives should vary depending upon the level of brand loyalty.

Research Questions One way to better understand the logical structure of objectives would be to compile and analyze the objectives used in the effectiveness briefs presented by EFFIES winners. If these campaigns have been determined to be successful, then the type of objectives used in their strategic planning may also be "winners" in that they illustrate officially recognized dimensions of impact. So the first research question for this study is to determine what types of performance are stated in the objectives and, furthermore, whether the objectives as stated are measurable. In addition, the measures used to track the accomplishment of the objectives could be considered as chronicling recognized techniques for establishing effectiveness. So the second research question asks if the objectives are linked to performance measures and, if so, what types of measures are being used as indicators of effectiveness.

A set of award winning EFFIE briefs should also provide some insight into the question of whether or not to evaluate advertising effectiveness on the behavioral or sales dimension, as well as the relative importance of more traditional communication effects such as awareness and attitude impact. Furthermore, it should be possible to determine from the briefs whether the EFFIES case writers are attempting to prove that marketing communication actually caused marketplace impact. The third research question, then, focuses on the use of behavioral objectives, particularly sales, and, more specifically, whether the case writers are attempting to prove that the behavior was caused by the advertising.

Finally, the measures were analyzed in terms of the source of the documentation for the effectiveness evidence. The question was whether the documentation was provided through internal studies or data collection systems or by external companies such as Nielsen or trade association tracking studies--or whether any source of documentation was stated at all.

This Study

This study sought to determine both what makes a good EFFIE effectiveness brief and what we can learn from these briefs about the construction of objectives and the evidence needed to prove effectiveness.

The EFFIES entries are confidential, and the sponsor, the New York Chapter of the AMA, has a policy of not releasing the briefs for research or any other use. The briefs are available only from the marketers who submitted them. This set was collected by writing to the 1993 winners, explaining that the material would be used for educational purposes, and asking if they would send copies of their entries. This research attempts to protect the confidentiality of the briefs by not referring to any companies specifically. Of the 43 companies contacted, a total of 29 responded and provided copies of their entries.

The Content Analysis This study is essentially a content analysis, although it is closer methodologically to interpretive analysis than to quantitative analysis. The statements in the briefs were deconstructed using a code sheet which tracked objectives in one column, evidence in the second, and the source of the evidence in the third. Objectives were matched with evidence on the same line where these relationships were apparent in the brief in order to analyze the link between objective and evidence. In some cases the measures were not linked to any specific objectives; in those cases, the objectives occupied their own lines and the measures occupied a separate set of lines, because nothing in the context of the presentation linked an objective with a measure or a measure with an objective.

The stated objectives for each brief were then coded as either "predominately" or "exclusively" oriented towards marketing (such as sales or share) or communication (such as awareness, understanding, or change in preference). The difference between predominately and exclusively was determined by the number of objectives--if all were marketing oriented, they were "exclusively" focused; if most were marketing oriented, they were assigned to the "predominately" category. In a few cases the objectives were equally divided between the two, and that was also noted.
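The coding rule just described amounts to a simple decision procedure, which can be sketched as follows. This fragment is purely illustrative--the study itself used a paper code sheet, not software--and the function name and category labels are this sketch's own:

```python
# Hypothetical sketch of the coding rule described above. The labels
# "marketing" and "communication" stand for an objective's orientation
# as judged by the coder; they are illustrative, not the study's codes.

def classify_focus(objectives):
    """Classify a brief's set of coded objectives as exclusively or
    predominately marketing or communication oriented, or evenly split."""
    marketing = sum(1 for o in objectives if o == "marketing")
    communication = sum(1 for o in objectives if o == "communication")
    if communication == 0:
        return "exclusively marketing"
    if marketing == 0:
        return "exclusively communication"
    if marketing > communication:
        return "predominately marketing"
    if communication > marketing:
        return "predominately communication"
    return "evenly divided"

# Example: a brief with two share objectives and one awareness objective
print(classify_focus(["marketing", "marketing", "communication"]))
# predominately marketing
```

A brief stating, say, only awareness and preference objectives would fall in the "exclusively communication" category under this rule.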

The marketing objectives were further categorized using categories that emerged from the content; these subcategories are: sales, share, competition, distribution, selling, penetration, and cost. The communication objectives were categorized using Moriarty's domains of effects model (1983) as perception (awareness of product, interest, memory), learning (awareness of information, communication of copy points), or persuasion (attitude, behavior). Finally, the documentation of the evidence was noted by recording the source of the measure and whether it was in-house or external to the company.


The focus of the study was on categorizing the objectives stated in the EFFIES briefs against hierarchy-of-effects models and connecting information about objectives with information about measures. As part of the content analysis, frequencies of occurrence also were tabulated for different types of objectives, measures, and documentation sources. This quantitative information was used to support the analysis of the objective/measure relationship and the quality of the documentation, rather than to determine statistical significance.
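The frequency tabulation described above can be illustrated with a short sketch. The coded objectives listed here are invented examples for demonstration, not data drawn from the briefs:

```python
# Illustrative tally of coded objective categories, in the spirit of the
# frequency counts reported in the findings; the data are invented.
from collections import Counter

coded_objectives = [
    "sales", "share", "awareness", "sales", "attitude",
    "copy-point communication", "sales", "share",
]

counts = Counter(coded_objectives)
total = len(coded_objectives)
# Report each category with its count and share of all objectives
for category, n in counts.most_common():
    print(f"{category}: n={n} ({100 * n / total:.0f}%)")
```

Counts and percentages of this kind, computed over all 167 objectives, are what underlie the tables reported below.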

Measurable Objectives The first research question asked whether the objectives were focused on marketing or communication results and what types of impact they represented. Surprisingly, it was found in this set of entries that only 29 of the 167 objectives (17%) were clearly measurable as stated, 66 objectives (40%) did not state a goal, and another 64 (38%) could best be described as unclear. In other words, if this set is typical of winners in general, then it does not seem to be necessary to state measurable objectives in order to be an EFFIES winner. Examples of measurable and non-measurable objectives are as follows:

Measurable:


- Generate a minimum of 350 leads from operators who, via exposure to the advertising, would show interest in becoming authorized service providers (note: no baseline stated)

- Increase unaided awareness from 21% to 25%-- a 20% increase (note: baseline is stated in this objective)

Not Measurable:

- Increase market share for adult and infant blankets

- Remind moderate and heavy users why they like the taste

- Lift their attitudes on these measures

- Drive up their intent to purchase

- Differentiate the product from competition through recall of unique brand performance and emotional benefits

Looking specifically at each of the entries and categorizing the brief itself in terms of its general focus, this study found that 5 of the 29 cases (18%) were focused exclusively on achieving marketing objectives and another 2 (7%) were predominately focused on marketing effects, for a total of 25% focused on marketing activities. There were 10 cases (36%) that were exclusively focused on communication objectives and another 4 (14%) that were predominately communication oriented, for a total of 50% that were communication focused. The objectives in another 7 cases (25%) were evenly divided between marketing and communication. In other words, 50% of the cases were communication focused, 25% were marketing focused, and another 25% were split between the two. It is important to note, however, that five cases (18%) stated only marketing objectives and made no effort to establish any communication objectives for the message, which is a surprising finding for an advertising award program.

This study also found a total of 167 individual objectives stated in the 29 briefs. Of that number, 96 (58%) were communication oriented and 71 (43%) were marketing oriented.

In terms of the marketing objectives, the most frequently stated objective was sales (n=30, 18%) followed by share (n=22, 13%). The distribution of the marketing objectives as a percentage of the total 167 objectives is given in Table 1.

[Table 1 goes near here]

The communication objectives were first divided into the three general categories outlined in the Moriarty domains model. In terms of these general categories, persuasion, which includes attitude and behavior, clearly dominated the logic of these objectives at 33% (n=55); learning objectives, which deal with communication and comprehension of information, were second at 15% (n=25); and perception, which includes awareness, memory and interest, was third at 9% (n=15). Table 2 lists these breakdowns and gives examples of the types of objectives included in each.

[Table 2 goes near here]

Behavior change is probably the most difficult objective to accomplish, so it is interesting to find that many of these winners are focusing on that level of impact. The communication and recall measures, which are two of the most commonly emphasized objectives measured by traditional copytesting programs, are fourth and sixth out of this list of seven, which suggests that the EFFIES briefs may be moving beyond the dimensions of advertising typically tracked by copytesting services.

Another potential problem in the construction of an objective is the identification of a baseline. In other words, if a company says only that sales increased 10%, then there is not enough information to understand the scale of the increase. Properly stated, the objective would say that sales increased 10%, from a baseline of $200 million to $220 million. This study found that only three of the 29 case studies included a baseline in the objectives. For example, one of the cases called for "an increase of 7 share points which would be more than triple the national share increase of 2.08 points," thus providing a useful benchmark against which the change can be evaluated. Another objective was "to generate a 20% increase in phone calls over an 8-week period," which had a benchmark of 152 calls per day. However, another 13 case studies did in fact have baseline data, but it was introduced later in the case in the discussion of the evidence. In terms of the 144 evidence statements, this study found that 36 of them (25%) stated a benchmark, 32 (22%) assumed or implied a benchmark, and 76 (53%) did not refer to a benchmark at all.

In other words, benchmark data is generally not stated in the objectives. However, in some cases the baseline data is available; it is just stated at the end as part of the evidence. Clearly the idea of quantifying objectives with either numbers or baselines has not yet caught on with most of the EFFIES competitors.
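The arithmetic point about baselines can be made concrete with a small sketch, using the hypothetical $200 million figure from the discussion above:

```python
# A minimal illustration of why a baseline matters when stating an
# objective; the figures are the hypothetical ones used in the text.

def with_baseline(before, after):
    """Express a change against its baseline so the scale is clear."""
    pct = 100 * (after - before) / before
    return f"{pct:.0f}% increase, from {before} to {after}"

# "Sales increased 10%" alone is ambiguous; stated against a baseline
# of $200 million (in $ millions here), it becomes verifiable:
print(with_baseline(200, 220))  # 10% increase, from 200 to 220
```

Without the `before` figure, the percentage cannot be computed or checked at all, which is exactly the gap in the briefs that state only a percentage change.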

Measures The second research question asked to what extent the measures used to document the claim of effectiveness were clearly linked to the objectives. In other words, do these briefs follow logically from objective to proof? This study found that 145 of the 167 objectives (87%) were clearly linked to evidence; another 14 of the objectives (8%) were clearly not supported; and with 8 (5%) the relationship between objective and evidence was unclear. On the other hand, there were 51 pieces of evidence that seemed to be unrelated to any of the objectives. They documented effectiveness that apparently was not part of the campaign plan, or at least was not part of the stated objectives for the campaign.

Clearly the campaign managers are doing a good job documenting the achievement of their objectives; however, some are also missing a bet in that evidence exists in their cases that could further substantiate the effectiveness of their campaigns. This suggests that, in some instances, the campaign strategy was not as well thought out as it might have been. In developing advertising strategy, planners might want to work forward from the objectives and backwards from the measures and anticipated documentation in order to arrive at a set of objectives comprehensive enough to anticipate effects that would otherwise be unexpected, whether positive, as in these EFFIES cases, or negative.

What was observed infrequently in the briefs was an attempt to prove that the impact, particularly in the area of sales and share increases, was caused by the marketing communication. Only three of the 29 briefs attempted to establish a causal link specifically to the communication. For example, one brief mentioned that the campaign produced 31% more responses than a similar campaign several months earlier that differed only in its creative message. Another example reported that buyers switched brands specifically for a benefit that was mentioned only in the mailing. A third case was able to document that 73% of the inquiries were directly attributable to the advertisement. In most cases, extensive documentation was provided about sales growth and share increase, but no effort was made to document that the marketing communication program was the driving force behind these marketing accomplishments. As we know, an increase in sales could just as easily be driven by a change in pricing or distribution strategy, rather than the advertising.

To see how such documentation should be presented, review the winners showcased in the IPA's Advertising Works volumes. Extensive analysis of sales data relative to advertising effects such as claim recall is presented with tables, statistics, and external documentation. These are formal research studies that have been carefully designed to track, test, and monitor effects over time. In fact, recently a separate section has been added to the IPA categories to reward advertising that contributes to long and deep effects, such as brand building, as well as the more short-term sales results.

Documentation The fourth research question sought to analyze how the proof was presented. First, this study determined whether any source was given for the evidence; then it looked at whether that source was external to the company. Of the 29 cases, only 14 (48%) identified a source for the documentation. In other words, more than half of the briefs simply said that some effect happened, but no source was given for this statement from either internal tracking studies or external research companies. In those 14 cases that did state a source, a total of 17 sources were used, and 10 of them were outside sources external to the company, presumably providing the most objective documentation. The other seven were internal corporate tracking studies. The external sources mentioned in this set of briefs included Gallup, Maxwell reports, Opinion Research Corp., DRI, trade association tracking studies, Nielsen, IRI INFOSCAN, R. L. Polk, and Millward Brown Inc.

More importantly, however, 15 of the cases (52%) did not identify any source for the documentation of effectiveness. Critics of the EFFIES claim that a company can just "make up" its evidence, and this finding suggests that may be possible, since the majority of these winners did not provide a source for the evidence, whether internal or external.

A wide variety of types of evidence was mentioned in the 144 documentation efforts described in the briefs. Sales figures were highest with 23 mentions, followed by three other types of evidence--market share, inquiries/requests, and awareness--at 15 mentions each. Dominance ("achieved number one") was next with 11 mentions, followed by purchase at 9. A number of evidence categories grouped at five and six mentions, including intention/preference, change in image, communication of copy points, and usage. At the bottom of the list, with one to three mentions, are reduced costs, sales leads, distribution, penetration, recall, trial, likability, preference, conviction, profits, interest, repeat purchase, demonstrations, visits, and involvement. It should also be clear from the breadth of this list that the evidence of effectiveness offered by these winners is far broader than the limited types of information usually evaluated through copytesting.

Two of the most frequently mentioned categories of documentation--sales and market share--were marketing oriented, and the other two most frequently mentioned evidence categories--inquiries/requests and awareness--were communication measures. It seems clear that bottom-line evaluation, as well as communication effectiveness, is important for EFFIES winners. Furthermore, with the exception of the awareness measure, evidence of behavioral impact appears to be more important, at least in terms of frequency of use, than attitude change or knowledge level.

Conclusions and Discussion

This study focuses on how objectives and their evaluation are used in preparing winning EFFIES briefs. It also helps develop a better understanding of the logical schema of marketing communication objectives used by competitors in the EFFIES award show, and it provides insight into the most important types of evidence used in documenting advertising effectiveness.

The most important finding is that only 17 percent of the objectives stated were measurable, which means that most of the EFFIES winners in this group of briefs are not working towards provable effectiveness. Whether that is because they are not stating their objectives in measurable terms, or because they simply are not linking objectives and evaluation in their campaign planning, is unclear. It may very well represent a lack of clarity in the process by which advertising strategy is translated into creative plans. Regardless, if an agency hopes to make an argument for accountability to its client, as well as to EFFIE judges, its campaign planners must begin with measurable objectives and conclude with a plan to evaluate those objectives accordingly; this clearly is not being done in the majority of these winning EFFIE briefs. This is the area that most differs from the IPA awards, where measurement and objective documentation are requirements for entry, and one area that the EFFIES board might consider tightening up in its call for submissions.

To make the argument for the effectiveness of an advertising or marketing communication program according to these findings, a manager should include marketing as well as communication objectives and specify persuasive impact, both behavior and attitude change, as objectives for the communication message. In constructing evidence, bottom-line response in terms of share and sales is important but so are behavioral responses such as inquiries and requests, as well as the more usual communication response of awareness. Once again, the EFFIES submission requirements might be strengthened by spelling out these different types of objectives, giving examples of them, and requiring that these categories of impact be addressed in the brief.

To develop better EFFIES briefs, competitors should also include baseline data in order to be able to confidently assert that there has been a significant change. This could be another submission requirement. External documentation is probably not something that can be required, however, it might be suggested in a more forceful manner in the submission requirements to call attention to the importance of objectively documenting the claimed results.

Speaking as someone who has been an EFFIES judge, I believe it would be helpful if more emphasis were placed in the briefs on carefully linking the evidence to the objectives it supports. Most importantly, the briefs should attempt to better link the marketing response (sales, share) with the communication program. In order to prove that the communication caused the marketing response, it will be necessary to develop the communication objectives and evidence in more depth and detail than is currently being presented in these briefs. As Simon Broadbent, one of the founders of the IPA awards, explained in an interview with this researcher about the early days of the London program, simply saying "we advertised and the sales went up" does not make the argument that the advertising caused the increase in sales. He had to teach the industry that it takes a carefully designed research program to actually prove cause and effect. Advertising researchers know how to design these studies, but such studies do not seem to be a part of the initial campaign planning, at least as represented by these briefs.

All of these suggestions can be handled more deliberately in the submission requirements by calling attention to their importance, stating that they are basic requirements, and giving examples of how to do it. The fact of the matter is that there is a tremendous educational mission that needs to be undertaken by the EFFIES program in order to further develop the credibility of the award program, as well as the accountability of the industry. The two objectives stated for the IPA program (1994) illustrate such intentions:

1. To generate a collection of case histories which demonstrate that, properly used, advertising can make a measurable contribution to business success.

2. To encourage advertising agencies (and their clients) to develop ever-improving standards of advertising evaluation (and, in the process, a better understanding of the ways in which advertising works).

As the IPA program found when it first started, entrants need help developing their briefs. They need examples, training, and a reason to believe that their submission will not be acceptable unless these minimal standards are met. In interviews with others who have been EFFIE judges as well as entrants, this researcher has been told numerous times that it is easy to "fake" a brief. In order for the EFFIES program to build its credibility, it needs to tighten up its requirements in the area of documentation and convince the advertising and marketing industry that it means business in the area of proving effectiveness claims. Not only would that contribute to the credibility of the award program, it would also have an immeasurable impact on the improvement of the quality standards of U.S. advertising.

Judges also need help in deconstructing the briefs to identify these basic requirements, particularly in the preliminary judging. This Phase I judging seeks to determine which entries should qualify as finalists, but it operates largely unstructured, with judges focusing on whatever catches their attention. It is at that level that submissions that do not meet the following criteria should be eliminated:

- measurable objectives stated

- an evaluation plan that measures the stated objectives

- a range of objectives that includes both communication and marketing effects

- baseline data against which change can be accurately evaluated

- a clear and provable link between the communication and the impact

- formal, objective, and hopefully external documentation of the claimed effects

This can be handled by developing a checkoff sheet that helps the judges frame their evaluations of the briefs. As someone who has been a Phase I judge, I can attest that the EFFIES program cannot depend upon judges looking at these complicated briefs and dependably sorting out these dimensions without some guidelines. Obviously these same criteria also should be communicated in the call for entries.

Limitations of the Study

The biggest problem with this study is that it used only 29 briefs from one year of the EFFIES award program. Clearly the insights gained from this analysis could be strengthened by more cases from more years. If a collection of briefs could be compiled over time, it might even be possible to track change and evaluate improvement in the construction of the briefs, as well as in the logic of the presentations. Unfortunately, it is very difficult and time consuming to get access to these briefs because they are not available from the EFFIES award program.

It would also be interesting to compare the winners of the EFFIES over the years with the winners of the more creative contests to see if there is overlap. Broadbent (1988) claims that over time more of the creative winners have also been winners of the IPA awards as their account executives have learned how to better document effectiveness of creative efforts.

It would also be interesting to determine whether the EFFIES really do discriminate against creative work--that is, whether the judging is so heavily based on effectiveness measures that they outweigh the creative evaluation. The sponsoring association says no, but without insight into the details of the scoring, which would let us analyze the performance of high-scoring creative work, there is no way to know whether that is true. The problem here, of course, is that the association maintains a posture of confidentiality, so the data is unavailable to researchers, as well as to the professional community.

The association's posture on research, as well as its insistence on the proprietary nature of the briefs, also has the unfortunate consequence of discouraging brief writers from learning more from the experience. In contrast to the IPA Awards, where winning submissions are published in a series called Advertising Works, the details of the EFFIES briefs are not available for study. The EFFIES award program could make a significant contribution to the improvement of the field of advertising, but unfortunately, because of its restrictive nature, there is little opportunity for the industry and the case writers to learn from each other's experiences.

One final suggestion, then, is that the EFFIES board insist in the submission instructions that entrants agree to publish their briefs if they win. The IPA has been publishing its winning briefs since the early 1980s and there has not been a problem with confidentiality. From a practical standpoint, by the time the briefs are written, judged, and reprinted, so much time has gone by that the data and the marketing situation are rarely the same. In the most recent volume of Advertising Works (1994), the editor, Chris Baker, announced that all previous entries to the awards--almost 600 in total--are now available via the newly established IPA Advertising Effectiveness Data Bank. The same practice should be adopted for the EFFIES winners.

The EFFIES board continues to operate with the notion that confidentiality is its most important responsibility, but it is unclear whether the board has ever asked entrants if they would refuse to participate if the winning briefs were published. When this researcher contacted winners to ask permission to use their briefs in academic work, many indicated they were delighted and flattered to be asked to showcase their work. That is the same response the IPA awards found after the first several volumes had been published. What initially was thought to be an obstacle to publishing turned out not to be a problem. It is now a requirement for entry.

In the EFFIES case, the board might be pleased to find that public disclosure helps motivate agencies to improve the rigor and accuracy of their documentation. Obviously clients and prospective clients, as well as competitors, will be reviewing the cases, so there is more incentive to better document the effectiveness of the work. And improving the standards of effectiveness is, after all, what the EFFIES are all about.


Author. (1993). Interviews personally conducted by the author in New York in 1993 with 15 advertising executives who have served as award show judges.

Baker, Chris. Advertising Works 8 (London: NTC Publications Ltd, 1994).

Barry, Thomas E. (1987). "The Development of the Hierarchy of Effects: An Historical Perspective," Current Issues and Research in Advertising. 1 & 2: 251-295.

Brendon, Rupert. (1993). Personal conversation, AAA Conference, Montreal.

Broadbent, Simon. (1988). Personal Interview, London, March.

Broadbent, Simon. (1991). "How Advertising Works: Introduction." How Advertising Works and How Promotions Work. Proceedings of an ESOMAR seminar, Amsterdam, April: 1-11.

Broadbent, Simon. (1983). Advertising Works 2. (London: Holt, Rinehart and Winston), viii.

Channon, Charles. (1987). Advertising Works 4. (London: Cassell), viii.

Flandin, M.P., E. Martin and L. P. Simkin. (1992). "Advertising Effectiveness Research: A Survey of Agencies, Clients and Conflicts." International Journal of Advertising. 11: 203-214.

Hiebing, Roman G. and Scott W. Cooper. (1990). The Successful Marketing Plan. (Lincolnwood, IL: NTC Business Books): 184-185.

Jones, John Philip. (1991). "Over-Promise and Under-Delivery." How Advertising Works and How Promotions Work. Proceedings of an ESOMAR seminar, Amsterdam, April: 1-11.

Journal of Advertising Research. (1994). Special Issue on Copy Testing, 3 (May/June): 19-32.

Lavidge, Robert J. and Gary A. Steiner. (1961). "A Model for Predictive Measurements of Advertising Effectiveness." Journal of Marketing 25 (Oct.): 59-62.

Moriarty, Sandra E. (1983). "Beyond the Hierarchy of Effects: A Conceptual Framework." Current Issues & Research in Advertising. 1&2: 45-55.

Murphy, John H. and Isabella C. M. Cunningham. (1993). Advertising and Marketing Communication Management. (Fort Worth: The Dryden Press): 86-94.

Rossiter, John R. and Geoff Eagleson. (1994) "Conclusions From the ARF's Copy Research Validity Project." Journal of Advertising Research, Special Issue on Copy Testing 3 (May/June): 19-32.

Rossiter, John R., Larry Percy, and Robert J. Donovan. (1991). "A Better Advertising Planning Grid." Journal of Advertising Research 5 (October/November): 11-22.

Schultz, Don E. and Beth E. Barnes. (1995). Strategic Advertising Campaigns, 4th ed. (Lincolnwood, IL: NTC Business Books): 86-87.

Wansink, Brian and Michael Ray. (1992). "Estimating an Advertisement's Impact on One's Consumption of a Brand." Journal of Advertising Research (May/June): 9-16.

Wright-Isak, Christine, Ronald Faber, and Lewis Horner. (1994) "Advertising Effectiveness: Notes From the Marketplace." Advertising and Consumer Psychology Conference, Minneapolis.

Table 1

Breakdown of Marketing Objectives

%      n=71   Category          Examples
18.0   30     Sales             Increase unit volume; dollar volume
13.2   22     Share increase    Increase share of market; overtake leader
3.0    5      Competition       Freeze market in advance of new product introduction; overtake leader
2.4    4      Distribution      Increase shipment levels; open up new markets; increase distribution points
2.4    4      Selling           Increase number of qualified leads
1.8    3      Penetration       Increase penetration; return to peak levels
1.8    3      Cost efficiency   Reduce cost per call; increase pull so smaller percent sold on deal

Table 2

Breakdown of Communication Objectives

%      n=96   Category               Examples
18.6   31     Pers: Behavior         Increase number of requests/inquiries/calls; halt decline in usage
14.4   24     Pers: Attitude         Reminder of positive liking; strengthen emotional bond
33.0   55     Total Persuasion
11.4   19     Lrng: Info Awareness   Create unique position/image; associate friendliness factor with product
3.6    6      Lrng: Comprehension    Build understanding of copy points; increase differentiation of key benefits
15.0   25     Total Learning
5.4    9      Pcptn: Awareness       Increase awareness of brand/product/seal/logo
3.6    6      Pcptn: Memory          Increase aided/unaided recall; correct description of seal/logo; playback of copy points should meet norms
0.6    1      Pcptn: Interest        Stimulate interest
9.6    16     Total Perception

Effectiveness, Objectives and the EFFIE Awards


This is a study of how briefs are constructed for the EFFIE award program, which showcases effectiveness in advertising. The study investigated the objectives used and the design of the evaluation mechanisms by which effectiveness was assessed. The study found that most of the objectives were not measurable as stated.

Of the cases used in this analysis, only 17 percent stated measurable objectives. Most of the cases (50 percent) were focused on communication objectives, although some were centered primarily on marketing effects (25 percent) with little attempt to assess communication effects; the remaining 25 percent were split between communication and marketing effects. The marketing effects were dominated by sales and share objectives; the communication effects were dominated by persuasive effects that focused on behavior and attitude. In terms of support, the study found that 91 percent of the evidence statements were clearly linked to objectives; however, it also found that few of the cases (10 percent) made a clear causal argument linking the effect to the advertising message.

Effectiveness, Objectives and the EFFIE Awards


Sandra E. Moriarty, Professor

University of Colorado

Journalism and Mass Communication

Campus Box 287

Boulder CO 80309


FAX 303/492-0969

E Mail: Sandra.

January 19, 1996

William A. Cook, Editor

Journal of Advertising Research

Advertising Research Foundation

641 Lexington Ave.

New York NY 10022

Dear Bill:

I have received your latest letter with requests for additional changes. I have complied with most of them. I wasn't able to find an angle to work into the objectives discussion the Crimmins work since it is only remotely connected to that topic. The Wansink and Ray piece is very good and does add to this discussion.

The minor editing and reformatting is all done.

I'm not sure how to handle your suggestion about the reference to inquiries and requests on page 15 of the previous revision. I agree with your point that these are legitimate goals of advertising, but I'm not criticizing anything in that paragraph. I'm merely reporting the usage of this evidence in the order of frequency of mentions. Inquiries and requests happen to have been tied for second highest in the frequency of mentions. The concluding sentence observes that there are far more types of evidence being used than we accommodate through copytesting. That's not a judgment on the value of any of these methods. I'm going to assume that you were thinking something different when you read this paragraph.

I'm curious to see how Chris approaches her part of the assignment. In the past we've both agreed on the view I'm expressing in this paper.

Anyway, thanks again for the opportunity to work with you on this piece. I look forward to seeing it in print.

Most sincerely,

Sandra E. Moriarty


January 26, 1995

William A. Cook, Editor

Journal of Advertising Research

Advertising Research Foundation

641 Lexington Ave.

New York NY 10022

Phone: 914/967-2750

Article # 974

Dear Dr. Cook:

Enclosed you will find two copies of a report I have written on a study of the briefs submitted by winners of the EFFIES competition. I hope you will find it of interest to the readers of JAR.

I should mention that I have been a little critical of the EFFIES policy toward research at the end of the article. If that appears to be unwarranted, I can edit that discussion and make it go away.

Thank you for reviewing this piece and I would appreciate any comments you or your reviewers might wish to suggest.

Most sincerely,

Sandra E. Moriarty


November 16, 1995

William A. Cook, Editor

Journal of Advertising Research

Advertising Research Foundation

641 Lexington Ave.

New York NY 10022

Article # 974

Dear Dr. Cook:

I have revised my article on the EFFIE Awards as you have suggested. I have made all the changes and have tightened the conclusion to make the point more emphatically that the EFFIES program needs to toughen up. I hope you are happy with these changes. If you have any more suggestions, please let me know. Your comments were very constructive and I am sure have led to a much better article.

How the EFFIES people will look at it, is another matter. I suspect this is the end of my participation as a judge. Oh well, it's a good cause.

Thanks again for allowing me the opportunity to do this revision. I am excited about having a publication in JAR.

Most sincerely,

Sandra E. Moriarty