Next Steps: Designing an Outcomes-based Ethics and Compliance Program Evaluation

December 31, 2005

Kenneth W. Johnson

This is the fourth article in an ERC series commenting on the US Sentencing Commission's amended requirements for an "effective program to prevent and detect violations of law." In this installment, Mr. Johnson explores how an organization might approach evaluating its ethics and compliance program. To aid the reader in understanding this broad topic, he divides his discussion of program evaluation into the following segments:

  • Introduction
  • Two Approaches to Program Evaluation
  • Measuring Organizational Culture
  • Measuring Overall Program Performance
  • Developing a Data Collection Plan
  • Benchmarks and Baselines
  • Reporting Program Performance
  • Organizational Learning

Next Steps: Designing an Outcomes-based Ethics and Compliance Program Evaluation(1)

By Kenneth W. Johnson,

Principal Consultant, Ethics Resource Center

The essential goal of an ethics and compliance program is to help governing authorities, managers, employees, and agents work together to pursue the purpose of an organization and achieve its more specific goals and objectives in a manner consistent with its standards for ethical business conduct.

Practice Note

As employees and agents pursue organizational purpose, organizational learning is a tool and a way of life that helps them address and adapt to the conditions facing the organization; they learn how to continuously expand their capacity to create the future they truly desire.(2) An ethics and compliance program is an integral part of how an organization learns. It is a form of action learning: a process and culture of learning by doing.(3)

Evaluating an ethics and compliance program has always been recognized as a good practice, though few organizations have evaluated their programs in any comprehensive sense.(4) Fewer still have been able to demonstrate that their programs achieved expected program outcomes. For most, moreover, it was enough to argue that their programs met the minimum requirements of the 1991 Federal Sentencing Guidelines for Organizations. Ethics and compliance conferences, as a result, tended to concentrate on sharing "best practices" without seriously questioning whether those practices actually contributed effectively, efficiently, and responsibly to achieving expected program outcomes.

Now, however, "periodic program evaluation" is a minimum requirement for an effective ethics and compliance program under the 2004 revisions to the Federal Sentencing Guidelines for Organizations (2004 FSGO).(5) As we noted in the first article of this series, this is one of the profound changes of the revised guidelines, along with the requirements for risk assessment and attention to organizational culture, which are themselves inextricably linked to program evaluation.

The issue is no longer whether to evaluate one's ethics and compliance program, but rather how to go about designing and implementing a plan for periodic program evaluation. It remains to be seen what programs and best practices will eventually prove to be effective, efficient, and responsible.

Footnotes

1. This article is based upon a chapter devoted to organizational learning and program evaluation in Business Ethics: A Manual on Managing the Responsible Business in Emerging Market Economies (Washington, D.C.: Department of Commerce, 2004). Worksheets labeled "RBE Worksheet # 14," for example, refer to worksheets the "responsible business enterprise" (RBE) might use. The author of this article was principal author of the cited work, which is available for free download at http://www.responsible-business.com/manual.html. Mr. Johnson is a member of the American Evaluation Association and has conducted ethics and compliance program diagnostic work since 1994.

2. See, e.g., Peter M. Senge, The Fifth Discipline: The Art and Practice of the Learning Organization (New York: Doubleday/Currency, 1990).

3. See, e.g., Michael J. Marquardt, Action Learning in Action: Transforming Problems and People for World-Class Organizational Learning (Palo Alto, CA: Davies-Black Publishing, 1999); Richard P. Nielsen, The Politics of Ethics: Methods for Acting, Learning, and Sometimes Fighting with Others in Addressing Ethics Problems in Organizational Life (New York: Oxford University Press, 1996).

4. Most truly comprehensive program evaluations, in our experience, were required through the Defense Department or Health & Human Services Inspector General's voluntary disclosure programs.

5. While the 2004 FSGO refer to a "compliance and ethics program," we hold to the convention of referring to an "ethics and compliance program." This reflects both our interest in ethical conduct over compliance in general and our research over the years, which suggests that an organizational culture defined by certain "ethics-related actions" at the management, supervisor, and co-worker levels is a far better indicator of an effective program than formal program elements. See, e.g., Joshua Joseph, ERC National Business Ethics Survey 2003: How Employees View Ethics in Their Organizations (Washington, D.C.: Ethics Resource Center, 2003), chapters 4-7 (cited hereinafter as NBES 2003).

A. Two Approaches to Program Evaluation

As program evaluation has grown into an accepted, even required, process over the last few decades, practitioners have come to characterize program evaluations as being of two broad types: process evaluations and outcomes-based evaluations. Governing authorities, managers, and other stakeholders must understand, distinguish, and integrate these two approaches if they are to design, implement, and use a program evaluation plan that meets organizational needs, addresses 2004 FSGO requirements, and applies evaluation resources as effectively, efficiently, and responsibly as possible.

1. Process Evaluation

Practitioners design a process evaluation to analyze how a program was implemented in practice. It tracks which program activities were actually performed and measures their outputs, the direct products of each activity. Examples of program activities include drafting and publishing standards and procedures and conducting mandatory annual ethics training courses; corresponding output measures include the number of ethics training participants, the extent to which specific skills and knowledge are recalled after a period of time, and participant satisfaction with the training.

From a program evaluator's perspective, program activities and outputs have little value in and of themselves as measures of effectiveness. Their value is derivative, instrumental: beyond meeting the specific requirements of the 2004 FSGO, their value derives solely from the extent to which they contribute to achieving program outcomes. If activities and outputs do not demonstrably contribute to achieving expected program outcomes, then they have no apparent value beyond the appearance of doing at least something.


2. Outcomes-based Evaluation

An outcomes-based evaluation, on the other hand, is concerned primarily with the extent to which a program achieves its intended results. The outcomes it measures are changes in the lives, attitudes, and conduct of an organization's employees, agents, and other stakeholders or changes in the organization as a whole. Unlike measurable outputs, which can be more or less directly attributable to a given activity, program evaluators must accept the fact that factors other than program activities often influence whether changes in behavior actually occur. This tends to make outcomes-based program evaluation somewhat less precise than some are comfortable with.(6)

Over the last decade or so, program evaluators and managers have come to consider outcomes-based evaluations to be the more valuable form of program evaluation.  This has been particularly true for government programs, where it often was not clear that bureaucracies doing things actually led to better communities. For example, the U.S. Congress mandated in the Government Performance and Results Act of 1993 (GPRA) that federal agencies report actual results of their programs relative to program objectives, rather than simply report program activities.(7)

In turn, governing authorities and managers of an organization with an ethics and compliance program will want to know the answers to specific outcome questions:

  • Is there less misconduct?
  • Is there less exposure to risk for misconduct?
  • Are employees and agents able to recognize responsible business conduct issues on the job?
  • How often do employees and agents speak in terms of standards, procedures, and expectations?
  • How often are decisions made with reference to standards, procedures, and expectations?
  • How willing are employees and agents to seek advice?
  • How willing are employees and agents to report concerns?
  • How satisfied are those who report their concerns with management's response?
  • How committed are employees to the organization?
  • How satisfied are stakeholders with the organization?
  • Does the culture of the organization promote ethical conduct and discourage misconduct?

Since it is the outcomes--not the activities and outputs--that an organization truly wants to achieve, we at the ERC practice and espouse outcomes-based ethics and compliance program evaluation.

3. Comprehensive Program Evaluation

To illustrate the interplay between these three concepts--program activity, output measures, and outcome measures--using actual data, let us look at the relationships between four program activities and six expected program outcomes.(8)

As set forth in the table below, an Ethics Resource Center longitudinal study, the National Business Ethics SurveySM 2003, found that, generally speaking, the more ethics program elements employees reported as being present in their organizations, the more favorable the program outcomes they reported.(9) With the exception of reduced observed misconduct, there was a strong relationship between program activities and achieving program outcomes.(10) One interesting note is that it was often better to have no formal ethics program elements at all than to have only written standards. For example:

  • Seventy-eight percent of employees of organizations having a formal ethics program, as defined,(11) reported misconduct they observed, while only 39% of employees in organizations having no such elements did so.
  • Of the above, 86% of employees of organizations with a formal ethics program were satisfied with management's response to their reports, compared to only 33% of employees where their organizations had no such elements.
  • But organizations having no formal ethics program elements had more favorable outcomes in reduced pressure to compromise organizational standards, a culture of accountability, and employee satisfaction than did organizations having only written standards (a phenomenon visible in the table below).
Relationships of Program Elements to Program Outcomes
Percentages of employees reporting favorable outcomes, by number of ethics program elements

| Favorable Outcomes | All four elements | Standards plus | Standards only | No elements |
|---|---|---|---|---|
| Observed misconduct (less is more favorable)(12) | N/A | N/A | N/A | N/A |
| Pressure to compromise (less is more favorable; large orgs.)(13) | 7% | 11% | 23% | 16% |
| Willingness to report misconduct(14) | 78% | 67% | 52% | 39% |
| Satisfaction with management response (small orgs.)(15) | 86% | 63% | 50% | 33% |
| Managers held accountable(16) | 93% | 90% | 75% | 86% |
| Satisfaction with organization(17) | 95% | 91% | 81% | 87% |

On the face of these findings, having a formal ethics program presumptively contributes to an organization achieving certain desired program outcomes.(18) This means that an ethics and compliance program evaluation must include both forms of evaluation: process and outcomes-based. Because this is the same analysis an organization should pursue to determine whether its own program is effective, program evaluators must devote substantial attention to process, even in an outcomes-based program evaluation.

Footnotes

6. Much like the effect, on some, of ending a sentence with a preposition.

7. State governments have instituted similar programs. See, e.g., the State of Maryland's Managing for Results program. Available at: http://www.dhr.state.md.us/mfr/, accessed April 22, 2005.

8. It is beyond the scope of this article, but this interplay between culture, activities, outputs, and outcomes is captured by program evaluators graphically with what is known as a "program logic model." This discussion of program logic models is adapted from University of Missouri Extension & Outreach, "Program Planning & Development--Program Logic Model," available at <http://outreach.missouri.edu/staff/programdev/plm/>, accessed 23 May 2003.

9. As discussed in section B., below, regarding organizational culture, ethics-related actions of management, supervisors, and co-workers were even stronger contributors to achieving program outcomes.

10. There is no way to know, of course, whether an outcome is reduced or increased without baseline or benchmarking data or some other reference.

11. A formal ethics program was defined in NBES 2003 as an organization having a code of conduct, ethics training, an office where employees could seek advice, and a reporting mechanism (program activities).

12. NBES 2003, p. 32.

13. NBES 2003, p. 37.

14. NBES 2003, p. 42.

15. NBES 2003, p. 49.

16. NBES 2003, p. 54.

17. NBES 2003, p. 58.

18. The 2004 FSGO, moreover, set certain minimum requirements, such as adequate standards and procedures; specific roles and responsibilities at various levels of the organization; due diligence in hiring; compliance and ethics communication, including training; monitoring and auditing; and a safe mechanism for employees and agents to seek advice and report misconduct, to name but a few.

B. Measuring Organizational Culture

While it is true that there are two fundamental approaches to program evaluation--process and outcomes-based--there is one additional aspect of organizational life that must be measured for a program evaluation to be effective: the culture of the organization itself.(19)

The most immediate reason to measure culture is that the 2004 FSGO provide that, to have an effective ethics and compliance program, an organization shall: (1) exercise due diligence to prevent and detect criminal conduct; and (2) otherwise promote an organizational culture that encourages ethical conduct and a commitment to compliance with the law. A better reason is that program activities can be better designed with the culture of the organization in mind. For example, culture influences communication styles, training packages, and whether the program should take a compliance or values approach. The best reason to measure organizational culture, however, is that the ultimate measure of success for an ethics and compliance program is an organizational culture fully committed to the organization's core beliefs, standards, procedures, and expectations as they develop over time.(20)

In one sense, of course, developing an organizational culture is a program outcome. But in another, more important sense, the culture of the organization is the fundamental starting point for program design and implementation. Measurement of organizational culture answers the questions, "Who are we and what do we stand for?" Unless an organization asks and answers these questions in some specific and measurable ways, it cannot know the best nature and format of its program activities, the outputs it should desire, and, most importantly, the desired changes in the culture of the organization itself.

To help organizations address the influence of organizational culture, the Ethics Resource Center has developed a survey instrument that measures certain indicators of how committed an organizational culture is to ethical conduct and compliance with the law. Applying this instrument, and interpreting the data through our experience in ethics and compliance program diagnostic work, we have found a strong, positive relationship between employees seeing their top management, supervisors, and co-workers engage in certain "ethics-related actions," as defined, and important program outcomes.(21)

For example, as indicated in the table below:

  • The influence of organizational culture, as measured by the number of ethics-related actions per level in the organization, was dramatic: observed misconduct ranged from 15% to 56% depending on actions at the top management level, from 17% to 70% for supervisors, and from 17% to 62% for co-workers. (Pressure to compromise standards was similarly influenced.)
  • The influence of co-worker ethics-related actions on willingness to report misconduct resulted in a range from 73% to 56%.
  • The influence of top management's ethics-related actions on satisfaction with management's response was quite significant, resulting in a range from 87% to 14%; the influence of co-worker ethics-related actions resulted in a range from 74% to 0%.
Relationships of Organizational Culture to Program Outcomes
Range of percentages of employees reporting favorable outcomes, by ethics-related actions

| Favorable Outcomes | Executives | Supervisors | Co-workers |
|---|---|---|---|
| Observed misconduct (less is more favorable)(22) | 15% / 56% | 17% / 70% | 17% / 62% |
| Pressure to compromise (less is more favorable)(23) | 4% / 43% | 4% / 53% | 6% / 37% |
| Willingness to report misconduct(24) | N/A | N/A | 73% / 56% |
| Satisfaction with management response(25) | 87% / 14% | 78% / 11% | 74% / 0% |
| Managers held accountable(26) | 97% / 32% | Similar | Similar |
| Satisfaction with organization(27) | 98% / 35% | 97% / 26% | 95% / 48% |

We often found organizational culture to be more critical than formal program elements in contributing to a number of important program outcomes. On the face of these findings, the United States Sentencing Commission was correct in adding a requirement for management to "promote an organizational culture that encourages ethical conduct and a commitment to compliance with the law." A truly comprehensive program evaluation, therefore, requires attention to all three components: culture, process, and outcomes.

Footnotes

19. We use the term "organizational culture" uniformly, unless we have a specific reason to distinguish between organizational climate or environment or any number of other terms that are often used interchangeably.

20. See Peter Kline and Bernard Saunders, Ten Steps to a Learning Organization 2nd rev. ed. (Arlington, VA: Great Ocean Publishers, 1993, 1998), p. 24.

21. NBES 2003, chapters 4-7.

22. NBES 2003, pp. 30-32.

23. NBES 2003, pp. 36-37.

24. NBES 2003, pp. 41-42.

25. NBES 2003, pp. 48-49.

26. NBES 2003, p. 54.

27. NBES 2003, pp. 57-58.

C. Measuring Overall Program Performance

To sustain the confidence of stakeholders in an ethics and compliance program, its process and outcomes should be evaluated on a routine basis. Evaluating program processes answers the question, "Did we do what we said we would do?" Evaluating program outcomes adds the question, "Did the changes we expected occur?"(28) Evaluating organizational culture adds the questions, "Who are we, and how committed are we to ethical conduct and compliance with the law?"

Program evaluation processes reflect the same dynamics addressed in designing and implementing the ethics and compliance program. They depend upon the organization's relevant context and culture and the reasonable expectations of its stakeholders. Evaluation efforts can be of varying intensity; they can be more or less informal.

1. Purpose of Program Evaluation

The first step for governing authorities and managers to take in evaluating an ethics and compliance program is to agree upon the questions they want answered. In the early years of an ethics and compliance program, governing authorities and managers may be primarily concerned about process. Is the organization establishing standards, procedures, and expectations? Is the training being accomplished effectively? Are reports to stakeholders being well received?

Management's ultimate purpose in having an ethics and compliance program, however, is not simply to have a code of conduct or to deliver ethics, compliance, and responsibility training. Governing authorities and managers will eventually want to know whether the program is achieving its expected program outcomes, such as the outcome questions listed in Section A, above.

Moreover, management will have its own expected program outcomes, such as reducing risk or reforming some aspect of the organizational culture--pressure to compromise core beliefs and standards of conduct, for example.

2. Scanning the Relevant Context

Before proceeding to determining what aspects of process and program outcomes to evaluate, governing authorities and managers need to conduct a scan of the organization's relevant context. A significant part of the scanning process is engaging stakeholders and determining their reasonable demands for information. Only by being alert to the demands of stakeholders can the organization determine what outcomes should be evaluated, what indicators will be most effective (and credible), and how and to whom to report its findings. The AA1000S Framework, discussed in more detail below in the section on reporting, provides a standard for quality in engaging stakeholders.


3. Tracking Organizational Culture

Organizational culture change need not be a reason for having an ethics and compliance program. By its very nature, however, such a program will make subtle changes to the organizational culture. Moreover, cultural aspects will influence the program processes and their prospects for success. For example, there is a close relationship between program success and perceptions that the governing authorities and managers care as much about ethics and values as about the economic bottom line.(29)

For these reasons, it is critical that the organization track a number of key aspects of organizational culture. As we described in Section B, above, program evaluators should measure and track the "ethics-related actions" of top management, supervisors, and co-workers. Other measurable factors of organizational culture appear in the text box below.

An even richer profile of organizational culture can be captured in the following five characteristics:

Measurable Factors

  1. Extent to which leaders and members alike embrace the organization's core purpose and values, and are adept at preserving them while stimulating progress
  2. Extent to which leaders and members hold themselves responsible--and others accountable--to high standards
  3. Extent to which leaders encourage members--and members welcome and accept the opportunity--to participate in organizational affairs
  4. Extent to which leaders and members have the knowledge they need, when they need it
  5. Extent to which conflict and mistakes made in good faith are seen as opportunities for learning and growth(30)

Evaluators can use the Organizational Culture Worksheet below (RBE Worksheet # 14) to develop a plan to monitor, track, and measure organizational culture. Evaluators will work with the organization's stakeholders to determine how cultural factors might be measured. For example, to measure whether leadership is perceived to care as much about ethics and values as the bottom line, evaluators may determine that three indicators apply (a sketch of this structure in code follows the worksheet):

  1. Employee perceptions of leadership, determined through interviews, focus groups, and a survey
  2. Statements made by leadership, determined through a review of leadership communications and interviews in its communications department
  3. Intentions of leadership itself, determined through interviews of key personnel

RBE Worksheet # 14: Organizational Culture Worksheet

| | Interviews | Focus Groups | Surveys | Document Review | Direct Observation |
|---|---|---|---|---|---|
| Factor of organizational culture: | | | | | |
| Indicator # 1: | | | | | |
| Indicator # 2: | | | | | |
| Indicator # 3: | | | | | |
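By way of illustration only, here is a minimal sketch of how a completed worksheet entry might be represented in code so it can be tracked from one evaluation cycle to the next. The factor, indicators, and method assignments are the hypothetical examples from the paragraph above; none of this structure comes from the RBE manual itself.

```python
# Illustrative sketch of RBE Worksheet # 14 as a data structure.
# The factor and indicators below are hypothetical examples.
from dataclasses import dataclass

METHODS = ["Interviews", "Focus Groups", "Surveys", "Document Review", "Direct Observation"]

@dataclass
class Indicator:
    description: str
    methods: list[str]  # which collection methods apply to this indicator

@dataclass
class CultureFactor:
    name: str
    indicators: list[Indicator]

factor = CultureFactor(
    name="Leadership perceived to care as much about ethics/values as the bottom line",
    indicators=[
        Indicator("Employee perceptions of leadership",
                  ["Interviews", "Focus Groups", "Surveys"]),
        Indicator("Statements made by leadership",
                  ["Document Review", "Interviews"]),
        Indicator("Intentions of leadership itself",
                  ["Interviews"]),
    ],
)

# Render the worksheet row by row: an "X" marks each method an indicator uses.
print(factor.name)
for ind in factor.indicators:
    marks = ["X" if m in ind.methods else "-" for m in METHODS]
    print(f"  {ind.description:40s} " + " ".join(marks))
```

The same structure generalizes to the Process and Outcomes Evaluation Worksheets (RBE Worksheets # 15 and # 16) by substituting process questions or expected outcomes for the cultural factor.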

4. Process Evaluation

Process evaluation looks more to how the program works. It looks to see whether resources are being used well, whether assigned activities are being performed, and whether specific outputs are being produced.

Many general management evaluation models, especially in the continuous quality improvement arena, are process evaluations, including the International Organization for Standardization (ISO) management systems models. For example, the ISO 9000 series for quality certification does not define quality; it is interested in whether the processes that have been shown to lead to quality goods and services have been followed. The same is true of the ISO 14000 series for environmental management: it does not define what protecting the environment means; it looks to see whether management systems are in place.

In the social reporting arena, the Global Reporting Initiative (GRI)(31) also does not set standards, but it does provide a comprehensive, even exhaustive, framework for what should be reported. AccountAbility, formerly known as the Institute of Social and Ethical AccountAbility, sets no specific standards for a responsible business either, but it does provide a framework for planning and reporting in a manner designed to give external stakeholders confidence in the report ("AA1000S").(32)

The Process Evaluation Worksheet (RBE Worksheet #15) below can be used by evaluators to develop a plan to collect and analyze information. For example, where communicating standards and procedures is the process the organization is evaluating, RBE Worksheet #15 helps examine a number of specific questions: who was involved and affected, when, and where. It can be used to look at training expenses, the number of employees trained per year, participant satisfaction with the training, and performance on action plans following training. As indicated, process evaluation also leads to consideration of more subjective indicators, such as activity successes, challenges, unexpected developments, and insights.

RBE Worksheet # 15: Process Evaluation Worksheet

| | Interviews | Focus Groups | Surveys | Document Review | Direct Observation |
|---|---|---|---|---|---|
| Who: stakeholders involved in the activity; stakeholders affected by the activity | | | | | |
| What: activities; outputs produced | | | | | |
| When: timeline; milestones | | | | | |
| Where: by location; by division; by region | | | | | |
| Developments: successes; challenges; barriers; unexpected developments; insights | | | | | |

5. Expected Program Outcomes

Governing authorities and managers need to define outcomes they can measure. There are at least nine commonly expected outcomes for the time and effort management puts into an ethics and compliance program, along the lines of the outcome questions listed in Section A, above.

A number of global initiatives set specific standards for organizations to meet. Among these are the Caux Round Table Principles, the UN Global Compact, the OECD Guidelines for Multinational Enterprises, the Basic Guidelines for Codes of Business Conduct, and the Principles for Global Corporate Responsibility. From these standards, the responsible business can derive outcomes for its ethics and compliance program.

The Outcomes Evaluation Worksheet (RBE Worksheet # 16) below can be used by evaluators to develop a plan to collect and analyze information. For each outcome, governing authorities and managers need to identify specific, measurable, achievable, realistic, and time-sensitive indicators. For example:

  • For an outcome of fewer violations of standards and procedures, the indicator might be instances of observed violations and failure to meet stakeholder expectations.
  • For the outcome of increased employee commitment to the organization, indicators might be employee turnover and whether employees would recommend that a family member join the organization.
  • For the outcome of fewer violations of organization standards, governing authorities and managers might track help-line calls, management reports, customer complaints, and audit reports. They might survey employees and ask, in an anonymous or confidential questionnaire, how much misconduct, and of what types, they have observed. They might have an outside party conduct interviews and focus groups to inquire, in more depth, into what violations occur and why.

The outcomes and indicators selected must meet the information needs of governing authorities and managers as well as the reasonable expectations of stakeholders for information. A sketch of such a plan in code follows the worksheet below.

RBE Worksheet # 16: Outcomes Evaluation Worksheet

| | Interviews | Focus Groups | Surveys | Document Review | Direct Observation |
|---|---|---|---|---|---|
| Expected program outcome: | | | | | |
| Indicator # 1: | | | | | |
| Indicator # 2: | | | | | |
| Indicator # 3: | | | | | |
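To make the idea of specific, measurable indicators concrete, here is a small sketch of how an outcomes evaluation plan might be recorded. Every outcome name, indicator, data source, and target below is a hypothetical example suggested by the bullets above, not a requirement of the 2004 FSGO or the RBE manual.

```python
# Illustrative sketch: an outcomes evaluation plan mapping each expected
# program outcome to measurable indicators. All entries are hypothetical.
outcomes_plan = {
    "Fewer violations of organization standards": [
        {"indicator": "Help-line calls alleging violations",
         "source": "document review", "target": "downward trend year over year"},
        {"indicator": "Observed misconduct rate",
         "source": "anonymous employee survey", "target": "below prior baseline"},
    ],
    "Increased employee commitment to the organization": [
        {"indicator": "Voluntary employee turnover",
         "source": "HR records", "target": "at or below prior baseline"},
        {"indicator": "Would recommend the organization to a family member (% yes)",
         "source": "employee survey", "target": "increase over prior baseline"},
    ],
}

# Print the plan as a simple checklist for evaluators.
for outcome, indicators in outcomes_plan.items():
    print(outcome)
    for ind in indicators:
        print(f'  - {ind["indicator"]} (via {ind["source"]}; target: {ind["target"]})')
```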

Footnotes

28. Jane Reisman and Richard Mockler, A Field Guide to Outcome-based Program Evaluation (Seattle, WA: The Evaluation Forum, 1994), p. 10. The approach we take to outcomes-based program evaluation is based upon this work and its companion works, courtesy of the publisher. A source addressing evaluating compliance programs is Lori A. Tansey, Gary Edwards, and Rachel E. Schwartz, "Compliance Program Modification and Refinement," in Compliance Programs and the Corporate Sentencing Guidelines: Preventing Criminal and Civil Liability, Jeffrey M. Kaplan, Joseph E. Murphy, and Winthrop M. Swenson, eds. (West Group, 2002), ch. 15. A source addressing the process of responsibility reporting is the AA1000S stakeholder framework, developed by AccountAbility in the United Kingdom, available at <http://www.accountability.org.uk>, accessed May 23, 2003.

29. Linda K. Trevino and others, "Managing Ethics and Legal Compliance: What Works and What Hurts," California Management Review vol. 41 (Winter 1999): pp. 131-151, at p. 141.

30. Kenneth W. Johnson, "The Role of Culture in Achieving Organizational Integrity, and Managing Conflicts between Cultures," available at <http://www.Ethics-Policy.net/quest_5.html>, accessed 27 May 2003. An instrument to measure culture against these characteristics can be found in Business Ethics: A Manual on Managing the Responsible Business in Emerging Market Economies (Washington, D.C.: Department of Commerce, 2004), RBE Worksheet # 4.

31. Global Reporting Initiative, available at <http://www.globalreporting.org>, accessed May 23, 2003.

32. AccountAbility, "AA1000S," available at <http://www.accountability.org.uk>, accessed May 23, 2003.

D. Developing a Data Collection Plan

There are a number of classic data collection methods for evaluators to consider in collecting data for the relevant context scan, organizational culture tracking, and process and outcomes evaluation. As indicated in the table below, these include interviews, focus groups, surveys, document review, and direct observation. Each has its own strengths, weaknesses, and resource demands.(33) The primary concern is to develop a cost-effective collection plan that encourages employees and agents to be forthcoming and gives evaluators, governing authorities and managers, and stakeholders a clear picture of what is going on in the organization.

Commonly Used Data Collection Methods

Surveys
Standardized written instruments that contain several questions about the issues to be evaluated. These can combine question types: single, direct questions; series of questions about the same topic; and unstructured, open-ended questions. You can conduct surveys by mail, in person, over the telephone, through the Internet, or in a centralized activity as part of an event. Surveys are usually considered an efficient data collection strategy.

There is a limit, however, to how many questions one can expect employees to answer accurately. And many large organizations survey their employees so often that "survey fatigue" is often an issue. Since the question set is necessarily limited, program evaluators need to be alert to setting the right proportion of culture, program activity, output measure, and program outcome questions. They also need to be careful with the use of demographic questions that might compromise assurances of anonymity or confidentiality.

Interviews (including focus groups)
A series of questions--typically semi-structured or unstructured--asked in person or over the telephone. Focus group interviews take advantage of small group dynamics to interview a small group of people (usually eight to twelve individuals). Interviews are useful when you want in-depth information, and they are particularly appropriate for investigating sensitive topics. Interviews are generally limited to about one hour per person; focus groups generally run about an hour and a half.

Document review
A review of organization records that provide both descriptive and evaluative information about the program process and its outcomes. These reviews can focus on the frequency with which specific behaviors occur. They generally require that a responsible person identify and accumulate the documents, and they often require some degree of interpretation.

Direct observation
First-hand observation of interactions and events that provides descriptive or evaluative information. Observations are usually guided by predetermined protocols or observation guides to focus the information you gather. Observations are valuable in situations in which self-reports or existing data may not be accurate or in which professional judgment is helpful.


It is beyond the scope of this article to discuss issues of validity, reliability, and cultural sensitivity, or how to construct questions for surveys and interviews, but there are excellent discussions available, including the work upon which this subsection is based.(34)

Footnotes

33. The table above is adapted from Reisman and Mockler, A Field Guide to Outcome-based Program Evaluation, p. 41. Courtesy of the publisher.

34. The Evaluation Forum, Seattle, Washington, has an excellent series on program evaluation: Jane Reisman and Richard Mockler, A Field Guide to Outcome-based Program Evaluation (Seattle, WA: Organizational Research Services, Inc. and Clegg & Associates, 1994); Jane Reisman and Judith Clegg, Outcomes for Success (Seattle, WA: Organizational Research Services, Inc. and Clegg & Associates, 1999); and Marc Bolan, Kimberly Francis, and Jane Reisman, How to Manage and Analyze Data for Outcome-Based Evaluation (Seattle, WA: Organizational Research Services, Inc., 2000). See also the Writing@CSU Web site, available at <http://writing.colostate.edu/references/research/survey/index.cfm>, accessed May 23, 2003.

E. Benchmarks and Baselines
   
When the data are collected and analyzed, they may be used in a number of ways:

   1. To give governing authorities and managers a picture of the organizational culture, program processes, program outcomes, and ethical conduct of the organization
   2. To establish "baseline" data for comparison with subsequent data
   3. To compare the findings with those of other, similarly situated organizations--often best-in-class organizations--a practice known as "benchmarking"

The first usage is helpful for governing authorities and managers because it gives them a sense of how their employees and agents view the fundamental workings of the organization. Particularly valuable is comparing how managers, supervisors, and workers answer the same questions. Often, the answers are so different as to make one wonder whether the respondents work for the same organization. These data also permit comparison between different plants and locations. For large, complex enterprises (LCEs), regional differences can be explored.

Where program evaluation is done on a regular basis--every one to three years, for example--the data serve as "baseline data." Once a baseline of organizational culture, processes, and expected program outcomes is established, governing authorities and managers can compare later data with the baseline, allowing the organization to detect patterns and identify trends over time.
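As a minimal sketch of such a baseline comparison, assume an organization has baseline and current-cycle percentages for a few expected outcomes (all figures below are hypothetical):

```python
# Illustrative sketch: comparing current survey results to baseline data to
# detect trends. All percentages are hypothetical.
baseline = {
    "Willingness to report misconduct": 52,   # % reporting favorably
    "Pressure to compromise standards": 23,   # less is more favorable
    "Satisfaction with management response": 50,
}
current = {
    "Willingness to report misconduct": 67,
    "Pressure to compromise standards": 11,
    "Satisfaction with management response": 63,
}
LESS_IS_FAVORABLE = {"Pressure to compromise standards"}

for outcome, base in baseline.items():
    delta = current[outcome] - base
    improved = delta < 0 if outcome in LESS_IS_FAVORABLE else delta > 0
    trend = "improved" if improved else "flat or worse"
    print(f"{outcome}: {base}% -> {current[outcome]}% ({delta:+d} points, {trend})")
```

Note that the direction of "favorable" differs by outcome, which is why the sketch tracks which outcomes improve as they fall rather than rise.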

A final use is to compare the organizational culture, program processes, or expected program outcomes with those of other organizations to establish "benchmarks" of practices or conduct. Benchmarking can be an effective practice where it compares program processes, such as code of conduct formats, business ethics office organization, or helpline procedures. For analyzing organizational culture or expected program outcomes, however, benchmarking is very difficult to do well, for a number of reasons.

First, beyond large national surveys, very little program outcome data is available publicly. Moreover, organizations that do evaluate their programs for organizational culture and outcomes seldom share the data with the public. And ethics and compliance programs are so fundamental to the very identity of an organization that it would be difficult to make meaningful comparisons that take into account differing organizational cultures, program processes, and expected program outcomes.

Moreover, the real question is, "Why would governing authorities and managers want to spend valuable resources comparing themselves to other organizations, since the goal of a responsible organization is to meet the reasonable expectations of its stakeholders, not to compare favorably to some other organization having different stakeholders?"

For example, how much comfort should governing authorities and managers derive from finding that their employees observe half as much misconduct as reported in the Ethics Resource Center's National Business Ethics SurveySM 2003?(35)

A favorable comparison alone does not help the organization learn what it is doing right, or even whether the data were accurately collected. For example, where a culture of distrust prevails, employees will often refuse to answer a question about misconduct or will answer it inaccurately.

This being said, the ERC is developing an Ethics IndexSM to (1) meet the need for an organization, especially as part of a plan for periodic program evaluation, to be able to compare its program performance to national, industry, and best-in-class organizations, (2) encourage sharing of best practices among industry groups and organizations similarly situated, and (3) provide benchmarks to aid organizations in arguing that their programs are effective before judicial and regulatory authorities.

Footnotes

35. Joshua Joseph, ERC National Business Ethics Survey 2003: How Employees View Ethics in Their Organizations (Washington, D.C.: Ethics Resource Center, 2003), pp. 27-28.

F. Reporting Program Performance
   
As they develop a plan to evaluate ethics and compliance program performance, governing authorities and managers need to determine whether the evaluation is intended for internal consumption only or for wider distribution. Since the organization cannot report on performance that it has not evaluated, governing authorities and managers must determine the outcomes of legitimate interest to stakeholders and the methods of evaluation and reporting that stakeholders will trust. For programs intended to meet the requirements of the federal courts in sentencing, and to serve as a basis for arguing against prosecution of the organization where an employee has violated federal law, the report should address all requirements of the 2004 FSGO--including risk assessment, ethical culture, and program evaluation--not just the minimum program requirements.

1. Reporting to External Stakeholders


Coupled with the increased activism of civil society, there is an emerging trend for organizations to report more about their impact on society to a wider audience. One term for this is "triple bottom line" reporting: reporting that requires organizations to evaluate their social and environmental performance to the same degree they evaluate and report economic performance.(36)

Reporting on an organization's performance and impact on society is becoming more common--and more expected. Beyond publicizing the organization's role in the economic, social, and environmental evolution of its community, expanded reporting requires it to integrate social and environmental considerations into its strategic and operational decision-making. Considering what outcomes to measure and report, what indicators to use, how to analyze the data, and how to report the results can produce synergies that are quite energizing--in the long run--for the organization.

2. Building Reporting Credibility

There are no generally accepted standards for reporting on ethics and compliance program performance, especially if it embraces the topics of social and environmental performance. A number of international initiatives are underway to develop such standards, but it will take years to develop a consensus, if one is even possible.

The AA1000S framework is intended to standardize evaluation reporting processes and assurance.(37) It does not provide a prescriptive framework for the resolution of conflicts between an organization and its stakeholders (and conflicts between its stakeholders), but it does provide a process for organizations to begin engaging their stakeholders in order to find common ground and build trust.

AA1000S is based upon the foundation principle of accountability to stakeholders, from which a number of evaluation principles and processes flow. Though AA1000S does not have outcome standards per se, the stakeholder engagement process itself will affect the organization and its community.


3. Reporting Format

Program evaluation is of most value to an organization and its stakeholders when presented in a high-quality, useful format. In drafting the evaluation, it is important to remember that many people will see program evaluation as a threat, since it may reflect adversely on their performance. Evaluators should expect to meet such resistance unless the organization has developed an organizational culture of continuous learning and sharing of information.

There is no established format for the evaluation report. A typical format might include the following sections:(38)

Model Report Format

   1. Executive Summary. A one- to four-page summary of the key points of the report. Since many people will read only the executive summary, it is important to include all of the important points: basic information on the purpose of the evaluation, key findings, any recommendations, and contact information. Refer readers to the body of the report for other information.

   2. Purpose. Explain why you conducted the evaluation. What are the broad questions the evaluation is trying to answer? Who requested or initiated it?

   3. Background. Provide readers with adequate background information about the program's structure, history, and goals and objectives. What do they need to know in order to understand the evaluation?

   4. Methodology. Explain the evaluation design, including what data collection tools and sampling method you used. (Include data collection instruments as attachments.)

   5. Summary of Results. Start with the bottom line: What are your summary conclusions? How would you answer the key questions the evaluation set out to answer?

   6. Principal Findings. Provide more detail on the findings that support your summary conclusions. This section will probably include charts or tables illustrating your findings.

   7. Considerations or Recommendations. Draw out the implications of your findings for the program, the organization, or its stakeholders.

   8. Attachments. Include as appropriate.


4. Reporting Content

The content of an evaluation report is driven by the purpose of the evaluation. The Global Reporting Initiative (GRI) sets forth an exhaustive set of elements for organizations to report on as described in the text box that follows. Few organizations, even among the largest multinationals, issue such an exhaustive report, but the GRI framework is useful for even the small to medium enterprise (SME) as a checklist for planning purposes.

GRI Report Content

1. Vision and Strategy - description of the reporting organisation's strategy with regard to sustainability, including a statement from the CEO.

2. Profile - overview of the reporting organisation's structure and operations and of the scope of the report. (22 items)

3. Governance Structure and Management Systems - description of organizational structure, policies, and management systems, including stakeholder engagement efforts. (20 items)


4. GRI Content Index - a table supplied by the reporting organisation identifying where the information listed in the Guidelines is located within the organisation's report.

5. Performance Indicators - measures of the impact or effect of the reporting organisation divided into integrated, economic (10 core, 3 additional), environmental (16 core, 19 additional), and social performance indicators (24 core, 25 additional).

For organizations reporting on the effectiveness of their ethics and compliance program in preventing and detecting criminal conduct, the report should address every element of the 2004 FSGO plus the specific risks the organization's employees and agents face in day-to-day operations.

Footnotes

36. An excellent collection of corporate social responsibility reports can be found at the CSR Wire Web site, available at <http://www.csrwire.com/csr/ index.mpl?arg=a>, accessed May 23, 2003.

37. AccountAbility, "AA1000S," available at <http://www.accountability.org.uk>, accessed May 23, 2003.

38. Reisman and Mockler, Field Guide, adapted courtesy of the authors.

G. Organizational Learning

The question, "How should we monitor, track, and report our performance as an organization, and continuously learn from it?" most tests the resolve of the responsible organization. Often, governing authorities, managers, and supervisors alike are not confident or courageous enough to want to know how they are really doing. Middle managers, in particular, seem to consider program evaluation a threat. Above all, this willingness to take a hard look at a valuable program and to accept responsibility for and learn from mistakes is an aspect of organizational culture that can perhaps best be measured by the response of governing authorities, managers, and other employees and agents to a program evaluation.

In summary, the responsible organization measures its ethics and compliance program performance for at least five reasons:

  • To provide accountability to stakeholders,
  • To monitor and track changes in the organizational culture,
  • To improve program quality,
  • To allocate resources toward more or less intensive programs, and
  • To be able to make the case to a sentencing federal judge that its program was "effective" within the meaning of the 2004 FSGO.

The responsible organization can use the worksheets described in this article to develop a plan to monitor, track, and report its performance as an organization. The plan should address the quality of the evaluation process itself as well: how well it was planned and executed, whether it secured the intended information, how well the information secured was used, and what the impact of the evaluation process was on all stakeholders.

In the final analysis, however, the responsible organization benefits through learning how to deal with the myriad changes confronting it. Through an ethics and compliance program, the organization becomes adept at organizational learning. It learns how to constructively influence its relevant context, develop its organizational culture, improve its performance, and contribute to the social capital of its community. Program evaluation is an essential part of the learning process.
