The World Bank
PREMnotes, June 2011, Number 11 (Special Series)
From the Poverty Reduction and Economic Management Network

The Canadian Monitoring and Evaluation System
Robert Lahey

Performance measurement, monitoring, and evaluation have long been part of the infrastructure of the federal government in Canada. With more than 30 years of formalized evaluation experience in most large federal departments and agencies, many lessons can be drawn, not the least of which is the recognition that the monitoring and evaluation (M&E) system itself is not static. The Canadian government has a formalized evaluation policy, standards, and guidelines, and these have been modified on three occasions over the past three decades. Changes have usually come about because of a public sector reform initiative (such as the introduction of a results orientation to government management), a political issue that may have generated demand for greater accountability and transparency in government, or a change in emphasis on where and how M&E information should be used in government. This note provides an overview of the Canadian M&E model, examining its defining elements and identifying key lessons learned.

Evaluation in public sector management in Canada dates back to 1969, with the initiation of formalized and centralized evaluation practices. The centrally led approach was replaced in 1977 by the first governmentwide evaluation policy, which established the model upon which the practice of evaluation still functions in the government. The introduction of evaluation was inspired by the notion of "letting the managers manage"; that is, allowing deputy ministers¹ of federal government departments to assume greater responsibility and be accountable for the performance of their programs and the prudent use of public funds. The model is based on a strong central management board that oversees and holds deputies accountable. One of the mechanisms for doing this is performance evaluation.

Main Characteristics of the Canadian M&E System

The structure of the Canadian M&E system is characterized by three defining elements:

1. Internal evaluation units in most federal departments, with central leadership: The Canadian M&E system distinguishes itself from many other countries by its "departmental delivery–central leadership" structure, in which rule setting is done by the central agency (the Treasury Board Secretariat [TBS]²) and evaluations are conducted by internal evaluation units established in each federal department. To assist in rule setting, capacity building, and oversight of the system, a Centre of Excellence for Evaluation (CEE)³ was established within the TBS. As results-based monitoring and reporting became increasingly popular (starting in the 1990s), relevant policy areas were established in the TBS to provide guidance to departmental managers and oversight in these areas. This central leadership structure guides departments and agencies on the performance measurement and reporting aspects of results-based management and, systemwide, supports the results orientation of government.

2. An emphasis on both monitoring and evaluation as tools of performance measurement: The Canadian M&E system relies on both ongoing performance monitoring and the conduct of planned evaluations as tools to measure program and policy performance. Both are recognized as key tools to support good governance, accountability, and results-based management. Within individual government departments and agencies, the deputy head has some flexibility in resourcing these tools appropriately to the size and needs of his or her organization. The expectation is that both tools are used in managing a department and in helping a deputy achieve the organizational goals for which he or she is being held accountable. It falls upon individual program managers to put in place the necessary results-based monitoring systems, and upon the internal evaluation unit to plan for and carry out evaluations that generally provide a deeper understanding of program performance. Considerable time and effort have been expended by the central agency to provide appropriate guidance to both technical experts and program managers across government.

3. A well-defined foundation setting the rules and expectations for performance measurement and evaluation—policy, standards, and guidelines: The requirements and standards of practice for both monitoring and evaluation have been built into the administrative policies of government, developed by the central agency and rolled out to all government departments and agencies. The formalized policies and guidelines help clarify the government's expectations and the roles and responsibilities of all key players in the M&E system. This also reinforces the use of oversight mechanisms to monitor the health and use of M&E across government. Embedded as they are in administrative policies, these policies and guidelines have been adjusted and improved as more experience has been gained with the M&E system.

Roles and Responsibilities of the Key Players

The Canadian M&E system has two key focal points for the delivery and use of M&E information: the TBS, which sets the rules, and the individual government departments, which measure the performance of their programs and policies.

CEE: The Government's Evaluation Policy Center

Whereas the TBS plays a strong role in both the practice of evaluation and performance monitoring within departments, the CEE within the TBS acts as the government's evaluation policy center. This unit, which currently employs 14 staff, plays a variety of roles in support of the evaluation function across the system:

• community development to assist in capacity building, including development of competency profiles for evaluators, a community development strategy, an internship program, and evaluation tools;
• leadership as a "champion" for evaluation, establishing evaluation networks and providing guidance related to M&E practices, including maintaining an up-to-date Web site;
• operational oversight and quality control, via monitoring of standards and the quality of evaluation practices in individual departments and systemwide;
• facilitating the use of evaluation results by linking departmental evaluations with TBS program analysts, to help ensure that results information is used in program funding decisions and the broader Expenditure Management System; and
• governmentwide evaluation, a feature of the 2009 evaluation policy.

TBS Policy Centers for Performance Monitoring and Reporting

The TBS provides formal guidance and support to departments in developing department- and program-level performance measurement frameworks and ongoing performance monitoring systems. Additionally, the TBS oversees annual performance reporting, including a review of the departmental performance report required of all federal departments and agencies before the report is sent to Parliament.

Another important player in the M&E system is the Office of the Auditor General (OAG), which periodically monitors and reports to Parliament on the functioning of various aspects of the M&E system.

Organization of M&E in a Government Department

All major government departments and agencies are required to dedicate resources for evaluation, at a capacity appropriate to the size and needs of the organization. In addition, each department must put in place a senior-level evaluation committee chaired by the deputy minister, annual and multiyear planning for evaluation, a departmental evaluation policy reflective of the government's policy, and the mechanisms needed for delivering credible evaluation products. The TBS/CEE monitors departments on all of those aspects (including the quality and use of evaluation) and reflects this in an annual assessment of each deputy minister.

A critical part of the evaluation infrastructure in a department is the internal evaluation unit, led by the "head of evaluation." This position plays a pivotal role in ensuring that the government's policy requirements and the priorities of the deputy head are reflected in departmental evaluation work. To help ensure independence, the position generally reports to the deputy head or at least has unencumbered access to the most senior official in the department.

Deputy heads are also required by TBS policy to develop a corporate performance framework (the so-called Management Resources and Results Structure [MRRS]) that links all departmental programs to expected outcomes. This articulation of the program architecture serves as the basis for performance monitoring and reporting, and its development is watched closely by the TBS to ensure adherence to the MRRS policy. Performance monitoring is an ongoing responsibility of individual program managers, although evaluation specialists often support the development of monitoring systems. In theory, ongoing performance monitoring provides much of the data needed for program evaluation; in practice, however, this does not always happen.

Scale and Cost of Canada's M&E System

Virtually all large departments and agencies (a total of 37) have a stand-alone evaluation unit, as do most midsize agencies, resulting in some 550 evaluation professionals currently working in the Canadian federal public service.

Given the flexibility allowed by the government's policy, departmental evaluation units range in size from 1 individual to 60 evaluators, with contract budgets for hiring external consultants ranging anywhere from zero to more than $8 million. The average unit size is now on the order of 12 professionals. Salaries for internal evaluators have historically represented some 25 percent of all funds spent on evaluation in a department, although this share is likely rising as a result of the 2009 evaluation policy.

Spending on evaluation governmentwide has risen substantially over the first decade of the 2000s. It was $32 million in fiscal 2005/06 (still below the estimated resource need of $55 million), but it has continued to rise sharply—most recently in response to the greater demands put on departments by the 2009 evaluation policy.

How Is the M&E System Used?

Many Uses and Users for M&E Information

M&E is not viewed as an end in itself. The intent of M&E in the Canadian system is to provide results information that will serve a variety of needs and users at different levels throughout the system—at an operational or program level, at an individual department level, at a governmentwide level, and in a legislative context. In broad terms, much of this need has been driven by various public sector reforms and, most recently, by the government's management agenda, Results for Canadians, as well as its efforts around renewal of the Expenditure Management System. As such, M&E is seen and used both as a management and learning vehicle and as a means to support accountability in the design and delivery of government policies, programs, and services and in the use of public funds.

Informing Decision Making with M&E Information

In the middle of the first decade of the 2000s, the CEE reported that some 230 evaluations were being completed each year. Historically, the prime focus for a large proportion of these evaluations was internal management purposes—primarily program improvement—and support for performance accountability to external audiences (such as Parliament, parliamentary committees, or the TBS). Increasingly over the past decade, however, a government-imposed requirement to evaluate all "grant and contribution" programs prior to their funding renewal has resulted in more evaluations being used to assess program effectiveness, with TBS analysts and Treasury Board ministers employing them in decisions around future program funding. The formal requirement to table an effectiveness evaluation at the time of discussions around funding renewal has helped institutionalize the use of evaluation information in program funding decision making. Currently, federal departments and agencies are on track to evaluate all grant and contribution programs over a five-year cycle, representing coverage of some $43.8 billion, or 40 percent of all direct government program spending. In all cases, TBS program analysts use the M&E information as part of their decision making on future funding and program renewal.

The 2009 evaluation policy is bringing M&E closer to funding decisions on all direct government program spending ($112.3 billion in fiscal 2010/11). This is happening as part of a broader governmentwide expenditure management requirement faced by all deputy heads to carry out a strategic expenditure review every four years to ensure program value for money, effectiveness, efficiency, and alignment with government roles and priorities. Deputy heads are using M&E information drawn from formalized evaluations and other sources to make these determinations. Faced with frozen budgets across government, M&E information is becoming even more important as an input to budget planning and to decision making regarding the need to change, improve, or replace programs.
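(As a rough arithmetic check, assuming the fiscal 2010/11 total of $112.3 billion is the base used for the coverage estimate: 43.8 / 112.3 ≈ 0.39, consistent with the roughly 40 percent figure cited above.)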
Because the OAG’s public sector management and to recognize mandate does not include assessing the effec- that performance monitoring and evaluation tiveness of government programs (OAG au- are a critical tools to make that happen. dits focus on issues of efficiency and econo- my), departmental evaluations and results Checks and Balances to Support the reporting in the above documents represent Independence and Neutrality of the an important use of M&E information in gen- Evaluator eral and evaluation in particular in keeping An internal evaluation function could poten- elected officials abreast of government per- tially be criticized for not having the nec- formance and of how well programs are essary independence to “speak truth to pow- meeting their objectives. er.” To deal with this challenge, the Canadian model has put in place infrastructure and Incentives to Help Promote and oversight mechanisms aimed at ensuring that Drive the Use of the M&E System internal evaluations of departmental pro- Formal Requirements for Using M&E grams or policies are credible and objective. Information in Government Some of these elements are at the level of the A number of centrally driven administrative individual department, and others are en- policies introduced over the 1990s and forced centrally. All are intended to reinforce 2000s have served as key drivers for M&E. the independence and neutrality of the eval- Some have had a direct impact on building uation process and the reporting on the find- M&E capacity in departments, including the ings, conclusions, and recommendations of evaluation policy (most recently renewed in the evaluation study. A listing of some of the 2009), MRRS Policy (2005), the Federal Ac- main checks and balances is provided in box 1. countability Act (2005), and the Policy on The 2009 revision of the evaluation Transfer Payments (updated in 2008). Oth- policy dropped the word “independence” ers serve broader needs but have also gener- and replaced it with “neutral,” defined as “an ated demand for systematic and credible attribute required of the evaluation function performance information. This would in- and evaluators that is characterized by impar- clude the government’s 2009 Results for tiality in behavior and process.” In part, the Canadians management agenda, govern- rationale for this change was to ensure that ment requirements that departments annual- evaluators would not operate at arm’s length ly report to Parliament via the report on plans from key stakeholders (as an internal auditor and priorities and the departmental perform- might operate to maintain independence). It ance reports, the Results-Based Management formally recognized that stakeholders—in- and Accountability Frameworks Policy (2000), cluding managers whose programs are being the Management Accountability Framework evaluated—need to be involved in the con- annual assessment of departmental perform- duct of the evaluation during both design ance (2003), the strategic expenditure re- and implementation. views (2007), and Expenditure Management System (2007). A Strong Set of Oversight Mechanisms to Although these have all served to drive Reinforce Credibility and Quality Control the development of M&E in Canada in one Oversight mechanisms in the system serve as way or another over the last 15 years, they do a “challenge” function and, in the process, JUNE 2011 PREMNOTE 5 Box 1. 
Box 1. Checks and Balances to Support the Independence and Neutrality of Internal Evaluation

• The deputy head of a government department is required by the government's evaluation policy to establish an internal evaluation function that is both robust and neutral.
• The policy requires that the head of evaluation have "direct and unencumbered access" to the deputy head of the individual department or agency, to help ensure independence, neutrality, and impartiality in the conduct of evaluations and the use of evaluation results.
• The government's Directive on the Evaluation Function outlines the principle that "heads of evaluation, as primary departmental experts in evaluation, have final decision-making authority on technical issues, subject to the decision-making authority of deputy heads."
• Each department has in place a senior-level departmental evaluation committee that plays a variety of roles regarding evaluation planning, conduct, and follow-up (including assessing the performance of internal evaluation).
• The head of evaluation is encouraged by the government's policy to make use of advisory committees, peer review, or (as appropriate) external review panels (independent experts, for example) for the planning and conduct of individual evaluation studies.
• The government's evaluation policy stresses the "neutrality" of both the evaluation function and the evaluator ("impartiality in behavior and process"). This is specifically defined in the policy document.
• Standards for evaluation identify four broad requirements intended to ensure that evaluations produce results that are credible, neutral, timely, and produced in a professional and ethical manner.
• Operational oversight is provided through ongoing monitoring by the TBS/CEE.
• The performance of each department and deputy head is formally assessed each year by the TBS through the Management Accountability Framework process, which includes assessing M&E in the department.
• The OAG carries out an oversight role through periodic audits of the implementation of the government's evaluation policy and of the quality of performance reporting. As an independent body reporting directly to Parliament, the OAG provides public disclosure of information that reinforces both independence and transparency.

Source: Author's compilation.

A Strong Set of Oversight Mechanisms to Reinforce Credibility and Quality Control

Oversight mechanisms in the system serve as a "challenge" function and, in the process, provide quality control and help reinforce the credibility of the system. Oversight in the Canadian model is implemented at three levels: (1) the individual evaluation study, (2) the organizational level, and (3) the whole-of-government level.

The oversight role is carried out by both the OAG and the TBS/CEE. This oversight provides an additional incentive to help drive a well-performing M&E system in departments and across the system.

At an operational level, the TBS monitors departmental M&E initiatives at various points—the planning, implementation, and reporting phases.
The CEE, for example, monitors evaluation planning and conduct in all departments, including the coverage and quality of individual studies. Performance measurement and monitoring in general are monitored by the TBS. This occurs at the time of the development and approval of a department's MRRS, the basis for its corporate performance reporting, and during the annual review of departmental performance reports, which are submitted to the TBS by each department prior to their tabling in Parliament.

Additionally, since 2003 the TBS has annually assessed each department and deputy head against a number of criteria (including the use of results and performance information). This is the annual Management Accountability Framework process, which is linked to the compensation received by deputy heads. This formalized framework provides an important vehicle for dialogue between the central agency and senior departmental officials that can point to areas where improvements may be needed.

At a whole-of-government level, the OAG conducts periodic performance audits that monitor the effectiveness of M&E implementation across the full system. These audits could include a systemwide implementation audit of the government's evaluation policy or of the quality of results measurement and reporting. The results of these audits are reported directly to Parliament and generally receive high media exposure, particularly if the findings point to issues with the effectiveness of government policies and/or the need for change. Such audits highlight the importance and role of M&E in public sector management, and they catch the attention of legislators, both at the time of reporting and in follow-up discussions that may take place in the Public Accounts Committee. The independence of the OAG and the transparency of its public reporting to Parliament are key elements for the external auditor.

The Sustainability of the M&E System

One of the defining characteristics of a successful M&E system is its sustainability. The longevity of the Canadian model, with more than 30 years embedded in the federal public sector, has likely been influenced by the four factors discussed below.

Flexibility and Willingness to Learn and Adjust

Flexibility and the avoidance of a one-size-fits-all approach have been hallmarks of the M&E model in Canada.
Along with this flexibility has come recognition of the need to learn and adjust as required, and a willingness to pilot new requirements and adjust them as needed before a cross-government rollout. This approach was used to introduce the concept of corporate performance reports, with several adjustments to the guidelines and directives introduced through the late 1990s and early 2000s.

The fact that the government has had four versions of its evaluation policy over the past 34 years is a testament to its willingness to move from the status quo. This has been made easier by the fact that the governmentwide requirements for evaluation are largely based on administrative policies rather than embedded in legislation.

Transparency as an Underlying Value of the M&E System

To be effective, there needs to be an enabling environment for M&E, both within organizations and across the whole system. This rests in part on a willingness to carry out performance M&E of government programs in full public view.

In Canada, transparency is a critical dimension underlying the government's M&E system. The 2009 evaluation policy makes officials accountable for "ensuring that complete, approved evaluation reports along with management responses and action plans are made easily available to Canadians in a timely manner." Public disclosure laws have played an important role in increasing the accessibility of M&E studies to the general public, including the media. Additionally, OAG reports have increased the public focus on the performance and results of government programs. Added to this is the increasing access to and use of departmental and central agency Web sites, where M&E information is made accessible to the general public.

Ongoing Commitment to Capacity Building

An adequate supply of trained human resources with the needed skill sets is critical to sustain an M&E system. Such capacity development is an ongoing issue for the Canadian system, given the large number of professional evaluators working in government (currently more than 550).

Several sources have traditionally been relied on for human resources training and development, including community development initiatives led by the CEE, as well as workshops, seminars, and professional development and networking opportunities generated by professional associations and the private sector. Recently, a network of universities across Canada started offering evaluation certificate programs, providing the more in-depth training needed to work as an evaluation practitioner.

Substantial efforts have been made over the last three years to establish a recognized set of competencies for evaluators and an accreditation program. The Canadian Evaluation Society⁴ recently introduced the "credentialed evaluator" designation as a means to define, recognize, and promote the practice of ethical, high-quality, and competent evaluation. The CEE is also working to address the issue of further professionalizing evaluation.

Central Commitment to Accountability and Good Management Practices

The origins of the Canadian M&E system were linked to a desire to strengthen good governance and accountability. The broad set of government initiatives put in place is a testament to this central commitment, which has been sustained through various changes of government over the past 30 years. This strong support for M&E can be summed up in the words of the auditor general, speaking recently before the Public Accounts Committee: "The evaluation of effectiveness is absolutely critical to making good decisions about program spending so that we can know whether programs are actually getting the results that were intended . . . this is even more critical in the current economic times we are going through, because government does have to make difficult choices, and it should be making those decisions based on good information."

Lessons Learned from 30 Years of M&E Development in Canada

Lesson Learned: Drivers for M&E

M&E should not be considered an end in itself. There are potentially many drivers for M&E; they may be political or operational, associated with a major reform agenda, and/or brought on by fiscal measures. Whatever the case, it is important to understand who the key audiences for M&E information are, what their needs are, and what questions must be answered. Some lessons relating to the drivers for M&E that have come from the Canadian experience are provided in box 2.

Lesson Learned: Implementing the M&E System

Implementation of M&E is a long-term and iterative process—and not one that is without costs. As such, senior-level commitment and "champions" at both senior and operational levels are important elements to ensure sustainability through the long period of development and implementation. Eventually, the goal is to move M&E beyond the point of being a "special project" to a point where it is a normal part of doing business and of the organization's management practices. Box 2 offers some lessons on implementing the M&E system from the Canadian experience.

Lesson Learned: Building M&E Capacity

In considering training needs, it is important to consider not simply technical training but also M&E training and orientation for nontechnical officials (that is, the users of M&E information).
Additionally, building capacity must address an often-ignored area: data development and the establishment of credible databases. The national statistics office (in Canada, Statistics Canada) should play a central role in data development, data warehousing, oversight, and the quality control associated with data capture and public surveying. Key lessons regarding M&E capacity building are outlined in box 2.

Box 2. Lessons Learned

Drivers for M&E
• Building and using M&E capacity requires more than resources and technical skills; it requires political will and sustained commitment. Central leadership and a plan are very important.
• M&E information is not an end in itself; it needs to be linked to particular management and decision-making needs, particularly in the context of public sector reforms or government agendas.
• To be effective, it is important both to build a capacity to do evaluation and gather performance information and to develop a capacity to use M&E information, within organizations and across the system. A supply of good evaluations is not enough; a reasonable demand for evaluation is key.
• The capacity to use M&E information relies on the incentives in the system for managers to demand such information and actually use it as part of their normal operations. These incentives could take the form of sanctions for not complying or rewards for meeting requirements.
• Internal infrastructure on its own is likely insufficient to sustain an M&E system. A number of formal requirements associated with its use (at both the departmental and central levels, and in the context of both management and accountability) will force program and senior managers to take the time and effort to invest in M&E development.
• Managing expectations about the role of evaluation is important to avoid unrealistic expectations. Evaluation can and should inform decision making, but it is generally only one of many sources of information. Questions about the performance of government programs generally do not have simple yes/no answers.

Implementing an M&E System
• Across organizations, there needs to be sufficient communication and fora for information sharing about the role of M&E and how it can help management, so as to link the demand for and the supply of M&E information—that is, to ensure that what gets produced is what is needed and is delivered in a timely way.
• A formal policy document is a useful basis for clarifying the roles, responsibilities, and accountabilities of key players—deputy heads, evaluation specialists, program managers, and central agency officials.
• The distinction between the "M" and the "E" needs to be clarified, including what each contributes to results-based management and what each requires regarding capacity building.
• A central agency champion for evaluation in government can play a key role in the M&E system. In Canada, the CEE serves as the policy center for evaluation, provides guidance, leads and promotes capacity development, and provides oversight to help ensure quality control.
• In developing the M&E system in Canada, a number of requirements have been phased in by the central agency, under the general philosophy of "try, adapt, learn, and adjust." This allows for a period of learning and for ease of adjustment, if needed; and it recognizes that building an M&E system is long term and iterative.
• In establishing internal evaluation units in departments and agencies, some flexibility is important to take account of the unique circumstances of each organization. Recognizing that one size does not fit all, deputy heads in Canada are given some flexibility in implementing the government's evaluation policy—although all are equally accountable for the performance of their individual organizations.
• Oversight by the national audit office is important in giving broad and public exposure to how well the M&E system is being implemented and whether adjustments are needed.
• It is important from the outset to think in terms of years, not months, in getting to a mature M&E system.

Building M&E Capacity
• Building an adequate supply of human resource capacity is critical for the sustainability of the M&E system. Additionally, "growing" evaluators requires far more technically oriented M&E training and development than can usually be obtained in one or two workshops.
• Both formal training and on-the-job experience are important in developing evaluators. Two key competencies for evaluators have been determined to be cognitive capacity and communication skills.
• Developing evaluators' communication skills is important to help ensure that the message of evaluation resonates with stakeholders. "Speaking truth to power" and knowing how to navigate the political landscape are both critical.
• Building a results culture within organizations requires program and senior managers to have enough understanding that they trust and will use M&E information. This likely requires a less technical form of training and orientation on M&E and performance-based management.
• There are no quick fixes in building an M&E system; investment in training and systems development is long term. A mix of formal training (given by the public sector, private sector, universities, and/or professional associations) and job assignments and mentoring programs is likely needed as a cost-effective approach.
• In introducing an M&E system, champions and advocates are important to help sustain commitment over the long term. Identifying good practices and learning from others can help avoid fatigue with the change process.
• Evaluation professionals have the technical skills to advise and guide program managers on the development of appropriate results-based performance monitoring systems, starting with the performance measurement framework and the identification of relevant indicators and measurement strategies.
• Ongoing performance monitoring (the "M") and the conduct of periodic evaluation studies (the "E") should be complementary functions that together form the basis for an appropriate and cost-effective performance measurement strategy.
• Data quality is critical for the credibility of an M&E system, and it likely requires implementation of a long-term strategy to develop sufficient data to populate results indicators. Key support can come from the national statistics office and from officials responsible for information management and information technology.

Source: Author's compilation.
Further Reading

A more detailed description and discussion of the Canadian M&E system can be found in Robert Lahey, The Canadian M&E System: Lessons Learned from 30 Years of Development, Evaluation Capacity Development Working Paper Series No. 23 (Washington, DC: Independent Evaluation Group, World Bank, 2010). In particular, details can be obtained on the following topics:
• key policies and procedures supporting the M&E system in Canada;
• roles and responsibilities of key players within the M&E system; and
• performance of the Canadian M&E system and assessments by the OAG.

Notes

1. In the Canadian system, the deputy minister of a government department (or the deputy head of an agency) is the most senior nonelected official in charge of, and accountable for, the department and its programs. He or she reports to a minister, an elected politician.
2. Information is available at http://www.tbs-sct.gc.ca/tbs-sct/index-eng.asp.
3. Information is available at http://www.tbs-sct.gc.ca/cee/index-eng.asp.
4. The Professional Designations Program is available at http://www.evaluationcanada.ca/site.cgi?s=5&ss=6&_lang=EN.

About the Author

Robert Lahey was the founding head of the TBS Centre of Excellence for Evaluation, the Canadian government's evaluation policy center. He has headed evaluation units in a number of departments and agencies over a 32-year career in the Canadian public service. Since establishing his own consulting firm in 2004, he has advised many countries and organizations around the world on building M&E capacity appropriate to their circumstances. He has written and presented extensively for the international M&E community and is a member of Canada's Evaluation Credentialing Board. Lahey has been recognized by the Canadian Evaluation Society for his contribution to the theory and practice of evaluation in Canada. The views expressed in this note are those of the author.

Acknowledgments

For their comments, the author thanks Richard Allen (consultant, Public Sector Governance Group), Philipp Krause (consultant, Poverty Reduction and Equity Department [PRMPR]), Gladys Lopez-Acevedo (senior economist, PRMPR), and Jaime Saavedra (acting sector director, PRMPR).

This note series is intended to summarize good practices and key policy findings on PREM-related topics. The views expressed in the notes are those of the authors and do not necessarily reflect those of the World Bank. PREMnotes are widely distributed to Bank staff and are available on the PREM Web site (http://www.worldbank.org/prem). If you are interested in writing a PREMnote, email your idea to Madjiguene Seck at mseck@worldbank.org. For additional copies of this PREMnote, please contact the PREM Advisory Service at x87736. PREMnotes are laid out by UpperCase Publication Services, Ltd.