
Introduction

The provision of advice to the government has always been one of the most important functions of the Public Service. In March 1997, the then Minister of State Services expressed some concerns about aspects of advice. The Minister commented on:

  • the inability of the Public Service to define clearly the outcomes the Government seeks to achieve and to put forward sound policy solutions for the Government's consideration
  • inadequate human resource capability in some departmental policy units
  • lack of attention to implementation issues
  • the counter-productive and debilitating consequences of departmental patch-protection.

After further discussion, the Minister and the State Services Commissioner agreed to initiate a project which would:

  • investigate ways in which structures, systems, human resources and management practices impact on the quality of policy advice
  • advise the Minister and the Commissioner about ways in which the quality of advice can be improved.

The State Services Commission's (SSC's) project on improving the quality of policy advice involved, among other things, interviews with Ministers, senior officials and managers to establish why a large and costly policy advice system apparently does not provide Ministers with the information they need to make sound decisions.

The project identified five main contributing factors:

1 lack of clarity in Ministers' statements about some desired outcomes and policy directions

2 insufficient incentives for active co-operation by departmental chief executives on cross-cutting policy issues

3 significant variations in standards of leadership, and in the performance of departmental policy units

4 substantial under-investment in capability development in the past and currently, resulting in shortages of highly capable policy managers and advisors

5 significantly inadequate and/or ineffective use of information, research, evaluation and consultation techniques as inputs to policy development.

The first two points were partly addressed in work in 1998 on strengthening strategic management, 1 which contributed to the new Strategic Priorities and the establishment of Ministerial teams. The third and fourth points are being addressed in the SSC's current work programme.

This paper discusses the final point on the quality of policy inputs - the use of information, research, evaluation and consultation - and also raises a further issue: the need to enhance central coordination mechanisms as a means to improve the quality of policy advice.

This paper is in five sections:

Section I: Overview. Key issues concerning inputs to policy advice, and summaries of the following four sections

Section II: Better information and research as inputs to policy advice

Section III: Options for encouraging more effective evaluation of the impacts of policies on desired outcomes

Section IV: Encouraging significantly better consultation as an input to policy advice

Section V: Options for enhancing central coordination mechanisms to improve the quality of policy advice.

1 State Services Commission, A Better Focus on Outcomes through SRA Networks, SSC Occasional Paper No. 3, Wellington, 1998.

Section I: Overview

The New Zealand policy machine: an unbalanced cycle

The New Zealand policy machine can be described as a sequence of closely inter-related and inter-dependent processes in which all activity is geared towards progressive improvement of outcomes.

Achieving good results - that is, well thought-out and well-implemented policies that very substantially achieve their purposes - depends on thorough and competent performance at each step:

  • evaluation that scrutinises the impacts of policies in place and assists to identify new policy issues
  • analysis that is searching and disciplined, and leads towards design of practical delivery instruments
  • coordination that ensures both efficient processes and productive dialogue among agencies and advisors
  • decision-making by Ministers that is well-informed and confident
  • implementation that puts necessary new architecture and instruments in place quickly and efficiently
  • consultation that keeps policy analysis and design in touch with the community's concerns and interests, without undue risk to the government's priorities and interests.

Initial work within this project has shown that some of the steps outlined above dominate New Zealand policy processes. Policy analysis and design of delivery instruments (based largely on theoretical frameworks), process coordination, and the design and management of implementation appear to have absorbed most of the attention and time of Ministers and officials. Other processes have suffered as a result. Particular gaps exist in areas such as evaluation, issues identification (including anticipation of emerging problems), long-term and forward-looking research-based policy analysis, public consultation, and strategic analysis and management.

This report looks at mechanisms for better balancing the New Zealand government policy machine. It focuses on policy inputs: information and research, external consultation, and evaluation (which is a stage in the policy process itself, as post-implementation (ex post) review, but which also provides valuable front-end policy-relevant information), as well as on coordination mechanisms, of which internal (inter-departmental) consultation is a key tool.

Since the report on the initial phase of the Improving the Quality of Policy Advice Project, there have been changes in the environment surrounding policy development. These changes, which are having, and will continue to have, a positive impact on the quality of advice tendered to Ministers, include the changes to the strategic management system (especially the new Strategic Priorities and the establishment of Ministerial teams and related institutional support) and changes driven by the strategic social policy deliberations. This report elaborates on these changes and their anticipated effects. It would be useful to review the new teams' impacts on the quality of advice after they have been in operation for about two budget cycles.

The following paragraphs summarise the key messages contained in Sections II through V.

Key messages - Section II: information and research

The problem

There is some dissonance between Ministerial perceptions that advice presented to them is frequently not underpinned by robust information and research, and policy managers' perceptions that this is not a particular problem. Significant numbers of Cabinet papers are turned back for more information, and independent assessments raise concerns about the quality of information, indicating that the problem is real.

Factors influencing the quality of information and research

The quality of advice and the information inputs underpinning it will always be open to improvement. The policy advisor's task is to sift through all relevant information for quality data and to draw logical and intelligent conclusions from this. Multiple and increasing sources of information - from primary research to the wealth of secondary sources accessed through the internet - are making the task more complex. The project has identified a number of factors impacting on this process and the resulting quality of advice:

  • very short time frames inhibit in-depth research: if organisations do not have robust information and research capabilities to anticipate demands they will never be ahead of the game, and the quality of their advice will suffer
  • short-term incentives in the public management system encourage a focus on the production of short-term outputs at the expense of longer-term capability: in information and research this becomes a false economy, as without longer-term research and information strategies policy agencies are not in a state of readiness for short-term demands for policy advice
  • a shortage of policy advisors and managers highly skilled in information management and use: particular skills shortages exist for analysts who can bridge the science/policy divide, that is, who can understand and use scientific data (physical and social). Many departments contract out highly technical research. In-house staff need to be capable of managing these contracts
  • information is typically generated in departmental silos as there are few incentives to share information and sources: this leads to particular problems with information gaps and overlaps, and makes it especially difficult to produce advice on issues that cut across departmental boundaries
  • New Zealand lacks the external think tanks that in other jurisdictions germinate ideas and examine policy issues which inform government policy deliberations and sometimes become actual policy proposals. Crown Research Institutes exist as a potential source of research (although social science research remains problematic), as do universities, but they usually only produce research for government when explicitly contracted to do so. This leaves a gap in forward-looking, environmental-scanning research that is not specific to particular interventions. This 'ground work' has to be done within departments, yet, as noted above, there is little scope and there are few incentives for it.

Solutions in progress to improve information and research

Several processes under way should enhance the quantity, quality and use of information and research as inputs to policy advice:

  • the applied social science research review: eight priority areas for cross-portfolio, long-term applied social science research have been identified. If funding issues are successfully resolved, this may begin to help close one of the current gaps in information required to address some of the most intractable policy issues, in particular in the social policy area
  • role of Statistics New Zealand: The Government Statistician has responsibility for the coordination of official statistics, and is working to identify gaps that can only be filled through directly surveying the population
  • higher central agency expectations: an expectation related to research and evaluation was included in the SSC's letter of expectation for the 1997/98 departmental performance assessment round. Results suggest that departmental performance is patchy, although mainly 'sufficient'. Over time, performance should improve as the expectation gets translated into action.
  • Ministerial teams and strategic priorities: Ministerial teams and supporting lead agencies are likely to demand and generate more information and research that is cross-cutting and linked to strategic priorities. This should help to close the gap in provision and incentives for cross-portfolio and outcome-based information.

Conclusions

Whether or not advice is backed by quality information, the brevity required in the presentation of advice, and the fact that advice, particularly in Cabinet papers, is generally not referenced with information sources, mean that there is no mechanism to assure Ministers that the assertions in advice are more than informed guesswork. Better referencing of material in Cabinet papers, in particular, would enable Ministers and their advisors to cross-check, thereby improving the transparency and contestability of advice.

Key messages - Section III: evaluation

The problem

All policy advice implies a view on causality: advice needs to identify intended outcomes, the risks that they might not be achieved, the risks of unintended outcomes, and how it will be known when and if the policy or intervention has succeeded. Without such robust intervention logic and evaluation criteria, advice is clearly sub-optimal.

The findings from this study indicate that, with a few notable exceptions, very little good quality outcome evaluation is undertaken. While a good deal of evaluation activity occurs - this was confirmed by the 1997/98 departmental performance assessment round - most tends to be focused on process, implementation, and efficiency, and used for internal management purposes. Very little is focused on outcomes or is systematically integrated into policy processes. Evaluation criteria are typically not built into advice at the outset, which makes future review problematic. While the system provides good information to Ministers about what is being produced and how efficiently, little is known about the actual effectiveness of interventions. As argued previously - for example in SSC Occasional Paper No. 3, A Better Focus on Outcomes through SRA Networks - this dearth of outcome evaluation constitutes a gap in the government policy machine.

Why the dearth of outcome evaluation?

This study identifies both demand and supply side reasons why outcome evaluation is not a strong feature in the public management system, including:

  • low Ministerial demand for evaluation - there is little in the way of incentives for Ministers to demand outcome evaluation; indeed there seem to be stronger incentives (in purchasing, and in budget arrangements, for example) for a short-term focus
  • historically poor outcome specification
  • the lack of any central-agency champion for evaluation
  • the lack of demand for outcome information in current accountability frameworks
  • funding difficulties
  • methodological hurdles
  • poor evaluation capability in departments (mirrored in the private sector).

Changes under way

There is evidence that the tide is already turning, and that the capability for and supply of outcome evaluation are increasing and likely to increase further, particularly as a result of demands driven by:

  • the Ministerial teams: the establishment of this new strategic management arrangement implies an increase in demand for evaluation. In order, among other things, to prioritise between competing departmental outputs and other activities, Ministers are likely to need good intervention logic before implementing policy advice (ex ante intervention logic), and ex post evaluation results
  • the communication of expectations from the SSC that departments should have a robust research and evaluation capability. Other SSC projects in progress (for example, the accountability project) are also likely to have an impact on the demand for and supply of outcome evaluation.

Options for encouraging more outcome evaluation

Increasing demand for outcome information is the best mechanism for increasing the supply of outcome evaluation and encouraging enhanced capability in departments. However, existing supply problems will not be solved overnight. The burden will be on departments to respond to sharpening demand. This may mean buying expertise and training, and could bring pressure for additional investments.

Given the costs of evaluation, significant methodological difficulties, and the need to ensure that the evaluation that is conducted generates useful and high quality information, a blanket mandatory approach to evaluation should not be attempted. Some central-agency guidance on what and when to evaluate would provide a useful support to departments in developing an appropriate evaluation capability.

Conclusions

Overall, the lack of outcome evaluation does constitute a gap in the quality of advice tendered to Ministers. Whether or not actual ex post evaluations are carried out (and more of this should occur), it is important that all policy advice should include robust intervention logic and evaluation criteria (see section III). Further work on benchmarking, and training and development for policy advisors, is also being undertaken.

Key messages - Section IV: consultation

The problem

In the face of strong pressures to solve a broad array of complex policy problems, Ministers must have access to comprehensive and timely information and advice about:

  • what works and does not work among existing policies and interventions
  • what needs to be changed and why
  • what alternatives are available, and with what short- and long-term consequences and implications.

Wellington-based advisors do not have ready access to all perspectives on these matters. At times they are even isolated from the perspectives of staff working in operations in their own departments, where operations have been separated from policy functions. As a result, advice tends to be based on theoretical frameworks, extant information and current practice, and the advice of 'dependable' sectoral interest groups. This leads to the risk that policy development can be out of touch with actual conditions in the community. To manage this risk, departments 'go directly to the people' - through various forms of consultation at various stages of the policy process - to solicit on-the-ground information, views, and reactions. Community consultation is the most practical means to get in touch with the community, but it does have costs and risks.

Since consultation was not included in the initial Improving the Quality of Policy Advice project definition, information was not sought from policy managers on the extent or methods of consultation utilised by departments in providing policy advice. Therefore, Section IV raises only general issues about consultation.

Costs and risks

These include delay and administrative overload, raised community expectations, consultation fatigue, the potential to arouse opposition and undermine policy proposals, and the hazard of unrepresentative views influencing advice. The overall costs and risks can seem daunting, but could probably be reduced if departments were geared up to consult in the course of their general business, rather than in elaborate set-piece projects. It is unlikely that many departments currently have the capability for this form of consultation.

Capability-building

This raises two issues:

  • building into operational programmes the time to conduct consultation, and the resources to meet costs
  • providing for the development of appropriate skills and competencies.

These would be likely to call for additional investments.

Conclusions

The question arises whether there is a systemic preference for quick-moving policy processes, and if so, whether this limits the scope for community consultation. Notwithstanding that, consultation is likely to become an increasingly important input to policy development. Departmental capability will accordingly need to be addressed. Consultation was a theme of the 1998/99 departmental performance assessment round, and this will provide empirical information to identify risks within the existing consultation processes.

Key messages - Section V: coordination

The problem

The SSC believes that one of the tools designed to improve policy coordination - that is, the centrally mandated interdepartmental consultation requirement - can potentially dull the freshness and directness of departmental advice. The way in which coordination mechanisms are currently applied tends to focus on improving the quality of process rather than the quality of advice. Furthermore, in practice, policy can be 'over-coordinated' to the point where synthesis can overwhelm analysis, so that policy advice reflects the lowest common denominator rather than presenting Ministers with a range of contestable and robust policy options.

Cabinet management and support

Officials committees supporting Cabinet committees are one of the main coordinating instruments. While these officials committees undoubtedly perform very important functions in controlling the quality and sequencing of advice to Ministers, they have a number of weaknesses. The most significant is that their function rarely moves beyond quality control and some limited lateral coordination to take in the 'big picture'. While departments complain about excessive emphasis on process and quality control, many perpetuate this by continuing to submit inferior papers - relying on the committees to iron out flaws in argument and presentation.

Mandatory interdepartmental consultation

Interdepartmental consultation on policy proposals is a necessary and important part of the policy process, particularly in a disaggregated system such as New Zealand's (which has a relatively large number of government agencies). It ensures that the implications of a proposed policy, including implications for the business of other agencies, are exposed and addressed, and it allows different perspectives and community networks to be tapped. The process falls down where there is no clear ownership of a policy proposal; where consultation becomes a substitute for, rather than an input into, policy analysis; where short time frames and compliance behaviour mean that consultation becomes a matter of 'sign-off' rather than an opportunity to have real input into a policy proposal; and where conflict between agencies takes precedence over contestability of ideas.

Guidance, education, and training

To a surprising extent, some policy advisors work in the dark - that is, with little induction into the art of coordination and preparation of policy analysis, and little guidance and support in their work. This results from relatively high turnover of staff (there is a tendency to 'poach' rather than 'grow' staff, and the relative lack of training and development systems reflects this), uneven competence at policy manager levels, the lack of consolidated guidance and advice, and the absence of a solid service-wide professional policy-advice culture. The upshot is that a significant number of advisors appear to learn their trade through trial and error. The error side of that equation poses a risk. However, a reasonable amount of guidance material is, in fact, available, although not in a coherent, centralised form. Such material is produced by different organisations and dispersed across various products (e.g. Cabinet Office guidance material, including the Cabinet Office Manual, and the SSC's own Policy Advice Initiative booklet).

Conclusions

A great deal of confusion surrounds the whole business of coordination of policy advice: its purposes, the best mechanisms, and where responsibilities lie for various aspects. The new Ministerial teams are likely to redress things in their areas of activity. Their impacts on the quality of policy advice will need to be assessed after an appropriate period. However, the bulk of government's day-to-day business will bypass them. Improvements can be made in the ways officials committees are constituted and operate. But the greatest returns are likely to come from encouraging better systems in departments to manage and ensure high quality policy advice.

Themes and Overall Conclusions

Five key messages have emerged from the study:

  • a prevailing short-term focus in policy development: the strongest impulses in the public management system are towards action: policy makers react to major problems, formulate quick solutions to them, take decisions, implement these and then move on to the next set of problems. Policy development processes tend, therefore, not to easily accommodate inputs that are reflective or require long lead-times, such as longer-term research and evaluation
  • uneven demand for high-quality inputs to policy development: Ministers and central agencies generally expect departmental policy advice outputs to be based on high-quality inputs. However, where gaps become evident between expectations and performance, Ministers and central agencies respond in a variety of ways: in some cases, sharp incentives will be applied to tackle the problem squarely, with individual Ministers demanding better performance from their chief executives; in others, alternative suppliers will be sought; and in others no action will be taken - less than adequate performance will simply be tolerated, or Ministers will, for various reasons, fail to appreciate that improvements are required from their chief executives. Only the first response can be expected to generate improvements
  • uneven departmental capability: the general view - that a few departments provide consistently high quality advice and a few consistently weak advice, while most provide advice that is of mixed quality (sometimes good, sometimes poor and otherwise merely adequate) - is supported by the departmental performance assessments. Anecdotal evidence indicates that a number of factors contribute to consistently strong performance, for example: high standards of leadership and commitment, sustained commitment to high quality inputs, strong commitments to freedom and frankness, and rigorous quality controls
  • persistent complexity in coordination: there are few, if any, major policy issues that do not overlap several sets of Ministerial and departmental interests. This, together with the fact that modern governments tend to set very extensive and ambitious reform agendas, generates a mass of concurrent policy development activity that can be extraordinarily difficult to sequence, prioritise, and coordinate. The Ministerial teams have been set up to tackle the problem in a number of sectors, but it will not be possible to judge their effectiveness for some time yet, and in any event a substantial part of government business proceeds outside their coverage. Various coordinating mechanisms, including the officials committees, are in place, but sequencing, prioritising, quality controlling, and providing effective support to Cabinet committees remain a persistent problem. In practice, centrally mandated interdepartmental consultation often ends up as substantially a signing-off exercise, rather than an opportunity for a range of contestable policy options to be debated
  • persistent short supply of highly competent policy managers and advisors: few departments can claim to have a good supply of skilled policy managers and advisors. Some are chronically short of them. Few have the resources to do much of a practical nature to solve the long-term supply problem. Advisors learn on the job, with the quality of their training substantially dependent on the skills and abilities of their managers - who also learn on the job. There is little in the way of formalised training, and while a reasonable amount of technical guidance is available it is not easily accessible.

This study has strengthened a view that problems with the quality of policy advice - the reasons why the government policy machine is unbalanced - boil down to questions of demand and supply. The demand end of the system does not press anywhere near hard enough for evaluation that scrutinises the impacts of policies already in place, analysis that is searching and disciplined, coordination that ensures productive dialogue among agencies and advisors, or consultation that keeps advice in touch with reality and practicality. Policy advice without these inputs is bound to be deficient - and sometimes seriously so.

The supply end of the system tends to respond to the things that are most in demand: that is, advice delivered in time-frames set to meet the needs of a policy machine that is always running at speed. Where they have the capacity to do so, some departments seek to develop evaluation, research, and consultation capabilities, but these are not yet solidly established. There seems to be a distinct risk that departments will be bypassed if they cannot match the pace of the policy machine.

There is little in the existing arrangements that can be expected to significantly improve things: performance, or lack of it in some cases, is bound to stay the same unless the demand end of the system is sharpened and the supply end responds by investing in capability development. Ministers have interests in both demand (purchase) and supply (capability), but purchase interests, as in other areas of the New Zealand public management system, have always tended to dominate. Changing this imbalance is a continuing challenge for both individual departments and the central-agency stewards of the overall system.

Section II: Better information and research as inputs to policy advice

Introduction

This section describes problems related to the generation and use of information and research in policy advice and current activities to address those problems, and suggests a further avenue for improvement.

The problem: Ministers dissatisfied - advisors unaware

There is evidence that Ministers are being asked to make decisions on the basis of insufficient information. Ministers interviewed in the course of the research expressed such concerns. 2 A survey of the minutes of two Cabinet committees in 1997 found that the committees requested further research or information on 44 of the 240 specific Cabinet committee papers considered (around 18 per cent). Even where Cabinet papers are not turned back, interviews with senior central agency officials and a review of Cabinet papers suggest that the information contained in the papers for Ministers is not always of a high quality, transparent, or contestable. 3

Paradoxically, in interviews for the Improving the Quality of Policy Advice project, Public Service policy managers, on the whole, did not appear to recognise that Ministers were concerned about the quality of the information contained in Cabinet papers. These different perceptions raise the question of how Ministers' demands are communicated to departments. Ministers may need to be more explicit in their feedback to departments. Chief executives should seek assurance that their customers are well served. Indeed, most departments do have client satisfaction feedback mechanisms, albeit of varying quality and utility.

While the information and research issue could be brushed aside as simply a question of perceptions, there is enough evidence to suggest that Ministers' concerns reflect real problems impacting on the generation and use of information and research, and its translation into policy advice. These problems are discussed below.

2 Interviews conducted by the SSC with three Ministers (State Services, Finance, and Customs) and twenty central agency officials and chief executives in December 1997.

3 Dr W Smith, Science into Policy: An evaluation of the use of science in policy formulation, a contract report for the Ministry of Research, Science and Technology, June 1998.

The role of information in the policy process

Information is the raw ingredient which, after being put through an analytical process, becomes the foundation for policy advice. The quality of policy advice depends on high quality information, which in turn depends on the substance and integrity of the sources from which it is drawn. Information used throughout the policy process - problem definition, analysis, options, decision-making, implementation, and evaluation - is derived using a range of methods:

  • trawling secondary material, e.g. through literature searches and the Internet
  • tapping expert knowledge, both local and international
  • extrapolating from existing domestic and international research
  • utilising existing data sets and statistics 4
  • public consultation, and consultation with stakeholders and client networks
  • purpose-built evaluation or meta-evaluations
  • primary (quantitative and qualitative) research.

It is the task of the policy advisor to sift through all relevant information and to draw logical and intelligent conclusions. The quality of the ensuing advice is influenced by constraints in resources and timing, the ability to understand the information generated, and the availability of high quality research 5 and data. If these influences are not properly managed, the value added by the policy advisor and the quality of advice tendered to Ministers will suffer.

4 Including those provided by Statistics New Zealand.

5 The Ministry of Research, Science and Technology uses a definition of research adopted from the OECD Frascati Manual (1993).

The effective supply and use of information: conditions and constraints

The provision of information-rich policy advice is the responsibility of departmental chief executives and their policy managers. Their decisions about the use of information in policy advice are critical. At present, these officials face several disincentives and constraints, at both individual and system levels, to generating and using high quality information. 6

Time constraints

Officials noted that Cabinet papers are increasingly prepared under conditions of urgency. Time constraints may limit policy analysts' capacity to consult with researchers and to obtain and use their knowledge and insight. Equally, time limitations may constrain the commissioning of new research where existing information is unavailable. There is widespread agreement among policy managers that policy research is hindered by the demand for fast solutions. The SSC's review of departments 7 suggests that most departments perceive themselves as having in place arrangements to 'get ahead of the game' by anticipating future advice needs and developing proactive research strategies. Monitoring the effectiveness of those arrangements is part of the SSC's ongoing role.

Skills and capabilities

On the whole, policy analysts and graduates were well trained in the use of information at a general level and could be up-skilled in certain areas where necessary (e.g. statistical analysis). Policy managers, however, found it difficult to recruit specialists, such as staff with the capacity to bridge the science/policy divide, or who could better integrate social science research into the policy process. Few departments have significant in-house capability with highly technical data or research, preferring to employ a limited number of specialists and experts, and to contract out most advanced research. While contracting out is not a barrier to producing information, and can bring efficiencies of scope, it is important that departments have enough specialist expertise in the science infrastructure and the dynamics of scientific research to enable contracted research to be translated - with a knowledge of its limitations and uncertainties - into timely and clear policy.

Coordinating information resources 8

Cross-cutting policy advice requires cross-cutting information and research. Officials working within their departmental silos often need to coordinate their information needs to ensure they have broad information on which to base their policy advice. For example, advice on strengthening families or reducing disparities between Māori and non-Māori requires information sharing between departments. A lack of coordination between departments can result in information gaps, or in the costly duplication of information gathering (e.g. service delivery agencies each hold overlapping administrative data sets, with significant time-series gaps). While there is evidence of ad hoc coordination - such as networks across the Public Service of those working to provide high quality information in policy advice (e.g. chief scientists, research units, and officials working in research-intensive projects) - coordination of research between departments is often poor because of the weak incentives to consider whole-of-government perspectives. Horizontal coordination between departments on whole-of-government or sector issues can be problematic because of the systemic emphasis on vertical relationships between Ministers and their departments.

Long-term and strategic research

The sub-optimal production of long-term cross-cutting research is a related government-wide problem. At present, no one agency 'owns the problem'. As Allen Schick commented: "certain actions and outcomes fall between the cracks ... in New Zealand because managers know precisely what they are responsible for." And though that "problem is not unique to New Zealand, it is aggravated by the purchaser-supplier split and the sharp focus on outputs". 9

Crown Research Institutes were set up to have the capability to do longer-term research - although there remains a gap in social policy, arguably the most difficult sector for government, with the demise of the Planning Council and the failure of the New Zealand Crown Research Institute for Social Research and Development Ltd. Two problems appear prevalent with the use of Crown Research Institute research in policy. First, Crown Research Institutes and departments are having difficulty aligning research programmes with policy needs. 10 Second, research in the Crown Research Institutes specifically commissioned by departments brings with it the departments' short-term focus.

For individual departments, current budget arrangements and annual purchase agreements encourage an emphasis on short-term outputs, at the expense of longer-term capability. However, without some investment in long-term research - e.g. data series to support social science research - policy agencies might find themselves ill-equipped to meet the short-term demands for policy advice. Departments making such an investment might need to trade off short-term outputs in favour of longer-term research. Long-term research may, therefore, undercut the current performance of a department. Unless Ministers have the foresight to purchase long-term research as an investment in its own right, chief executives will be placed in the invidious position of choosing between focusing on yearly performance (and thereby running down the department's long-term capability) and investing in long-term research (and jeopardising their own performance review). There is a balance to be struck.

Despite improvements reported in the recent departmental performance assessment round, there still seems to be too little emphasis placed on future capability. The expectations communicated by central agencies - that departments are to plan long-term research - may provide incentives that counteract the short-term results focus. Again, the SSC should monitor whether these expectations are being met.

External research 'feeder'

The problem for government of ensuring high quality information in policy advice is compounded in New Zealand by the absence of a research capacity outside the government sector. Although the universities do produce policy-related information, there are relatively few organisations outside the Public Service that can produce robust information that can be used by government. New Zealand does not have much in the way of think-tank organisations - such as the Brookings Institution, the American Enterprise Institute, and the RAND Corporation in the United States, or the Institute of Policy Studies and the Tavistock Institute in the United Kingdom - where many new and independent ideas can germinate before forming part of government policy deliberations.

6 The conditions and constraints were collated from: Dr W. Smith, op. cit.; D. Stewart, Monitoring the Technical Content of Policy Advice to Government, a report for the Minister for Research, Science and Technology, 1995; interviews conducted by the SSC with twenty central agency officials and chief executives in December 1997, and with forty policy managers in the first half of 1998; Draft Report, Ministry of Research, Science and Technology, Research, Science and Technology in Support of Public Policy: Identifying Present Practice, Present Issues and Priorities for Action, 1998; and the SSC ownership report based on the research and evaluation expectation for the 1997/98 departmental performance assessment round.

7 Collation of Public Service departmental performance assessments for the year ended June 1998, State Services Commission, December 1998.

8 These comments are derived from interviews conducted by the SSC with policy managers during the first half of 1998.

9 Allen Schick, The Spirit of Reform: Managing New Zealand's State sector in a time of change, State Services Commission, August 1996, p. 73.

10 Draft Report, Ministry of Research, Science and Technology, Research, Science and Technology in Support of Public Policy: Identifying Present Practice, Present Issues and Priorities for Action, 1998.

Work already under way to improve the quality of information and research in policy advice

None of the above issues is new. Indeed, a significant number of projects and processes are already under way to address them, which should help to improve the quality and use of information in policy advice. Their objectives and likely impacts are outlined below.

Strengthening the capability to produce long-term research

Long-term social science research

Government recently agreed to eight priority areas for cross-portfolio, long-term applied social science research. These eight research areas have been identified to generate evidence-based research to underpin work on some of the most challenging public policy issues. They are expressly designed to counteract the incentives in the public management system which militate against a whole-of-government perspective.

Central-agency expectations

In 1997/98 the SSC introduced a requirement for departments to report on their provision for research and evaluation. This requirement was designed to build provision for future policy advice into the SSC's expectations of departmental performance. It arose from a Cabinet directive to the SSC and Treasury to ensure that guidelines for the preparation of performance and purchase agreements made appropriate provision for research and evaluation. Departments were asked to demonstrate that they have:

  • appropriate provision and use of research and evaluation in support of policy advice and programmes
  • integrated a research and evaluation strategy with their overall business strategy.

The SSC now believes that the integration of research and evaluation into departments' business is improving.

Strengthening coordination

Strategic priorities: Ministerial teams

During 1998 the SSC sought to ensure that strategic management was strengthened. The subsequent establishment of Ministerial teams and their related departmental arrangements was aimed at ensuring stronger coordination across the Public Service. Departments will face demands to produce information related to outcomes identified in strategic priorities. Some additional cross-agency research might be identified and commissioned by the lead agencies supporting Ministerial teams.

Role of Statistics New Zealand

The Government Statistician has the task of coordinating the official statistical system across government. This includes work on integration of statistics (e.g. survey protocols, protocols for matching administrative data sets to provide new statistical databases) and work on a social statistics strategy (including putting in place the infrastructure for effective coordination of social statistics).

Ministry of Research, Science and Technology projects (including Foresight)

The Ministry of Research, Science and Technology is currently engaged in a project that seeks to enhance systems for ensuring that government decision making is effectively informed by scientific and technical advice. The project will seek to identify limitations on knowledge inputs to the development of policy advice across government, and establish practical options for resolving these problems. This project, known as 'the Foresight Project', will develop a set of priorities for government's investment in research, science, and technology. By consulting over a period of several years, the project is creating incentives for a series of strategic, ongoing conversations among departments and other stakeholders in the New Zealand science and technology sector about their current and future information needs.

Whole-of-government Information Strategy

The document Leading the Public Sector into the Information Age 11 outlines how greater coherence could be achieved among the government's information assets to bring about further efficiency and effectiveness gains while improving Public Service responsiveness.

11 Draft State Services Commission Occasional Paper, forthcoming.

Conclusions

If the range of problems in the supply and use of information and research highlighted above paints a discouraging picture, it should be remembered that, as The Hawke Report 12 observed, there will always be room for improvement in the quality of policy advice and its underpinning information inputs. Ministers and senior officials will always find that policy advice could have included more information, and could have been better given more time, more resources, and more expertise. A competitive policy market should encourage policy managers to continuously improve their policy advice.

Senior officials interviewed as part of this Improving the Quality of Policy Advice project believed that information in policy advice has improved and is improving over time. Further improvements are in order. But there are trade-offs to be balanced: the quality of information versus timeliness, and the quality of information versus the cost of the skills required to use highly technical information. In terms of potential levers to improve information and its use, there is a further trade-off between managerial autonomy and central control. Solutions need to be consistent with New Zealand's devolved public management system and departments' responsibility for managing their own capability and performance.

Clearly some Ministers are dissatisfied with some departmental information inputs. However, a number of projects under way will provide incentives and signals for policy managers to improve the information inputs into policy advice, and these should address at least some of the concerns of Ministers. It would be unwise to impose any new central directives on the use of information before departments have had time to react to the signals and incentives created by these new projects. The SSC will be in a position to judge the effectiveness of these projects each year through the 'research and evaluation' component of its departmental performance review.

One additional measure is also in order. Cabinet papers are the end product of policy advice, yet the information they contain cannot be tested effectively for quality by either Ministers or other departments. Good referencing of information in Cabinet papers would reassure Ministers that bald assertions are backed by robust information, and would enable others - e.g. departments, senior officials - to contest that information and its use. Such a measure would also establish an effective audit trail for future policy development.

12 Ministerial Review, Drawing on the Evidence: Social Science Research and Government Policy (The Hawke Report), Ministry of Research, Science and Technology, Wellington, 1995, p. 41.

Section III: Options for encouraging more effective evaluation of the impacts of policies on desired outcomes

Introduction

This section describes the current state of outcome evaluation in the Public Service, and provides options for encouraging more effective evaluation of the impacts of policies on desired outcomes. It updates a previous paper published as "Looping the Loop: outcome evaluation and other risky feats". 13

13 SSC Occasional Paper No. 7, June 1999.

What is evaluation?

There are as many definitions of evaluation as there are books written about it. Those definitions range from a narrow interpretation based on quasi-experimental research designs to broad conceptualisations that include, among others, applied social science research, monitoring, or even various types of audit. For the purposes of this paper, outcome evaluation means any systematic attempt to measure empirically the impact of some government intervention on desired outcome(s). For outcomes, the definition in section 2 of the Public Finance Act 1989 is adopted: "the impacts on, or consequences for, the community of the outputs or activities of the Government". In terms of policy advice, evaluation has both ex ante and ex post dimensions. The ex ante dimension is reflected in good intervention logic in policy advice; the ex post component means testing the impacts of an intervention or mix of interventions after the fact, and feeding the results back into future policy advice.

This section includes evaluation of 'small' outcomes that are in the domain of one portfolio, as well as broad outcomes reflected in the Strategic Priorities (and formerly in strategic result areas). It does not deal specifically with formative evaluation - that is, evaluation focused on implementation and process issues - except where this is relevant to evaluation capability.

Why is evaluation important to policy advice?

All policy advice implies a view on causality. It is legitimate, therefore, to ask that this view be made explicit: for advice to identify intended outcomes, the risks that they might not be achieved, and the risks of unintended outcomes (in either the same or another portfolio area) occurring instead (the intervention logic), and how it will be known and can be tested when and if the policy or intervention proposed has succeeded or failed (the evaluation criteria). Ex post, evaluation is an important tool for identifying policy successes and failures; for confirming the merits of an intervention, or for identifying where some adjustment or even termination is in order. Evaluation should help to identify where resources are not being put to the best use and could be redirected. As such, it is an essential input to good quality policy advice and can provide information that offers a stronger basis for new policy. As the Department of Labour evaluation guidelines argue:

    "The advice given to Ministers on policy, and the decisions made by managers on how to implement policies and programmes, can be (and frequently are) based on pure a priori reasoning or intuition. But a priori reasoning and intuition, on their own, are not adequate. Having a process for systematically testing whether a policy or plans behave as predicted is likely to help us develop better policy and plans so that, next time around, performance is improved. This, in a nutshell, is why evaluation is important". 14

Whether or not an actual evaluation is carried out ex post, robust intervention logic and evaluation criteria are important ingredients in quality policy advice.

14 Evaluation Guidelines, New Zealand Department of Labour, Labour Market Policy Group, 26 October 1995, p.4.

What is the current state of outcome evaluation?

It is difficult to assess the amount of outcome evaluation that occurs in the New Zealand Public Service. Interviews with policy managers (reported in Section II above) suggested that very little occurs. The dominant adjective used by those managers to describe the level of outcome evaluation was 'modest', although most were unable to say how much occurs, and few had an actual evaluation strategy or guidelines. During the departmental performance assessment round, chief executives claimed that a significant amount of evaluation was being undertaken. This study indicates that, while there is evidence of a good deal of evaluation activity, most tends to be focused on process, implementation, and efficiency, and used for internal management purposes. Very little is focused on outcomes, and evaluation is not systematically integrated into policy processes.

This lack of evaluation as an input to policy advice was confirmed by a sample of Cabinet papers from April 1998. Those papers were reviewed by the SSC to assess the extent to which policy proposals presented to Cabinet included a proposal to evaluate results or at least included an intervention logic that was transparent enough to allow evaluation to be carried out at some later point. That research showed that only seven per cent of proposals included evaluation criteria. 15 Evaluation is typically not built into the policy advice at the outset, thereby making future review problematic.

There are some differences across the Public Service in the approach, extent, and focus of evaluation activity. Some departments have developed a specific evaluation capability. For example, the Labour Market Policy Group in the Department of Labour has a dedicated evaluation group (and a requirement that evaluation is an integral part of all new policy development, as well as a three-year evaluation plan for existing activities), as does the Social Policy Agency in the Department of Social Welfare, and the Inland Revenue Department. Some agencies include evaluation as part of a broader research capability (for example, the Ministry of Education, the Department of Internal Affairs and the Ministry of Justice). Te Puni Kōkiri (the Ministry of Māori Development) has an evaluation and monitoring unit aimed at evaluating services provided to Māori by other agencies. The environment and transport sectors have strategies to develop outcome information, including through evaluation and indicators. Other agencies conduct occasional intervention-specific evaluations on an ad hoc basis. The Education Review Office is the only organisation dedicated to evaluation (albeit of the performance of educational institutions, not overall educational outcomes).

The relative dearth of outcome evaluation constitutes a gap in the quality of policy advice tendered to Ministers, both individually and collectively. While the current system delivers to Ministers robust information on what is being produced and how efficiently, little information is supplied on the effectiveness of interventions in achieving policy goals. More and better evaluation against outcomes would help to narrow this gap.

15 Unpublished report, "Review of sample of Cabinet papers to determine the level of evaluation proposed", State Services Commission, 1998.

Why the dearth of outcome evaluation?

There are both demand and supply side reasons for why evaluation of outcomes is not a strong feature of the New Zealand public management system.

Low Ministerial demand leads to poor supply

Ministers frequently do not demand evaluation information, and in some cases are reported to have discouraged it (in purchase agreement negotiations). When resources are scarce, more outputs are preferred over investment in evaluation. There are few incentives for Ministers to set their own policies up for future scrutiny, including the fact that they are not required to report on the achievement of outcomes. The Public Finance Act 1989 implies that Ministers specify outcomes from departmental outputs ex ante, which they do in the form of outcome statements in the Estimates, but there is no corresponding ex post reporting requirement. Moreover, these statements relate only to departmental outputs and not to significant other expenditure such as transfers, nor to taxation or regulatory interventions (although regulatory impact statements have a similar role in relation to regulatory interventions). This lack of Ministerial interest in evaluation contrasts markedly with the European scene where, as the public management and evaluation expert Christopher Pollitt has recently argued, Ministerial demand has created something of an evaluation boom (although he suspects this is because Ministers have discovered the marketing uses of evaluation information; he terms this its utility as political 'baubles'). 16

In the absence of demand, departments typically do not supply evaluation, although this could be seen as an omission in terms of their responsibility to provide free, frank, and full advice, not only to current Ministers, but also to future Ministers. This responsibility is implied in:

  • constitutional convention
  • legislation (section 32(b) of the State Sector Act 1988 and section 9(2) of the Official Information Act 1982)
  • official documents, including the SSC's The Policy Advice Initiative, which lists under 'coverage', as a measure of policy advice performance, "...the regular evaluation of the impacts of government spending and regulation on the outcomes desired by the Government..." (p. 10), and the Public Service Code of Conduct, which notes as a responsibility of public servants the provision of "comprehensive advice to Ministers" (p. 11).

The responsibility to provide free, frank, and full advice implies a need to monitor the impacts of policies, including evaluation, in order to inform that advice. There may be a tension between this responsibility and Ministers explicitly choosing not to purchase evaluation, although the implications of this tension vary according to the level of specification of policy advice in purchase agreements. Evidence suggests that there is a good deal of variation in how detailed Ministers are in specifying what 'policy advice' they want to buy.

The incentives on Ministers to purchase for the short term (annual purchase agreements, political cycles), coupled with the relative level of specification and control over what policy advice is purchased, and the seeming lack of awareness on the part of the Public Service of what constitutes free and frank advice, all contribute to the low level of outcome evaluation.

Poor outcome specification

It has been difficult, if not impossible, to evaluate policies and programmes where the desired outcomes have never been clearly identified, or where they are subject to changing policy settings. A paper released in October 1997, A Better Focus on Outcomes through SRA Networks, 17 described problems with the variable quality and specificity of Strategic Result Areas, and the fact that the Public Service has not been particularly good at disaggregating those (albeit poor quality) broad outcomes into more operational policy objectives. The SSC's Strategic Result Areas analysis work 18 has shown how difficult outcome specification and disaggregation are. The new strategic priorities and ongoing work to further clarify desired outcomes (including through the development of outcome indicators) should go some way towards improving this situation. This work should also help to get Ministers into the habit of specifying desired goals (whether Strategic Priorities or other outcomes) more clearly, and advisors into the habit of engaging in dialogue about how they might be achieved.

Outcome/output dichotomy and accountability frameworks

The accountabilities in the Public Finance Act 1989 have been interpreted so that departments have focused almost exclusively on outputs, without corresponding attention to outcome specification and intervention selection. With justification, central agencies have been loath to see outcome measures used for accountability purposes, for fear that this would distract departments from their outputs focus, and have maintained that departments should not be held accountable for things that are outside their control (as outcomes are). Several attempts were made over the last ten years to solve the outcome dilemma - including setting up, in 1995, a group of senior officials from the SSC and the Treasury to look at how outcomes could be built into the strategic management system - but all got stuck on the need to maintain a focus on outputs, and on the anathema of tying accountability to outcomes. There was an implicit agreement that explicit contracting mechanisms linked to accountability frameworks - what Allen Schick criticised as 'accountability by specification' 19 - were the best way to ensure that the public sector delivered on the government's desired goals.

A new balance between outputs and outcomes is being sought. This perhaps reflects a maturation of the New Zealand public management system: the outputs focus is now well entrenched, and fears have subsided that integrating outcomes more explicitly into the system (whether as a requirement of quality policy advice or specifically in accountability frameworks) would undermine that focus. There has been a growing realisation that organisations should make explicit the links between their own activities and desired government outcomes. New arrangements included in the forthcoming 1999/2000 chief executive performance agreements and the key result areas assessment processes (outlined below) are part of building this outcomes focus.

Disincentives in Budget arrangements

There have been limited cross-portfolio incentives for reprioritisation (which would create the need for evaluation to see what is working, what is not, and where reallocation could occur). Within portfolios, most scrutiny of the merits of interventions occurs in relation to new initiatives, while the existing stock of policies and interventions rolls over as part of existing baselines. Recent attempts to align strategy and budget, in particular those announced by Cabinet in relation to Strategic Priorities and the role of Ministerial teams, indicate that teams will have a role in identifying low-priority areas within the baselines of departments included in those team networks. These processes should help to encourage reprioritisation, and the evaluation needed to support it.

Funding issues remain. Evaluation - particularly empirical research - is costly, and often requires longer time frames than can be accommodated in current budget practices (despite multi-year baselines). Anecdotal evidence from interviews with policy managers suggested that some proposals for evaluation had been quashed by Ministers on the advice of the Treasury. Some flexibility and more of a medium-term focus are required. Clearly, evaluation needs to be built into the cost of policy advice so that it is not seen as a dispensable extra.

Lack of an evaluation champion

Central agencies themselves have over the last decade given mixed messages to the Public Service about the desirability of conducting outcome evaluation. This partly reflects the fear that a focus on outcomes would distract departments from their 'proper' focus on outputs.

Unlike the Australian situation, where the Department of Finance and Administration has played a leading role in creating the expectation that programmes should be evaluated (albeit usually with a focus on efficiency rather than effectiveness), the Treasury has not encouraged evaluation. A recent publication by a senior manager at the Treasury questioning the merits of evaluation suggests that at least some elements of the Treasury remain sceptical about its benefits. 20

The SSC's own Reviews Division, which conducted some evaluation itself, was disbanded in the early 1990s due to a number of factors, including the belief that the information generated by those reviews was already known to the relevant portfolio Ministers.

Also unlike their overseas counterparts (for example the US Congress, to which the US General Accounting Office provides detailed evaluation results), the New Zealand Parliament has not shown much interest in outcome evaluation information. This remains true despite advice from the Audit Office that Parliament should probe the intended and actual impacts of interventions in the context of Estimates examinations. The Office of the Auditor-General is currently conducting a survey of the Public Service to ascertain the level of impact evaluation and is likely to report to Parliament suggesting that more should be undertaken. The Office has consulted with the SSC in relation to this work.

Methodological issues

With some justification, departments cite problems with establishing causality - the difficulty of showing, let alone robustly measuring, the causal links between policy advice, ensuing policy decisions, and later outcomes - as a reason for not evaluating, particularly in the social policy area. A multiplicity of factors impact on the shape of final outcomes, many of them outside the control of government.

Information gaps also limit some evaluation. For example, some time-series analysis is limited by a lack of historical data, and there is a need for outcome indicators in key areas such as social policy (see Section II on research and information). Other methodological issues make evaluation problematic. For example, experimental or quasi-experimental research conditions are difficult to create in social policy areas, and the use of control groups can raise ethical issues. 21 The choice of method depends on the type of intervention to be evaluated, the variables to be included, and the complexity of environmental factors. Evaluators need to be aware of the limitations of the various approaches, as do policy managers, whose expectations tend to be even less realistic.

However, the fact that evaluation is difficult and its methods imperfect does not mean it should be relegated to the too-hard basket. There is some difference of opinion about how certain results need to be. One hundred per cent causality is almost always impossible to establish, but decision makers may not need that level of certainty. If the correlation between an activity and the desired outcome is high, and if plausible alternative explanations can be ruled out, then there is a reasonable basis to argue that the intervention was effective. Several policy managers admitted that their attempts at evaluation against outcomes were rudimentary, but that the information they provided was better than none at all. In the final analysis, building evaluation criteria into policy advice forces Ministers and advisors to think about causal linkages and impacts on outcomes, and enables those policies and their underlying assumptions to be tested ex post. The choice about whether to evaluate, and how, should be made on the basis of full and robust information about the type and quality of information that evaluation can generate. Dialogue between policy people and evaluators is essential.

Poor evaluation capability

There is a shortage of skills and capabilities in the public and private sectors, both to carry out evaluation and to manage external evaluators. Judith Aitken, Chief Executive of the Education Review Office - who, having statutory responsibility for evaluating the performance of educational institutions, has confronted this skills gap perhaps more than anyone - has characterised the state of affairs as follows:

  • Our evaluation skills are not only fairly amateur; they are generally lacking in professional rigour
  • We can locate very few training opportunities for those interested in evaluation
  • We can find almost no academic courses in evaluation....
  • In the Public Service we lack evaluation-specific skills, techniques, institutions, professional codes and, most of all, professional practitioners. 22

There is some evidence that improvements are already occurring in this domain, albeit slowly. Increasingly, staff of government departments are playing an active role in, for example, the Australasian Evaluation Society, which runs workshops and disseminates information on actual evaluations and on theoretical and practical issues arising. A professional (pilot) evaluation training course run this year under the auspices of the Australasian Evaluation Society was heavily oversubscribed. The Master of Public Policy programme at Victoria University of Wellington offers an elective course on monitoring and evaluation for the first time in 1999, and Massey University's Diploma in Public Sector Management will include a module on public sector evaluation from early 2000.

Structure of government

The disaggregated State architecture might also have influenced the level of evaluation. When separate functions are carried out by separate agencies (policy/operations, purchaser/provider splits), the responsibility for evaluating effectiveness or outcomes is also split. In theory, this is a positive split: operations (including Crown entities and other providers) would be responsible for evaluating policy implementation, and policy Ministries for evaluating impacts or outcomes. A cascade of evaluation is required. But this in turn requires explicit commissioning on the part of policy managers, and good coordination between policy functions and operations, in order to get the full picture of the overall impacts of policy on outcomes - a picture which would in turn inform future policy development. This may not be happening as well as originally envisaged.

The disaggregated machinery of government (MoG) also means that achieving some outcomes - those that cut across portfolio boundaries, such as good health status or reduced disparities between Māori and non-Māori - is beyond the scope of any one agency. It also means that mixes of interventions (from different agencies) are not evaluated (although there is some evidence of interdepartmental collaboration on specific evaluations, for example between the Department of Labour and the Social Policy Agency in the Department of Social Welfare). If there are few incentives to evaluate within a portfolio, there are even fewer incentives for chief executives to collaboratively review the combined effectiveness of various policies aimed at the same broad outcome. And as Alex Matheson recently noted, the agencies supporting the strategic management process (that is, the central agencies) "...have so far failed to provide the external impact information and analysis that will support the assessment of the national benefit derived from current strategies and the development of new strategies". 23

The Strategic Result Areas (SRA) network model 24 was designed to address some of these strategic management and coordination issues. The support structures around Ministerial teams (in particular the role of lead agencies) might provide the mechanism to ensure that evaluation is targeted and occurs at all relevant points: implementation, individual interventions aimed at 'small' outcomes, and using those results to review mixes of interventions aimed at broad outcomes or strategic priorities.

16 Christopher Pollitt, "Evaluation in Europe: boom or bubble?", Evaluation, Vol. 4(2), 1998, pp. 214-24.

17 State Services Commission, A Better Focus On Outcomes Through SRA Networks, Op. cit.

18 Ibid.

19 Allen Schick, The Spirit of Reform: Managing the New Zealand State sector in a time of change, State Services Commission/The Treasury, 1996, p.84.

20 Peter Bushnell, "Does Evaluation of Policies Matter?", Evaluation, Vol. 4(3), 1998.

21 Ethical issues are raised in a number of areas related to evaluation, including the process of conducting an evaluation. See, Guidelines for the Ethical Conduct of Evaluations, Australasian Evaluation Society Inc. 1998.

22 Judith Aitken, "The way we carry out the Queen's business: big issues to be dealt with over the medium term: information and evaluation", in Future Issues in Public Management, State Services Commission, 1997, p 44.

23 Alex Matheson, "Governing Strategically: the New Zealand experience", Public Administration and Development, Vol. 18, 1998, p. 361.

24 State Services Commission, A Better Focus On Outcomes Through SRA Networks, Op. cit.

Avenues for improvement: options for encouraging more outcome evaluation

Options for encouraging more outcome evaluation also fall into demand-side and supply-side categories. However, the focus for improvement should be on demand factors, as increasing demand for outcome information is likely to drive a corresponding development in evaluation capability. Recent developments - strengthening the strategic management system within the State sector through better-specified Strategic Priorities and the roles of Ministerial teams - will potentially increase the demand for evaluation. Indeed, those arrangements will arguably fail if not supported by good outcome evaluation: evaluation is a critical success factor in these reforms. It should also be stressed, however, that skills and capability gaps will not be closed overnight.

Options for encouraging more evaluation are outlined below. In the interests of completeness, the report includes not only new options, but also the likely impacts of recent initiatives related to Ministerial teams and strategic priorities, and measures likely to stem from ongoing SSC projects.

Increasing Ministerial demand for evaluation information

The role of Ministerial teams implies an increase in demand for evaluation. Ministers will need to demand good ex ante intervention logic to enable them to prioritise between competing departmental outputs and other activities, and similarly to commission evaluation to support those decisions and to enable future reprioritisation. While recent developments apply mainly to strategic priorities, the processes, systems, and capabilities needed to meet their information requirements should mean that good practice becomes visible and, as a result, permeates day-to-day decision making in departments. While initial demands will fall directly on lead agencies, those agencies will need to ensure that good intervention logic is supplied in policy advice from supporting agencies, and to commission evaluation from those agencies (and other providers). Non-lead agencies will need to respond to these demands if they are to prove the merits of their outputs and the links to strategic priorities.

The role of teams in ensuring well-specified Strategic Priorities and developing outcome indicators will provide a stronger basis on which Ministers can demand outcome evaluation. Outcome indicators, in particular, will give agencies a tool for tracking progress towards outcomes, against which they can assess the effectiveness of their own activities.

In addition, some form of ex post outcome reporting requirement on Ministers would significantly enhance the incentives to demand outcome information. This might be an avenue to revisit once systems for managing the achievement of Strategic Priorities - including the development of outcome indicators - are more entrenched. Such a revisit might form part of an overall review/evaluation of the implementation and impacts of Ministerial teams and their surrounding structures and processes.

More demands from Parliament

As noted above, the Office of the Auditor-General is undertaking work to encourage Parliament to demand more impact evaluation - referred to in this paper as outcome evaluation - by raising awareness of the benefits of such evaluation and highlighting examples of successful evaluations. Depending on how this work progresses, and how it is received by Parliament, Ministers might find that more outcome information is demanded of them, particularly in select committee processes.

Increased central-agency expectations

State Services Commission: The SSC is building expectations of evaluation and a focus on outcomes into the preparation of the chief executive performance agreements for 1999/2000 and the assessments of draft Key Result Areas (KRAs). A memorandum foreshadowing material to be included in the pro forma was sent to chief executives on 10 November 1998. It gave chief executives early notice that there would be additional emphasis on several themes, including:

  • clearly articulating the logic which links departmental outputs and objectives to the Overarching Goals and Strategic Priorities of the Government
  • a strong focus on outcomes (and their evaluation) and the contribution which departmental outputs make to their achievement.

This builds on the letter of expectations for the SSC's review of departmental performance 1997/98, and on subsequent additional guidance sent to chief executives which dealt specifically with research and evaluation. That guidance noted that the SSC expected departments to:

  • recognise the need for policy advice to be adequately supported by appropriate research and evaluation results
  • be able to demonstrate integration and linkage of a research and evaluation strategy with their overall business plan...
  • make appropriate provision for research and evaluation (including cross-portfolio, long-term programmes) so that adequate information is available when future policy options are considered and existing and new programmes reviewed
  • demonstrate the use and application of this research and evaluation in the development and review of policies and programmes. 25

Results from the departmental performance assessment round, consistent with information collected for the Improving the Quality of Policy Advice project, showed significant variation in the importance that departments attach to evaluation.

After being removed from the 1998/99 expectations letter - to reduce the risk of 'assessment creep' 26 - the research and evaluation expectation has been reinstated. The need for evaluation and good intervention logic in policy advice will be an ongoing theme in the SSC's relationship management with chief executives and their agencies.

The Treasury: The Treasury has expressed concerns (mainly internally) about the lack of ex post review in the New Zealand system, but to date has not embarked on any specific work to encourage more outcome evaluation.

Department of the Prime Minister and Cabinet: In her letter of 9 November to the leaders of Ministerial teams, copied to all chief executives, the Prime Minister noted that:

    "My firm expectation is that the lead agencies, working with the Department of the Prime Minister and Cabinet, will take a leadership role in ensuring that Ministerial team leaders are properly supported, that quality advice is available and that appropriate coordination is taking place".

The Department of the Prime Minister and Cabinet is aware of the need for better intervention logic and more effective outcome evaluation and will promote this in their policy advice quality control function (related to strategic priorities, and the Department of the Prime Minister and Cabinet's support to Ministerial teams and to policy advice generally).

In the course of this research, the possibility of including in Cabinet Office guidance material advice on what constitutes good quality policy advice, and on the need for explicit intervention logic and evaluation criteria in all policy proposals, was discussed with the Cabinet Office. This discussion was part of a broader look at how the elements of good policy are communicated, or at least made accessible, to policy analysts throughout the Public Service. It might form part of the development of a website including Cabinet Office material as well as the SSC's Policy Advice Initiative booklet. Strategies for communicating guidance material to policy analysts and managers were discussed above, in Section I: Overview.

Another SSC project relevant to demand for outcome evaluation

A further project, the Improving Accountability Project, is likely to suggest mechanisms for including outcome information in an enhanced accountability regime for departments. Any requirement to provide outcome information will stimulate demand for evaluation as a tool for generating that information.

Comment

The above mechanisms would promote the need for evaluation through the application of evaluation criteria in new policy initiatives (policy advice guidance materials, the advisory role of the Department of the Prime Minister and Cabinet) as well as encouraging review of the existing stock of policies (Ministerial teams and priority setting, SSC expectations and the departmental performance assessment process, and an enhanced accountability framework).

25 Reporting against the 1997/98 performance agreement, State Services Commission letter to chief executives, 29 April 1998.

26 As explained in, Expectations for the State Services Commission's review of departmental performance 1998/99, letter to chief executives, 14 July 1998.

Evaluation guidelines - when to evaluate

In stimulating demand for evaluation, it is important not to suggest that everything should be evaluated, or that evaluation is the only tool for generating information for Ministers on whether government activities are contributing to the achievement of outcomes. Australian experience - characterised as 'evaluating everything that moves' - paints a picture of evaluation overkill (interestingly, the Australians have recently moved away from this blanket approach). 27

Evaluation might be the fall-back option once less expensive alternative mechanisms for testing the merits of activities have been considered. For small projects, good cost/benefit analysis might suffice. Given the size of the New Zealand population, it might be legitimate to use evaluative information generated in other jurisdictions for some types of outcome; for others, unique local conditions might necessitate purpose-built research. Sometimes existing statistics might be indicative: a lot of information (e.g. demographic and mortality data) is collected for other purposes but might be used to indicate progress towards certain outcomes (that is, it will indicate improved outcomes but will not provide information about the impacts of specific interventions).

Some guidelines as to what and when to evaluate would be useful. The SSC could work with the Treasury and the Department of the Prime Minister and Cabinet (with input from selected agencies with specific on-the-ground expertise) to develop a framework to guide agencies. What to evaluate might depend on a range of factors: for example, the fiscal dimension (is a lot of money being spent?), whether it is a new or potentially controversial type of intervention (level of risk, learning potential), whether an activity might be considered near its 'use-by date' (for example, in existence for more than five years without review), and, most importantly, the extent to which the policy relates to the government's strategic priorities. Timing issues should also be well understood: pressure to get the 'good news' before the impacts of an intervention could realistically be verified should be resisted. Australian experience suggests that targeted outcome evaluation is preferable to a mandatory approach. An evaluation strategy for each sector, or one managed by lead agencies on behalf of Ministerial teams, might be in order.

Whether or not an actual evaluation is carried out, policy advice should be subject to the discipline of providing evaluation criteria, including robust intervention logic. The key questions are: what would be the information gap in the absence of evaluation, and is it worth investing in evaluation? Some guidance material might help departments to answer these questions.

27 The Australians have moved away from mandatory evaluation of every programme every three to five years towards ad hoc strategic reviews and a requirement that all new policy proposals include evaluation criteria. Their previous system lacked focus, seemed to encourage a compliance mentality (resulting in more, but not necessarily higher quality, evaluation), and was very agency-based. There was little, if any, evaluation of policy settings or mixes of interventions. The Australians are now working on mechanisms for building incentives for evaluation into chief executive reporting arrangements. On the upside, the previous mandatory regime has created an evaluation culture. The Australians face the same methodological issues as New Zealand, especially in developing an outcomes focus, and have identified similar skills and capability gaps. For an opinion on the evaluation strategy and its impact on resource allocation, see Michael Di Francesco, "The Measure of Policy? Evaluating the evaluation strategy as an instrument for Budgetary control", Australian Journal of Public Administration, Vol. 57, March 1998.

Supply side issues

As noted above, current evaluation capability gaps will not be closed overnight. The onus will be on departments to respond to increasing demand for evaluation information by developing an evaluation capability appropriate to their particular role and functions. As has been the case in the past, some may integrate evaluation into their broader research and policy development capabilities, while others will choose to develop a specific evaluation function that is integrated but separate. 28 Agencies might have to buy in overseas expertise in the short term - a strategy employed by several agencies in the past (for example, the Department of Social Welfare and the Department of Labour both brought in overseas experts to establish a specific evaluation capability). Supply-side levers to encourage more outcome evaluation are limited relative to demand-side levers, especially in a devolved management regime where chief executives have primary responsibility for the management of their departments, including the capability to provide high quality policy advice.

Conclusions

This study highlights an under-supply of outcome evaluation in the New Zealand Public Service. There is a range of reasons for this, including:

  • low Ministerial demand for evaluation
  • historically poor outcome specification
  • the lack of any central agency champion for evaluation
  • the lack of demand for outcome information in current accountability frameworks
  • funding issues
  • methodological hurdles
  • poor evaluation capability.

However, there is evidence that the tide is already turning: capability for, and supply of, outcome evaluation is increasing and is likely to increase further, particularly as a result of demands driven by:

  • Ministerial teams and their need to generate outcome information
  • the communication of expectations from the SSC that departments should have a robust research and evaluation capability.

Other ongoing SSC projects (for example, the accountability project) are also likely to have an impact on the demand for and supply of outcome evaluation.

The SSC believes that increasing demand for outcome information is the best mechanism for increasing the supply of outcome evaluation and encouraging enhanced capability in departments. Given the costs of evaluation, significant methodological difficulties, and the need to ensure that any evaluation conducted generates useful and high quality information, a blanket mandatory approach to evaluation should be avoided. Some central agency guidance on when to evaluate would be useful to departments in developing their evaluation capability.

Overall, the lack of outcome evaluation does constitute a gap in the quality of advice tendered to Ministers. Whether or not actual ex post evaluations are carried out, all policy advice should include robust intervention logic and evaluation criteria. This would ensure that Ministers are provided with information about the intended outcomes from any intervention, the risk that they might not be achieved, the risk that unintended outcomes might eventuate instead, and when and how it will be known whether the intervention has succeeded or failed.

28 For example, the Labour Market Policy Group has a separate evaluation unit which acts in an advisory capacity to other policy and operations units and helps to manage evaluations. The Social Policy Agency has an internal evaluation unit but almost all of the evaluation work is contracted out to external evaluators.

SECTION IV: ENCOURAGING SIGNIFICANTLY BETTER CONSULTATION AS AN INPUT TO POLICY ADVICE

Introduction

The term 'consultation' covers two distinct aspects in the policy development process in New Zealand:

  • consultation among agencies with interests and responsibilities in the development of a particular policy or set of policies, in order to work through to agreement as to the best possible advice to be presented to Ministers
  • consultation with people and institutions in the community by agencies with interests and responsibilities in the development of a particular policy or set of policies, as an input to the formulation of their advice.

The first aspect is covered in Section V, which deals with enhancement of central coordination mechanisms, of which inter-agency consultation is a fundamental element. The second aspect is the subject of this section.

For a number of reasons, consultation was not initially ranked as a key issue in the work on improving the quality of policy advice. However, as the work progressed, the importance of consultation as an input to good policy advice became increasingly apparent. Because it was not covered specifically in the earlier consultation with Ministers, senior officials, and others in the policy community, there is little information upon which to make quantitative judgements about how much consultation is done, the forms in which it is done, how well it is done, how much it costs, and so on. This part accordingly looks at the subject in fairly broad terms.

Distinctions: consultation versus negotiation versus communication versus citizen participation

There is some confusion about what consultation actually means. Consultation implies a dialogue in which substantive input is sought by government or its agents from the public, or from particular sectors of it, either to define the dimensions of an issue or to comment on proposed policy options. Negotiation is a rather more sharply focused dialogue, where the object normally is to hammer out agreement on a particular set of issues. Communication, on the other hand, occurs where governments want to educate the public on policies and goals, particularly where support or compliance is sought (as with public health campaigns, for example).

Consultation is an input to policy development and analysis: a tool for the collection of ideas and reactions to ideas before key decisions are made. The products of consultation are not policy advice, but inputs to policy advice: it is not a substitute for analysis, and governments still have to make the final decisions.

Why is consultation important?

It is not long since government policy development was by definition cloaked in secrecy. Key sectoral interests - more often than not 'part of the establishment' themselves - might have been asked for their reactions to particular initiatives the government had in mind, but examples of structured and open consultation processes in the formulation of major policies were rare fifteen or twenty years ago.

Public views and submissions on major questions were certainly canvassed from time to time - for example in various commissions of inquiry - but these tended to be preliminaries to actual policy development, which generally continued to take place behind closed doors.

Closed door policy making may have reflected a fundamental view at the time about the role and responsibilities of governments. Governments were elected to govern according to their articulated policies and their general philosophical disposition and to make decisions on behalf of the community: the only consultation that really mattered took place every three years on election days. Governments 'knew best' and, supported by a large State sector reaching deep into the community, delivered a multitude of programmes to be willingly consumed by citizens and commercial and sectoral interests alike.

Over the last fifteen years, however, governments have tended to think far more critically about how much the State does and how it goes about its business, drawing into question the scope and scale of most interventions. This, and the need to respond to an array of rapidly-developing domestic and external changes, generate extensive and dynamic policy agendas.

These days, Ministers are confronted by a broad array of complex policy problems (Treaty of Waitangi issues, for example) whose deeper implications and important inter-relationships are not always easily discerned. The pressures on them to solve particular problems are intense, and the time available to get things done is always restricted. Governments have no more than a Parliamentary term, in effect, in which to identify and get to grips with each of their priorities, hammer out solutions, implement them, bed them in, and if possible allow some sort of evaluation of their effectiveness. The incentives to hasten through preliminaries and get on with the job are powerful. In these circumstances, to make decisions that are rational, durable, and politically sound, Ministers need access to all information relevant to the particular matters under consideration: what are the drivers behind the presenting problem; what is good and bad about any existing interventions; what, if anything, needs to be changed, and why; what alternatives are available, with what short- and longer-term consequences and implications; and so on.

The fact is that only so much of this information is available in Wellington, where most of the Public Service is now concentrated. Efforts can be made to overcome this limitation in a number of ways. Advice can be formulated on the basis of:

  • theoretical frameworks
  • extant information and current practice
  • information obtained from 'establishment' organised sectoral interest groups
  • research, including analysis of international experience and academic studies
  • information obtained directly from people and interests in the community; that is, consultation.

The survey work underlying this report showed that advice is developed on all of these bases. The indications are, though, that the first three tend to predominate where timing is an important consideration - which it almost always is. Community consultation is generally a time-consuming and costly business, and it has dangers and pitfalls. There is little doubt, however, that it provides a means to acquire information and insights of great relevance to policy analysts and policy-makers which are less likely - in some cases highly unlikely - to emerge through other processes. Such information and insights relate to:

  • fundamental factors driving particular problems (e.g. rural housing)
  • capabilities of sectoral interests and local communities (e.g. abilities to run schools)
  • practical implications of proposed solutions - whether or not they will actually work, hidden costs and risks, and likely winners and losers (e.g. superannuation policies)
  • secondary and down-stream implications and compliance costs (e.g. GST collection)
  • the factors shaping entrenched opposition to particular policy initiatives (e.g. the Fiscal Envelope for Treaty of Waitangi settlements)
  • significant cultural perspectives.

The risks that lie in policy advice formulated with inadequate regard to these matters are obvious. The practical question is how to build effective consultation into policy development processes without compromising governments' capacities to implement their programmes within very constrained time-frames, and without adding new risks.

Symbiosis?

Some consultation is in progress all of the time (analysis of information from this year's (1998) departmental performance assessments will better indicate the amount). At operational levels it has become part of the way business is done - for example in fisheries, social service delivery and environmental business, where the government frequently operates in partnership with industry, local government, community interests, and iwi. Consultation requirements are written into some law - the Resource Management Act 1991, for example. However, consultation is sometimes done for risk management purposes as much as anything else: to flush out challenges to emergent policies.

Having said that, Ministers and officials must have learned through the long programme of reform that policies developed behind closed doors, or with inadequate regard to practical operational matters and to public acceptance, often carry high risks of failure. While no data is available to demonstrate the point, there is evidence of a stronger inclination to test ideas in one way or another.

This wish to test ideas has been matched by increased public pressure for involvement in policy processes. Similar trends have been evident in other countries, as attested by a growing body of literature, much of it focused on 'citizen participation' and 'public interest' perspectives. The OECD attributes these trends to a variety of factors including "reports of a decline in confidence in government and its institutions and a reduction in traditional forms of civic engagement...what Robert Putnam (1995) labels a decline in the 'social capital'...". 29 What may be more significant in the New Zealand context is the fact that citizens have access to considerably more information now than even ten years ago.

29 PUMA Public Management Occasional Papers: No 17: Consultation and Communication, OECD, 1997.

Costs and pitfalls

The OECD categorises the costs of consultation as delays and administrative overload, raised expectations and opposition, and tapping unrepresentative views. 30 They can be looked at in these ways:

  • Delay and administrative overload: Identifying and informing interest groups, seeking their views, building the results into the overall analysis, and feeding-back require substantial investments of time and effort. As well as that, the possibility always exists that consultation will bog down in differences about process or about the substantive business, or about both. Unexpected secondary issues can emerge to side-track and even swamp the original inquiry. Even carefully calibrated and managed timelines and processes can be overwhelmed by masses of information: the scale of community response cannot easily be predicted or controlled.
  • Raised expectations: Any consultation process tends to raise and widen expectations within the community, for example:
    • if we are asked what we think about this matter we will expect to see our views reflected in the policy decisions that emerge
    • if you consult others you had better consult us
    • if you consult on this matter you will be expected to consult on all others as well.

      The point is that consultation is not a one-way process: respondents in the community invest time and effort and take risks as well. Even so, it is quite plainly impossible to reflect in final policy decisions every opinion expressed through consultation. Decisions remain the government's to take, having regard to a broad range of administrative and political considerations, and this point always needs to be clear from the outset.

      A further important point is that governments have perfectly sound and legitimate reasons for keeping some phases of policy formulation out of the public arena. Soliciting views about particular options at particular times might easily set hares running against the interests of both the government and the community.

  • Raised opposition: Consultation can provide a focus for the mobilisation of resistance - most especially, perhaps, if the government is suspected to be attached to a particular position or using 'consultation' to build support for a favoured position. Opposition can ultimately be a positive thing - it is better to become aware of opposing views in the early stages rather than after key policy decisions have been taken - but it can nevertheless create difficult public presentation challenges, especially where the scale of opposition is greater than expected or comes from unexpected quarters.
  • Unrepresentative views: Organised lobbies and sectoral interest groups are now more numerous than ever. While their views and inputs are entirely legitimate, the risks are always present that the more articulate, organised, and well-resourced will dominate consultation processes, that they will overshadow the views of less vocal 'majorities', and that their opinions will be taken to represent those of larger sections of the community than their actual constituency. This becomes particularly problematic where a highly organised lobby has interests and objectives that do not fully coincide with those of the larger group they are taken to represent - or worse yet, actually conflict with them. Both New Zealand and international experiences suggest that tapping into the views of sections of the community that are not well-organised or articulate is one of the most difficult aspects of effective consultation.

Altogether, the potential costs and pitfalls of consultation can appear daunting. Where significant benefits cannot be readily demonstrated, the inclination may often be to rely instead on theoretical frameworks, extant information, 'establishment' sectoral groups, and so on. Costs and risks are likely to be lower where agencies already have established consultative networks, through which ideas can be tested as a matter of normal business rather than in elaborate set-pieces. Departments are in touch with the sections of the community with which they work most closely, but their capacities to turn business relationships into consultative networks for policy development purposes are not clear.

30 Ibid.

Building effective consultation into policy development processes: practical issues

If departments were expected to establish consultative networks for policy development purposes, they would need to be able to cope with time and cost requirements, and to build up the competencies needed to consult effectively.

  • Time and cost requirements: The point that effective consultation cannot be hurried is well understood. While a few judicious telephone conversations might be enough to inform some policy debates, more comprehensive consultation is likely to be needed where significant changes are under consideration. It will not be surprising to find that most departments do not currently have the slack to build in much additional consultation activity - either through their own projects or through private sector contracts. The requirement to do more would be likely to call for trade-offs among current commitments, or for new investments.
  • Capability requirements: It is clear that public consultation requires some particular competencies: negotiation, communication, presentation, survey techniques, dealing with particular groups - Māori and their institutions, for example - and data interpretation. Certainly, consultation is an area in which amateurs could wreak considerable damage.

Some cultural questions

New Zealand's approach to public management is seen by various commentators abroad as idiosyncratic and highly adventurous: the tendency is to formulate solutions to problems substantially on the basis of theoretical frameworks, and to move as rapidly as possible towards implementation. This predisposition towards action probably has less to do with any deep cultural characteristics than with the quite marked time constraints that apply here. These constraints are clearly an important factor in the New Zealand approach to policy development.

Our ways contrast with the more studied and cautious policy development and implementation processes - in which elaborate public consultation may be a feature at every step - that appear typical in a number of other countries, notably Switzerland, Sweden, Finland, and the USA.

The fact that New Zealand's approach has in a number of cases - not all, by any means - resulted in very significant changes from long-standing policy settings is often acknowledged abroad. The point that more conservative approaches do not always result in similarly decisive break-throughs is also noted.

A deeper question that consultation raises is whether the Public Service is politically and administratively ready to open up aspects of policy development processes to the community. Consultation seems to have at least one of the qualities of Pandora's box - once the lid has been removed it may be extraordinarily difficult to put back.

Conclusions

Public consultation is often seen as intrinsically good - a means of improving the responsiveness of government policy by involving the public in its development. Consultation has the potential to improve the quality and effectiveness of policies, to enhance responsiveness to citizens, and to strengthen the legitimacy of final decisions. It may have spin-off benefits in reversing declining confidence in government processes. There are, however, associated costs in time, risks of policy inertia, the unreasonable raising of public expectations, and the risks of capture by vocal but unrepresentative groups.

Interest groups and lobbies abound. Furthermore, high levels of accessibility and generally more open governmental processes are matched by strong public interest in policy matters across a wide spectrum. Similar patterns are evident in other countries.

While there is a lack of empirical evidence to support such a conclusion, anecdotal evidence suggests that departments need to improve their skills in conducting consultation processes themselves, in managing private sector contracts for such processes, and in making effective use of the information generated through consultation.

Possible responses to these risks seem to be either to become very much more accomplished in the business of consultation, or to enter into it as little as possible. It is difficult, however, to see a reversal in emerging trends towards increased public consultation. Policy development processes that do not take account of first-hand knowledge of problems and of the implications and practicality of proposed solutions clearly carry risks of failure. Consultation - communicating directly with people and institutions with such first-hand knowledge - is the logical way to bring this information into policy development. There is, in fact, little in the way of practical alternatives.

Consultation is expected to become an increasingly important input to policy development processes. Ministers may want to see it employed more, in ways that minimise its risks and pitfalls. If that is so, work will be needed to improve departmental capabilities including training and developing policy advisors.

SECTION V: OPTIONS FOR ENHANCING CENTRAL COORDINATION MECHANISMS TO IMPROVE THE QUALITY OF POLICY ADVICE

Introduction

This part examines the coordination mechanisms currently used in the policy advice development process. It asks whether these mechanisms contribute to the quality of policy advice, and how they might be altered to improve it.

The policy making process incorporates a number of mechanisms for coordinating the development of policy advice before it is presented to Ministers. Perhaps the most contentious of these is the centrally mandated system of interdepartmental consultation required by the Cabinet, and further refined through the standing officials committees to Cabinet committees.

The scoping work for the Improving the Quality of Policy Advice project suggested that the system for ensuring policy coordination can significantly dull the freshness and directness of some departmental policy advice. Coordination mechanisms can skew the incentives for the production of high quality policy advice by focusing advisors on quality of process rather than quality of analysis. Policy advice can emerge from inter-departmental coordination processes agreed but bland; in some cases quality and frankness are sacrificed in the interests of achieving consensus. The problem is not a lack of coordination but how consultation is performed: done properly, it canvasses all relevant views, resolves conflicts where possible, and highlights them where not; absolute agreement is not always necessary, possible, or desirable.

Definitions

    Coordination is an end-state in which the policies and programmes of government are characterised by minimal redundancy, incoherence and lacunae. 31

Bridgman and Davis state that coordination is a 'virtue', 32 whilst Boston 33 sees it as a procedural value that embraces a range of goals and concerns. At the governmental level such goals and concerns include:

  • the avoidance, or at least minimisation, of duplication and overlap of policy work
  • the minimisation of policy inconsistencies
  • the quest for coherence and cohesion and an agreed ordering of priorities
  • the minimisation of conflict, both bureaucratic and political
  • the promotion of a comprehensive or 'whole government' perspective against the constant advocacy of narrow, particularistic or sectoral perspectives.

Good policy coordination in government is dependent on the satisfactory functioning of at least three kinds of relationships:

  • the horizontal one between Ministers
  • the vertical one between Ministers and Public Service policy advisors
  • the horizontal one between officials in departments. 34

When one or more of these relationships breaks down, policy coordination will suffer. Boston maintains that the State sector reforms of the 1980s led to silos, or departmentalism, which strengthened the vertical relationship (the reforms were also intended to improve the contestability of advice) but at the same time weakened the cross-government and cross-Ministerial, or horizontal, forms of coordination.

A first step to achieving policy coordination is "requiring agencies to consult within government, since this allows other departments to offer suggestions about the appropriateness of a new policy proposal and draws the proposal into the framework of existing programs administered by those agencies". 35 This part deals with interdepartmental consultation - that is, where departments consult with others to incorporate their views into advice. Other forms of consultation - with the public, interest groups, and so on - are covered in Section IV above.

31 B. Guy Peters, "Managing Horizontal Government: The Politics of Coordination", Public Administration, Vol. 76, Summer 1998, p. 296.

32 Peter Bridgman and Glyn Davis, Australian Policy Handbook, Allen & Unwin: Australia, 1998, p.77.

33 Jonathan Boston, "Country Report: The Problems of Policy Coordination: The New Zealand Experience", Governance, Vol. 5 (1), 1992, pp. 88-103.

34 Ibid.

35 Peter Bridgman and Glyn Davis, Australian Policy Handbook, Op. cit., p.78.

What are the existing central coordination mechanisms?

Analysis of the research data to date 36 has identified the following as important existing coordination mechanisms:

  • Cabinet management and support
  • Cabinet committees
  • standing officials committees to Cabinet committees
  • ad hoc Ministerial/officials' committees
  • the system of inter-departmental consultation, as mandated in the Cabinet Office Manual
  • the strategic management system including the newly established Ministerial teams.

Cabinet management and support

Cabinet committees are a device for coordinating policy development and management at the highest levels. These committees are the engine-rooms of Cabinet, where problems are thrashed out and Ministers can satisfy themselves that both principle and practical implications have received due regard.

Officials committees 37

The principal tasks of the standing officials committees to Cabinet committees are to assist Ministers, and the Cabinet committee chair in particular, to manage the mass of work coming before them efficiently and effectively. 38

Strengths

While this paper will not describe the structure and functions of the current committees, it is important to note that the five committees now operating function in quite different ways. They have different workloads, and face different demands and operational preferences from Cabinet committee chairs.

Much of what the officials committees do, in fact, is assist in improving the quality of policy advice in individual papers. They seek to add value in reviewing draft Cabinet papers by ensuring: 39

  • the quality, structure, and coherence of papers, and that the wider impacts of proposals (including their costs and benefits) have been properly considered
  • that all relevant agencies have had an opportunity to express a view, that interdepartmental disagreements are resolved where possible, and/or that remaining disagreements are clearly set out for Ministers' consideration
  • that submissions comply with Cabinet Office requirements as to length, structure, presentation, and consultation.

The officials committees undoubtedly perform worthwhile functions: the Ministers chairing Cabinet committees would have heavy organisational burdens without them.

Weaknesses

A key weakness with the policy process lies not with the officials committees, but with the fact that a good deal of departmental policy work is so inadequately thought through and/or presented that the committees become almost wholly absorbed in detailed remedial work. Central agency representatives suggest that some departments put up papers they know to be inadequate on the assumption that the officials committee will compensate for their own ineffectual quality assurance systems. The capacities of officials committees to act as strategically-sensitive coordinating mechanisms are constrained when they are required to function as extensions of departmental internal quality control processes.

Officials committees are said to be seen as bureaucratic, nit-picking, and overly authoritarian by some Ministers, and are certainly seen in those terms by some departments as well. They are considered to focus too much on detail and too little on the big picture or strategic overview. To complicate matters further, departmental representatives on the committees can face some perplexing role conflicts: are they there to represent their departments, or to represent the collective interest?

On the other hand, a senior policy manager with considerable relevant experience expressed the view to us that standing officials committees with predominantly central agency membership may lack the grasp on sectoral policies needed to do more than mere quality control. Doubts and criticisms flow in both directions.

Officials committees have always been informal bodies, established at the discretion of the Prime Minister. They are not covered in the Cabinet Office Manual, nor is consultation with them a formal Cabinet Office requirement. This informality, lack of standardisation, and lack of formal sanction may be perceived as a problem. Each committee operates in a different manner, with little consistency as to the outcomes it is expected to achieve. While each committee has terms of reference, they lack formal institutional status, and the terms of reference reviewed in the course of this study seem to focus on operating processes rather than on desired outcomes. Defenders of the ways the committees currently operate see this informality and individuality, on the other hand, as a strength that enables the committees to be flexible and responsive in a highly dynamic operational setting.

Ministers, including Cabinet committee chairs, occasionally insist on taking inadequate departmental papers to Cabinet committees over the concerns of officials committees. On some occasions, Ministers by-pass officials committees altogether - as they are quite entitled to do - and take papers to Cabinet with minimal consultation. This seems to typify the ambiguities surrounding the roles and functions of the committees.

In summary, the problems with officials committees seem to be:

  • departments treating the committees as default quality control systems
  • a focus on detail more than on strategy
  • ad hoc and individualistic committee processes, which may contribute to confusion about roles
  • unclear Ministerial expectations about the roles of the committees, and potential role conflicts for departmental officials serving on them.

Solutions

The critical success factors for a well-functioning officials committee are:

  • a high-calibre chair
  • effective quality assurance processes in departments, ensuring that fundamentally sound papers come to the committee
  • clear expectations from Ministers, and especially from the chair of the Cabinet committee being served, as to the role of the committee
  • high-calibre members: people sufficiently senior to speak for their departments and with abilities to comprehend strategic 'big picture' perspectives and context.

It may be that the officials committees' current somewhat unclear mandate and role mean they are not achieving all the improvements to policy advice that are possible. Both mandate and role are in need of review. The committees might play more effective roles in improving coordination and quality if they were formalised along the lines of Cabinet committees. A clearer role might allow them to focus more on strategy and less on basic quality control.

There seems to be an educative role for the Department of the Prime Minister and Cabinet in informing Ministers more fully about the committees and the overall policy processes. This educative role should also extend to officials appointed to or working with the committees. Induction sessions for both Ministers and officials could be useful.

Ad hoc interdepartmental working committees

Various ad hoc interdepartmental committees have been established in addition to the standing officials committees that serve Cabinet committees. There are two types:

  • committees tightly focused on certain projects, with a limited life
  • standing committees, with no identified final goal.

Strengths

The primary strength of these ad hoc committees lies in their capacity to improve horizontal coordination between departments. This allows for a cross-fertilisation of ideas between departments - and other agencies as appropriate - and increases the scope for advice to incorporate a wide range of views. These committees have also in a number of instances - employment, for example - enabled progress to be made on difficult issues that cut across the work of several agencies.

A criticism of the institutional separation of policy and operations functions is that it leads to policy development uninformed by practical operational experience. Ad hoc interdepartmental committees provide a means to strengthen links between policy and operations, provided their membership is wide enough (much delivery is now in Crown entities).

Weaknesses

Committees can very easily get bogged down in detail: attempts to negotiate agreement can become hopelessly cyclical with the same ideas turned over and over. Peter Bushnell 40 alludes to a nostalgic myth associated with interdepartmental committees - "that better advice would result if only we went back to more joint working parties". He says that this overlooks their ineffectiveness in the social policy area, and adds that they were often more trouble than they were worth in the economic area: considerable time could be wasted hammering out what was ultimately seen to be lowest common denominator advice.

Solutions

The crucial ingredient seems to be a highly skilled chair who ensures that all parties stay focused and involved, that all relevant views are exposed, and that no particular individual or agency perspectives dominate.

The mandated system for consultation 41

    What's the purpose of consultation - to reach a level of agreement among officials or to improve the quality of the advice? 42

Interdepartmental consultation is a difficult but necessary part of the policy advice process, particularly on cross-cutting public policy issues. Consultation is about achieving clarity on particular issues across agencies, but its purpose has become somewhat distorted. Consultation is not about trying to maximise agreement among officials to a set of recommendations. Nor is it about drafting by committee. It should be a way to obtain feedback, information, and comment as inputs to high quality advice. Consultation needs to happen early and intensively in a policy development process, rather than as a mere ritualistic adjunct.

Strengths

Consultation ensures that a number of things happen in the policy process:

  • almost all policy proposals have implications for other government departments (and sometimes, other agencies): consultation allows all affected parties to comment on how a particular proposal will impact upon their business. If there are differences of opinion over a policy proposal, consultation allows many of those differences to be resolved by officials. The Prime Minister has said that it is preferable that papers for Cabinet and committees contain split recommendations only if Ministers have been unable to resolve the issues informally 43
  • population-based and other sectoral departments are able to comment on how a particular proposal impacts on particular sectors and interests
  • financial or economic implications are exposed, and along with them the responses and views of the Treasury.

Weaknesses

A number of risks attend the processes of consultation as part of policy development:

  • the department responsible for the paper can lose ownership of it
  • consultation can become a substitute for, rather than an input into, policy analysis
  • focus can shift from seeking to generate a good product to seeking to operate a good process, that is, compliance with process can take precedence over substance
  • inertia can result if very large numbers of departments need to be consulted on a particular issue.

Solutions

As with officials and inter-departmental committees, the crucial ingredient in the inter-departmental consultation process is highly skilled leadership. Demand for people with the ability to run effective consultation outstrips supply. A later phase of this project will examine issues related to the training of policy managers and advisors. A particular future focus should be training advisors and managers in inter-departmental consultation processes.

Strategic management system

The purpose of strategic management processes is to focus departments on the strategic objectives of the government. In recent years, this has been approached through the enunciation of Strategic Result Areas and it has now been extended through Overarching Goals and Strategic Priorities.

Strengths

Most policy managers agree that the strategic management system is useful in prioritising the work programmes of departments. The Strategic Result Areas at the very least raised departmental awareness of the need for strategic coordination and longer horizons.

Weaknesses

Incentives for coordination, such as joint Key Result Areas, have not been widely adopted. Policy managers have commented to the project team that there is no incentive to share Key Result Areas.

In an occasional paper in 1998, the SSC stated that in the strategic management system "the incentives for Ministers to address single portfolio issues are much stronger than the incentives for them to operate collectively". 44 A theme that emerged in the survey work relates to the absence of one of the critical factors in the strategic management system: Ministerial willingness to engage in 'strategic conversation'. The Ministerial teams have been designed to tackle these matters.

Solutions

The work on Strategic Result Areas networks has also sought to address the incentives chief executives face. In a circular of 9 November 1998 to the Ministers in charge of the teams, the Prime Minister stated that the lead agency within each team will ensure "that appropriate coordination is taking place". The SSC's subsequent letter to all chief executives 45 said that in the chief executive performance agreements for 1999/2000 additional emphasis would be placed upon "inter-agency co-operation, coordination and regard for the collective interest in pursuit of the Government's objectives". The purpose is to strengthen the perceived weak horizontal relationship between departments.

After only a short time, anecdotal evidence indicates that the impact of the teams has been variable. Those that seem to be working best at officials level are those where agencies are accustomed to coordination and networking. As this initiative is very new, it is difficult to assess the impact of the teams on the coordination of policy or strategy. It would be appropriate to evaluate the impact of the changes in late 1999, when the teams will have worked through a budget round and will have made progress in 'networked' policy projects.

36 Much of the anecdotal comment in this paper is based on the twenty problem scoping interviews with senior public servants, consultants and Ministers, and the forty structured interviews with senior policy managers.

37 In the preparation of this section we spoke to the chairs of two officials committees.

38 The fourth Labour Government did not continue officials committees, but the 1990 National Government established high-level officials committees to drive strategy development and oversee reviews.

39 These features are common across a series of committees' terms of reference the project team studied.

40 Bushnell, Peter, "Policy Advice - Planning For Performance", Public Sector, Vol. 14, pp.14-16, 1991.

41 Required by the Cabinet Office Manual and the CAB 100.

42 Quote from a senior policy manager during problem scoping interviews.

43 Rt. Hon. Jenny Shipley, Management of Cabinet and Cabinet Committee Business, 16 February 1998.

44 State Services Commission, A Better Focus On Outcomes Through SRA Networks, Op. cit., p.4.

45 State Services Commission file-reference CE 1998/020.

Guidance material

There are a number of sources of advice for departments and policy staff: the Legislation Advisory Committee's guidelines, Cabinet Office staff briefing material, the CAB 100, the SSC's 1991 Policy Advice Initiative, and a variety of departmentally-produced material. Some of this is very good, but it is not easily accessible.

Consolidated guidance would undoubtedly be extremely useful to advisors coming to grips with the system. The SSC will be further investigating the merits of this option.

Conclusions

Policy coordination remains a complex and contentious business. The Ministerial teams may bring considerably more clarity in their respective sectors. Much of the government's business remains outside their coverage, however, and other mechanisms will accordingly be needed to ensure effective coordination. The principal such mechanism is the officials committees supporting Cabinet committees. It appears that these do not work as well as they could or should. While myriad practical constraints surround their operation, and while benefits can sometimes accrue from their present flexibility and informality, the benefits that could flow from greater formality need to be investigated.

Beyond this, the strongest impressions relate to the slowness of the system to generate improvement: officials committees seem to wage the same battles over and over again, departments seem to have the same complaints about the system, some seem never to learn from experience, and individual policy advisors in departments appear to struggle in unnecessary darkness. This cycle can best be broken through information and education.

      Research and experimental development comprises creative work, undertaken on a systematic basis in order to increase the stock of knowledge, including knowledge of people, culture and society, and the use of the stock of knowledge to produce new applications.

      A basic criterion for distinguishing research is the presence of an appreciable element of novelty, and the resolution of scientific and/or technological uncertainty.
