
How to Make Spending Reviews Less Problematic--12 Key Questions to Ask

Michael Kelly

How can we become better at doing spending reviews?

Not a day goes by without news of funding for a program being reduced, and of consternation among stakeholders about the anticipated cutbacks in services and resources.  Even when the proposed changes appear logical and reasonable, there is overwhelming angst and disagreement.  Given that programs do need to evolve over time, hopefully for the better, is there not a way to update them that creates less stress?

In some cases, program changes do not appear to be fully thought out, creating more anxiety among stakeholders.  Either the changes have not been fully developed, or government agencies have chosen not to share their plans fully with the public.

Often the reviews are conducted under major time pressure, and there may be a tendency to cut corners and move too quickly without adequate evidence.  In the worst cases, the changes have resulted in major disruptions and cost overruns.  All the stakeholders lose out: the client communities served, employees, and the political and administrative proponents of the program changes.

That being said, there is no magic solution.  Spending reviews (also called comprehensive reviews, expenditure reviews, baseline assessments or resource reviews) have an underlying objective of reducing or reallocating limited resources, or of changing the funding mechanisms.  They can be difficult and contentious.  Some stakeholders will be adversely affected.  Too often, there are no clear winners.  Everybody will have an opinion.

Spending reviews require a systematic, evidence-based and balanced approach that recognizes the many complexities of implementing program changes. Proponents should be guided by twelve basic questions when conducting program spending reviews.

1.    Is program delivery aligned with client needs and strategic priorities?

What is the scope of activities carried out? What are the key objectives and desired outcomes?  Is the scope of activities appropriate?

  • Understand the historical evolution of the program and the external environment/context. Confirm the program objectives in the short and long term, desired outcomes, applicable legislation and policies, overall sector trends, and key challenges in the way ahead.

  • Ensure the program activities are aligned with client needs. Spending reviews differ from program evaluations, whose primary focus is the achievement of program outcomes (although evaluations also address cost-effectiveness to some degree).  Assuming there is a need for the program, program evaluations help to identify gaps in meeting program needs and areas where program activities are not contributing to the achievement of program outcomes.

  • Ensure the program activities are aligned with the strategic directions and priorities of the organization and of government as a whole. Assess whether resources are directed to supporting the strategic priorities.  Identify for each activity the main tasks, the approximate full-time equivalents (FTEs) and expenditures, and the strategic priorities that they support (a simple tally is sketched below).  All too often, organizations change their priorities but there is a time lag in realigning activities and resources in support of those priorities.  Managers are often surprised to learn that the majority of their resources are still supporting lower-priority activities that simply continued on as before despite major shifts in strategy.
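Where activity-level FTE data exist, a simple tally of resources by the priority each activity supports can make any misalignment visible. The sketch below is purely illustrative; the activity names, FTE counts and priority labels are invented.

```python
# Hypothetical illustration: share of FTEs by the strategic priority each activity supports.
# Activity names, FTE counts and priority labels are invented for the example.
from collections import defaultdict

activities = [
    # (activity, FTEs, priority supported)
    ("legacy reporting",      18.0, "lower-priority / legacy"),
    ("digital service desk",  12.0, "priority: service modernization"),
    ("compliance reviews",    25.0, "priority: risk-based oversight"),
    ("paper file management",  9.0, "lower-priority / legacy"),
]

by_priority = defaultdict(float)
for _, ftes, priority in activities:
    by_priority[priority] += ftes

total = sum(by_priority.values())
for priority, ftes in sorted(by_priority.items(), key=lambda kv: -kv[1]):
    print(f"{priority}: {ftes:.0f} FTEs ({ftes / total:.0%})")
```

The same tally can be repeated for expenditures to show where spending, rather than headcount, is concentrated.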

2.    Do stakeholders and clients have major issues or concerns?

Stakeholder interviews, client surveys and program evaluations can be used to assess stakeholder satisfaction levels.  Selected interviews can also go a long way toward identifying the major issues or concerns of stakeholders.

  • Develop a stakeholder map identifying key partners and clients, and the interrelationships. Identify changes in service expectations. 

  • Assess whether the organization is meeting program or client service standards. Identify stakeholder concerns about existing service issues or gaps.  To the extent possible, relate the costs of services to the expectations of clients and stakeholders and the extent to which their needs and expectations are being met. 

  • If feasible, hold validation workshops with stakeholders to discuss potential program changes. Ideally, consult with stakeholders throughout the review to obtain feedback and validate opportunities identified. 

3.    Is the demand for the program increasing, decreasing or stable?

What is the projected demand in the way ahead (i.e., baseline forecast)?  What are the key workload drivers?  Is the nature and scope of the services delivered changing?  Is the workload increasing, decreasing or stable?

  • Assess the scope of the clientele. Not to be confused with workload, the clientele defines the community supported. The scope of the clientele could include, for example, the number and type of clients served (e.g., the public, beneficiaries of grants and contributions, other government agencies); the number and scope of interactions supported; the geographic dispersion of the operations; and the breadth and complexity of the services provided.

  • Ensure resources are aligned with the scope of the clientele and the workload. Review historical and projected workload trends and assess whether staff resources and expenditures have kept pace with those trends for the activities carried out (see the sketch below).
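Even a rough comparison of workload growth against staffing growth by activity can flag misalignment. A minimal sketch, with invented activities and figures:

```python
# Hypothetical illustration: compare workload growth with staffing growth by activity.
# All activity names and figures are invented for the example.
activities = {
    # activity: (workload 3 years ago, workload last year, FTEs 3 years ago, FTEs last year)
    "claims processing": (40_000, 52_000, 35.0, 36.0),
    "inspections":       (6_000, 4_800, 22.0, 23.5),
}

for name, (wl_then, wl_now, fte_then, fte_now) in activities.items():
    workload_growth = (wl_now - wl_then) / wl_then
    staffing_growth = (fte_now - fte_then) / fte_then
    gap = workload_growth - staffing_growth
    print(f"{name}: workload {workload_growth:+.0%}, FTEs {staffing_growth:+.0%}, gap {gap:+.0%}")
```

A large positive gap points to growing pressure on an activity; a large negative gap may signal resources that could be reallocated.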

4.    Are resources aligned with the highest risks?

Risks change over time.  Resources and mitigation measures may have addressed historical risks successfully but may not have been redirected to new and emerging risks for which the mitigation measures in place are limited or nonexistent.

  • Assess whether existing resources are aligned with the highest risks. Update the risk profile: the potential impact and likelihood of each risk, whether it is increasing, decreasing or stable, the mitigation measures in place and any gaps, and the overall priority of the risks (a simple scoring sketch follows this list).

  • Determine the acceptable level of risk. Assess the implications of changes to service delivery for the achievement of the organization’s mandate, and reach agreement to the extent possible on acceptable risks, particularly with respect to security and health and safety.
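One common way to structure the updated risk profile is a simple impact-by-likelihood score, adjusted for the strength of existing mitigation. A minimal sketch, with invented risks and ratings:

```python
# Hypothetical illustration: prioritize risks by impact x likelihood,
# discounted by the strength of mitigation already in place (all on a 1-5 scale).
# Every risk and rating below is invented for the example.
risks = [
    # (risk, impact, likelihood, mitigation strength, trend)
    ("legacy IT system failure",  5, 4, 2, "increasing"),
    ("seasonal surge in demand",  3, 5, 4, "stable"),
    ("loss of specialized staff", 4, 3, 2, "increasing"),
]

scored = []
for name, impact, likelihood, mitigation, trend in risks:
    # Weaker mitigation (low score) inflates the residual risk score.
    residual = impact * likelihood * (6 - mitigation) / 5
    scored.append((residual, name, trend))

for residual, name, trend in sorted(scored, reverse=True):
    print(f"{name}: residual score {residual:.1f} ({trend})")
```

Risks that are both likely and poorly mitigated rise to the top of the list, which is where resources should be directed first.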

5.      Does the program have the right mix of skills and competencies?

Are there gaps in the skills and competencies required?   How many vacancies are there?  Has staff retention been an issue?

  • Identify the skill requirements for the major activities and gaps or issues that may exist in terms of competencies at the organizational level. Comparability of classification levels is sometimes an issue that is addressed as part of the benchmarking. 

  • Examine the vacancy rate, and whether there have been challenges in recruiting staff. For example, in the case of activities with highly technical and specialized positions, the challenge is often not the level of resources available but the high level of vacancies and the difficulty of recruiting talent with the required skills.

  • Assess the mix of staff skills and expertise required. Consider, for example, the need for technical skills; the distribution of staff across levels of the organization; the balance between administrative and operational roles; and technology and digital delivery skills.

6.      Do tools and technologies reflect best practice?

The focus should be on the key systems or technologies that are strategic in terms of decision-making or program delivery.  The information collected from benchmark organizations is typically helpful in identifying the latest technologies.

  • Identify existing tools and systems used, technologies required in the way ahead, and major gaps. Identify key tools such as information systems, technologies, or reference guidelines that are used to support program delivery, and any gaps or issues that may exist where existing tools are out of date or benchmark organizations are using more modern tools.

  • Assess the existing and future capabilities of the organization. Consider the full range of capabilities in terms of processes, competencies, tools and systems.  Technology is not necessarily a magic solution, but the tools should be up to date, help staff do their work in the most efficient manner, and position the organization well for the future.

7.      Is the organization achieving its performance targets?

Public organizations have made progress in implementing performance measurement systems.  Ideally, performance indicators and information will be available on outcomes and results achieved, achievement of program and service standards, client service, process efficiency, employee satisfaction, and so on.

  • Examine performance levels achieved in relation to resources with a view to improving performance or shifting resources to or away from non-performing services. Where no indicators or performance information exist, it may be necessary to identify the key indicators based on best practices and conduct a summary assessment (often qualitative) based on available information.

8.    Would other delivery models be more cost-effective?

How effective is the current delivery model?  What should be the key characteristics of the delivery model in the way ahead?

  • Compare with best practices. Identify best practices in other jurisdictions, compare delivery processes to these best practices, and develop options for consideration.   This information can be collected from a number of sources, including web search, interviews with benchmark organizations, interviews with stakeholders, the results of previous studies, and literature review.  Given that best practices can cover a very broad range of topics that may be of limited interest to the program, the research should be focused on specific issues that are most relevant to the organizational context and challenges that are identified as part of the spending review. 

  • Identify potential delivery models. For example, greater use of digital technology, partnerships, centralization/decentralization, use of external resources, the extent to which staff are embedded within sectors/branches, and leveraging synergies that exist between the various services provided across the organization.

  • Assess the delivery models. Consider overall principles and key considerations as to how the program/service should be delivered in the future.  The key considerations in determining the appropriate application of technology, control of funding, partnerships, accountabilities and risk sharing, and the level of centralization, consolidation or standardization need to be clearly established when assessing delivery model options.

9.      How does organizational efficiency compare with targets and external benchmarks?

A clear understanding of and commitment to productivity targets helps focus the organization.  What are the key indicators used to measure the efficiency/productivity of the program?  How does current performance compare with external benchmarks?  What opportunities exist to improve efficiency?  What are the main barriers?

  • Compare resource levels to other government organizations or similar agencies in other jurisdictions or countries. The overall premise is that the relative cost of the organization should not exceed that of other similar organizations of comparable size and/or with a similar mandate.  Comparing with a large number of organizations can help to minimize the margin of error.  Data sources can include information from benchmarking studies, industry standards, review of web documentation, literature review, etc.  Resource comparisons are essential to establishing consensus on the appropriate resource target for the program.

  • Confirm or establish efficiency/productivity measures for the program and, to the extent that data are available, compare productivity with internal efficiency standards or targets as well as with the external benchmarks identified (see the sketch after this list).
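One common productivity measure is unit cost, i.e., total program cost per output, which can be set against the range observed at benchmark organizations. A minimal sketch, with all figures invented for illustration:

```python
# Hypothetical illustration: compare the program's unit cost with external benchmarks.
# All figures are invented for the example.
program_cost = 12_400_000     # annual program expenditures ($)
program_outputs = 52_000      # e.g., applications processed per year
benchmark_unit_costs = [195, 210, 230, 260, 310]   # $ per output at comparable organizations

unit_cost = program_cost / program_outputs
benchmark_median = sorted(benchmark_unit_costs)[len(benchmark_unit_costs) // 2]

print(f"Program unit cost: ${unit_cost:,.0f} per output")
print(f"Benchmark median:  ${benchmark_median:,.0f} per output")
print(f"Variance vs. median: {unit_cost / benchmark_median - 1:+.0%}")
```

A unit cost well above the benchmark median does not by itself prove inefficiency, but it signals where differences in process, mandate or service standards need to be explained.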

10.  What program delivery improvements are possible?

Identify gaps and opportunities for improvement that should be considered in the way ahead (grouped under major themes); and estimate the resource implications.

  • Identify opportunities for improvement and/or cost-efficiencies, and where well known opportunities have not been implemented, the roadblocks to implementing these improvements. Examine processes at a high level and focus on the key strategic business processes, using existing information on processes as well as improvement initiatives underway.

  • Identify process improvements taking a horizontal view. Include all key elements of the organization at the corporate, branch and area level that are involved in the delivery of the program. Strive to assess the value added of each activity or step of the process; and eliminate, to the extent possible, coordination functions where these do not add value.  Look for process streamlining opportunities through reducing hand-offs, standardization, better information flow, implementing best practices, more cross-functional and horizontal processes, and centralizing where this makes sense.

  • Group the opportunities by theme, for example, mandate issues, program standards, process improvements, delivery model, organizational realignment, competencies, service levels, cost recovery, infrastructure. Estimate the cost implications of each opportunity and potential risks and tradeoffs, and prepare a high level business case. 

  • Confirm and test the validity of proposed opportunities. People will have very different understandings of the same opportunity.  Sufficient time must be allowed for managers, staff and external stakeholders to understand the implications of the opportunities identified and to consider their merits, feasibility and chances of success.  Hence the emphasis on an ongoing opportunity list, detailed opportunity profiles and validation processes.

11.    What is the resource saving or gap compared to existing resource levels?

Estimate the impact of proposed program delivery changes on resource levels at the activity level. Where feasible, develop a resource model to estimate resource requirements based on the level of effort per output and the forecasted workload.

  • Determine the resource saving or gap compared to existing resource levels. Determining the resources required is generally done top-down (e.g., comparisons with other benchmark organizations) as well as bottom-up (e.g., based on the activities that need to be carried out, stakeholder expectations, program/service standards, and workload trends).  The top-down comparison can be done in relation to the clientele or sector served, or as a percentage target in relation to total resources.  The cost savings opportunities identified then need to be assessed as to whether they are sufficient to meet the agreed-upon target resource level, along with the risks of pursuing the potential reductions identified.

  • Assess the financial implications. Are the number of full-time equivalents and the level of expenditures aligned with the scope of activities carried out?  What is the trend in spending?  What proportion of spending is devoted to salaries versus other major objects?  Are high-cost items being acquired in the most economical manner?  Has there been lapsing of funds?  How accurate are the estimates?  What is the timeframe?  What is the probability of success?

  • Summarize the potential resource reductions or increases by activity, in terms of the impact on the number of full-time equivalents (FTEs) and funding levels. The cost savings or increases can be estimated using an average cost per full-time equivalent that includes both salary and operating costs (a minimal sketch of such a model follows this list).
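A bottom-up resource model of the kind described above can be kept quite simple: required FTEs are derived from forecast workload and effort per output, compared with current staffing, and costed at an average rate per FTE. The sketch below is illustrative only; the activities, effort assumptions and cost figures are invented.

```python
# Hypothetical illustration: bottom-up resource model.
# Required FTEs = forecast workload x hours of effort per output / productive hours per FTE.
# All activities, effort assumptions and cost figures are invented for the example.
PRODUCTIVE_HOURS_PER_FTE = 1_400   # annual hours net of leave, training and overhead
AVG_COST_PER_FTE = 110_000         # average salary plus operating costs ($)

activities = {
    # activity: (forecast annual workload, hours of effort per output)
    "applications": (52_000, 0.75),
    "inspections":  (4_800, 6.0),
    "appeals":      (1_200, 10.0),
}

current_ftes = 65.0
required_ftes = sum(volume * hours for volume, hours in activities.values()) / PRODUCTIVE_HOURS_PER_FTE
gap_ftes = current_ftes - required_ftes   # positive = potential saving, negative = shortfall

print(f"Required FTEs: {required_ftes:.1f}")
print(f"Current FTEs:  {current_ftes:.1f}")
print(f"Potential saving: {gap_ftes:.1f} FTEs (~${gap_ftes * AVG_COST_PER_FTE:,.0f})")
```

If the gap is negative, the model points to a resource shortfall rather than a saving, which is equally important to surface before decisions are taken.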

12.  Have the change management implications been considered?

Implementation is always the biggest challenge as the implications of resource reallocations or reductions become clearer.

  • Engage senior management. Work through the chief executive, the executive committee and, ideally, a steering/working group that oversees the work and guides and supports the project team carrying out the spending review. Establish a clear decision-making process for senior management (this may involve interaction at the political level).  A clear process should be in place within the senior management structure to review, accept or reject the opportunities that are identified.  Ideally, a representative of the senior management team should act as a champion for the spending review to ensure that senior management is engaged, comfortable with the overall approach, and prepared to make difficult decisions when the time comes (and that there are no surprises).  Further, a steering committee comprised of senior management representatives can provide guidance to the project team at key milestones.

  • Hold validation workshops with clients and stakeholders to review proposed program changes and assess their impact. Ensure there is a common understanding of opportunities under consideration and their implications.

  • Consider the change management and communications implications internally. For example, the implications for staff, the feasibility of implementing the options identified, and any barriers to implementation.  Are staff open to the changes, or will there be resistance to the changes proposed?  A clear implementation strategy and communications plan are necessary to clarify what will be communicated to staff and when.

  • Seek dialogue with managers and staff. Use meetings and workshops to validate the findings and encourage a meaningful discussion about opportunities. Workshops should be held throughout the assessment to obtain employee feedback and validate the opportunities, particularly those that have horizontal implications.  A clear strategy should be developed to ensure a successful and meaningful validation of the assessment findings and proposals, typically through a series of validation workshops with representation from both managers and specialists at various levels of the organization.  Validation does not necessarily mean endorsement or agreement, but rather that the options, pros, cons and risks have been identified and assessed in a comprehensive manner.

In conclusion

Will the program changes be a success if the above questions are addressed?  Not necessarily, but there should be fewer surprises and unanticipated consequences.  Even without full consensus, one can still strive for a greater understanding by stakeholders of the options considered and the decisions made, engagement by employees, and ownership by management.


