Do you have the information you need as a manager?
As a manager of a public sector agency or organization, do you have the information you need to support program management and delivery? Do you have the necessary information for both ongoing and strategic, longer-term decision-making? Are there gaps in the information you receive? Or are you receiving too much information but not really the information that you need?
Despite all the advancements in technology and the use of sophisticated analytics tools, it is important to take a step back and ensure managers are receiving the right information in the first place.
Central agencies rightfully tend to focus on program results and outcomes. Notwithstanding the importance of outcome indicators for public accountability, there are many other types of operating information required by managers. Performance measurement in the public sector also needs to encompass management “dimensions” required for operations, policy, planning and decision-making. Metrics around all the key aspects of program or service delivery can include several dimensions:
- Achievement of targeted program results and outcomes (as noted above)
- Quality of products and services
- Timely and responsive services
- Effective stakeholder relationships
- Client satisfaction
- Meeting demand
- Workload trends
- People/workplace health
Managers do receive information on all the above dimensions to varying degrees (and many others as well). However, the scope and quality of the information tends to vary by organization. Overuse of certain information in the past (e.g., workload, service standards, conformity to rules), changes in government, and shifts in management thinking have resulted in varying degrees of emphasis on the type of information collected, analyzed and reported.
This checklist is meant to provide a summary review of key types of management information required by public sector managers, while recognizing that information needs will vary greatly depending on the program or service and the strategic objectives of the organization at any given time. Also, no list can be fully comprehensive.
Potential dimensions and examples of management and performance information are highlighted in this chart and discussed briefly below.
Achievement of targeted program results and outcomes
Results indicators are specific to each program or service and typically require a periodic evaluation or survey to measure. The focus is on effectiveness and measuring achievement of expected program results and outcomes. Program results and targets need to be well defined in advance. Results are established based on strategic objectives, logic models, strategy maps, performance measurement frameworks, planning documents (e.g., strategic plan, business plan), management and financial reports, program evaluations, etc. These results and associated indicators are reported publicly, often on a yearly basis. Renewal of program funding is often dependent on outcomes and results achieved, and must be supported by an external evaluation.
Quality of products and services
Although client satisfaction can be a key measure of quality, other quality indicators can include product/service assessments, error rates, amount of rework, compliance with rules and policies, comparisons with best practices, and external certification or accreditation.
Quality needs to be defined in the context of each program or service. In a public sector context, this can include, for example:
- Effectiveness and consistency of the delivery methods
- Useful and timely advice
- Timely, relevant and reliable information for decision-making
- Extent to which the service supports program objectives
- Conformity with policies
- Quality of the processes compared to best practices
- Competency levels of the delivery staff
- Quality of briefings and submissions
Quality targets should incorporate client expectations and the intended characteristics or features of the program or service design. In some cases, quality standards may be pre-established externally by government-wide central agency policies.
Data sources can include:
- Feedback from clients on the quality of products and services through client surveys, comments, online feedback, and complaints.
- The results of independent assessments, particularly compliance with government-wide policies and standards.
- The number of errors or the amount of rework or corrections. A key data source in the private sector, this is often not tracked in the public sector even though the data is available through the tracking systems in use.
- Comparisons with sector or industry best practices.
Unlike in the private sector, quality is not always measured on an ongoing, systematic basis in the public sector. To some extent, this reflects the challenge of defining and quantifying the quality of public sector activities, which often consist of policy development and advice, program development, or regulating the compliance of private sector quality systems. Although many process controls are typically in place, limited resources may be devoted to quality management. However, managers and staff are well aware of the level of quality of products and services. In fact, there is sometimes a perception that the level of quality is so high that there is less need to track it (a dangerous view).
In any case, the challenge of measuring quality in a public sector context should not be a reason for not monitoring quality; the assessment may necessarily be qualitative depending on the indicators used.
Effective stakeholder relationships
Public sector programs and services typically support a broad spectrum of client groups, and are most often delivered through a number of stakeholders, including both internal and external service delivery partners. Because clients are often both recipients of services and key partners in their delivery, we refer to clients and service delivery partners generically as stakeholders. In short, working relationships and program/service delivery arrangements with stakeholders are critical. Information should be periodically updated on the extent to which stakeholder expectations are addressed and working relationships are functioning well.
A key step is to develop a stakeholder map identifying key partners and clients, the interrelationships between stakeholders, and the expectations of stakeholders. Although this may seem like stating the obvious, confirming who the key stakeholders are is an exercise that should be done periodically as stakeholders and their expectations do change considerably over time.
Stakeholder expectations and satisfaction levels can be monitored through various means, including ongoing feedback obtained as part of service delivery; meetings, workshops, focus groups, interviews and other interactions with stakeholders; and the results of client surveys and evaluations. The organization should be sensitive to changes in stakeholder expectations, update service standards as appropriate, and identify stakeholder concerns about existing service issues or gaps. The organization should also be reviewing service arrangements with delivery partners.
Timely and responsive services
Timeliness and responsiveness are measured through client satisfaction, achievement of service standards and other operational indicators such as response time, throughput time, backlog, availability, number of complaints, etc. Timeliness can include delivery of services or processing of transactions within agreed-upon service standards, timely information or advice for decision-making, response to requests/inquiries in a timely manner, and processing within statutory deadlines. Targets are typically established through service standards or service level agreements. Targets may also exist with respect to maximum backlog levels, as these are often a key indicator of timeliness or capacity issues.
In the public sector, service standards are often published on the agency’s website and performance information may be reported through annual reports. Clients also provide feedback on timeliness and responsiveness through client surveys. Internal databases provide information on the achievement of service standards and process indicators related to throughput time. Actual performance may be based on the percentage of transactions meeting the standard and/or the actual response or throughput time compared to the standard.
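As a minimal illustrative sketch of how such performance figures are derived, the following computes the percentage of transactions meeting a service standard and the average throughput time. The 10-day standard and the transaction times are hypothetical examples, not real agency data:

```python
# Hypothetical sketch: percent of transactions processed within a service
# standard, plus average throughput time compared to that standard.
SERVICE_STANDARD_DAYS = 10  # assumed published service standard

# Each value is the throughput time (in days) of one completed transaction.
throughput_days = [4, 12, 7, 9, 15, 6, 10, 8]

within_standard = [t for t in throughput_days if t <= SERVICE_STANDARD_DAYS]
pct_within = 100 * len(within_standard) / len(throughput_days)
avg_days = sum(throughput_days) / len(throughput_days)

print(f"{pct_within:.0f}% of transactions met the {SERVICE_STANDARD_DAYS}-day standard")
print(f"Average throughput: {avg_days:.1f} days vs. a standard of {SERVICE_STANDARD_DAYS} days")
```

In practice both views are useful: the achievement percentage is what is usually published, while the average (or distribution) of throughput times reveals how far misses fall outside the standard.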
A common issue is accountability for meeting service standards when several organizations are involved in service delivery and no single organization takes ownership of the whole process. Each organization monitors its own service standards, but the external client is concerned only with the total end-to-end timeline.
Meeting demand
Client demand is concerned with defining the size or scope of the clientele and the demand for the program or service. Not to be confused with workload, demand information encompasses the key client groups served and their characteristics, the historical trend in demand for the program or service, and the key factors that will influence projected demand in the future. More refined analytic techniques can be used to further understand client personas and experience, helping the organization become more client-centric.
In determining the scope of the clientele, any person or organization that receives goods or services within or from the organization is potentially a client. Demand information may describe the community supported: for example, the number of clients receiving services; the various types of clients, both external (e.g., the public, beneficiaries of grants and contributions, other government agencies) and internal (managers, employees); and the distinguishing features and characteristics of this clientele.
Factors that influence demand can include, for example, the nature of the service or support expected by clients, complexity of the program/service offerings, changes in the external environment (economic and social), the level of participation or take up, and new technology developments (e.g., digital delivery). Indicators of the level of demand will typically include the number of clients served by type; the number and scope of client interactions supported; and the geographic dispersion of program/service delivery.
Information can be collected on historical trends over the last five or ten years, depending on the type of program or service. Managers should forecast changes in client demand for the program or service, estimate future demand, and establish a baseline forecast, as this will be critical for determining resource levels or potential changes in the method of service delivery. Key questions are whether demand for the program or service is increasing, decreasing or stable, as well as any changes in the nature of services or support required to respond to ever-changing client needs.
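One simple way to turn historical trend data into a baseline forecast is a linear trend fitted by least squares. The sketch below is purely illustrative; the yearly client volumes are hypothetical, and a real forecast would also weigh the demand factors discussed above:

```python
# Hypothetical sketch: a baseline demand forecast from a simple linear trend
# fitted to five years of client volumes (least squares, standard library only).
history = [12000, 12600, 13100, 13900, 14600]  # clients served per year (illustrative)

n = len(history)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(history) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

# Project the next two years as a baseline for resource planning.
forecast = [round(intercept + slope * x) for x in (n, n + 1)]
print(f"Trend: about {slope:.0f} additional clients per year")
print(f"Baseline forecast for the next two years: {forecast}")
```

The point of the baseline is not precision but a defensible starting point: managers can then adjust it for known factors (policy changes, take-up rates, new delivery channels) and revisit it as actuals come in.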
Workload trends
Workload indicators are generally associated with specific activities and/or outputs. For transactional services, workload can be measured by the number of transactions by type of service or activity. Workload indicators can be further broken down by volume or dollar value by type of service or product, delivery method, client, or region. This information helps to assess which products and services are generating the greatest level of effort or workload within the department/agency and where to allocate resources.
Again, managers should establish a baseline forecast of future workload based on historical workload trends. Historical yearly workload trends can be tracked over a three- to five-year period for key indicators, in order to assess the trend in the number of transactions and the impact of any fluctuations or surges in workload. Key questions are whether workload is increasing, decreasing or stable; whether the types of services or support delivered are changing; and whether workload fluctuates during the year or from year to year.
Using the baseline forecast as a planning tool, managers estimate yearly baseline volumes by type of service in order to establish required capacity, and support the business case for reallocation of resources or for adjustments to resources when volumes change. Workload information is also used to estimate the unit effort or cost for key transactions and how this compares with sector or industry benchmark standards.
Assessing workload trends can be a challenge where several workload indicators are involved, the workload is not easily measured, some transactions require considerably more time than others, or workload information is simply not available.
Efficiency
Efficiency is most often measured by the unit costs of the activities and transactions. Depending on the program or service, efficiency may be measured by cost or level of effort per output, level of output per staff (over a given time period), utilization rate (typically the case for professional services), cost per client, cost per location, or the number of full-time equivalent (FTE) resources or expenditures as a percentage of the overall resources of the organization. Efficiency indicators are most meaningful when assessed in relation to internal targets and/or external benchmarks.
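The basic unit-cost calculation and benchmark comparison can be sketched as follows. All figures here are invented for illustration, including the benchmark value, which in practice would come from a recognized sector or industry source:

```python
# Hypothetical sketch: unit cost per transaction compared with an external
# benchmark. All figures are illustrative.
program_cost = 2_400_000        # annual expenditures attributed to the service
transactions = 48_000           # yearly transaction volume
benchmark_unit_cost = 45.00     # assumed sector benchmark (dollars per transaction)

unit_cost = program_cost / transactions
variance_pct = 100 * (unit_cost - benchmark_unit_cost) / benchmark_unit_cost

print(f"Unit cost: ${unit_cost:.2f} per transaction")
print(f"{variance_pct:+.0f}% relative to the ${benchmark_unit_cost:.2f} benchmark")
```

Tracking this ratio over several years, rather than reading a single year's figure, is what reveals whether efficiency is actually improving or deteriorating.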
The choice of efficiency indicator sends a strong message regarding the priorities of the organization and requires careful consideration. Of course, the indicator(s) depend on the type of program/service. Recognized industry benchmarks can be useful in establishing the efficiency indicators.
Actual efficiency is measured by monitoring workload and resource data and reviewing trends on a historical basis to assess whether efficiency is increasing, decreasing or stable. It is important that the efficiency indicators be meaningful and not be influenced by factors outside the control of program delivery staff.
Organization standards or productivity targets should be established. These standards or targets should be endorsed by senior management and well communicated throughout the organization, recognizing that they can change over time depending on considerations such as the method of service delivery, service standards, technology used, staff competencies, etc.
It is useful to compare an organization’s efficiency with external sector/industry benchmarks. This involves selecting relevant benchmark organizations for comparison purposes, collecting data on benchmark standards in place, and comparing efficiency levels to these external benchmarks.
Public organizations have often been reluctant to implement productivity or efficiency standards. Managers may challenge the validity of the indicators and/or data, and managers, staff (and their unions) may resist the idea of targets. Although productivity indicators have their limits depending on how they are used, they can serve as a useful program management tool for both staff and managers if applied wisely.
People/workplace health
Information on the overall health of the workplace and morale can be obtained through periodic employee satisfaction surveys, staff retention and turnover data, job vacancies and other potential indicators such as sick leave, overtime, number of complaints or grievances, achievement of training plans. Employee satisfaction and morale is typically measured at a broader organizational level rather than within a specific program or service.
The indicators used to measure employee satisfaction and workplace health are critical and will vary by organization. Workplace indicators that may be relevant in one organization may not be suitable in another. For example, the number of grievances may be a negative or positive indicator depending on the culture or context of the organization. Also, new indicators are required in priority areas such as mental health and an inclusive workplace.
Again, trends are critical; hence the importance of conducting employee satisfaction surveys on a regular basis. More targeted employee surveys or analyses, with a more limited but defined scope, may be conducted more frequently.
A risk is that the data from internal human resource systems such as staff turnover is so general as to not be meaningful. In-depth segmented analysis is required to produce useful information for decision-making. Modern analytics tools can be useful in this regard.
Again, it is important to report and assess the information against agreed-upon targets, and to communicate the results to staff. The discussion around the targets is often as beneficial as the actual results.
The information is most useful when compared with sector or government-wide standards. Ideally, these standards should be widely recognized and be validated through benchmarking or literature review.
An assessment of actual results compared to targets and sector benchmarks can then be used by managers and staff to develop action plans to pursue improvement opportunities. Some public organizations produce an annual report on their workforce.
Financial information
Typical financial information used by managers in the public sector includes actual expenditures compared to budget and forecasts, with a tight focus on the rate of spending and potential lapsing of expenditures beyond the fiscal year. Other information includes the propriety of financial transactions such as conformity to financial rules and policies determined by audits, critical error rate in transactions, or probity in the use of funds.
A common issue within the public sector is the reliability of financial forecasting. The management of available funds throughout the fiscal year is critical; hence the importance of accurate and reliable forecasting by managers.
Cost information is at minimum reported by object. Of particular interest is the trend in overall expenditures, the major cost items, the mix between salaries and operating costs, capital project spending, and lapsing of funds.
Cost information is also ideally available according to other attributes such as activity, client type, delivery method, product/service, location, degree of cost recovery (where applicable), etc., based on activity-based costing systems and the latest analytics tools.
In conclusion, the above list of management information required is not meant to be fully comprehensive and should be viewed as a starting point. You will surely want to add other categories of information that have been missed. If managers are able to use this list to assess their current situation, and determine gaps or opportunities for improving the information they collect, receive and/or manage, then the checklist will have served its purpose.