A fitting contextualisation is available in a seminal article by Sarah Evans titled "Why so many clean water projects fail".
Evans opines that 60 percent of water projects in Africa fail despite well-laid strategies and objectives, further asserting that this failure is partly occasioned by donors leaving communities without the requisite training on how to maintain and manage the new systems.
This, unfortunately, forces communities to default to what they know: their unsafe water sources! Such botched projects highlight the risk of failing to implement a robust monitoring, evaluation, and learning (MEL) system.
This got me thinking. While organisations may deploy the best strategies to meet their objectives, there is often inadequate learning borne of evidence-based evaluation. So, let's look at each of the three elements of a MEL system.
Monitoring entails tracking progress within a pre-determined period by identifying the outputs from each activity. For instance, the Competition Authority of Kenya monitors the activities in its plans every quarter, identifying the immediate outputs during each cycle.
Evaluation, on the other hand, determines the extent to which the intended impact has been achieved. This process takes longer. In the authority's case, implementation of its five-year strategic plans is evaluated twice: at mid-term (two and a half years in) and at the end of the term.
Evaluation does not merely present the outcomes of an activity; it also enables us to decipher the underlying factors behind each performance metric.
Learning, the third element, entails synthesising the insights generated through monitoring and evaluation and applying them to improve an organisation's strategy and operations. Kenya formalised this in June 2023, when the State Department for Planning issued the requisite rules to be adopted by ministries, departments, and agencies (MDAs), introducing a new section on learning tethered to the existing monitoring and evaluation (M&E) framework.
The M&E stages provide insights that strengthen the authority's operations. These lessons are synthesised and shared across the organisation to enhance responsiveness, innovation, and efficiency, in pursuit of the set objective of creating efficient markets for consumers. To support this, the authority has developed a knowledge management framework that incorporates various principles which other government agencies can adopt to strengthen MEL.
First, put in place robust systems for capturing and documenting knowledge from projects and programmes. Such systems are built around standardised templates and digital tools and platforms that facilitate data collection, analysis, and dissemination.
Thereafter, collate and store reports in a centralised digital repository that is accessible to staff members. In line with the rapidly evolving digital landscape, ensure that these systems can be augmented with appropriate artificial intelligence and data visualisation tools for efficient analysis and presentation of the data.
Second, promote timely and structured information exchange across departments by leveraging knowledge-sharing platforms, including town hall sessions, plenaries, and webinars.
At the authority, every staff member is required, within days of returning from a local or international training session, to disseminate key learnings to all colleagues in plenary.
A soft-copy report is also curated for future reference, with a special focus on actionable insights that can enhance the execution of our mandate. To promote transparency and accountability, MEL insights are also incorporated into the authority's public communication strategies.
Third, integrate knowledge into policy and decision-making processes; doing so fosters a culture that prioritises learning, accountability, and continuous improvement. Senior management must champion knowledge sharing and continuous learning.
This commitment is demonstrated through attending and presenting at knowledge-sharing sessions, moderating discussions, and monitoring the application of MEL recommendations in decision-making.
Finally, agencies should institutionalise evaluation processes by involving diverse stakeholders and, preferably, collaborating with academic and research institutions for independent evaluations and evidence generation.
This can be reinforced by facilitating public feedback mechanisms that channel citizens' insights into MEL frameworks.