Independent Analysis · Accountability in Development Finance
Analysis
Practitioner analysis grounded in evaluation data. Cold register. Documented facts. No adjectives that the numbers do not earn.
These papers draw on the IEG Project Ratings dataset, the World Bank DPAD Prior Actions database, CPIA and PEFA country assessments, IMF Article IV consultations, and AfDB IDEV evaluation cycles. All underlying data is available on the Data page. The analysis reflects the independent views of the author. It has not been commissioned by or submitted to any of the institutions reviewed.
IDA Reform — The Delivery Record
IDA: Why Is the World’s Largest Concessional Fund Delivering Satisfactory Outcomes on Only 31% of Its Portfolio?
$117 billion below standard in a decade. 31% Satisfactory. The IDA21 Deputies Report told donors the rate was 91%. It was not. The gap between those two numbers is the governance mechanism that has protected IDA’s delivery failure from accountability for four decades.
From IDA1 ($912 million, 1961) to IDA21 ($100 billion, 2025) — all 21 replenishments, the Deputies process, and the leveraging model. Then the IEG record: 31% Satisfactory for two decades, flat. MTI at 11.7% on $26.9bn. DPF in fragile states at 10.8%. Africa receives 70% of IDA resources and records the weakest outcomes. The 91% MS+ figure that every Deputies Report has cited — and why it is not the same as Satisfactory. The case for a Challenge Fund for IDA22: 25% of IDA resources allocated competitively to non-Bank implementing partners, evaluated on the same IEG standard as Bank projects.
Read the Paper Download PDF
MDB Institutional Performance — The Six Institutions
Who Is Minding the Ship? The Board That Approves Everything Cannot Be Held Accountable for Anything
53% of projects satisfactory at completion. $84.4M Board cost annually. The Board has never formally held management accountable for development outcomes in 82 years. The Zedillo Commission identified why in 2009. Nothing changed.
13,273 IEG-rated operations since 1973. The S+ rate fell from 82% in the 1970s to 26% by 2005–09. A Board that co-approves every loan cannot independently oversee it. The Wappenhans Report (1992), the Zedillo Commission (2009), the AIIB comparison. Three reforms that would change the architecture — none requiring a quota review.
Read the Paper
The Ship That Does Not Keep Score
Zero project ratings in 149 countries. The IEO produces thematic evaluations — valuable, but not a rating of whether the IMF’s work delivers results. You cannot fix what you do not measure.
The IMF’s evaluation architecture compared to the four MDBs. What IEO does. What it does not do. Why the absence of project ratings is not a technicality — it is the accountability gap.
Read the Paper
Analytical Note
Nigeria and the Africa-Wide Emergency Funding: The $3.4 Billion RFI and the Evaluation That Could Not Name It
$3.4bn disbursed in weeks. No performance conditions. The IEO Sub-Saharan Africa evaluation named 19 countries. Nigeria — the largest recipient — was not among them.
COVID emergency lending, institutional self-protection, and the accountability gap that the largest single-country RFI in Sub-Saharan Africa exposed. The Idris arrest. What the IEO evaluation omitted and why it matters.
Read the Paper Nigeria Note Africa-Wide Analysis
The Cold War Across 19th Street: The Fiscal Case for Ending IMF–World Bank Mandate Duplication
$750M–$1.1bn per year in wasted capacity. Same Finance Ministers. Same street. Same week. 35 years of concordats. Nothing has structurally changed.
Five domains where both institutions do the same work. Five country cases where contradictory advice cost governments real money. The 35-year timeline of failed coordination. Six actions that would actually change the architecture.
Read the Paper
Analytical Note
The Homework It Does Do — And Why the Score Has Not Moved: Isomorphic Mimicry and AFRITAC
Five regional centres. $200M in technical assistance. Not rated. Schick’s seven basics, among them payroll, procurement, internal control, cash, and basic reporting. SSA PEFA says these are not being done.
What AFRITAC teaches and what the PEFA data shows countries still cannot do. The form-function gap in IMF technical assistance. Why the score has not moved despite two decades of capacity building.
Read the Paper
Analytical Note
Europe’s Chair: Who Sits at the Table, Who Decides, and What It Would Take to Change It
The IMF Managing Director has always been European. The World Bank President has always been American. Both are customs, not rules. A custom you can only change by deciding to be embarrassed by it.
The governance arrangement that has persisted since 1944. What it costs in legitimacy. What changed at the WTO in 2021. What the Spring Meetings could look like if the shareholders chose to act.
Read the Paper
Analytical Note
IFC Fragile States: 11% Satisfactory and the Additionality Problem
11% satisfactory in fragile states. The PSW: $2.5 billion to go where the private sector would not — used to replace capital the IFC would have deployed anyway. That is a refund, not additionality.
Croupiers in Washington, tables in Kinshasa. The IFC’s mandate in fragile states is the most important and the least delivered. The additionality claim does not survive the data.
Read the Paper
Analytical Note
World Bank · IDA Private Sector Window · April 2025
Rethinking the IDA Private Sector Window: From Internal Allocation to Competitive Deployment
206 projects. $6.18 billion. 83.5% through IFC-managed facilities with no competitive allocation. IFC both originates and assesses the additionality of its own transactions. Non-PSW IFC commitments in PSW-eligible countries fell during IDA18. The five sectors most linked to job creation receive 15.4% of PSW resources. Financial intermediaries in Central Asia receive 67% average subsidies; Africa receives 37%.
The structural design flaw that no governance restructuring in IDA21 has resolved: the window is open, but only one institution holds the key.
AfDB Results: The 94% Problem — Five Named Cases and a Validation Architecture That Cannot Correct Itself
94% satisfactory, every year. Adjusted plausible range: 61–75%. IDEV validates a sample and produces no independent rating. The gap is unpublished.
The task manager who wrote the rating was wrong, and kept it. Medupi. The smile that does not change. Five cases where the documented evidence and the official number tell different stories.
Read the Paper
Analytical Note
ADB Results: The Best Evaluation Architecture in Development Banking — and a 12-Point Gap That Is Not Closing
IED validates 100% of PCRs. Publishes the management-IED gap. Signs its name to the disagreement. The gap: 12 points sovereign. Not closing.
Nine Japanese presidents in fifty-nine years. A rule you can change at a meeting. A custom you can only change by deciding to be embarrassed by it. The best oversight architecture of the five — and what it has and has not delivered.
Read the Paper
Analytical Note
IDB: Evaluation Architecture and the Rating Gap — Sixteen Years of the DEF and a 28-Point Management-OVE Divergence
Management: 81% satisfactory. OVE: 53%. Same projects. Same year. A decade. Capital decisions made on the management figure.
The most comprehensive self-evaluation history in development banking. The only institution to evaluate its own evaluation framework — and find the cultural change did not occur. Four named cases. Six failure patterns. What shareholders should require before the next capital decision.
Read the Paper
Analytical Note
World Bank DPF — Policy Without Performance
Policy Without Performance: Isomorphic Mimicry and the DPO Incentive Trap
MTI S+ rate: 27.5% overall, falling from 41% (2005–09) to 17% (2015–19). SSA CPIA flat at 3.1 for 18 years. 11,628 prior actions. The form-function gap.
Governments adopt the outward forms of reform — laws gazetted, portals launched, strategies approved — to trigger disbursements, while the underlying administrative capability remains absent. 1,551 evaluated DPF operations. 20 years of IEG lessons the Bank has not absorbed. Five reform proposals.
Read the Paper
Paper
Triggers by Global Practice: The Evidence Record — Companion Annex
Five GPs. Forty-plus prior action templates. Each with the disbursement trigger, the documented function gap, and the specific IEG project ID and rating. MTI, Governance, Energy, FCI, Social Protection.
The catalogue that MTI will not believe without specific evidence. Every prior action template that exemplifies the form-without-function pattern, sourced to named IEG ICRRs. The argument made concrete, operation by operation.
Read the Annex
World Bank PforR — Designed to Fail
Designed to Fail: The SOML PforR Case Study (Nigeria P146583)
$500 million. The world’s largest PforR at approval in 2015. Rated Moderately Unsatisfactory by IEG, efficiency Negligible. The management ICR rated it Moderately Satisfactory throughout. The fiduciary assessment approved the programme knowing procurement fraud had occurred in the implementing ministry. 83% of first-year disbursements — $52.9 million in state performance grants — sat on the balance sheet as a single unaudited line. Nigeria will repay $387.6 million over 38 years. The Bank earns its spread regardless of outcome.
How a PforR can be designed to fail: the fiduciary gap, the managed rating, and the institutional immunity that insulates the institution from every financial consequence of its own design failures.
Read the Paper
Tanzania
The Richmond Reckoning
What happens when a courageous Speaker of Parliament — Hon. Sam Sitta — appoints a young MP — Harrison Mwakyembe — to chair a Parliamentary Committee of enquiry into an irregular procurement in the energy sector?
The enquiry report led to the resignation of the sitting Prime Minister and two other Ministers — the first such resignations in Tanzania’s history.
Read the Essay
PFM Reform & PEFA
PEFA & Sub-Saharan Africa
SSA average: 2.3/4.0. Joint lowest globally. Scores highest on budget documentation (de jure compliance). Scores lowest on external audit (functional accountability).
The form-function split in PFM reform made visible. Where scores have improved in SSA — and why those improvements do not mean what they appear to mean. Schick’s seven core indicators. The basics-first case.
Read the Paper
PEFA at the Crossroads: Service Delivery, Core Controls, and the Cost of Losing the Thread
PI-30 External Audit: 1.9 out of 4.0 — the lowest of all 31 indicators. PI-21 Cash Management: 2.2. PI-24 Procurement: 2.1. Every one of Schick’s seven foundational indicators is below the adequate performance threshold in Sub-Saharan Africa. Meanwhile the PEFA Secretariat is promoting PEFA Climate, PEFA Gender, PEFA++, and PEFA SDGs. The framework has lost the thread.
The teacher is absent. The medicine is not on the shelf. The child is not learning. No climate budget tag will fix this. Written by a member of the original PEFA indicator design team.
Read the Paper
Browse by Topic
All papers on this page reflect the independent analysis of Parminder Brar. They have not been commissioned by or submitted to any of the institutions reviewed. The empirical foundation is publicly available evaluation data — IEG, OVE, IED, IDEV, IEO — interpreted through the lens of 20 years of field experience in the institutions being assessed.
The platform framing is reformist, not prosecutorial. We can do better. The evidence says so. The evaluation offices say so. The question is whether the institutions will act on what their own evaluators find.