The Survey That Did Not Ask the Right Question
A Response to ODI’s “Reforming Multilateral Development Banks: Perspectives from Client Countries” (March 2026)
Parminder Brar · mdbreform.com · May 2026
| Figure | What it refers to |
| --- | --- |
| 73% | “Very effective” (ODI survey) |
| 4% | Transport S+ (IEG record) |
| $51bn | Below standard, seven sectors (IEG record) |
| 0 | Times IEG data shown to survey respondents |
ODI Global has published what it describes as “the most comprehensive, comparative assessment currently available” of multilateral development banks — a survey of 650 government and MDB officials across 125 countries, supplemented by 250 interviews in 12 country case studies. The report is well-executed, carefully presented, and genuinely useful as a map of what client-country officials think about MDBs.
The problem is that what client-country officials think about MDBs and what the evaluation data shows about MDBs are two entirely different things. The report captures the first. It does not engage with the second. And the gap between the two is where the accountability problem lives.
The Perception Problem
The ODI survey asks government officials how relevant they find MDB financing, how effective they perceive individual MDBs to be, and how well MDBs support their country’s priorities. Three-quarters of respondents rate MDB functions as relevant. About 60 percent or more rate the large MDBs — World Bank, AfDB, AsDB, IDB — as “very or extremely effective.” Nearly half think MDB coordination is working “well or very well.”
These are perceptions. They are real. They matter. But they are not outcomes.
The Independent Evaluation Group — the World Bank Group’s own independent evaluator — maintains a database of 10,542 rated projects. The IEG data shows the following for Sub-Saharan Africa between FY2015 and FY2026, measured by the Bank’s original Satisfactory standard and weighted by committed dollars:
Transport: 4% Satisfactory. $10.8 billion below standard.
Energy: 15% Satisfactory. $18.3 billion below standard.
Water: 25% Satisfactory. $3.8 billion below standard.
Health: 29% Satisfactory. $6.0 billion below standard.
Education: 31% Satisfactory. $4.6 billion below standard.
FCI: 36% Satisfactory. $2.8 billion below standard.
Agriculture: 38% Satisfactory. $5.0 billion below standard.
Not a single sector delivers more than 38 cents of every committed dollar to Satisfactory outcomes in Africa. The total: $51 billion below standard across seven sectors in a decade.
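The arithmetic behind these commitment-weighted figures can be sketched in a few lines. The sector numbers are the ones quoted above; the data structure and the derivation of implied total commitments are illustrative, not IEG's own methodology.

```python
# Sketch of the commitment-weighted arithmetic used above.
# Sector figures are the ones quoted in this article (IEG, Sub-Saharan
# Africa, FY2015-FY2026); the dict structure itself is illustrative.

# sector: (Satisfactory share of committed dollars, $bn below standard)
sectors = {
    "Transport":   (0.04, 10.8),
    "Energy":      (0.15, 18.3),
    "Water":       (0.25,  3.8),
    "Health":      (0.29,  6.0),
    "Education":   (0.31,  4.6),
    "FCI":         (0.36,  2.8),
    "Agriculture": (0.38,  5.0),
}

# Total committed dollars below the Satisfactory standard.
below_standard = sum(below for _, below in sectors.values())
print(f"Below standard: ${below_standard:.1f}bn")  # ≈ $51bn across seven sectors

# Implied total commitment per sector: if share s of committed dollars was
# Satisfactory, the below-standard dollars are (1 - s) of the total.
for name, (share, below) in sectors.items():
    total = below / (1 - share)
    print(f"{name}: ~${total:.1f}bn committed, {share:.0%} Satisfactory")
```

The point of weighting by commitment rather than counting projects is visible in the numbers: energy's 15 percent Satisfactory share sits on an implied ~$21.5bn of commitments, so it drags the portfolio down far more than a small sector with the same rating would.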
The ODI report does not reference this data. It does not cite the IEG outcome ratings. It does not mention commitment-weighted performance. It does not ask its 650 respondents whether they are aware that the transport portfolio in Africa has delivered 4 cents on the dollar, or that the energy portfolio behind Mission 300 delivers 15 cents.
Asking the Client What They Think of the Doctor — Without Showing Them the Test Results
The ODI methodology is a client satisfaction survey. There is nothing wrong with client satisfaction surveys. They measure something real — the quality of the relationship, the responsiveness of the institution, the perceived alignment with country priorities.
But a client satisfaction survey that does not present the client with the outcome data is measuring something incomplete. It is the equivalent of asking a patient how they feel about their doctor without showing them their blood test results. The patient may feel well served. The test results may say otherwise.
The ODI survey finds that 73 percent of government respondents rate the World Bank as “very or extremely effective” at providing finance. The IEG data shows that 70 percent of committed resources in Africa went to projects rated below Satisfactory. Both findings are true. They describe different things. The ODI report presents one. It does not acknowledge the other.
What the Report Gets Right — and Where It Stops
The report does surface several findings that align with the evaluation record.
Processing times. 44 percent of government respondents say the project cycle is “very or extremely long.” This is consistent with IEG findings on implementation delays and the Bank’s own recognition that disbursement lags erode project effectiveness.
Weak institutional capacity. 51 percent cite weak capacity at the national level as the main challenge to building project pipelines. This is the finding that appears in every IEG sector evaluation — the Bank can design projects but cannot build the institutional capacity to implement them.
Coordination failures. Only 48 percent of government respondents think MDBs coordinate well. Africa scores worst, with 21 percent rating coordination as “poor or very poor.” This is consistent with the RAP 2022 finding that only 18 of 45 countries had good “One Bank Group” performance.
Declining perceived effectiveness. The report notes that perceived effectiveness “has remained stable or decreased for most MDBs and for most functions” since 2021. This is a significant finding that the report’s own framing underplays.
But the report stops at perception. It does not cross-reference these findings against the evaluation evidence. It does not ask: if 73 percent of government officials rate the World Bank as effective at financing, and the IEG record shows 4 percent Satisfactory in transport and 15 percent in energy, what does that gap mean? Is it that government officials do not see the IEG ratings? Is it that the MS+ metric — Moderately Satisfactory and above, the Bank’s preferred headline — creates a false impression of success? Is it that the sovereign guarantee structure removes the incentive for governments to scrutinise outcomes?
These are the questions the report should have asked. It did not.
The Ten Recommendations — and What Is Missing
The ODI report concludes with ten recommendations for MDB shareholders and management. They include: leverage the combination of functions across MDBs; make use of MDB headroom; ensure country presence; provide tailor-made technical assistance; support low-carbon transition; clarify local currency options; follow through on reforms; boost coordination; invest in project preparation; streamline the project cycle.
These are sensible operational recommendations. None of them addresses the central problem documented in the evaluation record: the majority of resources committed to Africa over the past decade went to projects that did not achieve their development objectives. The recommendations assume the delivery model works and needs operational refinement. The IEG data says the delivery model is producing below-Satisfactory outcomes across every major sector.
The report recommends “follow through with implementation of reforms to boost operational effectiveness.” The IEG record shows that transport has delivered 4 percent Satisfactory by commitment for a decade. That is not an operational effectiveness problem that coordination, streamlining, or capacity-building will solve. That is a structural delivery failure.
The Question the Report Does Not Ask
The ODI survey asks government officials: how relevant are MDBs? How effective? How well do they support your priorities?
It does not ask: are you aware of the IEG outcome ratings for MDB projects in your country? And if so, do they change your assessment?
That is the question that would have made this report transformative rather than confirmatory. If a government official in Kenya is told that the World Bank’s transport portfolio in their country has zero Satisfactory outcomes on $1.4 billion, and the energy portfolio has zero Satisfactory on $1.4 billion — would they still rate the World Bank as “very effective”? If an official in Nigeria is shown that every project named after the jobs agenda — Competitiveness and Job Creation ($250 million), Growth and Employment ($160 million) — was rated MU or worse, would they still say MDBs are “well aligned with country priorities”?
We do not know. The ODI report did not ask.
What Should Come Next
The ODI methodology — 650 respondents, 125 countries, 12 case studies — is a platform that could produce genuinely important findings if it were combined with the evaluation evidence. A third edition of this survey should:
1. Present respondents with the IEG outcome data for their country before asking them to rate effectiveness. Informed perception is more valuable than uninformed perception.
2. Ask whether respondents have seen the IEG ratings for MDB projects in their country, and whether their ministry tracks project outcomes after completion.
3. Distinguish between the MS+ metric and the Satisfactory standard. The Bank reports 84.6 percent MS+ in education. The honest bar is 31 percent Satisfactory by commitment. The gap — 54 percentage points — is the accountability gap. The survey should ask which metric respondents are aware of.
4. Cross-reference the perception data against the IEG outcome data at the country level. Where perception and outcomes diverge most sharply, the explanation is where the reform agenda should focus.
5. Ask about the sovereign guarantee. Under the sovereign guarantee, the government repays the loan regardless of outcomes, so the Bank is made whole whether the project works or not. Does this structure affect how government officials assess MDB effectiveness? The survey should ask.
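The distinction in point 3 between the MS+ metric and the Satisfactory standard comes down to where the bar sits on IEG's six-point outcome scale. A minimal sketch: the scale is IEG's standard one; the function names are illustrative, and the education figures are those quoted above.

```python
# IEG's six-point outcome scale, worst to best, and the two bars
# contrasted in point 3. Function names are illustrative.

SCALE = [
    "Highly Unsatisfactory", "Unsatisfactory", "Moderately Unsatisfactory",
    "Moderately Satisfactory", "Satisfactory", "Highly Satisfactory",
]

def counts_as_ms_plus(rating: str) -> bool:
    """The Bank's preferred headline bar: Moderately Satisfactory or above."""
    return SCALE.index(rating) >= SCALE.index("Moderately Satisfactory")

def counts_as_satisfactory(rating: str) -> bool:
    """The original, stricter standard: Satisfactory or above."""
    return SCALE.index(rating) >= SCALE.index("Satisfactory")

# A Moderately Satisfactory project clears the headline bar but not the
# original standard; that single rating band is the whole difference
# between the two metrics.
assert counts_as_ms_plus("Moderately Satisfactory")
assert not counts_as_satisfactory("Moderately Satisfactory")

# The education gap quoted above: 84.6% MS+ vs 31% Satisfactory by commitment.
gap = 84.6 - 31.0
print(f"Accountability gap in education: {gap:.0f} percentage points")  # 54
```

A survey question built on this distinction is cheap to add: show respondents both numbers for their country and ask which one they had previously seen.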
The ODI report is a carefully constructed map of what clients think. The IEG database is a carefully constructed map of what happened. The two have never been placed side by side. When they are, the conversation will change.
Related Analysis
- Zero Accountability: Eleven Findings from the Spring Meetings
- The Seven Sector Records: Transport (4%) · Energy (15%) · Water (25%) · Health (29%) · Education (31%) · FCI (36%) · Agriculture (38%)
- The MIGA Record: The Guarantor That Cannot Verify Its Own Results