Caroline Ashley

Caroline focuses on how innovative economic models can deliver more inclusive and resilient development.

Caroline has worked for many years on markets, business models and investment approaches that deliver social impact, in roles with challenge funds, impact investors, entrepreneurs, corporates, NGOs and policy makers. As Results Director of the DFID Business Innovation Facility and the Sida Innovations Against Poverty programme, she founded the Practitioner Hub for Inclusive Business in 2010, then took on hosting it, and acted as Editor of the Hub for seven years before it transitioned to management by IBAN.

Most recently Caroline led economic justice programmes at Oxfam GB, before moving to Forum for the Future, to lead global systems change programmes to accelerate our transition to a sustainable future.

Does all this measurement add up? The challenge of aggregating results across an impact portfolio

9 June 2016

What information does an investor need to know whether a business in their portfolio is doing well? It may be turnover or team retention, licenses or milestones attained. Whatever it is, it is probably different from the information they need to aggregate progress across their portfolio and report on what their entire fund or programme is achieving.

The challenge of aggregating information across a range of innovations or social businesses is common to both investors and grant-makers such as challenge funds.  And it’s a problem that has not yet been cracked.  

It’s actually a problem in four parts:

  1. Information that is aggregatable across an entire portfolio may tell you little about progress or impact.  Revenue and numbers of lives touched (or ‘people reached’) are both cases in point.
  2. Information that does indicate progress of a specific deal or innovation is unlikely to be aggregatable with other investments.  If the attrition rate of franchisees is an apple, then kerosene expenditure avoided is an orange. Apples and oranges are different.
  3. Information that is needed to really indicate social value across a portfolio may be unavailable or burdensome for the investee to collect, or may not be perceived as relevant by the investee.  The share of beneficiaries who are women is one example; outcomes performance is another.
  4. When information is aggregated, is it usable and useful?  If other organizations use different rules and assumptions – such as which proxies are used, the time period reported, or the share of impact to be claimed by an investor – then no comparison of performance is possible.

The funds and programmes that I work on tend to support inclusive businesses that directly engage people at the Base of the Pyramid (BoP). So the most common aggregate indicator is number of people reached.   The main limitations of this metric are:

  • Total disarray on what exactly is reported: do you multiply the number of beneficiaries by household size (so one solar system = 5 beneficiaries)? Do you count them from the day of investment or day the business started?   Is there data to work out unique households served rather than cumulative sales?
  • What does it tell us? As someone at Omidyar Network once said to me: if the number of lives touched was the only metric, Omidyar has done its job by investing in Wikipedia.

I do think the number of people reached should be collected, but with five provisos:

  • Count and report the number of households reached.  Any multiplication by household size should come only after that figure is reported.
  • Make the assumptions and definitions clear: such as how units sold convert to unique customers, and whether data is per year or cumulative – and if cumulative, from what point.
  • Keep separate the numbers reached as consumers of a good or service (which are usually large) and those reached with income opportunities as employees, distributors or entrepreneurs (which are usually much smaller).
  • Interpret them in context.  5,000 households reached is large scale for an agri-processor sourcing from farmers, but tiny for a mobile phone app.  So it may not be the actual number, but the level of scale achieved relative to its potential, that is aggregated across the portfolio.
  • Track and report other things too.
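To make the first two provisos concrete, here is a minimal sketch of converting cumulative unit sales into a reported households figure. The repeat-purchase rate and household size are illustrative assumptions, not standard values; the point is that they must be stated alongside the number.

```python
# Hypothetical sketch: converting cumulative units sold into "households
# reached", with every assumption made explicit rather than buried.

def households_reached(units_sold, repeat_purchase_rate=0.2, household_size=5):
    """Estimate unique households and individuals from cumulative unit sales.

    repeat_purchase_rate and household_size are illustrative assumptions;
    report them alongside the result so figures can be compared.
    """
    unique_households = round(units_sold * (1 - repeat_purchase_rate))
    individuals = unique_households * household_size
    return unique_households, individuals

households, people = households_reached(10_000)
print(households, people)  # 8000 unique households, 40000 individuals
```

Reporting 8,000 households first, and only then the derived 40,000 individuals, keeps the multiplication by household size transparent.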

So what else can be tracked and included in aggregated indicators? 

Other elements of social impact include:

  • Information on who is reached.   E.g. the share that fall into specific income groups, or the share of clients that are women (the exact percentage is hard, so I tend to use these groups:  virtually all women, the majority (55-95%), roughly half (45-55%), a minority (5-45%), virtually none).
  • The depth of impact. This is hugely difficult and will no doubt be controversial if widely implemented.  But often a common-sense judgement can be made as to whether the impact per person is high, medium or low.  An income opportunity that moves a family out of poverty is high, while an additional market for their tomatoes, which diversifies risk and expands demand at prevailing prices, counts as ‘low’.
  • Potential to influence innovation uptake by others can also be scored.  First movers will score higher, but it can depend on the relevance of the business to others and the extent to which the model is replicable.
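The banding of who-is-reached data described above can be sketched as a simple mapping. The band boundaries are those given in the text; the function name is my own.

```python
# Hypothetical sketch: bucketing the share of women clients into the broad
# bands from the text, since an exact percentage is hard to pin down.

def women_share_band(share):
    """Map a fractional share (0-1) of women clients to a reporting band."""
    if share > 0.95:
        return "virtually all"
    if share > 0.55:
        return "majority"
    if share >= 0.45:
        return "roughly half"
    if share >= 0.05:
        return "minority"
    return "virtually none"

print(women_share_band(0.62))  # majority
```

Bands like these are aggregatable across a portfolio in a way that noisy exact percentages are not.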

Ideally aggregated metrics would cover outcomes, not just outputs – changes in people’s lives, poverty level, health or skills.  In the health sector, with more advanced research methods and economics, it may be possible to convert outputs into DALYs – disability-adjusted life years – at least theoretically.  Elsewhere, I’m impressed if a single business can report outcomes, and please let me know if you see these reported aggregated across a portfolio.

For assessing commercial viability, challenge fund or VC portfolios are often investing at an early stage when profit margins and IRR are not useful metrics.  Pre-profit qualitative measures can be scored High, Medium and Low across an entire portfolio:

  • Is the business on track against its own milestones?
  • How strong is the capacity of the leadership and management team?
  • Does the business have the external ‘deals’ in place it needs to scale, including investment, permissions, licenses and partnerships?
  • Is it operating at a price point that will cover operating costs once it is scaled?

The portfolio can be mapped against each question, or the questions can be combined into an ‘index’ of viability for an overall high, medium or low score.  Such commercial considerations are essential to the assessment of social impact, because viability drives scale.  So a business with a low viability score should have its social impact score muted.
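One way the four questions above could be combined into a viability index, and then used to mute the social impact score, is sketched below. The scoring weights, thresholds and the 0.5 muting factor are illustrative assumptions, not taken from any specific fund.

```python
# Hypothetical sketch: combining the four pre-profit questions into one
# viability band, then muting social impact for low-viability deals.
# All weights and thresholds here are illustrative assumptions.

SCORES = {"low": 1, "medium": 2, "high": 3}

def viability_index(milestones, leadership, deals, price_point):
    """Average four high/medium/low ratings into an overall band."""
    avg = sum(SCORES[s] for s in (milestones, leadership, deals, price_point)) / 4
    if avg >= 2.5:
        return "high"
    if avg >= 1.5:
        return "medium"
    return "low"

def adjusted_impact(social_impact_score, viability_band):
    """Mute social impact when commercial viability is low."""
    return social_impact_score * (0.5 if viability_band == "low" else 1.0)

band = viability_index("high", "medium", "medium", "low")
print(band)  # medium
```

In practice a fund might weight the questions unequally – leadership capacity often counts for more at an early stage – but the structure of the ladder is the same.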

This raises an important point across all this tracking. Results vary hugely by the maturity or stage of the business, given that it can take 10 years for an inclusive business to scale.  So start by establishing maturity, and disaggregate data by business stage.

The same indicators will not work for different portfolios.  And even if they do, they will probably be weighted differently.  I worked on two quasi challenge funds – the Business Innovation Facility and Innovations Against Poverty – with similar goals but different instruments. Each used a development index and a viability index, combining a number of metrics like those above into an overall score.  But differences in strategy meant slight differences in which metrics were used and how they were weighted.  Now in Connect to Grow, supporting B2B partnerships, we are using a similar approach, again adapted to the programme strategy.

Other programmes will prioritise other issues. Most impact investors will measure follow-on investment as an indicator of leverage.  Aavishkar Fund seeks to reflect an element of the additionality of their investment by identifying the deals in which they were the first investor.  A core part of their social impact is based on the percentage of investment deployed in low-income areas.  African Enterprise Challenge Fund and others put a dollar value on benefits delivered, maximizing aggregation potential. The impact assessment framework developed by Big Society Capital and applied to the KL Felicitas portfolio rates investees on their impact tracking practice – the process rather than the results.  It is usually sensible to aggregate indicators sector by sector: kilowatts or gigawatts generated, health workers enrolled or treatments provided, or student performance in exams.  A sector focus can be much more intelligent than aggregated data, but can still disguise huge differences between different types of models.

The details must vary, but the broad principle is to create a ladder that converts specific deal data to a score or ranking on an indicator, and then converts the indicators into an overall judgement of progress.  Given that the deal data itself cannot be aggregated, there is no alternative to this process, which of course involves a high degree of judgement. But the team with the skills to run the portfolio should have skill enough to recognize good/fair/poor or high/medium/low when they see it, and the honesty to report it.
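The ladder described above ends in a portfolio-level summary. A minimal sketch, with invented deal names and bands, might look like this: each deal carries a band per indicator, and the portfolio view simply counts how deals are distributed across those bands.

```python
# Hypothetical sketch of the top rung of the "ladder": per-deal indicator
# bands are summarised across the portfolio. Deal names and bands are invented.
from collections import Counter

portfolio = [
    {"name": "AgriCo",  "scale": "medium", "depth": "high",   "viability": "medium"},
    {"name": "SolarCo", "scale": "high",   "depth": "low",    "viability": "high"},
    {"name": "AppCo",   "scale": "low",    "depth": "medium", "viability": "low"},
]

def summarise(deals, indicator):
    """Count how many deals fall into each band for one indicator."""
    return Counter(d[indicator] for d in deals)

print(summarise(portfolio, "viability"))
```

The resulting band counts are what a fund can honestly report and track over time, even though the underlying deal data (attrition rates, kerosene savings) could never be summed directly.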

Further information

This blog is based on a presentation delivered at the Social Value conference in London in February 2016 and is part of an upcoming series on cross sector perspectives of aggregation from Social Value UK.