Published on Development Impact

Does Business Training Work?

What do we really know about how to build business capacity?    A nice new paper by David McKenzie and Chris Woodruff takes a look at the evidence on business training programs – one of the more common tools used to build up small and medium enterprises.   They do some work to make the papers somewhat comparable and this helps us to add up the totality of the lessons.   What’s more, as David and Chris go through the evidence, they come up with a lot of interesting (and some not-so-obvious) lessons for actually doing impact evaluation of business training programs – I’ll save these lessons for next week and today talk about why we don’t know that much.   

To take stock of the available evidence on business training, David and Chris search Econlit and Google Scholar, and then go out and ask folks for studies. They limit their discussion to papers that have tried to deal with selection on observables and unobservables and that focus on business practices (thus not taking on the substantial technical/vocational training literature). This gives them 14 studies to focus on (13 of which are randomized). These are almost all some sort of classroom training (sometimes offered as part of a microfinance program), but they also briefly discuss three other experiments which focus on providing individual consulting services (more on this later).

So who gets this training? Bottom line: it's a heterogeneous bunch – not only across the different studies, but also within them – and this within-program heterogeneity is going to make it harder to capture effects. Some programs are for women entrepreneurs only (particularly those tied to microfinance) and some are male dominated (only a few have some semblance of balance – which makes it hard to compare the effects of a single program on both men and women – you'll see below that the results for men and women are actually quite different). Most of the participants in these studies already have an existing business, with some exceptions. Most are urban (maybe because that's where the trainers are) and around 35-45 years old (with two studies tackling younger entrepreneurs). Firm size is mostly in the range of microenterprises (maybe one or two employees), with two studies looking at subsistence firms (no employees) and two which sample SMEs from industrial clusters. Most of the studies don't focus on a particular cluster, or a particular set of clusters. And only two of them offer training to a sample drawn from a representative frame of entrepreneurs.

What kind of training do they get? Here again there is a fair amount of heterogeneity. Length varies from short (two half days) to longer (a week or more, full time). In terms of material: “they all focus on general business skills that should be broadly applicable to most businesses, rather than technical knowledge or sector-specific content.” Almost all programs cover some basic accounting – which includes lessons on keeping household and business accounts separate and on record keeping. For new businesses, the training tends to tackle generating a product and taking it to market. For existing businesses, the training tends to cover marketing, pricing and costing, inventory management, customer service and financial planning.

Now one significant issue that a lot of studies face is that if you offer this training, folks won't come. Indeed, across the studies, take-up among entrepreneurs offered training comes in around 65%. Even the “mandatory” training that was part of a microfinance program only gets 88% attendance. This low take-up is going to lower power – as Chris and David point out, 65% take-up means that you need 2.4 times the sample size you would need with 100% take-up.
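To see where that 2.4 comes from (a back-of-the-envelope sketch of the logic, not the paper's exact calculation): with take-up rate p, the intention-to-treat effect is the effect of training diluted by p, so the minimum detectable effect shrinks by a factor of p and the required sample size grows by a factor of 1/p²:

\[
\frac{1}{p^{2}} = \frac{1}{0.65^{2}} \approx 2.4
\]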

OK, so with the heterogeneity (reducing power), the low take-up (reducing power), the roughly one-year follow-up horizon (maybe reducing power) with usually only one follow-up survey (not increasing power), survey attrition (reducing power), and measurement issues (reducing power – more on all of these next week), what do these studies find?

·         Enterprise survival: David and Chris report a sample-weighted average of a 4 percentage point increase for men and a 3 percentage point decrease for women. A group of the studies don't report on this, and a bunch do not have significant results (and by not significant, I mean the confidence interval includes both plus and minus 5 percentage points). Indeed, only one study finds significant effects at the 5% level (and it finds +9 percentage points).

·         Enterprise start-up: Here the news is better – studies which focus on new-business training do find some significant positive effects. However, among the two studies that look at this, there is some switching from wage work, so the net employment effect is not significant. And it's not clear the effect persists – one study (by David and Chris and a co-author) shows an initial positive effect, but one that dissipates with time. One additional thing to keep in mind is that both start-up and survival effects may introduce some selectivity into which businesses you observe at follow-up.

·         Business practices: Good news here – the entrepreneurs do seem to be doing more of what they've been taught. (But we can't get gender-disaggregated results here – when separated by the sex of the entrepreneur, the results are not significant.) What's interesting is that, while significant, these effects often are not huge – Chris and David give us two examples of studies where the treated entrepreneurs are 6 to 12 percentage points more likely to engage in the recommended practices than the control group. Makes you wonder about the curriculum, or the teachers…

·         Profits and sales: These are tough to measure well, so not a lot of studies do it. But among those that do, and that have sufficient power, there are some positive effects (3 studies for profits, 5 for revenues). One issue that comes up here is attrition – for example, one study which found a significant impact on profits for men loses significance when the authors calculate a lower bound that takes attrition into account.

·         Impacts on microfinance institution outcomes: Recall that some of these interventions are run by microfinance institutions for their clients.   It turns out that, on balance, this kind of training improves microfinance repayment and/or borrower selection.

·         Individual consulting: While not a part of a lot of the results above, Chris and David also look at three experiments which try one-on-one consulting for firms.   Overall, the score is one not significant, one sort of mixed and one (for large firms) with significant positive results.  

So where does this leave us? The bottom line, and David and Chris have a nice table to show this, is that most of these studies had low power (and since I am working on a study right now with a sample that looks a lot like these, I can sympathize). They take the parameters of the studies (ex post, of course, once those sample size parameters are revealed) and do some power calculations. Chris and David point out that for the average business a 25% increase in profits might cover 75% of the cost of the program over a year. But none of the studies was powered to detect a 25% increase in revenues, and only two were powered for a 25% increase in profits (at 80% power). So we shouldn't be surprised by the relatively weak overall results on these programs. And keep in mind that this is a fairly new area – of the 14 studies, only 5 have been published so far, and the oldest dates from 2010 – so we are still learning about doing evaluations in this area.
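To get a feel for the kind of ex-post power calculation they do, here is a minimal sketch in Python using statsmodels (the sample size and take-up numbers below are made up for illustration, not taken from the paper): given a study's realized sample per arm, you can back out the smallest effect it could have detected at 80% power, and see how imperfect take-up inflates that number.

# Illustrative ex-post power calculation (hypothetical numbers, not from the paper).
# With take-up p, the intention-to-treat (ITT) effect is p times the effect on those
# actually trained, so the detectable effect on trainees is the ITT MDE divided by p.
from statsmodels.stats.power import TTestIndPower

n_per_arm = 400      # assumed realized sample per arm
take_up = 0.65       # share of the treatment group that actually attends training
alpha, power = 0.05, 0.80

analysis = TTestIndPower()
# Minimum detectable ITT effect, in standard deviations of the outcome (e.g. profits)
mde_itt = analysis.solve_power(effect_size=None, nobs1=n_per_arm,
                               alpha=alpha, power=power, ratio=1.0)
mde_trained = mde_itt / take_up

print(f"MDE (intention to treat): {mde_itt:.2f} SD")
print(f"MDE (on those trained):   {mde_trained:.2f} SD")

With these illustrative numbers, the detectable intention-to-treat effect is roughly 0.2 standard deviations of the outcome, and dividing by 65% take-up pushes the detectable effect on those actually trained to roughly 0.3 standard deviations – a big effect to be hoping for from a short training course, given how noisy microenterprise profits are.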

David and Chris lay out a number of questions we still need to tackle (who does training help the most, what do effects look like over time, what are the general equilibrium effects, why don't firms buy this training on their own, should we train in business practices or personality) but clearly number one on the agenda is a set of studies with more power. And I'll write about what they have to say on that and other methodological issues next week.


Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
