How the D-WMS is different
The Development World Management Survey is an expanded survey tool to measure the quality of management practices at schools and hospitals in developing countries.
The original World Management Survey (WMS) project pioneered the methodology of measuring management in manufacturing firms using an interview-based survey tool that evaluates a range of day-to-day management practices on a set scale. The tool has also been adapted to the education and healthcare sectors. As part of the core research team of the WMS, we have worked since 2008 to significantly expand the original data collection project and to systematically measure management practices within and across countries.
As the dataset grew, we noticed that when we applied the tool to schools and hospitals in developing countries, the distribution of management scores was very tight and sometimes truncated at the lowest score. We were clearly missing important variation in the substantial left tail of the distribution.
We created the Development WMS to allow us to more finely capture the practices used in these establishments. The survey is fully backwards compatible with the WMS. For a thorough description of the new tool, check out the working paper here; we summarize the key differences between the two surveys below.
Break down types of activities within management practices
We identified three types of activity within each WMS management practice that could not be separately recovered ex post from a single score under the original methodology: implementation, usage, and monitoring.
Map and independently measure different activity types
We expanded the survey “vertically” by disentangling these activity types and mapping one to each question on the survey’s management practices. This reduces measurement error and greatly increases the amount of codified information. For policy, we can now pinpoint the bottlenecks and identify where interventions could be most effective.
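As a sketch of this vertical decomposition: a practice the WMS scored as a single number becomes three activity-type sub-scores in the D-WMS, which can be analyzed separately or aggregated back into a practice-level score. The practice name, the 1–5 scale, and the simple averaging rule below are illustrative assumptions, not the official instrument.

```python
# Illustrative sketch only: the three activity types come from the D-WMS,
# but the scale and the averaging rule are assumptions for illustration.
ACTIVITY_TYPES = ("implementation", "usage", "monitoring")

def practice_score(sub_scores: dict) -> float:
    """Average the three activity-type sub-scores into one practice score."""
    missing = [a for a in ACTIVITY_TYPES if a not in sub_scores]
    if missing:
        raise ValueError(f"missing activity-type scores: {missing}")
    return sum(sub_scores[a] for a in ACTIVITY_TYPES) / len(ACTIVITY_TYPES)

# A school may implement a data-driven practice well (filled-out report
# cards) yet barely use or monitor it -- variation a single score hides:
data_driven = {"implementation": 4.0, "usage": 1.5, "monitoring": 1.0}
print(round(practice_score(data_driven), 2))  # 2.17
```

Keeping the sub-scores separate is what lets the analysis say *which* part of a practice is failing, rather than only that the overall practice is weak.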
Capture key variation in strength of practices and activities
We expanded the survey “horizontally” to allow for greater variation in scores, letting interviewers differentiate at a finer level between the strengths of the activities in place. This way we “un-bunch” some of the crucial lower-tail variation prevalent in the distribution of scores in developing countries.
How it fits the development context
While we have kept the essence of the WMS, ensuring backwards compatibility, we adapted the tool to the development setting by addressing the following challenges that arose when using the original WMS in developing countries.
Thick left tail
The distribution of scores in the education sector tends to be tight around the scores for weak management practices. Although the global context of the WMS project allows for a very useful comparison of world-class and poorly managed organizations across a number of countries, the very thick (almost truncated) left tail for developing countries makes it harder to explore the variation in managerial practices among the less well-managed organizations.
… allows for the systematic expansion of the possible scores from full points to half points, with specific instructions on what falls into each half-point category.
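The expanded grid can be sketched as follows. The rule used here for collapsing a half-point score back onto the original integer grid (rounding down) is an assumption for illustration, not the official D-WMS mapping.

```python
# Hypothetical sketch of the "horizontal" expansion: the original WMS scores
# practices on an integer 1-5 grid; the D-WMS allows half points, nearly
# doubling the number of distinguishable levels in the bunched left tail.
WMS_GRID = [1, 2, 3, 4, 5]
DWMS_GRID = [n / 2 for n in range(2, 11)]  # 1.0, 1.5, 2.0, ..., 5.0

def to_wms(dwms_score: float) -> int:
    """Collapse a D-WMS half-point score onto the WMS integer grid.
    Rounding down is an illustrative choice, not the official rule."""
    if dwms_score not in DWMS_GRID:
        raise ValueError(f"{dwms_score} is not on the D-WMS grid")
    return int(dwms_score)

print(len(DWMS_GRID))  # 9 possible scores instead of 5
print(to_wms(2.5))     # 2
```

Because every half-point score maps back to an integer score, analyses on the finer D-WMS grid remain comparable with the original WMS data.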
Implementing the WMS via phone calls was a major barrier in public-sector surveys in developing countries. For instance, sampling frames in India were difficult to acquire and build, and, when available, they often listed the names of schools and hospitals but no phone numbers. A common reason for the missing numbers was that the schools simply did not have a physical phone line.
… includes field forms and a survey protocol so that these surveys can be carried out in the field, in face-to-face interviews, in a comparable manner.
But what should policy focus on? A management topic is an information-rich data point, yet from a single score it is hard to tell which part of the management practice is failing. Which particular types of processes matter the most across different settings in developing countries?
… allows for the systematic codification of the different information that went into a single WMS score, enabling a more detailed analysis of which types of processes are in place and which are opportunities for improvement. For example, we saw many schools with brilliantly filled-out report cards (a high score on the implementation process of the data-driven practice) that sat stacked and unused in a corner of the principal’s office (a low score on the usage and monitoring processes of the same practice).