(Note: This is an extended version of the methodology story that appeared in the newspaper Sunday.)
In Rating the ’Burbs, The Kansas City Star set out to measure the quality of life of local suburbs as places to live.
The analysis includes every suburb in metro Kansas City with more than 3,500 residents in 2003. That comes out to 40 cities. In addition, we divided Overland Park in half, north and south of Interstate 435, because the city is so large — significantly larger than most other suburbs, in fact — and its two halves are distinctly different in age and character.
Our scoring system is based on a carefully constructed set of polls, statistics and computations. We went several steps further than ratings done by magazines in other major metro areas, broadening the range of topics and data in our analysis. For instance, we ended up incorporating twice as many statistics as Money magazine did in its rating of communities across the country earlier this year.
To evaluate livability in the suburbs, we first had to determine what local suburbanites considered important. The Star conducted an opinion poll earlier this year, asking suburban residents what they valued in their communities. It found safety and schools topped the list, mirroring a similar poll done for the Mid-America Regional Council, a regional planning agency, earlier this decade.
Next, we looked for statistics that measured the performance of a place. Some suggestions were made by city managers. We were limited to a degree by what was available on a citywide basis and what was quantifiable. There’s no apparent way, for instance, to statistically assess the level of leadership shown by a mayor.
So after consulting with regional council staffers, city government officials, and local professors, architects and demographers, we came up with 23 statistical measures grouped into nine general categories:
Crime
■ Violent crimes per capita in 2003.
■ Property crimes per capita in 2003.
■ Overall trend of violent and property crimes from 2001 to 2003.
This compilation tends to give more emphasis to property crimes, because there are many more of those than violent crimes, but we felt property crimes provided a better reflection of residents’ chances of being victimized.
Education
■ Elementary school math scores for 2004, represented as the difference from the state average percentage of students achieving “proficient” or above on their statewide achievement test.
■ Elementary school reading/communications scores for 2004, represented as the difference from the state average percentage of students achieving “proficient” or above on their statewide achievement test.
■ Average high school ACT score for both 2003 and 2004.
■ Percent of high school graduates going to college for both ’03 and ’04.
These represented student achievement better than school funding numbers or pupil-teacher ratios.
The ACT and college-going data were averaged across both years to draw on a larger sample of students.
Schools were attributed to the city in which they were located. However, in some cases where significant numbers of students from multiple cities attended one school, that school’s performance was counted partially for each of those cities. A city’s elementary score was determined by averaging the test scores for schools within the city, then determining the numeric difference between that average and the state’s average for that test.
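For readers who want the arithmetic spelled out, here is a minimal sketch of that elementary-score calculation. The function name and the school scores are invented for illustration; the actual data differ.

```python
def city_elementary_score(school_scores, state_average):
    """Average a city's school scores (percent proficient or above),
    then take the difference from the state average.
    Positive = above the state norm."""
    city_average = sum(school_scores) / len(school_scores)
    return city_average - state_average

# A hypothetical city with three elementary schools
score = city_elementary_score([78.0, 71.0, 82.0], state_average=72.6)
# (78 + 71 + 82) / 3 = 77.0, or about 4.4 points above the state average
```

Because each city is measured against its own state’s average, a Kansas city and a Missouri city can be compared on the same footing even though the two tests differ.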
Some school officials maintain that elementary test scores from Missouri and Kansas cannot be compared because the two states have different tests and different standards for achieving proficiency. For example, 72.6 percent of Kansas elementary students scored “proficient” and above in reading, while Missouri’s state average was only 34.7 percent, despite the fact that the students in the two states score very similarly on nationwide tests such as the National Assessment of Educational Progress.
While it can be argued that a 5 percentage point improvement relative to the state average means more in Missouri than in Kansas, the data do not bear this out, said Frank Lenk, the Mid-America Regional Council’s director of research services, who served as statistical adviser for this series. If it were more difficult to achieve a 5 percentage point improvement in Missouri, the data should show Missouri schools clustered more tightly around their state average than Kansas schools. But that is not the case: the standard deviation for area schools is nearly identical on both sides of the state line. The Star decided this method was the fairest way to compare elementary test scores.
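Lenk’s check can be illustrated with a quick comparison of spreads. In this sketch, every number is invented: each list holds schools’ differences from their own state average, and similar standard deviations mean neither state’s schools cluster more tightly.

```python
from statistics import pstdev

# Invented differences from each state's average (percentage points)
missouri_diffs = [-6.0, -2.0, 1.0, 3.0, 4.0]
kansas_diffs = [-5.0, -3.0, 0.0, 3.0, 5.0]

# If gains were harder to come by in Missouri, its schools would
# cluster more tightly around zero, giving a smaller spread.
mo_spread = pstdev(missouri_diffs)
ks_spread = pstdev(kansas_diffs)
```

Nearly equal spreads, as in this toy data, are what Lenk observed in the real figures.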
Housing
■ Change in resale home prices from 2000 through 2004.
■ New single-family building permits in 2003 and 2004.
■ Balance of housing offerings, computed from how closely a city’s range of home prices matched our calculation of the suburban norm from the 2000 U.S. Census.
These statistics were chosen as indicators of desirability, new housing stock and affordability, respectively.
Government
■ Property taxes on a $150,000 house this year.
■ City government efficiency this year, as calculated by general operations per capita, with cities compared only on those types of expenditures that they all share, such as administration, police, parks, etc.
These two statistical measures reflected how well local governments were holding their spending in check, with lower being better.
Government efficiency is one example of a measure that came out of The Star’s discussions with suburban city administrators and managers. They suggested splitting government spending into two measures, one based on core operations, where efficiency was most prized, and the other based on capital improvements, where higher spending was deemed an indicator of better investment. Capital improvements were included in a later category. The Star checked its budget tallies with each city government for accuracy.
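The shared-categories comparison can be sketched as follows. City names, budget categories and dollar figures here are all invented; the point is that spending is tallied only across expenditure types every city reports.

```python
def shared_categories(budgets):
    """Expenditure types that every city reports (e.g. police, admin)."""
    return set.intersection(*(set(b) for b in budgets.values()))

def efficiency(budgets, populations, city):
    """General-operations spending per resident, counted only across
    the categories all cities share (lower = leaner government)."""
    shared = shared_categories(budgets)
    return sum(budgets[city][c] for c in shared) / populations[city]

# Hypothetical two-city comparison
budgets = {
    "Cityville": {"police": 4_000_000, "parks": 1_000_000, "admin": 2_000_000},
    "Townburg": {"police": 2_500_000, "admin": 1_500_000},
}
populations = {"Cityville": 20_000, "Townburg": 10_000}
per_cap = efficiency(budgets, populations, "Cityville")
# Cityville's parks spending is excluded because Townburg reports none
```

Restricting the tally this way keeps a city from looking like a big spender merely because it itemizes more budget lines.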
Sense of community
■ Voter turnout rates in contested city and school elections this decade.
■ An evaluation of a city’s charm, determined by a survey filled out by Realty Executives real estate agents, who sell in and have knowledge of every suburb included in the series.
These statistics conveyed community engagement and attractiveness.
For the charm survey, The Star consulted with the local chapter of the American Institute of Architects to determine characteristics of suburban charm, which differ from urban charm. Seven characteristics were chosen: attractiveness of the housing stock, tree coverage, neighborhood design, general property maintenance, street beautification, availability of cul-de-sacs, and the existence of a downtown or town square.
Lifestyle
■ Per capita acreage of parks in and adjacent to a city.
■ Tally of important cultural, recreational and health offerings.
The tally of suburban amenities included such things as an outdoor pool complex, an indoor recreation center, a skate park or skating rink, outdoor concerts or community theater, a hospital, a farmers market and two or more holiday-themed festivals or events. Plus, cities were given bonus points for significant extra attractions, such as Raytown’s BMX bike track and the NASCAR track in Kansas City, Kan.
Services
■ Number of retail and service businesses per capita in 2002.
■ Retail sales per capita in 2004.
■ Current city government capital improvement spending per capita, with higher being better.
These illustrated the extent of public and private investment to make suburban life more convenient.
Neighborhoods
■ Percent of residents staying in their homes at least five years.
■ Likelihood that whites will see or interact with a minority, as expressed by a mathematical formula called an “exposure index,” favored by sociologists studying race relations.
These statistics, both computed from the 2000 U.S. Census, give a sense of residential stability and diversity.
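The exposure index is conventionally computed tract by tract: it answers, roughly, what share of the average white resident’s census tract is minority. Here is a sketch under that standard definition, with two invented tracts.

```python
def exposure_index(tracts):
    """Exposure index: the minority share of the average white
    resident's census tract. A higher value means white residents
    are more likely to encounter minority neighbors.
    tracts: list of (whites, minorities, total) tuples."""
    total_whites = sum(w for w, m, t in tracts)
    return sum((w / total_whites) * (m / t) for w, m, t in tracts)

# Two hypothetical tracts: one fairly mixed, one mostly white
index = exposure_index([(800, 200, 1000), (900, 100, 1000)])
# roughly 0.147: the average white resident's tract is about 15% minority
```

Because the index weights each tract by how many white residents live there, a city where minorities are concentrated in a few tracts scores lower than one where they are spread evenly.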
The automobile
■ Average commute time in 2000.
■ Traffic accidents per capita in 2003.
These measurements provided a snapshot of how well the transportation network works in suburbs.
Once we gathered these statistics, we asked cities to review the ones they could verify. Then the Mid-America Regional Council’s Research Services Department converted the raw numbers into points on a 1-to-100 scale. The points were based on how close a suburb was to the suburban norm in each statistical measure; the further a city outperformed the suburban norm, the more points it received. However, we capped statistical outliers, those numbers at the extreme low or high end, at either 0 or 100 points, to avoid letting any one measure influence a city’s score too much.
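The exact scaling formula is not published, but the idea can be sketched like this: center the scale at the suburban norm, award points for distance above it (or below it, for measures such as crime where lower is better), and clip the extremes. The function and the `spread` parameter here are illustrative assumptions, not the council’s actual method.

```python
def to_points(value, norm, spread, higher_is_better=True):
    """Map a raw statistic to a 0-100 scale, with 50 at the suburban
    norm; `spread` is an assumed number of units spanning half the
    scale. Outliers are clipped to 0 or 100."""
    deviation = (value - norm) / spread
    if not higher_is_better:          # e.g. crime rate, property taxes
        deviation = -deviation
    points = 50 + 50 * deviation
    return max(0.0, min(100.0, points))

# A crime rate well below the suburban norm earns a high score
pts = to_points(value=20.0, norm=35.0, spread=25.0, higher_is_better=False)
```

The clipping step is what keeps one freakish number, say a tiny city with an enormous per-capita figure, from dominating a category.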
Cities’ points were then averaged within each of the nine categories and multiplied by a weight reflecting how important respondents rated the category in our polling. For example, in The Star’s poll about quality-of-life factors, respondents rated safety nearly twice as important as commute times.
These were the weights assigned to the categories: crime, 1.963; education, 1.469; housing, 1.415; government, 1.229; sense of community, 1.224; lifestyle, 1.162; services, 1.133; neighborhoods, 1.131; and the automobile, 1.000.
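Assuming the weighted category scores are simply summed (the article describes the multiplication but not the final combination step), the tally might look like this:

```python
WEIGHTS = {
    "crime": 1.963, "education": 1.469, "housing": 1.415,
    "government": 1.229, "sense of community": 1.224,
    "lifestyle": 1.162, "services": 1.133,
    "neighborhoods": 1.131, "automobile": 1.000,
}

def overall_score(category_points):
    """Multiply each category's average points (0-100) by its
    poll-derived weight and total the results."""
    return sum(WEIGHTS[cat] * pts for cat, pts in category_points.items())

# The ceiling: 100 points in all nine categories
max_score = overall_score({cat: 100.0 for cat in WEIGHTS})
```

Under this assumption a perfect suburb would total about 1,172.6 weighted points, which squares with the observation below that the top-rated places earned just over half the possible total.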
Ultimately, many of the decisions that went into this analysis — such as the selection of statistics and the selection of categories — represent judgment calls. As MARC’s Lenk puts it: “There’s no such thing as judgment-free data analysis.”
In the end, the top-rated places earned just over half the total possible points. No suburb was close to “having it all.” And small degrees of differentiation sometimes put one suburb above another. In fact, the spread between cities finishing next to each other in the rankings usually was about 1 percent.
Above all, the rankings are merely a snapshot in time. Improvements that various cities are currently undertaking could change the rankings in future years.