I have a dilemma. It’s one I’ve been struggling with for a while, and it is this: if something cannot be made available to all people, should it be available to anyone at all?
I understand that’s a bit vague. Allow me to elaborate. This all started a couple of years ago, when 12-year-old Hebani was on (yet another) former-house-turned-museum tour. I don’t remember exactly, but I want to say this was in the middle of nowhere Pennsylvania on one of those fake vacations your parents make you take to get out of the house — they’re not exotic enough to warrant a large budget, but there are enough activities (read: niche historical sites) to keep you busy enough to make your parents feel like they’ve pulled off some quality family time.
Regardless, here we are, in this Civil War-era house, complete with those butter-churning things and floorboards about to fall through. Twelve-year-old Hebani spots a staircase and attempts to venture upstairs before she is quickly rebuked — “The upstairs floors aren’t handicap accessible. If wheelchairs can’t get up there, neither can anyone else on the tour.”
That quote. That quote has lingered in my mind to this day. I understand the premise — it is unfair for those who are handicapped to be barred from accessing historical places on a tour that is meant to be open to the general public. And since the historical site cannot be torn down to add an elevator, it was decided the upstairs would simply not be accessible to the general public. Yet I can’t help but apply this principle to situations that share similar characteristics, and see how an “all or nothing” approach holds up.
Let’s take, for example, the case of the now-infamous app StreetBump. StreetBump, a pet project out of the city of Boston’s Office of New Urban Mechanics, is an app that allows drivers to automatically report road hazards (potholes in particular) to the city through the use of their smartphones. Given that most smartphones are equipped with an accelerometer, StreetBump uses the phone’s built-in motion detector to sense when a bump is hit. It then records the location of the bump through the phone’s GPS and transmits it to a server, allowing the city to identify areas that are likely to have potholes in need of repair.
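The core logic described above — flag a pothole candidate when the vertical jolt exceeds some threshold, and record the GPS fix taken at that moment — can be sketched in a few lines. The sample format, threshold value, and function name here are illustrative assumptions on my part, not StreetBump’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    accel_z: float  # vertical acceleration in m/s^2 (gravity removed)
    lat: float      # GPS latitude at the time of the reading
    lon: float      # GPS longitude at the time of the reading

def detect_bumps(samples, threshold=4.0):
    """Return GPS coordinates of readings whose vertical jolt
    exceeds the threshold (a hypothetical cutoff)."""
    return [(s.lat, s.lon) for s in samples if abs(s.accel_z) > threshold]

# A smooth stretch of road versus a sharp jolt:
smooth = Sample(accel_z=0.3, lat=42.355, lon=-71.060)
jolt = Sample(accel_z=6.1, lat=42.356, lon=-71.061)
print(detect_bumps([smooth, jolt]))  # only the jolt's location is reported
```

Notice what the sketch takes for granted: a phone running the app. The bias discussed next comes from that assumption, not from the detection math.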
Sounds flawless, right? Big data and social innovation coming together to contribute to the public good? Almost. Not long after it launched, people began to notice a trend in the areas StreetBump reported on. The app seemed to direct repair crews almost exclusively to wealthier neighborhoods, where people were more likely to own smartphones, download the app and participate in the larger process of reporting potholes. There was a clear socioeconomic bias to StreetBump that the city had not foreseen when rolling out the technology.
We are again faced with a dilemma similar to the one posed by the non-handicap-accessible museum. StreetBump is a great app in that it helps the city identify and correct potholes in neighborhoods. It essentially fulfills its purpose. However, it also helps create a disadvantage for those who cannot afford smartphones through no fault of their own — the city is seemingly blind to their plight should it choose to use only StreetBump to address the pothole situation. If we are to apply the “all or nothing” approach here (as the historical society did when it decided the upstairs must be off limits to everyone), which would we go with? Would we immediately stop using StreetBump to fix potholes because it does not account for everyone? Or would we continue to use it because fixing more potholes is better than fixing fewer? I’m not sure.
I understand that this question is more philosophical than practical. Situations are rarely dealt with in such an all-or-nothing way. The museum, for example, has limited resources, and it chose (fairly, in my opinion) to allocate them to areas of the historical site that would be accessible to everyone rather than a few. StreetBump could be reworked to give a higher weight to reports that come from economically disadvantaged areas, to account for the lower volume of reports from those areas (an idea Professor Jon Kleinberg actually brought up in a recent talk on campus). Or perhaps it could find ways to utilize other, more commonly used technologies that everyone might have access to. There are always ways to better account for the racial, economic and social inequalities that arise through the use of technology and big data in public policy. But, I ask, if we cannot find a way to correct for discrimination, if we must pick an all-or-nothing approach, which do we pick — all or nothing?
Hebani Duggal is a junior in the College of Arts and Sciences. She can be reached at firstname.lastname@example.org. Teach me How to Duggal appears every other Tuesday this semester.