This post was also cross-posted on the Social Informatics Blog.
In the New York Times today there is an article about Google X, the company’s top-secret lab for big ideas. According to the article, the future being imagined there is “a place where your refrigerator could be connected to the Internet, so it could order groceries when they ran low. Your dinner plate could post to a social network what you’re eating. Your robot could go to the office while you stay home in your pajamas. And you could, perhaps, take an elevator to outer space.”
This is indeed a compelling vision… maybe. Am I the only one who finds this future a little underwhelming, maybe even problematic and dysfunctional? For one thing, aren’t there already enough what-I-had-for-lunch tweets without plates getting in on the action? And what if the plate (because of course it has artificial intelligence) decides to chime in with some commentary: ‘pizza leftovers again?! @John’sMom, are you seeing this?’
And while staying at home in pajamas does sound pretty attractive, how does sending your robot into the office help? Does it make typing noises at your computer so people think you’re there? Does it go to meetings for you? Does it make decisions for you? What if it messes up? Could you really relax at home in your pajamas knowing that your robot might create a huge mess (bureaucratic or physical) that you would need to clean up? What if your robot knows how you really feel about a coworker and gets into a fight with that coworker’s robot? Could your robot be fired? Could your robot get you fired? Could it get promoted? Who would be held responsible for its actions: you, the robot, the robot’s designer? Would the robot have a moral compass, and if so, whose? Would everyone send their robots in for them, so that the workplace was staffed entirely by robots? Would it be all the same to the robots if the lights and heat were shut off to save electricity? Would there be robot unions to protest this mistreatment?
And then there’s the grocery-ordering refrigerator. This seems to be one of the most common images of a digital future of pervasive computing, no doubt inspired by a moment of watching the last few drops of milk drip onto still-dry cereal and thinking ‘man, I wish the refrigerator could have just taken care of that.’ But what kind of groceries would it order? It stands to reason that a digital refrigerator might need to deal in SKUs, which would make it easy to order more frozen pizza but much harder to order ‘the best-looking local in-season fruit’. And what infrastructure would this require? Beyond the refrigerator itself, an ordering system would need to be in place on the grocery store’s end, along with some kind of delivery service. It’s hard to imagine smaller markets being able to invest in this, and vendors at the local farmers’ market would be out of the loop entirely. Many people would no doubt find this unproblematic, but it matters that such biases can be encoded into technical systems, where they can entrench already-existing (unhealthy) habits even further.
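To make the SKU point concrete, here is a toy sketch (in Python, with invented field names and made-up SKUs; no real grocery API is implied) of what SKU-keyed reordering looks like. The system can restock exactly what it has already seen, but a request like ‘the best-looking local in-season fruit’ simply has no representation:

```python
# A minimal, hypothetical sketch of SKU-keyed reordering.
# All names and SKUs here are invented for illustration.
from dataclasses import dataclass

@dataclass
class InventoryItem:
    sku: str          # a barcode-style identifier, not a food concept
    name: str
    quantity: int
    reorder_at: int   # threshold below which the fridge reorders

def items_to_reorder(inventory: list[InventoryItem]) -> list[str]:
    """Return the SKUs whose stock has fallen below their thresholds."""
    return [item.sku for item in inventory if item.quantity < item.reorder_at]

fridge = [
    InventoryItem(sku="0041290", name="frozen pizza", quantity=1, reorder_at=2),
    InventoryItem(sku="0088839", name="2% milk, 1 gal", quantity=0, reorder_at=1),
]

# The system can say "more of SKU 0041290", but 'the best-looking
# local in-season fruit' has no SKU, so it cannot be asked for at all.
print(items_to_reorder(fridge))  # -> ['0041290', '0088839']
```

Whatever the real implementation, any system built on stable product identifiers will share this property: it can repeat past purchases easily, while qualitative, context-dependent choices fall outside its vocabulary.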
As Langdon Winner has argued, technologies shape forms of life: technology design is ultimately about choosing ways of living, of ordering the world around us and our activities in it. While geeky technophiles tend to do a pretty good job of dreaming up very cool and labor-saving technologies, they are less good at envisioning the forms of life those technologies might institute.
This is where more nuanced and critical approaches like Social Informatics might be useful. As scholars who study the social dimensions of technologies, we are used to teasing apart their various social, cultural, philosophical, historical, political, and ethical aspects, and examining them critically. These aspects are just as important as technical feasibility, if not more so, yet they are discussed far less frequently (if at all) during technology development and assessment. Maybe one of the reasons for this is that our existing critical approaches focus on technologies that already exist, not on ones that have yet to be implemented.
But why should geeks working at big corporations with deep pockets be the ones who get to decide what our (digital) future should look like? What sorts of futures might Social Informatics scholars envision? And as we imagine those futures, could we also move past our own laziness and consider how we might build a future with less inequality and more justice, less stress and more health, less poverty and excess and more true wealth and happiness?
All of these may sound like unattainable goals. But imagining a future in which they are realized would be a first step toward making them a reality. And I would take that over a ‘smart’ refrigerator any day.