Back in 2009, I was talking about adaptive landscapes: three real places and the quite different systems that human communities had evolved there to manage them. That was just before the PhD let go of those strands to focus on spatial economics (I'd been, hubristically, trying to combine all those up to that point). Two of those communities are concrete examples of non-centralised social technologies[1] achieving specific resource goals. The Balinese rice system constrains water use in a way that optimises the balance between pest management and productivity. Andean potato production was a magical innovation machine and living, breathing laboratory spread over the hills.
This stuff is still very dear to my heart, and flowed directly from the questions in the original PhD proposal. I want to get on to the adaptive landscapes stuff, but let's lead into that by answering a more straightforward bit from PhD #1.0. Top of the list: was Hayek right about the sacredness of the price system? Was its 'spontaneous order' a singularity in human history, requiring any attempt at planned interference in human affairs to be suppressed? Given what I've just said about Bali and Peru – guess what? Shock: no, I don't think he was. He correctly identified the price system as a distributed social technology, emerging from the uniquely human mix of evolution and language. But, far from being astronomically unlikely, there's evidence that humans are primed to create this sort of structure. I've long entertained a notion that adaptive landscapes are intimately related to the emergence of language itself, Wittgenstein's notion of meaning as a kind of flock tying nicely to that.
Whether that's true, or whether adaptive landscapes were a later innovation built on the platform language provided, makes little difference to their riposte to Hayek: we are natural-born de-centralisers, and we can make systems as diverse as you can imagine. Deifying the price system? Educating the socialism out of people (Hayek acknowledged people have altruistic instincts early in life) so's they didn't get the urge to meddle? Silly.
That's a gross over-simplification of Hayek's thinking and, in particular, I do partly buy his aversion to "planning blindness" and his view that social change should be more like gardening than engineering or construction. (Planning blindness nearly broke Bali's rice management system, for instance.) But it's clear that, if we followed his manifesto to the letter, new adaptive landscapes would have immense difficulty taking root, let alone blossoming.
His extremism is a direct result of seeing the price system as such a gargantuan lucky accident. I can almost sympathise with him defending it in the way he did, given that. There's a parallel: many now see life itself as (stealing Kauffman's title) at home in the universe, in huge contrast to the hideous void I read about in science books as a child. These told stories of life as a bizarre, borderline impossible accident surrounded by an infinite cosmic nothingness. Now – at the same time as we've evolved new eyes to see that cosmos with – our collective sense of life seems much more fecund. Kauffman's work paints a picture of life being built right into the mathematics of self-organisation – life lives in the logic the universe runs on.
We sorely need that same collective sense in how we view human organisation. There's no escaping the quagmire involved – Hayek wasn't wrong to park his argument squarely on political turf. How we collectively manage resources – that's a pretty decent definition of "politics". But politics is now firmly jamming its fist into the planet's face and we need to find some way of stopping the fighting. The planet will win – it's bigger than us. The planetary boundaries we've started identifying may seem abstract, but the consequences are as real as the 2011 Japanese tsunami, just as wholly indifferent to human life.
So, as the PhD started out asking: what difference does computation[2] make? I was swallowing whole complex adaptive systems memes without chewing at the time, trying to pin stories to it while it sped away (as in one of my undergrad dissertations on 'organic utopians'). Old-school modernist utopians were having a hard time with their carefully laid out blueprints and "total order to be erected floor by floor in a protracted, consistent, purpose-guided effort of labour" (Bauman). Did this new global wiring offer a route to self-organised utopias? Was that hope purely metaphorical or was the technology doing something more fundamental? Even if Hayek was once right, didn't ICT knock all that up in the air like a bored chess loser? Not playing this game any more, the rules are stupid, let's play something else.
The wiring of the global economy was quite obviously beginning to melt and quicken – but to what end? The original proposal talks about a "convergence of real-world and simulated systems that draw closer as their feedback tightens" – not a new idea. It's in Gelernter's 1991 Mirror Worlds, which I found via Harvey Miller (PDF). The idea's actually at least as old as Arthur C. Clarke's 1956 The City and the Stars, in which a "memory bank" system containing a virtual world constantly regenerates the physical settlement in computer-controlled perfection.
That's the extreme sci-fi bookend, isn't it? Computer and reality converge on perfect homeostasis and, once there, not an atom drifting away from The Plan? There's a lot more to be said on that, but let's step out of this rattling between utopias and back to mundane computation. What things have actually made humans and computers draw closer? Our visions are almost always wrong. What's the most powerful cyborg attachment to have emerged in the past 25 years? It's a coin-flip between the mouse and the search engine. Neither would have made for the most engaging sci-fi movie (though Google is perhaps getting there).
What do both have in common? They play to the strengths of both species[3]. With minimal practice, they mesh seamlessly with our own wiring. With a lot of practice, they change that wiring (as I'm reminded whenever my eyeballs' browser-scroll-compensation jolts when the browser fails to respond to the mousewheel). The search engine is a perfect human plugin: over time, I've developed a set of heuristics that can turn hazy memory into precise knowledge with little more than CTRL+K plus two or three keywords. "They promised us life in space, flying cars, and jetpacks but all we got were pocket-sized rectangles containing all human knowledge. FAIL." Quite. And human knowledge we can get at with ease because human and machine are doing what they're best at.
But how would you go about translating this idea to social technology – to groups of people acting independently? I'm going to gloss over the major problem with my argument here: I'm using two case studies of unproven relevance to make some big claims about us being natural-born self-organisers. Something to come back to! For now, let's assume the price system proves it's not atavism to think we're still capable of live social technologies (given we run the fecking world on it) and that new forms can be found. The question then is: what can machines bring to this? What are their strengths, best able to amplify ours? Can we actually plan for linking those strengths?
Assuming there's a case, it's the parallel with the mouse I like the most. Such a beautifully prosaic little device transformed individuals' relationship to machines. But what the hell would the human-collective equivalent of a mouse look like? What would the balance between physical and software be? Might new hardware be involved? What about, say, something that could be built on smart meters (every UK home should have one by 2020) to come up with new ways of managing energy?
I've got some specific ideas on incentive structures built on smart meters but, for the moment, I'm more interested in what it would take to get a lot of people trying a tangle of solutions to see what sticks. To borrow that Hayek metaphor again, can you make a garden to grow them? Select the plants you want, bed them in, water them?
Jane Jacobs writes about this superbly in The Economy of Cities: all development is accident fuelled by intention[4]. Humans excel at this: we get together to attempt stuff and, if we're attempting it surrounded by a lot of others, that unique mix of evolution and humanity that's something like art will do its thing. It will try something, see other possibilities, tinker, steer, drop, combine. It will bump into someone in a nearby park and hear an idea that catalyses an entirely different use for something. It's, above all, organic. But you need the intention: again, like all art – as Julia Cameron says – it's about getting something down, not thinking it up.
Steven Johnson's Where Good Ideas Come From sources a vital ingredient for this: platforms. A computing metaphor, it can be applied to the backbone, the price system and democracy as much as mouse-plus-WYSIWYG, search engines, GPS. The word itself is perfect: a solid base over which new things can grow. To mix metaphors hideously, though, platforms are also gateways: they open up new possibilities that were previously next-to-impossible, largely by dropping the cost of exploring new spaces. (Think what having a backbone does for a phylum!)
So, I'm saying new adaptive landscapes / social technologies / computation may help us transition to a world that humans can survive, thrive in. Of some sort - possibly global but also a patchwork of scales and geofences, possibly like the price system, or building on it, or something completely new. The thing about platforms and innovation, though: platforms don't know they're going to be platforms beforehand, generally. But Jacobs gives me faith: as long as we're throwing ourselves at the problem, good things will happen. We just can't predict which things will succeed – which is why we could do with having as many engaged people going at it as possible. We already know adaptive landscapes can emerge that can change the planet – the price system demonstrates that. We just need to find ones that will allow us to stay here.
(Quick side-note: safety nets to support failure will support successful innovation. Left and right need to play a few more rounds of capture-the-flag with the concept of innovation. For example: have a national health service. It means potential innovators won't be clinging to their corporate benefits fearing, entirely understandably, for their family's well-being, when they could be inventing the next Internet.)
The range of possible systems is as large as the number of communities or forms of organisation, and they can combine in entirely unpredictable ways. The obvious targets are carbon, water, power, transport, health, social care. Central planning would work for swathes of that, let's not forget. The price system may do well elsewhere, or it could be augmented, or something entirely new could emerge. That is probably necessary for some elements: as Obama pointed out during his first campaign, traditional energy companies - like any other company - want to sell as many units as possible. A different model is needed (people here at Leeds have been working on one for a good while).
There's also the Jevons paradox, which everyone else in economics just calls supply and demand: increased efficiency makes for cheaper goods, makes for more goods sold[5]. Clearly, higher efficiency is important, but what kind of efficiency? Read Montague suggests it might not mean quite what we think. He defines efficiency as "the best long term returns from the least immediate investment." And as he says:
"All computations are not created equal. Some cost more to run, and some provide better long-term payoffs to the organism. For biological computations, efficient solutions have won the competition. How do we know? Because your brain is merely warm - you can safely touch your head - while the processor in your personal computer is so wastefully hot that it heats your office and you can't touch it with a bare finger." [Your Brain Is (Almost) Perfect, p.19]
He locates this type of efficiency in evolution's ability to actually cost computations according to their contribution to survival – perhaps directly translatable to human systems, at the very least a useful spin on ingrained notions of efficiency. The difference, perhaps, between economic and evolutionary energy use.
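The Jevons dynamic a couple of paragraphs back can be sketched with a toy model: an efficiency gain lowers the effective price of an energy service, and demand responds. Everything here is illustrative – the constant-elasticity demand curve, the function name and the numbers are my assumptions, not estimates from any real data.

```python
# Toy sketch of the Jevons rebound effect, assuming a simple
# constant-elasticity demand curve. Elasticity and efficiency-gain
# values below are made up for illustration.

def energy_use_after_efficiency_gain(base_use, efficiency_gain, elasticity):
    """Return total energy use after an efficiency improvement.

    efficiency_gain: fractional improvement (0.25 = 25% less energy
    per unit of service). Cheaper services stimulate extra demand
    according to the (negative) price elasticity.
    """
    # Effective price per unit of energy service falls in proportion
    # to the efficiency gain.
    price_ratio = 1 - efficiency_gain
    # Constant-elasticity demand: service demand scales as price^elasticity.
    service_demand_ratio = price_ratio ** elasticity
    # Energy used = services demanded * energy per service.
    return base_use * service_demand_ratio * price_ratio

# A 25% efficiency gain with a modest elasticity of -0.5:
before = 100.0
after = energy_use_after_efficiency_gain(before, 0.25, -0.5)
print(f"Energy use falls from {before:.0f} to {after:.1f}")
# Naive expectation: a 25-unit saving. Extra demand eats part of it.
rebound = 1 - (before - after) / (before * 0.25)
print(f"Rebound: {rebound:.0%} of the expected saving is lost")
```

With elasticity between -1 and 0 you get a partial rebound (some of the efficiency saving survives); push elasticity below -1 and total use actually rises – the full Jevons paradox.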
Then there's the problem of choosing goals. One huge advantage of the price system (lauded by those most devoted to it) is, supposedly, that it doesn't have one. Or, rather, not one that humans have concocted themselves. It emerged from a set of vaguely understood, theory-free interactions[6]. I do think there's a danger here. The idea of zero-growth troubles me, for instance. It makes perfect logical sense and, practically, has to happen at some point if we're to maintain a balance with the biosphere. I'm just unsure actually attempting to build systems that target zero growth will get us there. You'd need effective measures of growth to begin with, which would mean knitting a quite abstruse concept – just as nebulous as "the market" – into an immutable global system. Not appealing, particularly after having spent some time wading through the national accounts methods that growth measurement is based on.
That said, isn't something precisely like that needed if we're to hit net-zero carbon? Carbon is – like other physical properties – at least something you can measure fairly straightforwardly. The various methods of hitting the target would be vastly messier, but at least the goal itself (net-zero carbon) is clear – relative to something like "growth".
So: the price system is not the 'only game in town' for the huge problems we face (or so I'm assuming for this post!) Elinor Ostrom has shown in frightening empirical depth that collective management of resources beyond the price system or central management is possible. Social technologies go one step further, away from collective to distributed management. Computation should offer whole new ways to let social technologies emerge - or perhaps simply offer ways to centralise, creating systems that aren't actually distributed in the way the price system is, but wouldn't have been possible before (the computer system acting as a kind of panopticon, making everyone visible to the centre). At any rate... how to get from spangly-eyed "look at the possibilities" to testing out ideas? A start might be to look at what's already happening through this prism, see what's been missed.
And what about marrying our fate to machines in this way? It's perhaps too late to be worrying about throwing our lot in with them given how far ICT has threaded through the globe's nervous system. It would certainly be a shame if a solar flare Borg-queened us, but that's not much less true now than it will be in fifty years' time. Computational sustainability certainly needs to include this kind of issue – and (back with the gardening metaphors) there are ways to make social computation more like a trellis to grow social life on than the cyborg metaphor, as projects like Front Porch Forum show. So we don't have to throw away all resilience in the process.
[1] I'm still on the look-out for a better label. 'Social technology' is Beinhocker's - but it has the drawback of also being a term used to describe twitter/facebook etc. Googling it is thus not very successful (assuming that 'being distinguishable on Google' is something to aim for).
[2] By which I mean, "computation and ICT" but I don't want to say both all the time. I prefer computation because it makes clear that it's the algorithms that are prime – though the global network connecting them and, in part, providing the nerves of computation is of course essential to that.
[3] Computers, species? Bah - nice phrase > accurate. Sometimes. Maybe?
[4] Or at least, that's my seven-word summary of all of Jacobs' ideas.
[5] In pretty much every possible case. It's only Giffen goods that don't and they have very rare characteristics; they're generally only Giffen-ish when considered against some other good they're tightly bound to e.g. the classic example: the price of rice goes up, I can't afford chicken any more, I don’t buy chicken but buy more rice.
[6] After it emerged, plenty of theory got inserted though: Monbiot has a stark quote showing the 'anti-charitable contributions act' of 1887 banned "at the pain of imprisonment private relief donations that potentially interfered with the market fixing of grain prices" during the Great Famine in India. A law like this requires "the market" to have been lifted some way above its "spontaneous order" origins.