A Blog by Jonathan Low


Apr 24, 2018

Tech and Ethics: 'Yeah, It's Nice, But Does It Scale?'

Have the implications of performance begun to match the importance of performance itself? JL

Anne Currie reports in The Register:

"What if everyone did that?" is not a theoretical question any more. We've democratised scale using tech. My friends at Google, and at much smaller companies, and even my mate in a garage in Romania, have the means to affect human behavior on a global scale.We aren't a craft anymore. We're looking at three-orders-of-magnitude improvements using what is now off-the-shelf tech. Cloud-based services make massive speed and scale increases realistically doable. That's why we'(re) interested in ethics.
Widely available tech is now creating hyperscale software products but, as Enlightenment philosophers pointed out, ethics can look very different at scale. Are we ready?

Ethics at scale

When you committed a youthful transgression, like dropping a crisp packet, your mother probably said: "What if everyone did that?" That's because parents are natural followers of the 18th-century German philosopher Immanuel Kant.
Kant was particularly interested in establishing consistent moral laws, and he was arguably the first person to ask of behaviour: "Does it scale?" If it didn't, he reckoned it was immoral, or at least terribly selfish. He called that concept the "maxim of universality", which is paraphrased by parents everywhere.
What Kant and my mum were doing was proposing a thought experiment. Just because I dropped some litter it didn't actually mean everyone else would start doing it – my eight-year-old self had limited ability to effect that kind of change. The idea was to get me to apply my reason, imagine a world where my action was scaled up to billions of people, and decide whether that was a world I wanted to live in.
"What if everyone did that?" is not a theoretical question any more. Suddenly, my friends at Google, and at much smaller companies, and even my mate in a garage in Romania, have the means to affect human behaviour on a global scale. I think that's why we've started to get so interested in ethics.

What is this scale?

I'm a sceptical software veteran, a "greybeard" as I'm often called, and I like to know broadly what's going on in our industry. About a year ago I realised I didn't have a great handle on what fairly advanced enterprises were doing in the cloud.
We know what super-advanced, slightly unreal folks like Netflix are up to because we hear about them ad nauseam, but what about profitable businesses that are ahead of the pack but still essentially normal? I had no idea. So my Container Solutions pals and I asked a lot of those folk about their systems and processes, and we learned something surprising and slightly scary: cloud-based services make massive speed and scale increases realistically doable.
The well-known, normal companies we talked to were using cloud primarily to get much faster turnaround on features. The common theme was using cloud plus CI/CD plus microservices to get sub-hourly deploys. In most cases, their new launch rate was a 500x+ improvement on their old one.
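To see how a figure like 500x falls out, here's a back-of-envelope Python sketch. The cadences in it (quarterly releases before; roughly ten deploys per working day after) are illustrative assumptions of mine, not figures from the companies we surveyed:

# Back-of-envelope maths for the deploy-rate jump. These cadences are
# illustrative assumptions, not survey figures.
releases_per_year_before = 4                # one release per quarter
deploys_per_day_after = 10                  # sub-hourly across a working day
working_days_per_year = 250

deploys_per_year_after = deploys_per_day_after * working_days_per_year
improvement = deploys_per_year_after / releases_per_year_before

print(f"{deploys_per_year_after} deploys/year, up from {releases_per_year_before}")
print(f"that's a {improvement:.0f}x improvement")   # 625x with these assumptions

With those made-up but plausible numbers you get 625x; nudge either assumption and you're quickly past 500x and heading towards the three orders of magnitude I mention below.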
Interestingly, for people who started around five to six years ago, getting to this point was a roughly five-year project. For those who started two years ago, it was a roughly two-year project. You get the picture. Off-the-shelf tooling is improving, and this 500x change is becoming more achievable. I've seen these systems at the FT, Skyscanner, and others. It's not crazy talk.
They rapidly evolve killer features using their new fast turnaround, then scale up their IaaS to handle a bigger user base. So the scale I'm talking about is speed (idea to market) and reach (user numbers), and we're looking at nearly three-orders-of-magnitude improvements using what is now off-the-shelf tech. It's going very well for these folk. They're making more money and their customers are getting what they want. There's nothing wrong with that. So why do I feel nervous?
I'm paying attention because 500x or 1,000x changes in things usually signal a revolution. I'm not talking about AI or ML here; I'm talking about anything that changes culture by touching millions or billions of people. That could be far more innocuous-seeming than Skynet.
We're getting close to the point where almost anyone could potentially affect the behaviour of a significant proportion of humanity. We've democratised scale using tech. I don't think we've got our heads round that yet.

People vs Things

On my phone this morning I pleasantly interacted with a company whose primary goal is to tinker with my behaviour and that of billions of other humans. The engineers at that company say they are more interested in things than people. Fortunately, it isn't their engineers who decide what aspect of my life to change next, it's their marketeers. The engineers just make it happen. Phew! That's a relief.
Now the cloud has commoditised scale that could be weaponised by your marketing teams. Think how excited they'll be!

Is that an oath?

There's a lot of chat right now about Hippocratic Oaths for software engineers. I think the last thing I solemnly swore as a developer was to implement the will of my boss as rapidly as possible in exchange for money. This was called the Oath of the JFDI. Marketeers tend to be similarly loyal followers of the KPI. Will that still work?
I'm a middle-aged engineer, so we all know it's my job to rain on parades, but it isn't just figuratively bearded people like me who are concerned about tech and scale. Younger engineers are starting to wonder whether we should put more thought into what we agree to do for product marketing.
Should software engineering organisations all adopt Kant? No. He's rather limiting – we don't want to stop doing anything. However, I suspect we can take some hints from him. We could start thinking about, and addressing upfront, the large-scale implications of what we are building. When products are successful, we demonstrably often fail to address scale issues later. Bitcoin mining is projected to require as much energy as Italy by the end of the year. Car fumes are now producing dangerously toxic air in major cities, which software engineers addressed by faking a fix. Data centres are operating at only around 10 per cent energy efficiency, which doesn't support our projected growth. Not having a scalable plan for a bit of new tech that would be beneficial to all is shortsighted and, Kant would argue, unethical (interestingly, it is also ultimately uneconomic).
Maybe my JFDI Oath won't cut it any more. Instead, perhaps an ethical developer could consider whether humanity might come to harm from her system at scale, and plan to handle the scale without the harm.
We aren't a craft anymore. We might feel like artisans with laptops, but what we produce could potentially be in front of a significant chunk of the human race by lunchtime. We're not hand-crafting dovetail joints here. To be ethical engineers in a hyperscale world we need to reason critically about what we build, on a feature-by-feature basis, and stand by our reasoning if it is sound. Inevitably, we'll sometimes get it wrong, but when we do we must have a realistic plan to fix it.
BTW, you know all those social media folk who are wringing their hands about the addictive algorithms they invented without considering the effect at scale, and who had no realistic plan to resolve any problems? Taken together, that's unethical. ®
