Wednesday, April 18, 2018

no robot apocalypse (yet)

'The Frankenstein complex' is a term coined by the 20th-century American author and biochemistry professor Isaac Asimov in his famous robot novels, to describe our fear that our creations will turn on us (their creators), like the monster in Mary Shelley’s 1818 novel.

Two hundred years later, in 2018, we still seem worried about this idea of subordination: that we might ultimately lose the ability to control our machines.

At least part of the problem is the concern about AI alignment. Alignment is generally understood as the ongoing challenge of ensuring that the AIs we produce are aligned with human values. This is our modern Frankenstein complex.

For example, if an AGI (Artificial General Intelligence) ever did develop at some point in the future, would it do what we humans wanted it to do?

Would/could any AGI values ‘align’ with human values? What are human values, in any case?

The argument might be that AI can be said to be aligned with human values when it does what humans want, but…

Will AI do things some humans want but that other humans don’t want?

How will AI know what humans want, given that we often do what we want but not what we ‘need’ to do?

And — given that it is a superintelligence — what will AI do if these human values conflict with its own values?

In a notorious thought experiment, AI pioneer Eliezer Yudkowsky asks whether we can prevent the creation of superintelligent AGIs like the ‘paperclip maximizer’.

In the paperclip maximizer scenario a bunch of engineers are trying to work out an efficient way to manufacture paperclips, and they accidentally invent an artificial general intelligence.

This AI is built as a super-intelligent utility-maximising agent whose utility is a direct function of the number of paperclips it makes.

So far so good. The engineers go home for the night, but by the time they return to the lab the next day, the AI has copied itself onto every computer in the world and begun reprogramming them to give itself more power and boost its intelligence.

Now, having control of all the computers and machines in the world, it proceeds to annihilate life on earth and disassemble the entire planet into its constituent atoms to make as many paperclips as possible.
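The danger in this story is structural: the agent's objective function contains a term for paperclips and nothing else. A minimal toy sketch of such a utility maximiser, with all names, actions and the 'world' invented purely for illustration:

```python
# Toy utility-maximising agent. The state, actions and utility
# function here are invented to illustrate the idea, not to model
# any real system.

def utility(state):
    # The agent cares about exactly one thing: paperclips made.
    # There is no term for anything else humans might value.
    return state["paperclips"]

def make_paperclip(state):
    return {**state, "paperclips": state["paperclips"] + 1}

def do_nothing(state):
    return state

def best_action(state, actions):
    # Greedily pick whichever action maximises the agent's utility.
    return max(actions, key=lambda act: utility(act(state)))

state = {"paperclips": 0}
for _ in range(3):
    state = best_action(state, [make_paperclip, do_nothing])(state)

print(state["paperclips"])  # 3
```

However trivial, the sketch shows the alignment worry in miniature: nothing in `utility` penalises the agent for what `make_paperclip` might cost the rest of the world.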

Presumably this kind of scenario is what is troubling Elon Musk when he dramatically worries that ‘…with artificial intelligence we are summoning the demon.’

Musk, when not supervising the assembly of his AI-powered self-driving cars, can reportedly be found hanging out in his SpaceX data centre ‘Cyberdyne Systems’ (named after the fictitious company that created Skynet in the Terminator movies). He might also have some covert agenda in expressing his AI fears, given how deep rival tech giants Google and Facebook are in the space. Who knows?

The demon-AI problem is called ‘value alignment’ because we want to ensure that the AI’s values align with ‘human values’.

Building a machine that won’t eventually come back to bite us is a difficult problem, although any biting by the robots is more likely to result from our negligence than from the machines’ malevolence.

More difficult still is determining a consistent, shared set of human values we all agree on; that is an almost impossible problem.

There seems to be some logic to this fear, but it is deeply flawed. In Enlightenment Now the psychologist Steven Pinker lays the ‘logic’ out like this.

Since humans have more intelligence than animals, and the AI robots of the future will have more of it than us, and since we have used our powers to domesticate or exterminate less well-endowed animals (and more technologically advanced societies have enslaved or annihilated technologically primitive ones), it surely follows that any super-smart AI would do the same to us. And we will be powerless to stop it. Right?

Nope. Firstly, Pinker cautions against confusing intelligence with motivation: even if we did invent superhumanly intelligent robots, why would they want to take over the world? And secondly, knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm (and in any case, big data is still finite data, whereas the universe of knowledge is infinite).

The word robot itself comes from an old Slavonic word rabota which, roughly translated, means the servitude of forced labour. Rabota was the kind of labour that serfs would have had to perform on their masters’ lands in the Middle Ages.

Rabota was adapted to ‘robot’, and introduced into the lexicon, in the 1920s by the Czech playwright, sci-fi novelist and journalist Karel Capek, in the title of his hit play R.U.R. (Rossumovi Univerzální Roboti, or Rossum’s Universal Robots).
In this futuristic drama (set circa 2000), R.U.R. is a company that mass-produces ‘workers’ (essentially slaves) using the latest biology, chemistry and technology.

These robots are not mechanical devices but artificial organisms (think Westworld), designed to perform the tasks that humans would rather not.

It turns out there’s an almost infinite market for this service until, naturellement, the robots eventually take over the world. In the process, however, the formula required to create new robots is destroyed and, as the robots have killed everybody who knew how to make them, their own extinction looms.

But redemption is always at hand. Even for the robots.

Two robots, a ‘male’ and a ‘female’, somehow evolve the ‘human’ abilities to love and experience emotions, and — like an android Adam and Eve — set off together to make a new world.

What is true is that we are facing a near future where robots will indeed be our direct competitors in many workplaces.

As more and more employers put artificial intelligences to work, any position involving repetition or routine is at risk of extinction. In the short term, humans will almost certainly lose jobs like accounting and bank telling. And everyone from farm labourers and paralegals to pharmacists and media buyers is in the same boat.

In fact, any occupation that consists of a predictable pattern of repetitive activities, the kind that Machine Learning algorithms can replicate, will almost certainly bite the dust.

Already factory workers are facing increased automation, and warehouse workers are seeing robots move into pick-and-pack jobs. Even those banking on ‘new economy’ poster children like Uber are realizing that it’s not a long game: autonomous car technology means that very shortly these drivers will be surplus to requirements.

We have dealt with the impact of technological change on the world of work many times before. Two hundred years ago the vast majority of the US population worked in farming and agriculture; now it’s about 2 percent. Then the rise of factory automation in the early part of the 20th century, and the outsourcing of manufacturing to countries like China, meant there was much less need for labour in Western countries.

Indeed, much of Donald Trump’s schtick around bringing manufacturing back to America from China is ultimately fallacious, and uses China as a convenient scapegoat.

Even if it were possible to make American manufacturing great again, because of the relentless rise of automation any rejuvenated factories would only require a tiny fraction of human workers.

New jobs certainly emerge as new technologies replace old ones, although the jury is out on the value of many of these jobs.

In 1930, John Maynard Keynes predicted that by the century’s end, technology would have advanced sufficiently that people in western economies would work a 15-hour week. In technological terms, this is entirely possible. But it didn’t happen; if anything, we are working more.

In his legendary and highly amusing 2013 essay On the Phenomenon of Bullshit Jobs, David Graeber, Professor of Anthropology at the London School of Economics, says that Keynes didn’t factor into his prediction the massive rise of consumerism. ‘Given the choice between less hours and more toys and pleasures, we’ve collectively chosen the latter.’

Graeber argues that to fill up the time, and keep consumerism rolling, many jobs had to be created that are, effectively, pointless. ‘Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed.’ He calls these bullshit jobs.

The productive jobs have been automated away, but rather than a massive reduction in working hours, freeing the world’s population to pursue their own meaningful activities (as Keynes imagined), we have seen the creation of new administrative industries without any obvious social value, often experienced as purposeless and empty by their workers.

Graeber points out that while those doing these bullshit jobs still ‘work 40 or 50 hour weeks on paper’, in reality their jobs often only require the 15 hours Keynes predicted; the rest of their time is spent in pointless ‘training’, attending motivational seminars, and dicking around on Facebook.

To be fair, robots are unrivalled at solving problems of logic, something humans struggle with.

But robots’ ability to understand human behaviour and to make inferences about how the world works is still pretty limited.

Robots, AIs and algorithms can be said to ‘know’ things because their byte-addressable memories contain information. However, there is no evidence to suggest that they know they know these things, or that they can reflect on their states of ‘mind’.

Intentionality is the term used by philosophers to refer to the state of having a state of mind — the ability to experience things like knowing, believing, thinking, wanting and understanding.

Think about it this way: third-order intentionality is required for even the simplest of human exchanges (where someone communicates to someone else that someone else did something), and four levels are required to elevate this to the level of narrative (‘the writer wants the reader to believe that character A thinks that character B intends to do something’).

Most mammals (almost certainly all primates) are capable of reflecting on their state of mind, at least in a basic way: they know that they know. This is first-order intentionality.

Humans rarely engage in more than fourth-order intentionality in daily life, and only the smartest can operate at sixth order without getting into a tangle (‘Person 1 knows that Person 2 believes that Person 3 thinks that Person 4 wants Person 5 to suppose that Person 6 intends to do something’).

For some perspective, and in contrast, robots, algorithms and black boxes are zero-order intentional machines. It’s still just numbers and math.
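One way to make these ‘orders’ concrete is to treat an order of intentionality as the nesting depth of mental-state attributions. A rough sketch, with the class and all names invented for illustration:

```python
# Model an attribution like "A knows that B believes that C wants X"
# as a chain of nested mental states; its order is the nesting depth.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MentalState:
    agent: str
    verb: str                              # "knows", "believes", "wants"...
    about: Optional["MentalState"] = None  # the nested state, if any

def order(state: Optional[MentalState]) -> int:
    # Zero-order: no mental state attributed at all.
    return 0 if state is None else 1 + order(state.about)

# "Person 1 knows that Person 2 believes that Person 3 wants something"
chain = MentalState("Person 1", "knows",
        MentalState("Person 2", "believes",
        MentalState("Person 3", "wants")))

print(order(chain))  # 3: third-order intentionality
```

On this toy measure, today’s algorithms sit at `order(None)`: zero, because no mental state is being attributed anywhere.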

The next big leap for AIs would be the acquisition of first- or second-order intentionality; only then might the robots just about start to understand that they are not human. The good news is that for the rest of this century we’re probably safe enough from suffering any robot apocalypse.

The kind of roles requiring intellectual capital, creativity, human understanding and applied third/fourth level intentionality are always going to be crucial. And hairdressers.

And so, the viability of ‘creative industries’ like entertainment, media and advertising holds strong; they run on intellectual capital, decision-making, moral understanding and intentionality.

For those of us in the advertising and marketing business it should be stating the obvious that we compete largely on the strength of our capabilities in those areas: on the people in our organisations who are supposed to think for a living.

By that I mean all of us.

For those who can still think, any robot apocalypse is probably the least of our worries. But take a look inside the operations of many advertising agencies and despair at how few of their people spend time on critical thinking and creativity.

Even more disappointing is that we would rather debate whether creativity can be ‘learned’ by a robot than focus on automating the multitude of mundane activities, so that all of our minds can be directed at fourth-, fifth- and (maybe) sixth-order intentionality: the things that robots’ capabilities are decades away from, and that we can do today, if we could be bothered.

By avoiding critical thinking, people are able to simply get shit done and are rewarded for doing so.

Whilst there are often many smart people around, terms like disruption, innovation and creativity are liberally spread throughout agency creds PowerPoint decks, as are ‘bullshit’ job titles like Chief Client Solutions Officer, Customer Paradigm Orchestrator or Full-stack Engineer. These grandiose labels and titles probably serve more as elaborate self-deception devices to convince their owners that they have some sort of purpose.

The point being that, far from being at the forefront of creativity, most agencies direct most of their people to pointless work, giving disproportionate attention to mundane zero-order-intentionality tasks that could and should be automated.

Will robots take our jobs away? Here’s hoping.

Perhaps the AI revolution is really the big opportunity to start over. To hand the bullshit jobs, the purposeless and empty labour we’ve created to fill up dead space, over to the machines, and to take another bite at the Keynes cherry: liberated to be more creative and to really put to use our miraculous innate abilities for empathy, intentionality and high-level abstract reasoning.

To be more human.

Because, as evolutionary theory has taught us, we humans are unusual among species. We haven’t evolved adaptations like huge fangs, inch-thick armour plating or the ability to move at super speed under our own steam.

All of the big adaptations have happened inside our heads, in these huge brains we carry around, built for creativity and sussing out how the world works and how other humans work.

That’s the real work. Not the bullshit jobs.

In The Inevitable, Kevin Kelly agrees that the human jobs of the future will be far less about technical skills and far more about these human skills.

He says the ‘bots are the ones that are going to be doing the smart stuff, but ‘our job will be making more jobs for the robots’.

And that job will never be done.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Eaon’s first book ‘Where Did It All Go Wrong? Adventures at the Dunning-Kruger Peak of Advertising’ is out now on Amazon worldwide and from other discerning booksellers.

This article is an adapted excerpt from his second book ‘What’s The Point of Anything? More Tales from the Dunning-Kruger Peak’ due at the end of 2018.

Tuesday, April 10, 2018

george carlin


"I’m 71, and I’ve been doing this for a little over 50 years, doing it at a fairly visible level for 40. 

By this time it’s all second nature. It’s all a machine that works a certain way: the observations, the immediate evaluation of the observation, and then the mental filing of it, or writing it down on a piece of paper. 

I’ve often described the way a 20-year-old versus, say, a 60- or a 70-year-old, the way it works. 

A 20-year-old has a limited amount of data they’ve experienced, either seeing or listening to the world. At 70 it’s a much richer storage area, the matrix inside is more textured, and has more contours to it. 

So, observations made by a 20-year-old are compared against a data set that is incomplete. Observations made by a 60-year-old are compared against a much richer data set. And the observations have more resonance, they’re richer."

Adding to Bob Hoffman's observation last week that 'People over 50 aren't creative enough to write a f***ing banner ad, but they are creative enough to dominate in Nobels, Pulitzers, Oscars, and Emmys.'