The Big Picture is a monthly PESTLE analysis that gives you an overview of innovation over the last month. Today we look at Society and Technology.
Defining who we are is something that evolves with time. As complex as that may be for individuals, it also proves to be a challenge on a nationwide basis, as the recent events in Hong Kong have shown. This “identity crisis”, as Zwen Wang names it for TIME, is different from a crisis of interests: people cannot be forced or bribed into changing who they are. They define it themselves, and we should encourage them to do so. As Alan Iny from BCG explains, we need to teach children to test and experience their ideas, learn to go through failures and build sustainable victories, and manage doubt and uncertainty. In the following article, he explains that “the intellectual starting place to create and evaluate multiple scenarios is doubt”. In the following article from Forbes, Jordan Shapiro mentions how “Education becomes the structure within which narratives of personal and collective identity are contextualized using the intellectual structures and academic skills that we’ve inherited from preceding generations. But we need to make sure that these tools are also aligned with learning outcomes which prioritize human dignity rather than haste, consumption, and algorithmic metrics”.
This is also how we teach them to say “NO” and grow from external feedback and remarks, seeing them as constructive building points rather than negative grades. As Dionne Lew mentions on her blog, we need to be “insatiably curious, read constantly including from opposing views, be ready to tolerate discomfort”. Yes, discomfort is also part of who we are, how we behave, and how we make decisions. It arises in tensions and conflicts that can have a serious impact at a country level, and those need to be managed across cultures to be steered well.
Why? Because people want to know who they are in the whole wide world. In the following article from IESE Business School, Carlos Sanchez Runde explains how “in the global context, managers often have to negotiate a delicate balance between following their conscience and following the letter of the law in different countries”. The lack of a systematic understanding of cultural values and individual identities too often leads to conflict. The upside of bridging them is greater value to share together. Follow the example of Ibrahima Sarr, a Senegalese coder, who used the structure of his dialect to translate Firefox into Fulah, “spoken by 20 million people from Senegal to Nigeria”, as The Economist reports.
After all, our system is calling for change, and we should see this as an opportunity. Dorie Clark from Forbes explains how “awesome” it is that “job security is dead”, because it provides us with more opportunities to “relearn to learn” and to ask ourselves the right questions. Meanwhile, as Mark C. Crowley has found in the following article, “traditional beliefs about how best to motivate human beings continue to be the key reason why 70% of the working population is disengaged”. We should instead focus on what people desire as workers to enable real change in companies: consider fatherhood, diversity and equality between men and women, and appreciate difference, as Lynda Gratton from the London Business School remarks.
We should focus on individual needs for another reason: we do not want to go wild. Do we? Several voices are rising to claim that “capitalism has gone wild”, and they count Julia Kirby from HBR among them. She says that “if we start to think about indicators that society needs, what creates innovation, we can start to spring capitalism out of its excesses”. Alex Nicholls from the Saïd Business School follows the same idea, claiming that “timidity is not the way forward”. From his point of view, we shouldn’t just focus on competition but more on innovation. “Let’s see the social economy re-engage with the ideas, heroes and heroines of its past and set out an alternative vision of society that is values-driven, bottom-up, proud and stroppy” is his concluding call. With such obvious changes needed, conscious capitalism has emerged as a way to have a better impact on the world. It focuses on cultural values and inspirational leadership to build businesses that care as much for their employees and customers as they do for their impact on the planet, as Susan Cramm from Strategy & Business highlights.
In the Age of Robots
As The Economist points out, annual robot shipments have risen from 50 million in 1992 to 175 million in 2013. Where are they going? Automotive is the first buyer, with around 70,000 sales in 2013, followed by electronics with around 40,000 sales. The vast majority of robots are heading to China: around 35,000 of them. No surprises so far. Here it comes: “for every robot deployed, there’s 3.6 jobs created to install them. As there will be 200 million shipments around the world in 2014, that’s 720 million new jobs” created by robots, for humans. Really? Well, there is disagreement: when asked, 42% of experts said yes, and 58% said no.
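Taking the quoted figures at face value, the arithmetic behind that headline number is a simple multiplication, sketched here as a quick sanity check:

```python
# Back-of-the-envelope check of the quoted claim, using the figures as stated.
jobs_per_robot = 3.6          # installation jobs created per robot deployed
shipments_2014 = 200_000_000  # projected worldwide shipments in 2014, per the quote

new_jobs = jobs_per_robot * shipments_2014
print(f"{new_jobs:,.0f} new jobs")  # prints "720,000,000 new jobs"
```

The multiplication checks out; whether each shipped robot really creates 3.6 lasting jobs is the part the experts disagree on.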
Here’s someone who says yes. In the following article, Abel Fernandez argues that whenever a society makes a technical innovation that leads to industrial innovation, it has always found, and will always find, a way to increase its own wealth by creating new services and products that could never have existed before. As the writer concludes, we are in the second half of Moore’s law, and from now on everything is possible.
For example, decision making might be programmable in the near future. As Michael C. Mankins explains for HBR, “Advanced analytic models can incorporate the experience of an organization’s best decision makers, helping to eliminate alternatives that are less viable than others and focusing the evaluation on the most promising courses of action.” This would allow (or force?) enterprise models to change and develop new skills, which in the long term will transform society. Why wouldn’t it be towards a positive future?
As we move into the second half of the famous rice chessboard that illustrates Moore’s law on exponential growth, computers can now also learn skills at an exponential pace. As The Economist explains, “The next phase, many are predicting, will be defined by computers that no longer need to be explicitly programmed but instead learn what it is they need to do by interacting with humans and data”. Experts and scientists are currently developing neuromorphic chips that work by mimicking the neurons and synapses of the brain. “Stanford University, Heidelberg University, the University of Manchester and ETH Zurich lead the field. But companies like IBM and Qualcomm also have very promising neuromorphic chips in R&D”.
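The chessboard story is worth making concrete. In the legend, one grain of rice is placed on the first square and the count doubles on each of the 64 squares; a short sketch shows why the “second half of the chessboard” is where things get absurd:

```python
# One grain on square 1, doubling each square across a 64-square chessboard.
first_half = sum(2**i for i in range(32))       # squares 1-32
second_half = sum(2**i for i in range(32, 64))  # squares 33-64

print(f"first half:  {first_half:,} grains")    # about 4.3 billion
print(f"second half: {second_half:,} grains")   # about 18.4 quintillion
print(f"ratio: {second_half // first_half:,}x") # the ratio is exactly 2**32
```

The second half alone holds over four billion times as many grains as the first, which is the intuition behind claims that computing is only now entering its truly disruptive phase.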
Does it mean that computers will be able to think? Hard to say, or more precisely, hard to prove, as Antonio Regalado from MIT Technology Review explains. He interviewed Christof Koch, chief scientific officer of the Allen Institute for Brain Science in Seattle, on the possibility that computers may eventually become conscious. Koch answers with an analogy: “you can make pretty good weather predictions these days. You can predict the inside of a storm. But it’s never wet inside the computer. You can simulate a black hole in a computer, but space-time will not be bent. Simulating something is not the real thing. Consciousness is always supervening onto the physical. But it takes a particular type of hardware to instantiate it”. As we are still far from understanding how human intelligence is built from a biological point of view, there is no point in wondering how we could code it.
What if we could? There are slightly more pessimistic approaches developed in cybernetics. One of them is called the “singularity”. “In terms of the technological singularity, it is the point where technological advances are happening at such a fast rate that it becomes impossible for contemporary humans to comprehend or understand. Simply put this means that computers will understand the world and its technologies better than humans. At this point we’ll be left behind and the world will become AI-centric”, explains Martin Butters from MarkITWrite Tech. According to him, some scientists fear that machines will take control and improve themselves to the point where they decide humans are “redundant”. Maybe because we have turned out to damage the Earth more than we have protected it, as the article suggests. That is, of course, in the eventuality that machines also learn to judge. Any reason why they couldn’t? Hey, this is Moore’s law.
So now is the time to act. As Horizon shared in this article, the EU has granted financial support to projects “developing computer algorithms that harness information to augment human memory”. Jessica Leber, from FastCoExist, explains: “IBM scientists announced in a publication in the journal Science last week that it has developed an ultra low-power computer chip, named TrueNorth, that thinks like a human brain, complete with 1 million programmable ‘neuron’ connections”. The Wall Street Journal mentions that, according to BCG, “Spending on robots is exploding worldwide and is projected to hit $67 billion a year over the next decade. According to the report, annual spending on robots will reach about $27 billion next year and more than double to nearly $67 billion by 2025.”