Work and Artificial Intelligence

16 June 2020 | By Oscar Giacomin

When we talk about artificial intelligence (AI), it is impossible not to also consider its ethical and social dimensions, such as those relating to work and employment, given the growing fears in the global community.

These fears are justified if one considers that half of today's work activities could be automated by 2055. Any type of work can be at least partially automated, and this is the starting point of the report entitled 'A Future That Works: Automation, Employment and Productivity', written by the McKinsey Global Institute (MGI): a 148-page document, available on the Davos World Economic Forum website, where it was officially presented last January. The report estimates that about half of the current workforce may be affected by automation based on technologies that are already known and in use today.

However, several studies have been published that allay the fears that have been spreading for months on the web and social networks about the role of artificial intelligence in ‘destroying’ jobs. Here are some of the most significant ones: 

according to a Capgemini study entitled 'Turning AI into Concrete Value: The Successful Implementers' Toolkit', 83% of the companies surveyed reported the creation of new jobs within their organisations. In addition, three quarters of the companies surveyed saw a 10% increase in sales following the implementation of artificial intelligence; 

a recent report by The Boston Consulting Group and MIT Sloan Management Review shows that a reduction in employment is feared by less than half of management (47%), and indeed most believe in the potential of AI (85% of respondents believed that it would allow companies to gain and maintain a competitive advantage); 

new research by Accenture ('Reworking the Revolution: Are You Ready to Compete As Intelligent Technology Meets Human Ingenuity to Create The Future Workforce'), published at the recent World Economic Forum in Davos, estimates that corporate revenues could grow 38% by 2022, provided companies invest in AI and in effective human-machine cooperation.

One of the widely debated topics, both in the scientific community and among experts in philosophy, sociology, politics and economics, concerns the thinking abilities of robots or, more generally, the boundaries between AI and 'human' consciousness. Although AI technologies are progressing rapidly, computers still fall far short of human performance in many respects.

“Human consciousness is not just about recognizing patterns and crunching numbers quickly,” said Hakwan Lau, a neuroscientist at the University of California, Los Angeles. “Figuring out how to bridge the gap between human and AI would be the holy grail.”

To address the controversial question of whether or not computers can develop consciousness, researchers from the University of California first sought to explore how consciousness arises in the human brain. In doing so, they outlined 3 key levels of human consciousness that could serve as a roadmap for designing a truly conscious form of AI.

The scientists noted that some robots have reached a level equivalent to C2 in humans (a level that refers to the ability to monitor one's own thoughts and calculations; in other words, the ability to be self-aware), as they can monitor their progress in learning how to solve problems. To date, researchers suggest that human consciousness may result from a set of specific computations. "Once we can spell out in computational terms what the differences may be in humans between conscious and unconsciousness, coding that into computers may not be that hard," Lau believes, undoubtedly opening new scenarios for the future of conscious robots.

Considerable research is also being directed at humans and at augmenting our memory abilities, through neuromorphic chips and phase-change memory: circuits that imitate the functioning of the neural connections in a human brain. This research is advancing quite rapidly; a recent paper published in 'Nature Nanotechnology' explains how scientists from the IBM research laboratories in Zurich have managed to create artificial neurons in the laboratory with 'phase-change' materials.

The researchers used 'antimony germanium telluride' (editor's note: a derivative of GeSbTe alloy, i.e. germanium, antimony and tellurium, a phase-change material used in rewritable DVDs), a material that has 2 stable states (one known as amorphous, lacking a defined structure, and the other crystalline, with an ordered structure) and that is used here not to store information, but rather to enable synapse-like behaviour, as occurs between biological neurons. Subjected to a series of electrical impulses, these artificial neurons undergo progressive crystallisation of the material; what is truly innovative is the change in electrical charge between the inside and outside of the chip (this change, called the 'integrate-and-fire' property, occurs in the human brain for example when you touch something hot, and forms the basis of event-based computation).
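The integrate-and-fire behaviour described above can be sketched in a few lines of code. This is a minimal, illustrative model only: the IBM device encodes the accumulating state in the progressive crystallisation of the phase-change cell, not in a floating-point variable as here, and the threshold and leak values below are arbitrary assumptions, not figures from the paper.

```python
# Minimal leaky integrate-and-fire neuron sketch (illustrative only).
# The 'potential' variable plays the role of the phase-change cell's state:
# each input pulse nudges it toward crystallisation (the threshold), and
# firing resets it, like re-amorphising the material.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate weighted input pulses; emit a spike (1) and reset
    whenever the accumulated potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for pulse in inputs:
        potential = potential * leak + pulse  # integrate with leak
        if potential >= threshold:            # fire
            spikes.append(1)
            potential = 0.0                   # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.4, 0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))
# → [0, 0, 1, 0, 0, 0, 1]
```

Note how weak pulses only produce a spike once enough of them accumulate, which is exactly what makes this scheme event-based rather than clock-driven.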

Building on these findings, scientists are working on organising 'hundreds of artificial neurons into populations' to manage complex, fast signals. These artificial neurons have been shown to sustain billions of switching cycles with very low energy consumption: the energy required for each neuron update, i.e. for its phase change, was less than 5 picojoules, and the average power less than 120 microwatts. For comparison, a 60-watt lightbulb draws 60 million microwatts.
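The power comparison above can be made concrete with a quick unit check. This is simple arithmetic on the figures quoted in the text, nothing more; the neuron count it yields is an illustrative upper bound, not a claim from the paper.

```python
# Unit check for the power comparison: 1 watt = 10**6 microwatts.
bulb_watts = 60
bulb_microwatts = bulb_watts * 1_000_000   # 60,000,000 µW, as stated in the text

neuron_avg_microwatts = 120                # average power per artificial neuron (quoted figure)

# Roughly how many such neurons could run on one lightbulb's worth of power?
neurons_per_bulb = bulb_microwatts / neuron_avg_microwatts
print(neurons_per_bulb)  # → 500000.0
```

In other words, the average power budget of a single lightbulb would, in principle, cover on the order of half a million of these artificial neurons.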

Neuromorphic chips represent a hardware approach that carries out processing tasks differently from conventional architectures: data and processing capacity are kept in the same component, just as the human brain does with its neurons and synapses.

Oscar Giacomin  / General Manager, Facto Edizioni

© All Rights Reserved