The challenges of artificial intelligence


“We need to be super careful with artificial intelligence. It is potentially more dangerous than nukes.” That was Elon Musk two years ago, on Twitter. What does it mean for a technology when it faces serious doubts from a man who is passionate about creating a better world through innovation? Since its beginnings in the 1950s, artificial intelligence has been a favourite subject of science fiction. But now AI has entered the realm of fact: several studies predict that intelligent machines will have a big impact on how we work, how we move and even how wars are fought. Innovators and scientists around the world believe that now is the time to ensure that AI benefits humans above all. And even if there are plausible reasons to be anxious about machines that could one day be more intelligent than we are, many scientists are ready to take up the challenge.


The double-edged promise of AI

Some people fret that artificial intelligence will end civilization as we know it. Others believe it can solve every problem. The reality is somewhere in between.

Until very recently, using a computer to challenge a professional in the ancient Chinese board game of Go would have been fruitless; the human would have won hands down. Machines may have defeated the best human competitors in chess, draughts and backgammon, but Go’s complexity and subtlety were thought to set the game apart. In a match played in Seoul in 2016, however, a program known as AlphaGo beat South Korea’s Lee Sedol, widely considered to be the world’s finest player, by four games to one.

AlphaGo’s victory stunned many Go enthusiasts, but it also provided further proof of the power of an artificial-intelligence (AI) technique known as deep learning, which uses a brain-like processing architecture to recognise patterns in large data sets. In the case of AlphaGo, designed by Google’s London subsidiary DeepMind, the data sets in question encode board configurations, which are used to select the software’s winning moves. But data can come in a wide variety of forms, and in the last few years the technique has been used to automate many other tasks that conventional programs struggle with, including identifying images, recognising speech and even driving cars.

Unlike humans, AI software is still unable to carry out multiple complex operations at the same time.

Made practicable by today’s powerful computer hardware and almost limitless online data, as well as better algorithms, deep learning has spurred renewed interest in AI. In part, it has stimulated fresh discussion about the possible threats posed by the technology; physicist Stephen Hawking warned in 2014 that AI could “spell the end of the human race”. But it has also led to huge cash injections from industry, with many high-tech companies investing to produce mobile devices and other consumer products with more intelligent features and human-like qualities. “When I started out in AI in the 1980s there was no World Wide Web, so it was a struggle to get hold of data, and most things weren’t in machine-readable form,” recalls Boi Faltings, head of the Artificial Intelligence Laboratory at the École Polytechnique Fédérale de Lausanne (EPFL). “But now computers are much faster, there are far more data, and things just work.”

Artificial neurons

AI involves studying how machines can be made to simulate aspects of intelligence. Some scientists see the main goal as trying to achieve a human-like intelligence, while others aim, as Faltings puts it, “to do more and better” than people. Officially born and christened during a scientific workshop in New Hampshire in 1956, AI spawned numerous approaches to creating machine intelligence. The one that underpins deep learning – neural networks – was first demonstrated in a sort of mechanical brain known as the Perceptron, built by psychologist Frank Rosenblatt in the 1950s and 1960s. Neural networks consist of simulated neurons known as units that are arranged in layers. Like real neurons, every unit is connected to several others in its neighbouring layers, and each connection is weighted. A unit will only “fire”, in other words send a signal to the next layer, when the sum of the weighted signals from the previous layer exceeds some threshold. The idea is that particular patterns of input data generate a specific output; in other words, those inputs are recognised as examples of a particular “object” – be it a cat, a chair or a person’s face.
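To make the mechanics concrete, here is a minimal sketch in Python of a single unit of the kind described above: it fires only when the weighted sum of its incoming signals exceeds a threshold. The numbers, weights and threshold are purely illustrative.

```python
import numpy as np

def unit_fires(inputs, weights, threshold):
    """Fire (send a 1 to the next layer) only if the weighted sum of inputs exceeds the threshold."""
    weighted_sum = np.dot(inputs, weights)
    return 1 if weighted_sum > threshold else 0

# Three signals arriving from units in the previous layer, each with its own connection weight.
signals = np.array([0.9, 0.2, 0.7])
weights = np.array([0.5, -0.3, 0.8])
print(unit_fires(signals, weights, threshold=0.6))  # prints 1: the unit fires
```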

Networks are trained by being fed multiple examples of each object and seeing how close they get to producing the right output. The difference between the right answer and the answer they actually produce is then used to tune the weights so that the next time around they get a bit closer. In this way networks “learn” to recognise different objects. When a network is then set loose on fresh unlabelled data it should be able to correctly identify the object or objects within.
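A rough sketch of that training loop, assuming a single-layer network and a perceptron-style update rule (an illustration only; real systems use more sophisticated optimisers). The difference between the right answer and the network's answer nudges each weight a little closer each time.

```python
import numpy as np

rng = np.random.default_rng(0)
examples = rng.uniform(-1, 1, size=(200, 3))                       # 200 made-up three-feature inputs
labels = (examples @ np.array([1.0, -2.0, 0.5]) > 0).astype(int)   # a hidden rule for the network to learn

weights = np.zeros(3)
learning_rate = 0.1
for _ in range(20):                           # several passes over the training set
    for x, target in zip(examples, labels):
        prediction = int(x @ weights > 0)
        error = target - prediction           # gap between the right answer and the actual answer
        weights += learning_rate * error * x  # tune the weights a little closer
print("learned weights:", weights)
```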

Despite early promise, researchers struggled for decades to build reliable, efficient neural networks. What they wanted was networks with multiple layers, each providing a progressively more abstract description of the objects in the layer before it: the first layer might quantify light and dark pixels within an image, the second could identify edges and basic shapes among those pixels, and the third might use that information to recognise whole objects. Unfortunately, the process of adjusting the weights involved sending information from the network’s output backwards through successive layers, and that information degraded as it passed from one layer to another. In other words, the use of multiple layers – deep learning – was not yet practical.
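A toy calculation shows why that degradation was fatal. If each layer scales the correction signal by a factor below one (0.25 is the worst case for the classic sigmoid unit), the signal shrinks exponentially as it travels back. The figures below are illustrative, not taken from any specific network.

```python
per_layer_factor = 0.25   # illustrative attenuation of the correction signal per layer
error_signal = 1.0
for layer in range(10, 0, -1):
    error_signal *= per_layer_factor
    print(f"signal after passing back through layer {layer}: {error_signal:.1e}")
# After ten layers the correction is roughly a millionth of its original size,
# far too weak to tune the weights of the early layers.
```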

This problem was addressed in the first years of this century by a number of scientists, including Geoffrey Hinton at the University of Toronto. The solution involved training the layers of neurons one at a time, thereby avoiding degradation of the weighting adjustments. This breakthrough, together with the availability of “big” data and cheap, fast processors, has contributed to many of the practical applications of AI that we now see.

Fooling the humans

One of the most discussed applications is self-driving cars. Tesla Motors applies deep learning by first observing countless hours of human driving, using sensors to record both a car’s changing environment – particularly the speed and position of nearby vehicles – and how the driver reacts. The environmental and driver data are then matched up for each step along the journey, with the former serving as input to the deep-learning model and the latter as the label. Training the model so that the car behaves as a human driver would means continually adjusting the weights until the model’s predicted actions line up with the driver’s recorded ones.
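In code, the pairing described above might look something like the sketch below. The sensor features, units and action labels are assumptions chosen for illustration, not Tesla’s actual data format.

```python
import numpy as np

def build_training_pairs(sensor_log, driver_log):
    """Match each snapshot of the environment with the driver's action at that moment."""
    return [(np.asarray(environment), np.asarray(action))
            for environment, action in zip(sensor_log, driver_log)]

# e.g. [speed of car ahead (m/s), gap to car ahead (m)]  ->  [steering angle, brake pressure]
sensor_log = [[22.0, 35.0], [21.5, 30.0], [18.0, 20.0]]
driver_log = [[0.00, 0.0], [0.00, 0.1], [0.05, 0.4]]
pairs = build_training_pairs(sensor_log, driver_log)
# A deep-learning model is then trained so that, fed the first element of each
# pair, its output converges on the second - the human driver's reaction.
```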

Beyond driving, deep learning has also been used to improve image recognition. In 2012, Google reported a 16% success rate after training a neural network with a billion connections to distinguish 22,000 different objects contained in millions of randomly selected YouTube videos. DeepMind, meanwhile, has shown the technique’s versatility by creating a deep-learning algorithm that learned to play 49 different arcade games – including the classic Space Invaders – from scratch and went on to beat professional players.

When it comes to mastering human language, so-called chatbots are becoming an increasingly common feature in mobile devices and around the home. These use pre-defined scripts together with deep learning to answer queries and carry out simple conversations. Apple’s Siri and Amazon’s Echo speaker are two examples, both of which can report sports scores, recommend restaurants and relay calendar entries. Meanwhile, a chatbot called Eugene Goostman, developed by three Russian scientists, apparently passed the Turing test in 2014 when it convinced more than 30% of a panel of judges that it was human during five-minute typed conversations at the Royal Society in London. (It would appear, however, that Eugene could still do with a bit of practice: asked by American computer scientist Scott Aaronson how many legs a camel has, it replied: “Something between two and four. Maybe, three?”) AI is also being used to improve other types of language processing. Another pioneer of deep learning, Jürgen Schmidhuber, director at the Dalle Molle Institute for Artificial Intelligence (IDSIA) in Switzerland, led a team that developed “long short-term memory”. Now used in Google’s machine translation and speech recognition software for smartphones, this technology uses what are known as recurrent neural networks, which contain feedback loops allowing data to be stored.
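The feedback loop at the heart of a recurrent network can be shown in a few lines. The toy cell below is not the full long short-term memory architecture, and its sizes and random weights are made up, but it illustrates the key point: the hidden state computed at one step is fed back in at the next, so earlier inputs can influence later outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(size=(4, 3))    # weights from the current input to the hidden units
W_rec = rng.normal(size=(4, 4))   # weights from the hidden units back to themselves: the feedback loop

hidden = np.zeros(4)              # the network's "memory"
for step_input in rng.normal(size=(5, 3)):          # five time steps of input
    hidden = np.tanh(W_in @ step_input + W_rec @ hidden)
print(hidden)                     # the final state depends on the whole sequence, not just the last input
```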

What’s in a brain?

Schmidhuber’s aim now is to build what is known as a general-purpose or real AI machine, the kind of device beloved of science fiction writers that has all the important attributes of human intelligence, including reasoning, goal-seeking, curiosity and creativity. His idea is to combine a deep-learning recurrent neural network with a second program, also a recurrent neural network, whose job is to design experiments that allow the first algorithm to learn as much as it can about the world. Schmidhuber’s “motto” since the 1970s, he says, has been to “build an AI [machine] smarter than myself such that I can retire.”

According to Schmidhuber, physics demands that true intelligence can be generated only by a brain-like recurrent neural network: a device that crams as many processors as possible into a given volume and which connects those processors with many short wires, to save energy, and a few long ones, to allow for longer-distance communication. And he reckons that such an all-singing, all-dancing device could be with us within the next 25 years. He bases that claim on a continuing trend of computer processors getting cheaper by a factor of 10 every five years and the fact that the human brain contains about 100,000 times as many neurons as today’s largest artificial neural network.
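A back-of-the-envelope reading of that arithmetic (our illustration, not Schmidhuber’s own calculation): closing a factor-of-100,000 gap at a rate of ten times every five years takes five such steps, or roughly 25 years.

```python
import math

gap = 100_000            # the brain has ~100,000x more neurons than today's largest artificial network
factor_per_step = 10     # assumed hardware improvement per five-year step
years_per_step = 5
steps_needed = math.log(gap, factor_per_step)     # 5 steps
print(round(steps_needed * years_per_step))       # ~25 years
```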

EPFL’s Faltings doesn’t buy that. He says that scientists are still a long way from understanding how the human brain works, particularly the temporal properties of brain phenomena, and points out that, unlike humans, AI software is unable to carry out multiple complex operations at the same time. He recognises that deep learning has “changed the game” for applications such as visual and speech recognition because there is an “almost infinite amount of data” available. But he believes that the technology will fail to make a significant impact in other areas where data are harder to come by. “I take my hat off to deep learning people because they have finally had success after many years of hard work,” he says. “But now they are maybe going over the top.”

While acknowledging that machines can do useful things by trying to replicate human behaviour, Faltings is interested in their carrying out tasks that are beyond people, such as complex planning and coordination. He trains software via explicit, logical reasoning, an approach he hopes will achieve success in several areas, including medicine. Here, he explains, probabilistic models can be used to work out the optimum treatment for a patient given certain symptoms.

This is also the approach taken by Google with its self-driving cars. Each car uses a variety of sensors to monitor surrounding traffic and feeds the data to a probabilistic model created by computer programmers that predicts the next moves of each vehicle, cyclist and pedestrian in its vicinity. Faltings points out that Tesla’s contrasting deep-learning technology allowed it to bypass years of painstaking design work and therefore get its cars on the market much more quickly. But, he says, the downside of this approach was highlighted by a fatal accident in May that involved a Tesla Model S driving straight into an articulated lorry that was turning in front of it – a scenario that evidently none of the drivers used to train the software had encountered. “Because there is no underlying model there is no guarantee that the software makes the right interpolation from the data,” he says.

AI for farmers

Gamaya, a spin-off of the École Polytechnique Fédérale de Lausanne, uses agricultural drones equipped with miniature hyperspectral cameras to monitor crops – an example of the potential of AI in agriculture. The software applies AI to turn the spectral signature of plants into useful information about crop conditions, thereby helping farmers decide when to use chemicals and fertilisers; an algorithm also predicts outcomes based on the analysed patterns. This significantly improves the efficiency of food production, says CEO Yosef Akhtman. Founded in 2015, the company operates mainly in Latin America, where it estimates the market for its successfully tested system to be worth €4.5 billion.

Easy parking

According to a recent Stanford University study of future trends in AI, self-driving vehicles could bring significant benefits, including the elimination of traffic jams and parking problems. But according to Malte Helmert of the University of Basel, the publicity surrounding the Tesla incident highlights just how sensitive people are to dangers posed by AI, and therefore how little they will tolerate crashes caused by computers rather than people. “Qualitatively it is a very different thing if a human makes an error or if the car misbehaves, because you have this feeling of loss of control,” he says.

Ethical questions – such as what should be done, if anything, to protect the livelihoods of the many people who could lose their jobs to AI machines – are raised in the Stanford report and have also been discussed by a recently created group of researchers from five major US high-tech companies. Indeed, according to Helmert, scientists in general have become far more aware of AI’s potential impact on society as machine intelligence “begins to rival human intelligence in more and more areas of life.”

Some researchers, like Schmidhuber, are considering AI’s ultimate challenge: the “technological singularity”, or the point at which machine intelligence would surpass that of humans and thereafter rapidly accelerate away. Simon Hegelich (@SimonHegelich), an expert on chatbots at the Technical University of Munich, agrees with EPFL’s Faltings that the singularity can occur only once we have a better understanding of the human brain. However, he believes that recent progress in AI, especially machine learning, suggests the necessary breakthrough could happen within the next 10 or 15 years. As to the possible consequences, he is decidedly optimistic. He argues that for an AI machine to be genuinely intelligent it would have to be empathetic, and therefore not a threat to humans.

For others, such discussion is superfluous. Faltings says expectations surrounding deep learning have been raised too high, leading, as he sees it, to misguided talk of “robots or computers taking over from humans”. He adds: “We in AI laugh at those things.” But he is also eager to put a positive spin on recent achievements. “Even if we don’t get to the singularity, there are still amazing things that can happen with AI. It’s good that people realise this.”

By Edwin Cartlidge @EdwinC_01


War without humans

Future conflicts may be fought with unmanned fighter jets and autonomous robot soldiers. Lambèr Royakkers of the Eindhoven University of Technology analyses the dangers of having machines make life-or-death decisions.


Lambèr Royakkers. Photo: Bart Van Overbeeke

The last war in Iraq was already a war of robots: by the end of 2008, 12,000 ground robots and 7,000 drones were involved. Today, new robotic and AI applications are being developed not only to perform dull, dangerous and dirty jobs, but also to attack and kill targets directly. At least 76 countries – including Russia, China, Pakistan and India – are engaged in the extensive development of military robotics technology. According to Lambèr Royakkers, an expert in ethics and technology at the Eindhoven University of Technology (TU/e), it will be necessary to establish a strict framework governing its use.

TECHNOLOGIST When will we see autonomous systems at war?

LAMBÈR ROYAKKERS We have noticed recently that “cubicle warriors” – ground-based pilots – are increasingly being given a monitoring role rather than direct control of the robots. The next step would be for the cubicle warrior to become unnecessary and for the military robot to function autonomously. The US Air Force assumes that by around 2050 it will be possible to deploy fully autonomous unmanned aerial combat vehicles. It makes financial sense: autonomous vehicles are less expensive to produce than remote-controlled robots, and since they don’t require human flight support, they also don’t incur added personnel costs.

TECHNOLOGIST What are the risks if we left machines to make decisions about human life?

LAMBÈR ROYAKKERS The problem is that machines can’t distinguish a civilian from a soldier. Their sensors may indicate that something is a human being, but they cannot determine whether it’s a civilian or a combatant. That would require them to understand the intentions of a human being and to judge what a person is likely to do in a given situation. These qualities are all the more important given that warfare is increasingly “asymmetric” and non-traditional, carried out by a conventional army using modern technology against insurgents. The insurgents – irregular and mainly undisciplined fighters led by warlords – are often not recognisable as combatants.

TECHNOLOGIST What does a machine do better than a human in a war situation?

LAMBÈR ROYAKKERS Indeed, there are also advantages. For example, the American robo-ethicist Ronald Arkin suggests that autonomous armed military robots could prevent acts of revenge and torture. At war, soldiers are exposed to enormous stress and all of its consequences. Various scientific publications have shown that during the US-led military operation in Iraq, feelings of anger and revenge after the loss of a fellow soldier resulted in double the number of civilian abuses, and that emotions can cloud soldiers’ judgement during war.

TECHNOLOGIST Should we limit the development of autonomous weapons?

LAMBÈR ROYAKKERS Yes, I think so. Automated killing is absurd – humans should always take life-and-death decisions, not machines. Fortunately, a serious debate has already started. For example, every year the UN organises a convention on this topic. Last year, prominent figures such as Elon Musk, Steve Wozniak and Stephen Hawking urged a ban on autonomous weapons in an open letter signed by thousands of artificial-intelligence and robotics researchers.

By Robert Gloy @robertgloy


Machines as caregivers

Artificial intelligence has enormous potential for health care – from diagnostics to rehabilitation to services for the elderly. A number of cutting-edge projects are being developed in Europe.


Thomas Bock. Photo: Astrid Eckert

In one room, a simulator analyses the reactions of electrode-wired motorists so as to better understand their reflexes in different traffic situations. In another, a wheelchair and a leg exoskeleton are controlled by the user’s thoughts, much like the screen-equipped robots that let paralysed people move about virtually. This is all part of the research being conducted by the Chair in Brain-Machine Interface (CNBI) at the École Polytechnique Fédérale de Lausanne. It draws on artificial intelligence, whose health-care applications extend well beyond the Watson supercomputer and its much-publicised accuracy in diagnosing disease.

AI and machine learning are the future of health care, with advances being made every day. Watson, IBM’s flagship diagnostic system, examined 1,000 cancer cases and recommended the same treatment as doctors 99% of the time. Even more strikingly, the software identified care options that the oncologists had overlooked in 30% of cases. Other companies are looking for ways to apply AI to health care. In 2014, for example, Google bought the British artificial-intelligence company DeepMind with plans to apply its technology to diagnostics. Dell, HP and Apple have initiated similar projects. Some analysts predict a tenfold increase in the market for health-care AI over the next five years.

AI also opens up fascinating opportunities in machine and software automation. That is the focus of research being carried out by the CNBI at the new Biotech Campus in Geneva. The lab uses human brain signals to help people with disabilities, notably paralysis, control instruments and interact with their environment. Using headsets embedded with electrodes, researchers measure brain activity and align it with a patient’s intentions. These signals are then associated with commands that move a wheelchair or shift a cursor around a virtual keyboard. The user thinks about turning right, and the chair obeys. But it takes a lot of data and practice to make it work.

We don’t tell the machine, ‘You must do this or that’. Instead we tell it, ‘This task was not performed correctly. You have to figure out what to do in this situation.’

Repeating 50 times

A brain impulse is never exactly the same. It varies depending on factors including fatigue, stress and concentration. To build the most widely applicable models possible, scientists first have patients repeat each intention 50 or 100 times to collect the maximum amount of information. Even that is not very much, says Ricardo Chavarriaga, a researcher at the CNBI. “Compared with the number of movements it takes to learn how to walk, that’s just a tiny sample.”
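A hedged sketch of that pipeline: the repeated, labelled recordings are averaged into a template per intention, and a new recording is mapped to the closest template, which in turn triggers a command. The feature sizes, intention names and command map below are illustrative assumptions, not the CNBI’s actual system.

```python
import numpy as np

rng = np.random.default_rng(2)

def build_templates(trials_by_intent):
    """Average the repeated trials for each intention into one template."""
    return {intent: np.mean(trials, axis=0) for intent, trials in trials_by_intent.items()}

def decode(templates, new_trial):
    """Pick the intention whose template is closest to the new recording."""
    return min(templates, key=lambda intent: np.linalg.norm(templates[intent] - new_trial))

# 50 repetitions per intention, each recording summarised here as 8 signal features.
trials = {"turn_left": rng.normal(0.0, 1.0, size=(50, 8)),
          "turn_right": rng.normal(1.0, 1.0, size=(50, 8))}
templates = build_templates(trials)

commands = {"turn_left": "rotate wheelchair -15 degrees", "turn_right": "rotate wheelchair +15 degrees"}
new_recording = rng.normal(1.0, 1.0, size=8)
print(commands[decode(templates, new_recording)])
```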

Once data have been collected and the models constructed, testing can begin. A patient practises controlling actions and observes the results in real time. If the patient wants to turn right and the chair keeps going left, the researchers recalibrate the system manually or collect more data. AI comes in when the machine can also learn from the user on its own. If the machine carries out an action the user did not intend, the user’s brain produces a signal that conveys the feeling of failure. That signal is picked up by the machine, which recognises that it has made a mistake and will avoid repeating it. Similarly, a user’s signals of joy or satisfaction at the successful completion of a task encourage the machine to replicate that action the next time.

“What’s interesting with this approach,” says Chavarriaga, “is that we don’t tell the machine, ‘You must do this or that’. Instead we tell it, ‘This task was not performed correctly. You have to figure out what to do in this situation.’ That’s really important because you can’t always tell the machine exactly what to do.” This is the very definition of machine learning. “These methods are used to help the machines figure things out by exploring,” he says. “The algorithm will gradually optimise the machine’s operation based on its findings.”
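One way to picture this, as a deliberately simplified sketch (the actions, scores and update rule are our assumptions, not the CNBI’s algorithm): the machine keeps a preference score for each candidate action and shifts those scores whenever the decoded brain signal says “that was wrong” or “that felt right”.

```python
actions = ["go_left", "go_forward", "go_right"]
preferences = {action: 0.0 for action in actions}

def choose():
    """Pick the currently preferred action (ties broken by list order)."""
    return max(preferences, key=preferences.get)

def feedback(action, brain_signal):
    """brain_signal: +1 for decoded satisfaction, -1 for a decoded error signal."""
    preferences[action] += 0.5 * brain_signal

# The user actually wants "go_right"; the machine explores, reads the brain's
# verdict after each attempt, and gradually stops repeating its mistakes.
for _ in range(5):
    attempt = choose()
    feedback(attempt, +1 if attempt == "go_right" else -1)
print(preferences)   # "go_right" ends up with the highest score
```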

Trust a trader’s guts

A study from the University of Cambridge has determined that traders with “interoception” – the ability to sense the state of their own body – are better than algorithms when it comes to forecasting financial markets. According to the study’s author, this “gut feeling” will be hard to apply to machines.

Smart furniture

Thomas Bock also uses a form of machine learning. He heads the Chair of Building, Realisation and Robotics at the Technical University of Munich (TUM). He rigs furniture with sensors that track a person’s health and alert a doctor if necessary.

Body temperature, for example, is measured in the bathroom using thermal imaging. “If the algorithm picks up an abnormal temperature compared with long-term records, it will investigate further,” Bock explains. It can combine those data with other information, such as blood-oxygen levels monitored by the kitchen chair, while cameras observe the user and assess their level of fatigue. The system combines big data and algorithmic analysis to gradually adapt to its user.
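The kind of check involved can be sketched in a few lines: compare today’s reading with the user’s long-term record and flag it when it drifts well outside the usual range. The threshold and the readings below are illustrative assumptions, not the values used in Bock’s system.

```python
import statistics

def is_abnormal(history, reading, tolerance=2.0):
    """Flag a reading more than `tolerance` standard deviations away from the long-term mean."""
    return abs(reading - statistics.mean(history)) > tolerance * statistics.stdev(history)

long_term_records = [36.5, 36.6, 36.4, 36.7, 36.5, 36.6, 36.5, 36.4]   # body temperature, deg C
todays_reading = 38.2
if is_abnormal(long_term_records, todays_reading):
    print("Unusual temperature: cross-check with other sensors, then alert the doctor if it persists.")
```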

Bock is convinced that technology is the key to the challenges facing an ageing population. “With this system, we help the elderly remain independent and keep them living at home longer.” Testing is under way at a home in Italy, in partnership with private organisations and other laboratories at TUM.

By Benjamin Keller @ben_kel


Where it all began

With its leading research institutes and ground-breaking innovations, Europe plays a major role in the field of AI. Digital giants like Google and Facebook trust European research to develop the technologies of tomorrow. 

Artificial-intelligence applications may be booming in the US and Asia, but AI’s roots are in Europe. Well before the 1956 Dartmouth College conference that gave the field its name, computing pioneers like Cambridge’s Alan Turing and Berlin’s Konrad Zuse made strong contributions to AI, and cybernetics pioneer Norbert Wiener battled Torres Quevedo’s 1912 chess machine at the 1951 Calculating Machines and Human Thought conference in Paris.

Wolfgang Wahlster, director and CEO of the German Research Centre for Artificial Intelligence, the world’s largest AI research centre, says Europe avoided the hype-driven boom-and-bust cycles behind the US and Japanese “AI winters” of the latter half of the century by favouring cautious, engineering-style research. The result was steady progress in many important areas of AI – such as Alexey Ivakhnenko’s foundational deep-learning work in Ukraine in the 1960s – that form the building blocks of today’s rapid advances.

That fundamental research remains a European strength, according to Jürgen Schmidhuber, scientific director at the Dalle Molle Institute for Artificial Intelligence (IDSIA) in Switzerland and himself a pioneer in neural networks and machine learning. “Many, if not most of the basic breakthroughs in AI stem from Europe, even if that won’t be obvious to you if your only sources of info are blogs from Silicon Valley,” says Schmidhuber. “The Bay Area companies are heavily using fundamental AI methods developed by Europeans.” Schmidhuber says this is something being quickly recognised by major US and Asian companies, many of which are outsourcing research to European labs and acquiring promising AI companies. The fact that Google opened an AI research centre in Zurich this year – and Facebook one in Paris in 2015 – supports this observation. UK-based start-up DeepMind, which aims to create general-purpose deep-learning algorithms, is one notable acquisition, reportedly worth €460 million. One of its systems made headlines and history this year by defeating South Korean champion Lee Sedol 4-1 at the complex strategic game of Go. Co-founded by IDSIA alumnus Shane Legg, DeepMind is also taking on real-world challenges, from partnering with the UK National Health Service to combat medical error, to boosting the power efficiency of Google’s servers – a move that has already cut the company’s data-centre cooling bill by 40%.

Linguistic benefits

Wahlster readily admits that European AI has its weaknesses – a lack of the consumer market applications and access to single-language markets present on the Pacific Rim – but he argues that trying to compete directly in these areas would be a mistake. He says Europe’s AI strength lies instead in its distribution, with many strong institutes as far apart as Barcelona, Trento, Edinburgh and Linköping. The language-fragmented European market even helped bring about today’s language technologies based on AI. “The EU funded this, and we’ve had a lot of success in machine translation, text understanding and spoken dialogue systems. With so many languages in Europe there is really demand, whereas in the US most people speak English or Spanish.”

Tax the machines

In the near future, more than half of all workers could be replaced by machines, say several studies. The financial consequences – notably the loss of tax income coupled with higher costs of social services – could be compensated by a tax on machines. Various European stakeholders – in the European Commission, in France and in Germany – are currently examining this problem.

Indeed, the original technology that formed the basis of Google Translate came from the Wahlster-led Verbmobil project, while the Long Short-Term Memory neural network developed by Schmidhuber’s team enables applications like Google’s smartphone speech recognition. Wahlster says Europe also excels in the application of industrial AI in small companies. “We brought AI to the shop floor, so to speak, and we have a lot of smart factories that are full of AI features in the German Industrie 4.0 program,” he explains. Among the examples he cites are collaborative robots (“co-bots”) that work in teams with humans, as well as efficiency-boosted production lines combining internet-of-things sensors with machine-learning algorithms to make mass customisation possible.

With such a groundwork of research and successful applications, Schmidhuber sees Europe moving from machine learning towards the first true, albeit basic AI in years, not decades. “In the not-so-distant future we’ll have a pretty impressive general purpose problem solver in little robots who can hear and see and act and do reasonable things in their environments, and learn by interacting with humans and through their own curiosity.”

By Joe Dodgshun


Of perceptrons and backpropagation

The foundations of today’s deep-learning process were laid in the 1950s. But it took two “winters” to reveal its full potential.

1943│ Logician Walter Pitts and neurophysiologist Warren McCulloch prove that neurons can perform logical operations like “and”, “or” or “not” once they are connected as a network.

1951│ Cognitive scientist Marvin Minsky creates SNARC (Stochastic Neural Analog Reinforcement Computer), the first neural-network machine to solve a real-world problem: finding its way out of a labyrinth.

1956│ The term “Artificial Intelligence” is used for the first time at a conference at Dartmouth College, often considered the birth of AI.

1958│ Frank Rosenblatt, a psychologist from New York, creates neural networks that transmit information in one direction only. These so-called perceptrons are the basis of today’s AI networks.

1969│ In Perceptrons, a book written with Seymour Papert, Minsky argues that neural networks of this kind will never solve the problems of AI. He backs an approach based on symbolic representations of problems instead, which remains the dominant paradigm of AI research until the late 1980s.

1974–1980│ After several failures and disappointments, interest in AI drops off, especially in the US. The period is now referred to as the “first AI winter”.

1986│ Britain’s Geoffrey Hinton revives the idea of neural networks. Together with David Rumelhart and Ronald Williams, he popularises the backpropagation algorithm, laying the foundations of the systems now used in deep learning.

1987–1993│ After further failures, a second “AI winter” occurs.

1997│ IBM’s Deep Blue defeats Russian grandmaster Garry Kasparov – the first time a machine beats a human chess champion.

2013│ After buying Hinton’s start-up DNNresearch, Google applies the technology to build new and more efficient photo search.

2016│ AlphaGo, software developed by Google DeepMind, defeats Go champion Lee Sedol. As Go is far more complicated than chess, this event is seen as a breakthrough for AI.

By Robert Gloy @robertgloy


Goodbye to silicon

Electronic chips will soon reach their computational limit, meaning they need to be replaced. Graphene and molybdenite are candidates. 

Is it the end of Moore’s famous law? In 1975, Intel co-founder Gordon Moore predicted that the number of transistors per microprocessor on a silicon chip would double every two years. Indeed, microprocessors – the “engines” that power computers – have gone from holding 2,300 transistors in 1971 to packing in more than two billion today. The more transistors a chip holds, the more powerful the computer. And the more obsolete its predecessors.

That is all about to change. In March 2016, Intel, the integrated circuit pioneer, announced that its microprocessors would double in power not every two years but every two and a half years. “We will have reached the limits of silicon chips within the next 10 to 15 years,” says Andras Kis, head of the Nanoscale Electronics and Structures Laboratory at the École Polytechnique Fédérale de Lausanne (EPFL). “We’re approaching physical limits because we simply can’t go any smaller than the silicon atoms themselves.”

Researchers around the world are scrambling to find new systems and materials to replace silicon and further boost processing power, for example to drive artificial intelligence and its energy-hungry applications.

Graphene has been touted for several years as the wonder material to replace silicon in transistors. Derived from graphite, it has been studied theoretically since 1947, and a number of studies have demonstrated its outstanding electrical conductivity. But graphene has no band gap. The band gap is the energy jump an electron must make to pass from a semiconductor’s valence band to its conduction band; it is what lets a transistor be switched on and off, so a useful semiconductor needs a non-zero gap – something that is difficult to engineer into graphene. But there is hope: in 2015, a joint research team from Paris-Sud University and the Georgia Institute of Technology produced graphene with a gap of half an electron volt.

Light solutions

For several years EPFL’s Kis has been studying molybdenite, which has the advantage of having a band gap in its natural state. In 2011 Kis and his team revealed that the mineral was an excellent semiconductor that can be used to make smaller, more energy-efficient transistors. “Molybdenite can be produced in layers only three atoms thick, which means that we can build chips at least three times smaller than silicon chips,” he says. The EPFL laboratory has developed several prototypes showing the potential of molybdenite, including electronic chips, a flash memory drive and a photographic sensor.

Another answer may come from a completely unexpected source. The Paris-based start-up LightOn has come up with a surprising alternative using light. Their technology is based on controlling the way light spreads to crunch numbers very quickly while using little energy. This solution could offer a response to handling the exponential growth of data, especially the volumes required to power artificial intelligence systems.

By Steve Riesen @steve_riesen

