The development of deep learning AI programmes allows robots to emulate back-up staff in financial institutions, says leading computer scientist Qiang Yang – but they will not completely replace human decision-making.
In many financial institutions, expert advisors rely on trading data, company reports and news stories to detect minute market signals and provide clients with investment recommendations. Their work is often supported by teams of interns who analyse corporate history, the competitive environment and market trends in detail.
But developments in artificial intelligence (AI) could revolutionise this process, according to Professor Qiang Yang, by replacing those interns with robo-advisors which can replicate their abilities and surpass them in speed and accuracy. AlphaGo, the deep learning programme that beat one of the world’s best Go players in 2016, did so using a new generation of search algorithms which can be used to create learning systems able to emulate human capabilities.
Prof Yang, who heads the computer science department at the Hong Kong University of Science and Technology, says that such programmes can analyse fundamental financial data going back 20 years. They can also read through mountains of documents and other forms of text, and reason about their contents to make predictions about market developments.
‘We already have non-AI programmes that drive high-frequency trading in the markets by using pure computing power to identify short-term trends in microseconds. But the new machine learning programmes use knowledge and big data to emulate human analysts in identifying much longer-term trends.
‘Senior advisors will still be needed,’ he adds. ‘Robots can emulate the work of interns, and replace them. But I don’t believe that machines can totally replace humans, because they need humans to teach them. They cannot learn by themselves – and nor can they innovate as humans do.’
Now one of the foremost researchers in his field, Qiang Yang started off by studying astrophysics at Peking University, before moving to the University of Maryland in 1982 to take a master’s. At the US university, he had greater access to computers, which he used to analyse events in space such as solar eruptions, and that led him to switch to computer science, taking a second master’s and a doctorate.
‘I found programming very interesting, because computers could predict events in the real world. People in the computer science department said they were working on developing artificial intelligence which could make computers simulate human behaviour. I could see the fascinating potential, even though computers were not very advanced compared with today.’
In 1989, he moved to the University of Waterloo in Canada to continue his research on automated planning, which creates robot brains able to reason and plan for their activities. When Netscape, one of the first widely used web browsers, was launched in 1994, there was explosive growth in the amount of data available, and he began working on machine learning which could process it.
‘Deep learning tries to figure out the characteristics of something like a human or a car shape in data so that such objects can be identified – and if there is lots of data, the results will be accurate enough to identify them in the future. But deep learning is very brittle, and does not work well in moving from one area to another: if it is good at recognising rural scenery in photos, for example, it may not be so good at recognising urban scenery. I specialised in transfer learning, which enables computers to adapt their models from one area to another by drawing analogies between them.
‘It is similar to the problem of kids who memorise concepts in school and then fail exams which require them to tackle different concepts. But if kids also learn the principles behind the concepts, they can figure out what’s new and pass their exams. Real wisdom comes from adapting what you have learnt in the past – which is what transfer learning teaches computers to do.’
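The core move of transfer learning, reusing a model learnt on a data-rich source domain and adapting it with only a handful of target examples, can be illustrated with a toy sketch. Everything below (the synthetic data, the shift between domains, the tiny logistic regression) is invented for illustration, not a description of Prof Yang’s actual systems:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(center_pos, center_neg, n):
    """Generate a simple two-class dataset around the given centres."""
    X = np.vstack([
        rng.normal(center_pos, 1.0, size=(n, 2)),
        rng.normal(center_neg, 1.0, size=(n, 2)),
    ])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

def train_logreg(X, y, w=None, steps=200, lr=0.1):
    """Plain logistic regression by gradient descent; w seeds the weights."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    if w is None:
        w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(((Xb @ w > 0) == y).mean())

# Source domain: plenty of labelled data (the 'rural scenery' the model knows).
Xs, ys = make_domain([2, 2], [-2, -2], 200)
w_source = train_logreg(Xs, ys)

# Target domain: same underlying concept, shifted distribution,
# and only five labelled examples per class (the 'urban scenery').
Xt, yt = make_domain([3, 1], [-1, -3], 5)
Xt_test, yt_test = make_domain([3, 1], [-1, -3], 200)

# Transfer: start from the source weights and fine-tune briefly.
w_transfer = train_logreg(Xt, yt, w=w_source.copy(), steps=20)

# Baseline: train on the five target labels alone, from scratch.
w_scratch = train_logreg(Xt, yt, steps=20)

print(accuracy(w_transfer, Xt_test, yt_test),
      accuracy(w_scratch, Xt_test, yt_test))
```

Because the transferred model starts from weights that already capture the shared structure of the two domains, it typically needs far fewer target labels and fine-tuning steps than a model trained from scratch.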
A practical application of transfer learning can be seen in sentiment analysis of different media such as Twitter posts, book reviews and other sources. Deep learning can search them to see if people are feeling positive or negative about the likes of films, books, political candidates and products. This can be very useful for a variety of tasks such as market research and identifying trends, but it requires people to label texts as positive or negative in each medium first, which is very labour-intensive and expensive.
Transfer learning can train computers to do this with only a small amount of labelling – even between media that are very different such as films, photographs, books and online texts. ‘We even found that when comparing a picture with text, the saying that a picture is worth a thousand words is about right!’
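The labelled-data bottleneck described above can be made concrete with a toy example: score sentiment words on a source medium where labels are plentiful (book reviews), then adapt those scores with just a couple of labelled posts from the target medium. The corpora and the word-counting rule below are invented for illustration only:

```python
from collections import Counter

# Source medium (book reviews): plenty of labels are available here.
source = [
    ("a wonderful and moving story", 1),
    ("brilliant characters loved it", 1),
    ("dull plot and terrible pacing", 0),
    ("boring and badly written", 0),
]

# Target medium (tweets): only two labelled examples to adapt with.
target = [
    ("loved this film wonderful", 1),
    ("terrible movie so boring", 0),
]

def word_scores(examples):
    """Score each word by how often it appears in positive vs negative texts."""
    score = Counter()
    for text, label in examples:
        for w in set(text.lower().split()):
            score[w] += 1 if label == 1 else -1
    return score

# Transfer: start from the source vocabulary scores, then nudge them with
# the handful of target labels instead of labelling thousands of tweets.
scores = word_scores(source)
scores.update(word_scores(target))

def classify(text):
    """Positive (1) if the summed word scores are positive, else negative (0)."""
    s = sum(scores.get(w, 0) for w in text.lower().split())
    return 1 if s > 0 else 0

print(classify("what a wonderful film"))  # → 1
print(classify("so dull and boring"))     # → 0
```

A real system would map the two media into a shared representation rather than simply adding word counts, but the principle is the same: most of the knowledge comes from the well-labelled source, and only a little target labelling is needed.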
One new use of this approach from Stanford University is improving the UN’s use of satellite pictures to identify where there is poverty in Africa. Just taking a picture is not enough – it requires human involvement to identify signs of poverty, and in some cases visits to see the locations. By comparing day and night pictures of locations to see where lighting indicates modern structures, transfer learning has enabled the UN to see accurately which regions are more developed.
Qiang Yang moved to Hong Kong in 2001 as a professor at the Hong Kong University of Science and Technology because he wanted to work in Asia, where many students – especially in China – are working on the development of AI. In 2012, he became the founding head of Huawei’s Noah’s Ark Research Lab, set up by the world’s largest telecommunications equipment manufacturer, which wanted to move into big data and AI. He helped the company develop deep learning and transfer learning applications in telecom, mobile and financial services.
Now back in academia, he is working on the use of AI in handling financial services applications for online consumer companies. ‘These have a huge volume of requests from customers which they handle in service centres employing expensive staff who must be able to deal with complex questions. Robots can support a dialogue with people by giving information on products and guiding them towards sales, because there is sufficient high-quality digital data about such transactions.’
Prof Yang expects the most exciting development over the next few years to be a blossoming of AI-powered robo-advisors in other sectors that also have enough high-quality data to work from. These could include medical applications, where robot assistants for doctors could answer 80 per cent of the questions and learn from them to deal with some of the harder ones. Other possible sectors include education and logistics for companies such as Amazon.
‘A combination of humans and machines is the future,’ he says. ‘Computers will not completely replace humans, who will still be the innovators. Robots will be the intern assistants advising the human decision-makers.’