
Modern Trading Without AI Is Unprofitable. Why?

07 September 2018 18:22, UTC
Anna Zhygalina, Catherine Lange

In the previous article we covered advanced methods of cybersecurity as well as the anticipated and unforeseen threats posed by rapidly growing cybercrime. But what role does AI play in this process, and what threats could we face?

We explore these questions below.

AI in trading

Artificial neural networks have already become independent players in the world of data. Deep machine learning is successfully used not only by large exchanges but also by private traders.

A team of scientists from Germany used self-learning neural networks to analyze the stocks of 500 leading companies from 1992 to 2015. The results showed that AI-driven investment strategies beat the returns of the long-term planning strategy by 30%. The AI demonstrated particular precision when analyzing crisis periods, when people cannot make rational decisions.
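
For readers curious what such an approach can look like in practice, here is a minimal sketch in Python. It is not the German team's actual code; the data, features and model settings are all invented for illustration. The idea is to train a small neural network on lagged daily returns to predict whether a stock beats the cross-sectional median the next day.

```python
# Purely illustrative sketch: not the researchers' actual methodology.
# A small network learns from lagged daily returns whether a stock will
# beat the cross-sectional median return on the following day.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy data: 2,000 stock-days, each described by 40 lagged daily returns.
X = rng.normal(0.0, 0.02, size=(2000, 40))
# Toy label: 1 if the stock outperformed the median the next day.
y = (X.mean(axis=1) + rng.normal(0.0, 0.01, 2000) > 0).astype(int)

# Chronological split: never evaluate on data older than the training set.
split = int(0.8 * len(X))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(f"out-of-sample hit rate: {model.score(X_test, y_test):.2f}")
```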

Experts note that the rules of the game on exchanges have become much more complicated nowadays. Unlike the early exchange robots, neural networks do not analyze a predetermined set of data. Instead, AI simulates the mental process of a financial analyst collecting data for a report.
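
To make the contrast concrete, here is a toy example of the kind of predetermined rule an early exchange robot might follow. Every threshold is hard-coded by a human, whereas the network sketched above adjusts its decision boundary from whatever data it is fed.

```python
# Toy example of a predetermined trading rule: the parameters are fixed
# by a human in advance; nothing here is learned from data.
def crossover_signal(prices: list[float], fast: int = 5, slow: int = 20) -> str:
    """Buy when the short moving average rises above the long one."""
    if len(prices) < slow:
        return "hold"
    fast_ma = sum(prices[-fast:]) / fast
    slow_ma = sum(prices[-slow:]) / slow
    return "buy" if fast_ma > slow_ma else "sell"

print(crossover_signal([100 + 0.5 * i for i in range(30)]))  # uptrend -> "buy"
```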

"During the last years of our sample period, profitability decreased and even became negative at times. We assume that this decline was driven by the rising influence of artificial intelligence in modern trading - enabled by increasing computing power as well as by the popularization of machine learning" - said the head of the research group Dr. Christopher KRAUSS.

AI in data protection

Of course, AI will occupy a leading position in data protection in the digital world in the near future.

Today, technology already exists to analyze the processes at an industrial facility and prevent cyber attacks at all levels, from IT systems to operational processes.
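
As a hedged sketch of one common building block behind such systems: an unsupervised detector is fitted on telemetry recorded during normal operation and then flags readings that fall outside the learned envelope. The sensor names and figures below are invented for the example.

```python
# Sketch of unsupervised anomaly detection on industrial telemetry.
# All sensor names and values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Normal operation: temperature (°C), pressure (bar), flow rate (l/s).
normal = rng.normal(loc=[70.0, 4.0, 12.0], scale=[2.0, 0.2, 0.5], size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)

# predict() returns 1 for inliers and -1 for suspected anomalies.
readings = np.array([[71.0, 4.1, 12.2],   # ordinary reading
                     [95.0, 7.5, 3.0]])   # spike far outside the envelope
print(detector.predict(readings))         # expected: [ 1 -1]
```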

Enjoying AI's comprehensive capabilities, we are ready to provide it with all the data it can so easily juggle while it protects us from cyber threats. But what if AI itself becomes a threat to our security?

Ethics and safety: how to keep AI under control

So far, none of the scientists dares to make accurate predictions about the potential threat of AI to human security. However, as we keep “feeding” neural networks tons of personal and corporate information, attention to this issue has escalated.

Clear evidence of the concern among scientists and developers is the 2017 Asilomar conference, where unified principles for all AI researchers and developers were proclaimed. They aim to regulate AI research in order “to create not undirected intelligence, but beneficial intelligence”. The use of personal data by AI systems is given a special place among these principles:

"Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty”, - it was stated among the principles published by results of Asilomar conference.

AI in the hands of a cybercriminal

Electronic payments and data exchange are a tasty morsel for hackers and a handy tool in illegal corporate wars. It is not surprising that cybercrime has grown to the scale of a business.

Peter SCOTT, futurist and author of books about AI, discusses the possible prospects in his exclusive interview for Bitnewstoday.ru: “As AI becomes increasingly commoditized, it will be a tool in the arsenal of cybercriminals. Even five years ago, the ability to, for instance, recognize general images and label them was on the cutting edge of research, and now is available in open source libraries that can be invoked with a few lines of code. We can certainly expect an arms race between law enforcement and criminals with AI as the weaponry”.

SCOTT believes that machine learning will be an obvious tool for hackers to deploy in optimizing network intrusion attempts and distributed denial-of-service attacks.

Fahad ALRUWAILY, PhD, a senior cybersecurity consultant from Saudi Arabia, commented on this for Bitnewstoday.ru:

“Artificial intelligence is crucial to cybersecurity, and there are some foreseen and unforeseen risks when it comes to the capabilities of AI, machine learning and deep learning. Here are some of the foreseen risks: AI-enabled phishing emails could potentially utilize machine learning to automatically target victims and exploit their vulnerabilities. In addition, cybercriminals could potentially hack and compromise autonomous driving technologies, endangering the lives of thousands or maybe millions of daily commuters”.

Dr. Fahad also considers the weaponization of AI through the creation of killer robots and drones to be a serious risk. He insists this moral dilemma should be resolved at the international level.

Non-profit organizations like the Future of Life Institute are aware of the rising risks posed by the development of militaristic AI and call on all researchers, developers and influential individuals to follow the principles adopted at the Asilomar conference.

Steven WEISMAN, a college professor at Bentley University and one of the leading cybersecurity experts in the USA, expressed his apprehension on this score in an exclusive interview for Bitnewstoday.ru: “Earlier this year a report entitled ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation’ was written by 26 experts, in which they warned about the extreme danger of criminals effectively using this technology. This is a frightening prospect. I think the threat of AI becoming a threat to humans through robots controlling cybersecurity is more a matter of science fiction than science fact”.

More about the moral and ethical aspects of using AI as a weapon, and about the arms race, follows in the next article.