A Blog by Jonathan Low

 

Dec 18, 2019

The Biggest AI Trends Of the Coming Year

It's that time of year when everyone wants to explain what just happened in the year past and what will happen in the year to come...

What is interesting about the prospective trends is their emphasis on AI's declining dependence on humans. JL

Ben Dickson reports in The Next Web:

Expect to see significant innovations in ‘AI for AI’: using AI to help automate the steps and processes involved in the life cycle of creating, deploying, managing, and operating AI models to help scale AI more widely into the enterprise. “We’ll see a rise of data synthesis methodologies to combat data challenges in AI.” Attention to explainability (being able to explain the forces behind AI-based decisions) is also increasing. “There will be more scrutiny of the reliability and bias behind AI methods.”
Artificial intelligence is one of the fastest moving and least predictable industries. Just think about all the things that were inconceivable a few years back: deepfakes, AI-powered machine translation, bots that can master the most complicated games, etc.
But it never hurts to try our hand at predicting the future of AI. We asked scientists and AI thought leaders about what they think will happen in the AI space in the year to come. Here’s what you need to know.

AI will make healthcare more accurate and less costly

As Jeroen Tas, Philips’ Chief Innovation & Strategy Officer, told TNW: “AI’s main impact in 2020 will be transforming healthcare workflows to the benefit of patients and healthcare professionals alike, while at the same time reducing costs. Its ability to acquire data in real-time from multiple hospital information flows – electronic health records, emergency department admissions, equipment utilization, staffing levels etc. – and to interpret and analyze it in meaningful ways will enable a wide range of efficiency and care enhancing capabilities.”
This will come in the form of optimized scheduling, automated reporting, and automatic initialization of equipment settings, Tas explained, which will be customized “to an individual clinician’s way of working and an individual patient’s condition – features that improve the patient and staff experience, result in better outcomes, and contribute to lower costs.”
“There is tremendous waste in many healthcare systems related to complex administration processes, lack of preventative care, and over- and under-diagnosis and treatment. These are areas where AI could really start to make a difference,” Tas told TNW. “Further out, one of the most promising applications of AI will be in the area of ‘Command Centers’ which will optimize patient flow and resource allocation.”
Philips is a key player in developing AI-enabled apps that integrate seamlessly into existing healthcare workflows. Currently, one in every two researchers at Philips worldwide works with data science and AI, pioneering new ways to apply the technology to revolutionize healthcare.
For example, Tas explained how combining AI with expert clinical and domain knowledge will begin to speed up routine and simple yes/no diagnoses – not replacing clinicians, but freeing up more time for them to focus on the difficult, often complex, decisions surrounding an individual patient’s care: “AI-enabled systems will track, predict, and support the allocation of patient acuity and availability of medical staff, ICU beds, operating rooms, and diagnostic and therapeutic equipment.”

Explainability and trust will receive greater attention

“2020 will be the year of AI trustability,” Karthik Ramakrishnan, Head of Advisory and AI Enablement at Element AI, told TNW. “2019 saw the emergence of early principles for AI ethics and risk management, and there have been early attempts at operationalizing these principles in toolkits and other research approaches. The concept of explainability (being able to explain the forces behind AI-based decisions) is also becoming increasingly well known.”
There has certainly been a growing focus on AI ethics in 2019. Early in the year, the European Commission published a set of seven guidelines for developing ethical AI. In October, Element AI, which was co-founded by Yoshua Bengio, one of the pioneers of deep learning, partnered with the Mozilla Foundation to create data trusts and push for the ethical use of AI. Big tech companies such as Microsoft and Google have also taken steps toward making their AI development conformant to ethical norms.
The growing interest in ethical AI comes after some visible failures around trust and AI in the marketplace, Ramakrishnan reminded us, such as the Apple Pay rollout, or the recent surge in interest regarding the Cambridge Analytica scandal.
“In 2020, enterprises will pay closer attention to AI trust whether they’re ready to or not. Expect to see VCs pay attention, too, with new startups emerging to help with solutions,” Ramakrishnan said.

AI will become less data-hungry

“We’ll see a rise of data synthesis methodologies to combat data challenges in AI,” Rana el Kaliouby, CEO and co-founder of Affectiva, told TNW. Deep learning techniques are data-hungry, meaning that AI algorithms built on deep learning can only work accurately when they’re trained and validated on massive amounts of data. But companies developing AI often find it challenging to get access to the right kinds of data, in the necessary volumes.
“Many researchers in the AI space are beginning to test and use emerging data synthesis methodologies to overcome the limitations of real-world data available to them. With these methodologies, companies can take data that has already been collected and synthesize it to create new data,” el Kaliouby said.
“Take the automotive industry, for example. There’s a lot of interest in understanding what’s happening with people inside of a vehicle as the industry works to develop advanced driver safety features and to personalize the transportation experience. However, it’s difficult, expensive, and time-consuming to collect real-world driver data. Data synthesis is helping address that – for example, if you have a video of me driving in my car, you can use that data to create new scenarios, i.e., to simulate me turning my head, or wearing a hat or sunglasses,” el Kaliouby added.
Thanks to advances in areas such as generative adversarial networks (GAN), many areas of AI research can now synthesize their own training data. Data synthesis, however, doesn’t eliminate the need for collecting real-world data, el Kaliouby reminds: “[Real data] will always be critical to the development of accurate AI algorithms. However [data synthesis] can augment those data sets.”
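To make the idea concrete, here is a minimal sketch of GAN-style data synthesis in PyTorch. Everything in it is an illustrative placeholder: the “real” data is a toy one-dimensional Gaussian, and the network sizes and training settings are arbitrary, not anything Affectiva has described.

```python
# Minimal GAN sketch for data synthesis (illustrative placeholders throughout).
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 1

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # A toy Gaussian stands in for the scarce real-world data.
    real = torch.randn(64, data_dim) * 0.5 + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # Train the discriminator to separate real samples from synthetic ones.
    loss_d = (bce(discriminator(real), torch.ones(64, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# Once trained, the generator can produce new synthetic samples on demand.
synthetic_batch = generator(torch.randn(1000, latent_dim)).detach()
```

As the quote above notes, such synthetic samples are meant to augment a real data set, not replace it.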

Improved accuracy and efficiency of neural networks

“Neural network architectures will continue to grow in size and depth and produce more accurate results and become better at mimicking human performance on tasks that involve data analysis,” Kate Saenko, Associate Professor at the Department of Computer Science at Boston University, told TNW. “At the same time, methods for improving the efficiency of neural networks will also improve, and we will see more real-time and power-efficient networks running on small devices.”
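One common route to the kind of power-efficient, on-device networks Saenko describes is post-training quantization. The sketch below applies PyTorch’s dynamic quantization to a small placeholder model; the model and its sizes are illustrative assumptions, not any specific production network.

```python
# Post-training dynamic quantization in PyTorch (placeholder model).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

# Replace Linear layers with 8-bit quantized equivalents at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 256)
print(quantized(x).shape)  # same interface, lower memory and compute cost
```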
Saenko predicts that neural generation methods such as deepfakes will also continue to improve and create ever more realistic manipulations of text, photos, videos, audio, and other multimedia that are undetectable to humans. The creation and detection of deepfakes have already become a cat-and-mouse game.
As AI enters more and more fields, new issues and concerns will arise. “There will be more scrutiny of the reliability and bias behind these AI methods as they become more widely deployed in society, for example, more local governments considering a ban on AI-powered surveillance because of privacy and fairness concerns,” Saenko said.
Saenko, who is also the director of BU’s Computer Vision and Learning Group, has a long history in researching visual AI algorithms. In 2018, she helped develop RISE, a method for scrutinizing the decisions made by computer vision algorithms.
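The core idea behind RISE is simple to state: probe a black-box image classifier with many randomly masked copies of an input, and average the masks, weighted by the model’s confidence in the target class, into a saliency map. The sketch below is a simplified illustration of that idea, not the authors’ reference implementation; the model, image, and parameter values are placeholders.

```python
# Simplified RISE-style saliency: weight random masks by the model's scores.
import torch
import torch.nn.functional as F

def rise_saliency(model, image, target_class, n_masks=500, grid=7, p_keep=0.5):
    """image: tensor of shape (3, H, W); model maps a batch to class logits."""
    _, H, W = image.shape
    # Small random binary grids, upsampled to image size to form smooth masks.
    grids = (torch.rand(n_masks, 1, grid, grid) < p_keep).float()
    masks = F.interpolate(grids, size=(H, W), mode="bilinear",
                          align_corners=False).squeeze(1)  # (n_masks, H, W)

    saliency = torch.zeros(H, W)
    with torch.no_grad():
        for mask in masks:
            masked = image * mask                      # occlude part of the image
            probs = model(masked.unsqueeze(0)).softmax(dim=1)
            saliency += probs[0, target_class].item() * mask
    return saliency / (n_masks * p_keep)
```

Regions that keep the score high across many random occlusions end up with high saliency, which is what makes the method usable on models you can only query, not inspect.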

Automated AI development

“In 2020, expect to see significant new innovations in the area of what IBM calls ‘AI for AI’: using AI to help automate the steps and processes involved in the life cycle of creating, deploying, managing, and operating AI models to help scale AI more widely into the enterprise,” said Sriram Raghavan, VP of IBM Research AI.
Automating AI has become a growing area of research and development in the past few years. One example is Google’s AutoML, a tool that simplifies the process of creating machine learning models and makes the technology accessible to a wider audience. Earlier this year, IBM launched AutoAI, a platform for automating data preparation, model development, feature engineering, and hyperparameter optimization.
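At its smallest scale, this kind of automation looks like automated model selection and hyperparameter search. The sketch below shows that step with scikit-learn on a built-in dataset; it is a generic illustration of the idea, not Google’s AutoML or IBM’s AutoAI.

```python
# Automated hyperparameter search as a tiny example of "AI for AI".
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200, 400],
        "max_depth": [None, 4, 8, 16],
        "min_samples_leaf": [1, 2, 4],
    },
    n_iter=20, cv=5, random_state=0)
search.fit(X, y)  # the search, not the human, picks the configuration

print(search.best_params_, round(search.best_score_, 3))
```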
“In addition, we will begin to see more examples of the use of neurosymbolic AI which combines statistical data-driven approaches with powerful knowledge representation and reasoning techniques to yield more explainable & robust AI that can learn from less data,” Raghavan told TNW.
An example is the Neurosymbolic Concept Learner, a hybrid AI model developed by researchers at IBM and MIT. NSCL combines classical rule-based AI and neural networks and shows promise in solving some of the endemic problems of current AI models, including large data requirements and a lack of explainability.
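In miniature, the neurosymbolic pattern is: a neural module turns raw input into symbolic attributes, and a rule-based module reasons over those symbols. The toy sketch below illustrates only that flavor; it is not the IBM/MIT Neurosymbolic Concept Learner, and the attribute names, feature vectors, and counting rule are all made-up placeholders.

```python
# Toy neurosymbolic pattern: neural perception feeding symbolic reasoning.
import torch
import torch.nn as nn

COLORS, SHAPES = ["red", "green", "blue"], ["cube", "sphere"]

class AttributeNet(nn.Module):
    """Neural perception: maps an object feature vector to symbolic attributes."""
    def __init__(self, in_dim=16):
        super().__init__()
        self.color_head = nn.Linear(in_dim, len(COLORS))
        self.shape_head = nn.Linear(in_dim, len(SHAPES))

    def forward(self, x):
        return (COLORS[self.color_head(x).argmax().item()],
                SHAPES[self.shape_head(x).argmax().item()])

def count_objects(symbols, color=None, shape=None):
    """Symbolic reasoning: answer 'how many <color> <shape>s?' with a rule."""
    return sum(1 for c, s in symbols
               if (color is None or c == color) and (shape is None or s == shape))

net = AttributeNet()                                   # untrained, for illustration
scene = [torch.randn(16) for _ in range(5)]            # placeholder object features
symbols = [net(obj) for obj in scene]                  # perception -> symbols
print(count_objects(symbols, color="red", shape="cube"))  # reasoning over symbols
```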

AI in manufacturing

“2020 will be the year that the manufacturing industry embraces AI to modernize the production line,” said Massimiliano Versace, CEO and co-founder of Neurala. “For the manufacturing industry, one of the biggest challenges is quality control. Product managers are struggling to inspect each individual product and component while also meeting deadlines for massive orders.”
By integrating AI solutions as a part of workflows, AI will be able to augment and address this challenge, Versace believes: “In the same way that the power drill changed the way we use screwdrivers, AI will augment existing processes in the manufacturing industry by reducing the burden of mundane and potentially dangerous tasks, freeing up workers’ time to focus on innovative product development that will push the industry forward.”
“Manufacturers will move towards the edge,” Versace added. With AI and data becoming centralized, manufacturers are forced to pay massive fees to top cloud providers to access the data that keeps their systems up and running. The challenges of cloud-based AI have spurred a slate of innovations toward edge AI: software and hardware that can run AI algorithms without needing a link to the cloud.
“New routes to training AI that can be deployed and refined at the edge will become more prevalent. As we move into the new year, more and more manufacturers will begin to turn to the edge to generate data, minimize latency problems and reduce massive cloud fees. By running AI where it is needed (at the edge), manufacturers can maintain ownership of their data,” Versace told TNW.
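A typical first step toward running AI at the edge is exporting a trained model to a portable format such as ONNX so it can execute on the device itself, with no cloud round trip. The sketch below shows that export with a placeholder model and file name; it illustrates the general workflow, not Neurala’s stack.

```python
# Export a (placeholder) model to ONNX for on-device inference.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model.eval()

dummy_input = torch.randn(1, 3, 64, 64)   # example input fixes the graph shapes
torch.onnx.export(model, dummy_input, "defect_detector.onnx",
                  input_names=["image"], output_names=["scores"])
# The exported file can then be served by an on-device runtime such as ONNX Runtime.
```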

The geopolitical implications of AI

“AI will remain a top national military and economic security issue in 2020 and beyond,” said Ishan Manaktala, CEO of Symphony AyasdiAI. “Already, governments are investing heavily in AI as a possible next competitive front. China has invested over $140 billion, while the UK, France, and the rest of Europe have plowed more than $25 billion into AI programs. The U.S., starting late, spent roughly $2 billion on AI in 2019 and will spend more than $4 billion in 2020.”
Manaktala added, “But experts urge more investment, warning that the U.S. is still behind. A recent report from the National Security Commission on Artificial Intelligence noted that China is likely to overtake U.S. research and development spending in the next decade. The NSCAI outlined five points in its preliminary report: invest in AI R&D, apply AI to national security missions, train and recruit AI talent, protect U.S. technology advantages, and marshal global coordination.”

AI in drug discovery

“We predict drug discovery will be vastly improved in 2020 as manual visual processes are automated because visual AI will be able to monitor and detect cellular drug interactions on a massive scale,” Emrah Gultekin, CEO at Chooch, told TNW. “Currently, years are wasted in clinical trials because drug researchers are taking notes, then entering those notes in spreadsheets and submitting them to the FDA for approval. Instead, highly accurate analysis driven by AI can lead to radically faster drug discoveries.”
Drug development is a tedious process that can take up to 12 years and involve the collective efforts of thousands of researchers. The costs of developing new drugs can easily exceed $1 billion. But there’s hope that AI algorithms can speed up the process of experimentation and data gathering in drug discovery.
“Additionally, cell counting is a massive problem in biological research—not just in drug discovery. People are hunched over microscopes or sitting in front of screens with clickers in their hands counting cells. There are expensive machines that attempt to count, inaccurately. But visual AI platforms can perform this task in seconds, with 99% accuracy,” Gultekin added.
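As a rough illustration of what automated counting can look like in code, the sketch below counts blobs in a microscope image using classical OpenCV operations (Otsu thresholding plus connected components). The file name is a hypothetical placeholder, this simple pipeline is not Chooch’s platform, and real-world accuracy depends entirely on the images and models used.

```python
# Naive automated cell counting with OpenCV (illustrative only).
import cv2

image = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Otsu thresholding separates bright cells from the background.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Each connected blob is counted as one cell (label 0 is the background).
num_labels, _ = cv2.connectedComponents(binary)
print("estimated cell count:", num_labels - 1)
```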
