Johannes Kepler University of Linz
Title: Modern Hopfield Networks
Abstract: Associative memories are among the earliest artificial neural models, dating back to the 1960s and 1970s. Best known are Hopfield Networks, presented by John Hopfield in 1982. Recently, Modern Hopfield Networks have been introduced, which tremendously increase the storage capacity and converge extremely fast. We generalize the energy function of modern Hopfield Networks to continuous patterns and propose a new update rule. The new Hopfield Network has exponential storage capacity. Its update rule ensures global convergence to energy minima and converges in one update step with exponentially low error. The new Hopfield Network has three types of energy minima (fixed points of the update): (1) a global fixed point averaging over all patterns, (2) metastable states averaging over a subset of patterns, and (3) fixed points that store a single pattern. Surprisingly, the transformer attention mechanism is equal to the update rule of our new modern Hopfield Network with continuous states. Transformer and BERT models tend to operate in the global averaging regime in their first layers and in metastable states in higher layers. We provide a new PyTorch layer called “Hopfield”, which allows equipping deep learning architectures with modern Hopfield networks as a new, powerful concept comprising pooling, memory, and attention. The layer supports applications such as multiple instance learning, set-based and permutation-invariant learning, associative learning, and many more. We show several tasks for which performance improves when the new Hopfield layer is integrated into a deep learning architecture.
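The update rule described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the talk's PyTorch layer: the pattern dimensions, the inverse temperature `beta`, and the random patterns are all illustrative assumptions. It shows the continuous modern Hopfield update, which has the same form as transformer attention with the query playing the role of the state and the stored patterns playing the role of keys and values.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    e = np.exp(z - z.max())
    return e / e.sum()

def hopfield_update(X, xi, beta=1.0):
    # One step of the continuous modern Hopfield update:
    #   xi_new = X @ softmax(beta * X.T @ xi)
    # The columns of X are the stored patterns; xi is the state (query).
    return X @ softmax(beta * X.T @ xi)

# Store a few random patterns as columns of X (sizes are illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 4))

# Query with a noisy version of the first stored pattern.
xi = X[:, 0] + 0.1 * rng.standard_normal(64)
xi = hopfield_update(X, xi, beta=4.0)

# With a sufficiently large beta, a single update retrieves the stored
# pattern almost exactly (the "one update step" regime of the abstract).
cos = xi @ X[:, 0] / (np.linalg.norm(xi) * np.linalg.norm(X[:, 0]))
```

A small `beta` instead pushes the update toward the global averaging regime, where the result mixes many stored patterns, mirroring the fixed-point types listed in the abstract.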
Sepp Hochreiter heads the Institute for Machine Learning, the LIT AI Lab, and the AUDI.JKU deep learning center at the Johannes Kepler University of Linz, and is director of the Institute of Advanced Research in Artificial Intelligence (IARAI). He is regarded as a pioneer of Deep Learning, as he discovered the fundamental deep learning problem: deep neural networks are hard to train because they suffer from the now famous problem of vanishing or exploding gradients. He is best known for inventing the long short-term memory (LSTM) in his 1991 diploma thesis, later published in 1997. LSTMs have emerged as among the best-performing techniques in speech and language processing and are used in Google’s Android, Apple’s iOS, Google Translate, Amazon’s Alexa, and Facebook’s translation systems. Currently, Sepp Hochreiter is advancing the theoretical foundations of Deep Learning and investigating new algorithms for deep learning and reinforcement learning. His current research projects include Deep Learning for climate change, smart cities, drug design, text and language analysis, vision, and in particular autonomous driving.
Washington State University
Title: Constructing a Human Digital Twin
Abstract: Digital Twins are a disruptive technology that can automate human health assessment and intervention by creating a holistic, virtual replica of a physical human. The increasing availability of sensing platforms and the maturing of data mining methods support building such a replica from longitudinal, passively-sensed data. By creating such a quantified self, we can more precisely understand current and future health status. We can also anticipate the outcomes of behavior-driven interventions. In this talk, I will discuss the challenges that accompany creating human digital twins in the wild, survey emerging data mining methods that tackle these challenges, and describe some of the current and future impacts that these technologies have for supporting our aging population.
Diane Cook is Regents Professor and Huie-Rogers Chair in the School of Electrical Engineering and Computer Science at Washington State University, founding director of the WSU Center for Advanced Studies in Adaptive Systems (CASAS), and co-director of the WSU AI Laboratory. She is a Fellow of the IEEE and the National Academy of Inventors. Diane’s work is featured in BBC, IEEE The Institute, IEEE Spectrum, Smithsonian, The White House Fact Sheet, Scientific American, the Wall Street Journal, AARP Magazine, HGTV, and ABC News. Her research aims to create smart environments that automate health monitoring and intervention, evaluated via the CASAS Smart Home in a Box installed in over 160 sites across 9 countries. Her research currently focuses on developing machine learning methods that map a human behaviorome as a foundation for constructing a digital twin. She also conducts multidisciplinary research to leverage digital twin technologies for automatically assessing, extending, and enhancing a person’s functional independence.
Microsoft
Title: XYZ-Code – A Holistic Representation to Advance Integrative AI
Abstract: At Microsoft, we have been on a quest to advance AI beyond existing techniques by taking a more holistic, human-like approach to learning and understanding. Microsoft has achieved five historical human parity milestones in the past five years: in speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These AI breakthroughs on open research tasks are incorporated into Azure Cognitive Services, the most comprehensive AI product in the industry for building more intelligent applications for any developer. These five breakthroughs provided us with strong signals towards our more ambitious aspiration to produce a leap in AI capabilities, achieving multisensory, multimodal, and multilingual learning that is more human-like. This unique perspective views the relationship among three human cognition attributes: text (X); audio or visual sensory signals (Y); and multilingual representation (Z). At the intersection of all three lies XYZ-code, a holistic representation that can produce a leap in AI capabilities, enabling systems that speak, hear, see, and understand humans better.
Dr. Xuedong Huang is a Technical Fellow in the Cloud and AI group at Microsoft and the Chief Technology Officer for Azure Cognitive Services. Huang oversees the AI Cognitive Services team and manages researchers and engineers around the world who are building AI-based services, from computer vision to spoken language understanding. In 1993, Huang joined Microsoft to establish the company’s speech technology group, where he led Microsoft’s spoken language efforts for over two decades. In this role, he helped to bring speech recognition to the mass market and led his team in achieving historical milestones in human parity for both conversational speech recognition and machine translation. Today, Huang’s team provides the Azure AI Cognitive Services that power worldwide applications, from Microsoft Office 365 to numerous third-party services and applications. Huang is an IEEE and ACM Fellow. He was named Asian American Engineer of the Year (2011), one of Wired Magazine’s 25 Geniuses Who Are Creating the Future of Business (2016), and one of AI World’s Top 10 (2017). Huang holds over 170 patents and has published over 100 papers and two books. Before Microsoft, Huang was on the faculty at Carnegie Mellon University. He is a recipient of the Allen Newell Award for research excellence.