Tuesday, April 17, 11:00-12:00
Detecting deception from different types of behavior is a longstanding research goal of considerable interest to the military, law enforcement, corporate security, social services, and mental health workers. However, both humans and polygraphs are very poor at this task. We describe more reliable methods we have developed to detect deception automatically from spoken language. Our classifiers are trained on the largest cleanly recorded corpus of within-subject deceptive and non-deceptive speech collected to date. We make use of acoustic-prosodic, lexical, demographic, and personality features to distinguish truth from lie. We examine differences in behavior based upon gender, personality, and native language, and compare our automatic methods to human performance on the same data. We are particularly interested in studying personality factors associated with deceptive speech, both to identify individual differences in deceptive speech production and to identify characteristics of good human deception detectors. We conclude with a discussion of speech features associated with human decisions to trust both truthful and deceptive statements, which is the focus of our future research.
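As a purely illustrative companion to this abstract (not the speakers' actual system), the minimal sketch below shows how a classifier might combine acoustic-prosodic and lexical features to label utterances as truthful or deceptive; the feature set, data, and model choice are placeholder assumptions.

```python
# Minimal sketch (hypothetical): classifying truthful vs. deceptive utterances
# from acoustic-prosodic and lexical features. Features and data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical per-utterance features: e.g. mean pitch, pitch range,
# speaking rate, pause count, filled-pause ratio, word count.
X = np.random.rand(200, 6)          # 200 utterances, 6 features (placeholder data)
y = np.random.randint(0, 2, 200)    # 1 = deceptive, 0 = truthful (placeholder labels)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)   # within-corpus cross-validation
print("mean accuracy: %.2f" % scores.mean())
```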
Julia Hirschberg is Percy K. and Vida L. W. Hudson Professor and Chair of Computer Science at Columbia University. She previously worked at Bell Laboratories and AT&T Labs, where she created the HCI Research Department. She served on the Association for Computational Linguistics executive board (1993-2003), the International Speech Communication Association board (1999-2007; 2005-7 as president), the International Conference on Spoken Language Processing board (1996--), the NAACL executive board (2012-15), the CRA Executive Board (2013-14), and the AAAI Council (2012-15). She has been editor of Computational Linguistics and Speech Communication, is a fellow of AAAI, ISCA, ACL, ACM, and IEEE, and a member of the National Academy of Engineering. She received the IEEE James L. Flanagan Speech and Audio Processing Award and the ISCA Medal for Scientific Achievement. She currently serves on the IEEE Speech and Language Processing Technical Committee, is co-chair of the CRA-W Board, and has worked to promote diversity for many years at AT&T and Columbia. She works on spoken language processing and NLP, studying text-to-speech synthesis, spoken dialogue systems, entrainment in conversation, detection of deceptive and emotional speech, hedging behavior, and linguistic code-switching (language mixing).
Wednesday, April 18, 11:00-12:00
Deep Learning is a hot technology, with many ICASSP papers using acronyms like DNN, CNN, or LSTM. But Neural Networks have been around for decades, so in this talk I will explain what has changed and what has caused deep learning to take center stage now. I will give a bit of history of deep learning and describe the basic concepts behind it, including its "deep" Signal Processing roots. Deep Learning is now used in many fields, showing success in areas such as speech technology, natural language processing, computer vision, genetics, and games. I will show a few examples where deep learning is changing our daily lives and the way we interact with technology.
Alex Acero is Sr. Director at Apple in charge of speech recognition, speech synthesis, and machine translation for Siri, Apple’s personal assistant for iPhone, iPad, Apple Watch, Apple TV, CarPlay, Macintosh, and HomePod. Prior to joining Apple in 2013, he spent 20 years at Microsoft Research managing teams in speech, audio, multimedia, computer vision, natural language processing, machine translation, machine learning, and information retrieval. His team at Microsoft Research built Bing Translator, worked on Xbox Kinect, and pioneered the use of deep learning in large-vocabulary speech recognition. From 1991 to 1993 he managed the speech team for Spain’s Telefonica. His first stint at Apple started in 1990. Alex received an engineering degree from the Polytechnic University of Madrid, a Masters from Rice University, and a PhD from Carnegie Mellon. He is Affiliate Faculty at the University of Washington. Dr. Acero is a Fellow of IEEE and ISCA. He received the 2017 Norbert Wiener Society Award and the 2013 Best Paper Award from the IEEE Signal Processing Society. Alex served as President of the IEEE Signal Processing Society and is currently a member of the IEEE Board of Directors. Alex is co-author of the textbook “Spoken Language Processing” and of over 250 technical papers, and holds 150 US patents.
Thursday, April 19, 11:00-12:00
Deep learning is causing revolutions in computer perception, signal restoration/reconstruction, signal synthesis, natural language understanding, and control. But almost all of these successes rely on supervised learning, where the machine is required to predict human-provided annotations. For control and game AI, most systems use model-free reinforcement learning, which requires too many trials to be practical in the real world. In contrast, animals and humans seem to learn vast amounts of knowledge about how the world works through mere observation and occasional actions. Good predictive world models are an essential component of intelligent behavior: with them, one can predict outcomes and plan courses of action. One could argue that prediction is the essence of intelligence. Good predictive models may be the basis of intuition, reasoning, and "common sense", allowing us to fill in missing information: predicting the future from the past and present, or inferring the state of the world from noisy percepts. After a brief presentation of the state of the art in deep learning, some promising principles and methods for prediction-based self-supervised learning will be discussed.
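As a purely illustrative aside (not the speaker's method), the following minimal sketch shows the core idea of prediction-based self-supervised learning: the data supplies its own targets, here by training a small network to predict the next observation from the current one; the architecture and data are placeholder assumptions.

```python
# Minimal sketch of prediction-based self-supervised learning: the model learns
# to predict the next observation from the current one, so no human labels are
# needed. Architecture and data are illustrative placeholders.
import torch
import torch.nn as nn

seq = torch.randn(1000, 16)              # placeholder observation sequence (T x dim)
inputs, targets = seq[:-1], seq[1:]      # predict x[t+1] from x[t]

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    pred = model(inputs)
    loss = nn.functional.mse_loss(pred, targets)   # prediction error drives learning
    opt.zero_grad()
    loss.backward()
    opt.step()
```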
Yann LeCun is Director of AI Research at Facebook and Silver Professor at New York University, affiliated with the Courant Institute, the Center for Neural Science, and the Center for Data Science, for which he served as founding director until 2014. He received an EE Diploma from ESIEE (Paris) in 1983 and a PhD in Computer Science from Université Pierre et Marie Curie (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU in 2003 after a short tenure at the NEC Research Institute. In late 2013, LeCun became Director of AI Research at Facebook, while remaining on the NYU faculty part-time. He was a visiting professor at Collège de France in 2016. His research interests include machine learning and artificial intelligence, with applications to computer vision, natural language understanding, robotics, and computational neuroscience. He is best known for his work in deep learning and the invention of the convolutional network method, which is widely used for image, video, and speech recognition. He is a member of the US National Academy of Engineering, the recipient of the 2014 IEEE Neural Network Pioneer Award, the 2015 IEEE Pattern Analysis and Machine Intelligence Distinguished Researcher Award, and the 2016 Lovie Award for Lifetime Achievement, and he holds an honorary doctorate from IPN, Mexico.
Friday, April 20, 11:00-12:00
Just a decade ago, the field of Autonomous Vehicles (AVs) was relatively esoteric, with only a few research teams around the world focused on it. Fast forward to today: tens of thousands of engineers and scientists around the globe work in this area; AV-related topics dominate the tech press; and there is strong consensus that Autonomous Vehicles will profoundly change people's lives and reshape our cities. However, what is still unclear is exactly how this transformation will take place, or how quickly. At Lyft, the core belief is that deploying AVs as part of hybrid "Transportation as a Service" (TaaS) networks is the safest and most effective way to achieve large scale and impact. In this talk, I will give an overview of Lyft's two-pronged approach to tackling this challenge. First, the company is opening its TaaS network to select third-party AV partners: to the existing real-time dispatch infrastructure, their AVs look just like a car with a driver, but with additional restrictions such as geofences, time of day, pick-up location, or weather. Second, Lyft has fully committed to building its own Self-Driving System as part of the recently announced Lyft “Level 5 Engineering Center”. Not only is the technology being built specifically for TaaS; the team also aims to advance the state of the art by open-sourcing parts of the stack and sharing data with the research community.
Luc Vincent is Vice President of Engineering at Lyft, where he leads the company's Marketplace & Autonomous Platform division. His responsibilities include real-time supply and demand matching, real-time pricing, mapping, and Lyft's “Level 5” group, focused on self-driving technology. Prior to Lyft, Luc spent 12 years at Google, most recently as Sr Director of Engineering, leading all imagery-related activities of Google's Geo group. He is recognized for having bootstrapped Street View and turned it into an iconic Google product. Before Google, he was Chief Scientist at LizardTech, where he was responsible for the DjVu advanced document image compression technology. Prior to that, he spent several years at Xerox Corporation, including a stint as Area Manager at the Xerox Palo Alto Research Center (PARC). Luc is listed as an inventor on almost 100 issued patents and has over 60 publications in the areas of computer vision, image analysis, and document recognition. He has served as an Associate Editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) and for the Journal of Electronic Imaging. Luc earned his B.S. from Ecole Polytechnique (France), an M.S. in Computer Science from University of Paris XI, and a PhD in Mathematical Morphology from Ecole des Mines de Paris. In addition, he was a postdoctoral fellow in the Division of Applied Sciences of Harvard University.