2022 World AI IoT Congress

CORPORATE KEYNOTE SERIES

                                                    Vasilis Syrgkanis

                                                  (Principal Researcher at Microsoft Research, New England)

Bio: Dr. Syrgkanis is a Principal Researcher at Microsoft Research, New England, where he also co-leads the project on Automated Learning and Intelligence for Causation and Economics (ALICE). His research lies at the intersection of theoretical computer science, machine learning, and economics/econometrics. He received his Ph.D. in Computer Science from Cornell University, where he had the privilege of being advised by Eva Tardos, and then spent two years as a postdoctoral researcher at Microsoft Research, NYC, as part of the Algorithmic Economics and Machine Learning groups. He obtained his diploma in EECS from the National Technical University of Athens, Greece.

                                                Sercan Arik

                                                       (Research Scientist and Manager at Google Cloud AI)

Bio: Sercan Arik is a Research Scientist and Manager at Google Cloud AI. His work is motivated by the mission of democratizing AI and bringing it to the most impactful use cases across Healthcare, Finance, Retail, Media, Education, Communications, and many other industries. Toward this goal, he focuses on making AI high-performance for the most in-demand data types, as well as interpretable, trustworthy, data-efficient, robust, and reliable. He has led research projects that launched as major Google Cloud products and yielded significant business impact, such as TabNet and COVID-19 forecasting. Before joining Google, he was a Research Scientist at Baidu Silicon Valley AI Lab, where he focused on deep learning research, particularly for applications in human-technology interfaces, and co-developed state-of-the-art deep learning-based speech synthesis, keyword spotting, voice cloning, and neural architecture search systems. He completed his Ph.D. in Electrical Engineering at Stanford University and has co-authored more than 50 journal and conference publications.

Title of Talk: Explainable Deep Learning for Structured Data

Abstract: In this talk, we go over three projects we developed to push the limits of deep learning for structured data: TabNet, TFT, and DVRL.

TabNet is a novel high-performance and interpretable canonical deep tabular data learning architecture. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning, as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other neural network and decision tree variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions as well as insights into global model behavior. Finally, for the first time to our knowledge, we demonstrate self-supervised learning for tabular data, significantly improving performance with unsupervised representation learning when unlabeled data is abundant.
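
As a rough illustration of the sequential attention idea, the following Python sketch shows step-wise feature masking in the spirit of TabNet. It is not the paper's implementation: it substitutes a softmax for TabNet's sparsemax, omits the feature transformer blocks, and uses arbitrary dimensions.

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n_features, n_steps = 8, 3
x = rng.normal(size=(4, n_features))      # a batch of tabular rows
prior = np.ones((4, n_features))          # tracks how "available" each feature still is
W = [rng.normal(size=(n_features, n_features)) for _ in range(n_steps)]

agg = np.zeros_like(x)
for step in range(n_steps):
    logits = x @ W[step]                  # attentive transformer (sketch)
    mask = softmax(logits) * prior        # soft feature-selection mask
    mask /= mask.sum(axis=-1, keepdims=True)
    prior = prior * (1.0 - mask)          # discourage re-selecting used features
    agg += mask * x                       # aggregate the masked features across steps

print(mask[0])                            # per-feature attribution at the last step

The per-step masks are what make this family of models interpretable: they directly expose which features each decision step attended to.
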
Multi-horizon forecasting problems often contain a complex mix of inputs — including static (i.e., time-invariant) covariates, known future inputs, and other exogenous time series that are only observed historically — without any prior information on how they interact with the target. While several deep learning models have been proposed for multi-step prediction, they are typically black-box models that do not account for the full range of inputs present in common scenarios. We introduce the Temporal Fusion Transformer (TFT), a novel attention-based architecture which combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, TFT uses recurrent layers for local processing and interpretable self-attention layers for learning long-term dependencies. TFT also uses specialized components for the judicious selection of relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of regimes. On a variety of real-world datasets, we demonstrate significant performance improvements over existing benchmarks and showcase three practical interpretability use cases of TFT.
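
The gating mechanism mentioned above can be illustrated with a small Python sketch of a GLU-style gated residual connection, which is the general idea TFT uses to suppress a block's contribution when it is not needed. The dimensions and weights here are illustrative, not the paper's architecture.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d = 16
W_a = rng.normal(size=(d, d))             # transform applied to the component
W_g = rng.normal(size=(d, d))             # produces the elementwise gate

def gated_skip(x, component):
    # Gate values lie in (0, 1); when the gate saturates near 0, the
    # component is effectively skipped and the residual stream passes
    # through unchanged.
    gate = sigmoid(component @ W_g)
    return x + gate * (component @ W_a)

x = rng.normal(size=(4, d))               # e.g. encoded time-step features
block_out = rng.normal(size=(4, d))       # output of some processing block
print(gated_skip(x, block_out).shape)     # (4, 16)
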
Quantifying the value of data is a fundamental problem in machine learning. Data valuation has multiple important use cases: (1) building insights about the learning task, (2) domain adaptation, (3) corrupted sample discovery, and (4) robust learning. To adaptively learn data values jointly with the target task predictor model, we propose a meta-learning framework which we name Data Valuation using Reinforcement Learning (DVRL). We employ a data value estimator (modeled by a deep neural network) to learn how likely each datum is to be used in training the predictor model. We train the data value estimator using a reinforcement signal of the reward obtained on a small validation set that reflects performance on the target task. We demonstrate that DVRL yields superior data value estimates compared to alternative methods across different types of datasets and in a diverse set of application scenarios. The corrupted sample discovery performance of DVRL is close to optimal in many regimes (i.e., as if the noisy samples were known a priori), and for domain adaptation and robust learning DVRL significantly outperforms the state of the art by 14.6% and 10.8%, respectively.
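
A toy Python sketch of the DVRL training signal follows: a data value estimator assigns selection probabilities to training points, a predictor is fit on a sampled subset, and the validation reward updates the estimator with a REINFORCE-style gradient. The linear models, sizes, and learning rate are illustrative stand-ins for the deep networks in the paper.

import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=0.1, size=n)
y[:20] += rng.normal(scale=5.0, size=20)       # deliberately corrupt 20 samples

X_val = rng.normal(size=(50, d))               # small clean validation set
y_val = X_val @ w_true + rng.normal(scale=0.1, size=50)

theta = np.zeros(d)                            # data value estimator (linear sketch)
baseline = 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))     # selection probability per datum
    s = (rng.random(n) < p).astype(float)      # sample a training subset
    if s.sum() < d:
        continue
    w = np.linalg.lstsq(X[s > 0], y[s > 0], rcond=None)[0]  # fit predictor on subset
    reward = -np.mean((X_val @ w - y_val) ** 2)             # validation performance
    grad = ((s - p)[:, None] * X).sum(axis=0)  # gradient of the log selection prob.
    theta += 1e-3 * (reward - baseline) * grad # REINFORCE update
    baseline = 0.9 * baseline + 0.1 * reward   # moving-average baseline

values = 1.0 / (1.0 + np.exp(-(X @ theta)))
print(values[:20].mean(), values[20:].mean())  # corrupted points should score lower
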
 

                                                       Yu (Hugo) Chen

                                                                     (Research Scientist, Meta AI)

Bio: Yu (Hugo) Chen is a Research Scientist at Meta AI. He received his Ph.D. in Computer Science from Rensselaer Polytechnic Institute. His research interests lie at the intersection of Machine Learning (Deep Learning) and Natural Language Processing, with a particular emphasis on the fast-growing field of Graph Neural Networks and their applications in various domains. His work has been published at top-ranked conferences including NeurIPS, ICML, ICLR, AAAI, IJCAI, NAACL, KDD, WSDM, TheWebConf, ISWC, and AMIA. He received the Best Student Paper Award at AAAI DLGMA’20 and contributed a chapter to the book “Graph Neural Networks: Foundations, Frontiers, and Applications”. He delivered a series of DLG4NLP tutorials at NAACL’21, SIGIR’21, KDD’21, IJCAI’21, AAAI’22, and TheWebConf’22. His work has been covered by popular technology and marketing publications including the World Economic Forum, TechXplore, TechCrunch, Ad Age, and Adweek. He is a co-inventor of 4 US patents.

Title of Talk: Graph Structure Learning for Graph Neural Networks

Abstract: Due to their excellent expressive power for modeling graph-structured data, Graph Neural Networks (GNNs) have achieved great success in various applications such as Natural Language Processing, Computer Vision, recommender systems, and drug discovery. However, this success relies on the quality and availability of graph-structured data, which may be noisy or unavailable. Graph structure learning aims to discover useful graph structures from data, which can help address this issue.

In this talk, I will provide a comprehensive introduction to graph structure learning through the lens of both traditional machine learning and GNNs. Specifically, I will show how this problem has been tackled from different perspectives, for different purposes, and via different techniques, as well as its great potential when combined with GNNs. I will cover recent progress in graph structure learning for GNNs and highlight some promising future directions in this research area.
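
As a concrete flavor of what discovering useful graph structures from data can look like, here is an illustrative Python sketch of one common recipe: learn a similarity metric over node features, sparsify it with k-nearest neighbors to obtain an adjacency matrix, and propagate features over the learned graph in GCN style. All names and sizes here are hypothetical.

import numpy as np

rng = np.random.default_rng(3)
n_nodes, d = 10, 6
X = rng.normal(size=(n_nodes, d))          # node features, no graph given
W_sim = rng.normal(size=(d, d))            # learnable metric parameters

def learned_adjacency(X, k=3):
    Z = X @ W_sim                          # project into a learned space
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    S = Z @ Z.T                            # pairwise cosine similarities
    A = np.zeros_like(S)
    for i in range(len(S)):
        nbrs = np.argsort(S[i])[-(k + 1):] # keep the top-k neighbors (plus self)
        A[i, nbrs] = np.maximum(S[i, nbrs], 0.0)
    return (A + A.T) / 2                   # symmetrize

A = learned_adjacency(X)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1) + 1e-8))
H = D_inv_sqrt @ A @ D_inv_sqrt @ X        # one GCN-style propagation step
print(H.shape)                             # (10, 6)

In practice, W_sim would be trained jointly with the GNN's task loss, so the learned graph adapts to the downstream objective.
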

                                              Chris Boshuizen

                                                           (Co-founder of Planet Labs)

Bio: Chris was a co-founder of Planet Labs, a DCVC company providing unprecedented daily, global mapping of our changing planet from space. As the company’s CTO for five years, he took Planet Labs from the drawing board to having launched more satellites into space than any other company in history, completely transforming the space industry along the way.

Chris was previously a Space Mission Architect at NASA Ames Research Center. After working on a number of traditional spacecraft programs at NASA, Chris co-created PhoneSat, a spacecraft built solely out of a regular smartphone. While at NASA, Chris also established Singularity University, a school for studying the consequences of accelerating technological development. Initially serving as Interim Director, Chris helped raise over $2.5 million to establish the university, assembled the faculty, and served as co-chair of the University’s Department of Space and Sciences. Chris received his Ph.D. in Physics (with honors) and B.Sc. in Physics and Mathematics, both from the University of Sydney.

Title of Talk: Lessons from the transformation of Space Exploration

Abstract: Space has historically been the domain of large government programs, but over the past two decades new technologies have driven the creation of a thriving commercial space sector. New generations of satellite operators can do more in space than the superpower nations of the past. In this talk, Dr. Boshuizen will examine lessons from his first-hand experience developing these spacecraft and share exciting news about recent developments in human spaceflight, including his recent trip to space with Star Trek actor William Shatner.

 

Important Deadlines

Full Paper Submission: 7th May 2022
Acceptance Notification: 19th May 2022
Final Paper Submission: 28th May 2022
Early Bird Registration: 27th May 2022
Presentation Submission: 29th May 2022
Conference: 6–9 June 2022

Previous Conference

IEEE AIIoT 2021

Sister Conferences

IEEE CCWC 2021

IEEE UEMCON 2020

IEEE IEMCON 2020

Announcements


• A Best Paper Award will be given for each track