2022 World AI IoT Congress


                                                    Vasilis Syrgkanis

                                                  (Principal Researcher at Microsoft Research, New England)

Bio: Dr. Syrgkanis is a Principal Researcher at Microsoft Research, New England, where he co-leads the project on Automated Learning and Intelligence for Causation and Economics (ALICE). His research lies at the intersection of theoretical computer science, machine learning, and economics/econometrics. He received his Ph.D. in Computer Science from Cornell University, where he had the privilege of being advised by Eva Tardos, and then spent two years as a postdoctoral researcher at Microsoft Research, NYC, as part of the Algorithmic Economics and Machine Learning groups. He obtained his diploma in EECS at the National Technical University of Athens, Greece.

Title: Towards Automating the Causal Inference Pipeline

Abstract: With the advent of modern automated machine learning platforms, predictive modeling has been made accessible to almost anyone with access to a dataset and an outcome of interest. However, many of the data analytic tasks that data stakeholders face are decision-making tasks that require causal modeling: understanding how the outcome of interest will change when we intervene on one of the variables. Making causal inference as accessible as predictive modeling is one of the main challenges of modern data analytics. The causal inference pipeline requires many more cognitive steps than predictive modeling, such as data cleaning, assumption elicitation, causal validation, sensitivity analysis, inference (uncertainty quantification), and experimentation. Many of these extra components necessitate a human-in-the-loop architecture. We discuss some recent technical advances on the research side, such as dealing with corrupted data within a causal task, automating causal estimation with machine learning, and automating the construction of confidence intervals. We will also give an overview of the growing software ecosystem that makes causal machine learning more accessible to data scientists and decision makers, such as the EconML and DoWhy libraries from Microsoft Research.
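The "automating causal estimation with machine learning" step can be illustrated with a toy double machine learning (DML) sketch of the kind the EconML library implements. Everything below is a simplifying assumption for illustration, not the library's actual code: the data-generating process is invented, linear regressions stand in for arbitrary ML nuisance models, and cross-fitting is omitted.

```python
import numpy as np

# Toy DML sketch: estimate the causal effect of a treatment T on an
# outcome Y in the presence of observed confounders X.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                                # observed confounders
T = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(size=n)    # treatment depends on X
theta = 2.0                                                # true causal effect (assumed)
Y = theta * T + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

def residualize(target, X):
    # Partial out X from the target; a linear model stands in for any ML learner.
    Xb = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xb, target, rcond=None)
    return target - Xb @ coef

# Stage 1: remove the confounders' influence from both outcome and treatment.
Y_res = residualize(Y, X)
T_res = residualize(T, X)
# Stage 2: residual-on-residual regression recovers the causal effect.
theta_hat = (T_res @ Y_res) / (T_res @ T_res)
```

A production estimator would swap the linear residualizers for flexible ML models, add cross-fitting, and construct confidence intervals, which is precisely the kind of automation these libraries provide.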

                                                Sercan Arik

                                                       (Research Scientist and Manager at Google Cloud AI)

Bio: Sercan Arik is a Research Scientist and Manager at Google Cloud AI. His current work is motivated by the mission of democratizing AI and bringing it to the most impactful use cases in healthcare, finance, retail, media, education, communications, and many other industries. Toward this goal, he focuses on making AI higher-performing for the most in-demand data types, as well as interpretable, trustworthy, data-efficient, robust, and reliable. He has led research projects that launched as major Google Cloud products and yielded significant business impact, such as TabNet and COVID-19 forecasting. Before joining Google, he was a Research Scientist at Baidu Silicon Valley AI Lab, where he focused on deep learning research, particularly for applications in human-technology interfaces. He co-developed state-of-the-art deep learning-based speech synthesis, keyword spotting, voice cloning, and neural architecture search systems. He completed his PhD in Electrical Engineering at Stanford University and has co-authored more than 50 journal and conference publications.

Title For Talk: Explainable Deep Learning for Structured Data

Abstract: In this talk, we go over three of our projects, developed to push the limits of deep learning for structured data: TabNet, TFT, and DVRL.

TabNet is a novel high-performance and interpretable canonical deep learning architecture for tabular data. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning, as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other neural network and decision tree variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions as well as insights into the global model behavior. Finally, for the first time to our knowledge, we demonstrate self-supervised learning for tabular data, significantly improving performance with unsupervised representation learning when unlabeled data is abundant.
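As a rough illustration of the sequential-attention idea, and not TabNet's actual architecture, the decision-step masking loop might be sketched as follows. Random logits stand in for the attentive transformer, softmax stands in for sparsemax, and the relaxation factor gamma = 1.5 is an arbitrary choice:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_features, n_steps = 5, 3
x = rng.normal(size=(1, n_features))   # one tabular sample
prior = np.ones((1, n_features))       # all features equally available at first
gamma = 1.5                            # relaxation factor (hypothetical value)

masks = []
for step in range(n_steps):
    logits = rng.normal(size=(1, n_features))  # stands in for the attentive transformer
    mask = softmax(logits) * prior             # softmax stands in for sparsemax
    mask = mask / mask.sum(axis=-1, keepdims=True)
    prior = prior * (gamma - mask)             # features used at this step are down-weighted
    masks.append(mask)
    # The masked input x * mask would feed this step's feature transformer.
```

The per-step masks are what give TabNet its instance-wise feature attributions: averaging them over steps yields an importance profile for each sample.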
Multi-horizon forecasting problems often contain a complex mix of inputs — including static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed historically — without any prior information on how they interact with the target. While several deep learning models have been proposed for multi-step prediction, they typically comprise black-box models which do not account for the full range of inputs present in common scenarios. We introduce the Temporal Fusion Transformer (TFT) — a novel attention-based architecture which combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, the TFT utilizes recurrent layers for local processing and interpretable self-attention layers for learning long-term dependencies. The TFT also uses specialized components for the judicious selection of relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of regimes. On a variety of real-world datasets, we demonstrate significant performance improvements over existing benchmarks, and showcase three practical interpretability use-cases of TFT.
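The "gating layers" mentioned above can be sketched as gated linear units (GLUs), in which a sigmoid gate scales each output component so the network can drive unneeded components toward zero. The shapes and random weights below are purely illustrative, not the TFT's actual parameterization:

```python
import numpy as np

def glu(x, W_a, W_b):
    # Gated linear unit: a sigmoid gate in (0, 1) scales each component
    # of the linear transform, suppressing components the model deems unnecessary.
    gate = 1.0 / (1.0 + np.exp(-(x @ W_b)))
    return (x @ W_a) * gate

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4))        # batch of 2 inputs with 4 features
W_a = rng.normal(size=(4, 4))      # value projection (illustrative)
W_b = rng.normal(size=(4, 4))      # gate projection (illustrative)
out = glu(x, W_a, W_b)
```

Because the gate is bounded in (0, 1), each output component is at most the magnitude of its ungated counterpart, which is what lets the network skip over blocks that do not help a given dataset.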
Quantifying the value of data is a fundamental problem in machine learning. Data valuation has multiple important use cases: (1) building insights about the learning task, (2) domain adaptation, (3) corrupted sample discovery, and (4) robust learning. To adaptively learn data values jointly with the target task predictor model, we propose a meta-learning framework which we name Data Valuation using Reinforcement Learning (DVRL). We employ a data value estimator (modeled by a deep neural network) to learn how likely each datum is to be used in training of the predictor model. We train the data value estimator using a reinforcement signal of the reward obtained on a small validation set that reflects performance on the target task. We demonstrate that DVRL yields superior data value estimates compared to alternative methods across different types of datasets and in a diverse set of application scenarios. The corrupted sample discovery performance of DVRL is close to optimal in many regimes (i.e. as if the noisy samples were known a priori), and for domain adaptation and robust learning DVRL significantly outperforms the state of the art by 14.6% and 10.8%, respectively.
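A heavily simplified sketch of the DVRL idea follows, with a per-sample logit vector standing in for the deep data value estimator and plain least squares standing in for the predictor model. All sizes, learning rates, and the corruption pattern are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
w_true = np.array([1.0, -1.0])
y = X @ w_true + rng.normal(scale=0.1, size=n)
y[:40] += rng.normal(scale=3.0, size=40)         # first 40 training labels are corrupted

X_val = rng.normal(size=(50, 2))                 # small, clean validation set
y_val = X_val @ w_true + rng.normal(scale=0.1, size=50)

logits = np.zeros(n)                             # per-sample "data value" parameters
lr, baseline = 0.5, None
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-logits))            # selection probability = data value
    s = (rng.random(n) < p).astype(float)        # sample a training subset
    mask = s.astype(bool)
    if mask.sum() < 5:
        continue                                 # skip degenerate tiny subsets
    w, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    reward = -np.mean((X_val @ w - y_val) ** 2)  # validation performance as reward
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    # REINFORCE update: favour subsets that improved validation performance.
    logits += lr * (reward - baseline) * (s - p)

values = 1.0 / (1.0 + np.exp(-logits))           # learned per-sample data values
```

With enough iterations, samples whose inclusion hurts validation performance (here, the corrupted ones) should tend toward lower values, which is the mechanism behind DVRL's corrupted-sample discovery use case.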

                                                       Yu (Hugo) Chen

                                                                      (Research Scientist at Meta AI)

Bio: Yu (Hugo) Chen is a Research Scientist at Meta AI. He received his PhD in Computer Science from Rensselaer Polytechnic Institute. His research interests lie at the intersection of Machine Learning (Deep Learning) and Natural Language Processing, with a particular emphasis on the fast-growing field of Graph Neural Networks and their applications in various domains. His work has been published at top-ranked conferences including, but not limited to, NeurIPS, ICML, ICLR, AAAI, IJCAI, NAACL, KDD, WSDM, TheWebConf, ISWC, and AMIA. He received the Best Student Paper Award at AAAI DLGMA’20 and contributed chapters to the book “Graph Neural Networks: Foundations, Frontiers, and Applications”. He has delivered a series of DLG4NLP tutorials at NAACL’21, SIGIR’21, KDD’21, IJCAI’21, AAAI’22, and TheWebConf’22. His work has been covered in popular technology and marketing publications including World Economic Forum, TechXplore, TechCrunch, Ad Age, and Adweek. He is a co-inventor of 4 US patents.

Title For Talk: Graph Structure Learning for Graph Neural Networks

Abstract: Due to the excellent expressive power of Graph Neural Networks (GNNs) in modeling graph-structured data, GNNs have achieved great success in various applications such as natural language processing, computer vision, recommender systems, and drug discovery. However, this success relies on the quality and availability of graph-structured data, which may be noisy or unavailable. Graph structure learning aims to discover useful graph structures from data, helping to address this issue.

In this talk, I will provide a comprehensive introduction to graph structure learning through the lens of both traditional machine learning and GNNs. Specifically, I will show how this problem has been tackled from different perspectives, for different purposes, and via different techniques, as well as its great potential when combined with GNNs. I will cover recent progress in graph structure learning for GNNs and highlight some promising future directions in this research area.
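One common recipe in this area, learning a graph from node features via a learned similarity metric followed by kNN sparsification, can be sketched as follows. The random "learned" metric W, the graph size, and k are all stand-ins for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 6, 4, 2
H = rng.normal(size=(n, d))              # node feature matrix
W = rng.normal(size=(d, d)) * 0.1        # metric parameters (learned in practice; random here)

# Learned pairwise similarities between projected node features.
Z = H @ W
S = Z @ Z.T
np.fill_diagonal(S, -np.inf)             # forbid self-loops

# kNN sparsification: keep each node's top-k most similar neighbours.
A = np.zeros((n, n))
for i in range(n):
    nbrs = np.argsort(S[i])[-k:]
    A[i, nbrs] = 1.0
A = np.maximum(A, A.T)                   # symmetrize the adjacency matrix

deg = A.sum(axis=1)
A_norm = A / deg[:, None]                # row-normalized adjacency for a GNN layer
```

In a full pipeline, W would be trained jointly with the downstream GNN so that the discovered structure directly improves the end task, which is the setting the talk focuses on.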

                                              Chris Boshuizen

                                                           (Co-founder of Planet Labs)

Bio: Dr. Chris Boshuizen is an Australian astronaut, scientist, entrepreneur, investor, and musician. He is currently a Partner at DCVC, a deep tech investment company in San Francisco, where he focuses on funding cutting-edge space companies. Boshuizen completed his PhD in physics at The University of Sydney before accepting a position at the NASA Ames Research Center in California, where he established Singularity University and, most notably, co-created the NASA PhoneSat. After leaving NASA he co-founded Planet Labs, the first company to employ nanosatellites in a commercial capacity, radically reducing the cost of lifting payloads into space and paving the way for today’s large constellations of spacecraft. Today, Planet operates the largest fleet of Earth-observing satellites and maps the entire surface of the Earth daily, enabling key insights into our changing world that were previously unobtainable. Boshuizen was the 2014 Advance Global Australian of the Year award winner and has subsequently become a member of the Advance Board of Directors, where he is an active spokesperson for successful Australians abroad. He is also a musician and releases music under the name “Dr Chrispy”. Dr. Boshuizen flew to space as a commercial astronaut on Blue Origin’s New Shepard NS-18 mission on October 13, 2021.

Title of Talk: Lessons from the Transformation of Space Exploration

Abstract: Space has historically been the domain of large government programs, but over the past two decades new technologies have driven the creation of a thriving commercial space sector. New generations of satellite operators can now do more in space than the superpower nations of the past. In this talk, Dr. Boshuizen will examine lessons from his first-hand experience developing these spacecraft and share exciting news about recent developments in human spaceflight, including his recent trip to space with Star Trek actor William Shatner.


Important Deadlines

Full Paper Submission: 7th May 2022
Acceptance Notification: 19th May 2022
Final Paper Submission: 28th May 2022
Early Bird Registration: 27th May 2022
Presentation Submission: 29th May 2022
Conference: 6 – 9 June 2022

•    A Best Paper Award will be given for each track.