Automated Deep Learning: Theory, Algorithms, Platforms, and Applications


The rapidly increasing use of machine learning, data mining, and data analytics techniques across a broad range of applications creates opportunities for automated model construction and for sharing and reusing models, algorithms, and code, which can shorten time to solution and reduce duplicated effort. Although this holds for a wide range of machine learning tasks, automated model construction is particularly important for deep learning, for several reasons:

(1) deep learning models have many hyperparameters that need to be tuned,

(2) deep learning models take a long time to train, and

(3) only a handful of deep learning architectures are widely used.

In this tutorial we plan to focus on timely topics such as neural architecture search, transfer learning and meta-learning, and deep learning model compression. The tutorial will include a comprehensive survey of state-of-the-art algorithms and systems, a detailed account of the presenters’ research experience, and a live demonstration of platforms built by the Baidu AutoDL team.

There are no formal prerequisites for the tutorial, although general knowledge of supervised learning and deep learning is assumed.


Big data, deep learning, and large-scale computing are shaping AI and transforming our society. In areas such as game playing, image classification, and speech recognition, AI algorithms may have already surpassed the capabilities of human experts. For cognitive tasks such as question answering and text generation, AI delivers performance comparable to human intelligence. We are also observing the transformations AI brings to industry sectors such as social media, finance, and transportation.

Automated deep learning model construction has become a critical problem in current AI research and development. At its core is the study of the design space of deep learning, which yields insights into how deep learning works and why it may fail to deliver the desired results.

Research on automated deep learning should include at least three key components:

(1) neural architecture search,

(2) model construction in a changing environment, such as transfer learning or meta-learning, and

(3) the adaptation of models and architectures to different computing environments, such as mobile devices.

The proposed tutorial is hence organized around these three directions.


We plan to cover a wide range of topics related to automated deep learning model construction, transfer, and compression. Specifically, the outline of the tutorial is:

(1) Neural Architecture Search

a. Deep Reinforcement Learning-based NAS

b. Differentiable Architecture Search

c. Random Search and Evolutionary Search

(2) Deep Learning Model Transfer and Meta Learning

a. Fine-tuning

b. Regularization-based transfer learning

c. Knowledge distillation

(3) Deep Learning Model Compression

a. Pruning, half-precision, and low-rank decomposition

b. Parameter Sharing

c. Knowledge distillation

d. NAS-based Model Compression

(4) AutoML platforms

a. Google Cloud AutoML

b. Microsoft Azure ML

c. Amazon SageMaker

(5) Live-demonstration of Baidu EasyDL and Jarvis (powered by Baidu AutoDL)
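To give a concrete flavor of the random-search baseline for NAS (topic 1c above), the following Python sketch samples architectures from a search space and keeps the best-scoring one. The search space and the `evaluate` stand-in are illustrative assumptions, not part of the tutorial material; a real search would train each sampled model and return its validation accuracy.

```python
import random

# Hypothetical search space for a small feed-forward network.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 4],
    "hidden_units": [32, 64, 128, 256],
    "activation": ["relu", "tanh"],
}

def sample_architecture(space, rng):
    """Draw one architecture uniformly at random from the space."""
    return {name: rng.choice(choices) for name, choices in space.items()}

def evaluate(arch):
    """Stand-in for training the sampled model and returning its
    validation accuracy; purely illustrative."""
    return 0.5 + 0.1 * arch["num_layers"] - 1e-4 * arch["hidden_units"]

def random_search(space, trials=20, seed=0):
    """Sample `trials` architectures and return the best one found."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(space, rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search(SEARCH_SPACE)
```

Despite its simplicity, random search is a standard baseline in the NAS literature, and the reinforcement-learning and differentiable methods in topics 1a and 1b are evaluated against it.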
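For topic 3a, magnitude pruning can be sketched in a few lines of plain Python. The helper below is a framework-free illustration written for this outline; real implementations prune the weight tensors of each layer and typically fine-tune the network afterwards to recover accuracy.

```python
def magnitude_prune(weights, sparsity):
    """Zero out approximately the smallest-magnitude `sparsity`
    fraction of a flat list of weights."""
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights)
    # Threshold at the k-th smallest absolute value. Ties at the
    # threshold are also pruned, so the achieved sparsity can
    # slightly exceed the requested one.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

dense = [0.01, -0.8, 0.05, 1.2, -0.02, 0.4]
sparse = magnitude_prune(dense, 0.5)  # -> [0.0, -0.8, 0.0, 1.2, 0.0, 0.4]
```

The intuition is that small-magnitude weights contribute little to the network's output; pruning them, optionally combined with half-precision storage of the survivors, can shrink a model substantially with modest accuracy loss.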


Dr. Dejing Dou is a Professor in the Computer and Information Science Department at the University of Oregon and leads the Advanced Integration and Mining (AIM) Lab. He is also the Director of the NSF IUCRC Center for Big Learning (CBL). He received his bachelor's degree from Tsinghua University, China, in 1996 and his Ph.D. from Yale University in 2004.

His research areas include artificial intelligence, data mining, data integration, information extraction, and health informatics. Dr. Dejing Dou has published more than 100 research papers, some of which appear in prestigious conferences and journals such as AAAI, IJCAI, KDD, ICDM, ACL, EMNLP, CIKM, ISWC, JIIS, and JoDS. His DEXA'15 paper received the Best Paper Award, and his KDD'07 paper was nominated for the Best Research Paper Award. He serves on the editorial boards of the Journal on Data Semantics, the Journal of Intelligent Information Systems, and PLOS ONE. He has served as a program committee member for various international conferences and as program co-chair for four of them. Dr. Dejing Dou has received over $5 million in PI research grants from the NSF and the NIH.

Other tutors:

Dr. Jun Huan

Dr. Siyu Huang

Dr. Di Hu

Mr. Xingjian Li

Dr. Haoyi Xiong

Dr. Boyang Li


We will distribute the tutorial materials through the AutoDL GitHub [Link:].

Tutorial Info

Our tutorial will be held at ICDM 2019:

19th IEEE International Conference on Data Mining
Beijing, China, 8-11 November 2019


Part I: AutoDL and its Applications by Dr. Dejing Dou

Part II: A Tutorial on Neural Architecture Search by Dr. Siyu Huang

Part III-a: Parameter Regularization Schemes for AutoDL Transfer Learning by Dr. Xuhong Li

Part III-b: AutoDL Transfer Learning by Mr. Xingjian Li

Part IV: Industrializing AI with Baidu AutoDL by Dr. Haoyi Xiong