IEEE/IEIE ICCE-Asia 2018

June 24th (Sun) - June 26th (Tue), 2018 / Ramada Plaza Hotel, Jeju, Korea

Program

Important Dates
  • Submission due
    April 27, 2018 (extended from April 20)
  • Accepted papers and special sessions notification
    May 4, 2018 (extended from April 6)
  • Final submission due
    May 18, 2018 (extended from May 4)
  • Registration due
    June 4, 2018 (extended from May 31)


Tutorials

Explainable Artificial Intelligence: Models and Applications

Presenter

Prof. Jaesik Choi

  • UNIST

Abstract

Recent advances in artificial intelligence have strong potential to change our lives in a disruptive manner. Big data and enhanced computing power have led the way to successful applications in many domains, such as the game of Go and autonomous navigation. However, there are still noticeable gaps in applying recently advanced AI technology to practical applications in our daily lives. One of the main challenges is to make the decisions of AI systems understandable and explainable to humans. For example, many applications in medical diagnosis are not yet successfully used, because the reasons for a decision are often not explainable to medical doctors. In this talk, recent research in explainable artificial intelligence will be introduced. Moreover, the goals of the Explainable AI project supported by the Ministry of Science and ICT and IITP will be presented.

Biography

Prof. Jaesik Choi is an associate professor in the School of Electrical and Computer Engineering at UNIST and an affiliate researcher of the Lawrence Berkeley National Laboratory.

His research is concerned with statistical inference and machine learning for large-scale artificial intelligence problems including scaling up inference algorithms for large-scale dynamic systems, predictive analysis for time series data and its application to large-scale manufacturing systems. Some of his recent research results include the Lifted Relational Kalman Filtering (a scalable, linear time Kalman filtering algorithm), the Spatio-Temporal Pyramid Matching Kernel (the first pyramid matching kernel for spatio-temporal data), and the Relational Automatic Statistician (an automated, explainable interface for multivariate time series data).

He has been an associate professor at UNIST since September 2017. Previously, he was an assistant professor at UNIST from July 2013 and a Computer Scientist Postdoctoral Fellow in the Computational Research Division at the Berkeley Lab. He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2012 and his B.S. degree in Computer Engineering from Seoul National University in 2004.

Currently, he is a POSCO fellow professor and the director of the UNIST Explainable Artificial Intelligence Center.


Deep Neural Networks for Recognition in Videos

Presenter

Prof. Bohyung Han

  • Electrical and Computer Engineering at Seoul National University, Korea

Abstract

Deep neural networks are successful in various recognition problems with video data, such as action classification and localization, object detection in videos, video generation, video captioning, visual tracking, and visual surveillance. However, there are still many challenges, especially compared to recognition in images, that must be overcome to develop practical systems. The main topics of this tutorial include representation learning for videos, neural network architectures of video recognition algorithms, and practical issues in algorithm implementation. In particular, I present recent trends and progress in visual recognition in videos and describe critical limitations of current research on the problem. At the end of the tutorial, I briefly discuss a couple of potentially fruitful directions for research in video recognition.

Biography

Bohyung Han is currently an Associate Professor in the School of Electrical and Computer Engineering at Seoul National University, Korea. Prior to his current position, he was an Associate Professor in the Department of Computer Science and Engineering at POSTECH, Korea, and a visiting research scientist in the Machine Intelligence Group at Google, Venice, CA, USA. He received the B.S. and M.S. degrees from the Department of Computer Engineering at Seoul National University, Korea, in 1997 and 2000, respectively, and the Ph.D. degree from the Department of Computer Science at the University of Maryland, College Park, MD, USA, in 2005. He has served or will serve as an Area Chair or Senior Program Committee member of major conferences in computer vision and machine learning, including CVPR, ICCV, NIPS, IJCAI, and ACCV, as a Tutorial Chair of ICCV 2019, and as a Demo Chair of ACCV 2014. He also serves as an Area Editor of Computer Vision and Image Understanding and an Associate Editor of Machine Vision and Applications. He is interested in various problems in computer vision and machine learning, with an emphasis on deep learning. His research group won the Visual Object Tracking (VOT) Challenge in 2015 and 2016.


Energizing and Powering Microsensors

Presenter

Prof. Gabriel A. Rincón-Mora, Ph.D.

  • NAI Fellow, IEEE Fellow, and IET Fellow; Georgia Institute of Technology

Abstract

Networked wireless microsensors can not only monitor and manage power consumption in small- and large-scale applications for space, military, medical, agricultural, and consumer markets but also add cost-, energy-, and life-saving intelligence to large infrastructures and tiny devices in remote and difficult-to-reach places. Ultra-small systems, however, cannot store sufficient energy to sustain monitoring, interface, processing, and telemetry functions for long. And replacing or recharging the batteries of hundreds of networked nodes can be labor-intensive, expensive, and oftentimes impossible. This is why alternative sources are the subject of ardent research today. Their power densities, however, are low and in many cases intermittent, so supplying functional blocks is challenging. Moreover, tiny lithium-ion batteries and supercapacitors, while power-dense, cannot sustain life for extended periods. This talk illustrates how emerging microelectronic systems can draw energy from elusive ambient sources to power tiny wireless sensors.

Biography

Gabriel A. Rincón-Mora was a Design Team Leader at Texas Instruments from 1994 to 2003, an Adjunct Professor at the Georgia Institute of Technology (Georgia Tech) from 1999 to 2001, and Director of the Georgia Tech Analog Consortium from 2001 to 2004. He has been a Professor at Georgia Tech since 2001 and a Visiting Professor at National Cheng Kung University in Taiwan since 2011. He is a Fellow of the American National Academy of Inventors (NAI), a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and a Fellow of the Institution of Engineering and Technology (IET). His scholarly products include 9 books, 4 book chapters, 42 patents, over 170 articles, over 26 commercial power-chip designs, and over 130 international speaking engagements. He was inducted into Georgia Tech's Council of Outstanding Young Engineering Alumni and named one of "The 100 Most Influential Hispanics" by Hispanic Business magazine. He received the National Hispanic in Technology Award, Charles E. Perry Visionary Award, Orgullo Hispano Award, Hispanic Heritage Award, IEEE Service Award, IEEE Certificate of Appreciation, and a Commendation Certificate from the Lieutenant Governor of California.


Multi-task and continual learning for deep reinforcement learning

Presenter

Prof. Taesup Moon

  • Sungkyunkwan University (SKKU)

Abstract

Recently, deep reinforcement learning, which combines deep neural networks with reinforcement learning, has made significant impacts in several areas that involve interactive environments for learning, e.g., game playing and robotics. Although those impacts were impressive, often achieving superhuman performance as in AlphaGo and Atari game playing, one of the important remaining challenges is to make those algorithms quickly generalize to correlated tasks (multi-task learning) and not forget already learned tasks (continual learning). In this tutorial, I will first review some basic methods that have been shown to be effective for deep RL, such as policy gradient, deep Q-network, and actor-critic methods. Then, I will cover more recent approaches that attempt to address the challenges of multi-task and continual deep RL and summarize their high-level ideas, e.g., model expansion / distillation, Bayesian regularization, modular approaches, and external memory structures. Finally, I will conclude with some potential future research directions.
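The abstract above only names the basic methods; as a hedged, minimal sketch of one of them, here is REINFORCE (a policy-gradient method) on a toy two-armed bandit. The bandit setup, learning rate, and reward means are illustrative assumptions, not material from the tutorial.

```python
import numpy as np

# Toy REINFORCE: a softmax policy over two bandit arms.
# Arm 1 pays 1.0 on average, arm 0 pays 0.2; the policy
# should learn to prefer arm 1.
rng = np.random.default_rng(0)
theta = np.zeros(2)                  # one logit per arm
mean_reward = np.array([0.2, 1.0])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)                       # sample an action
    r = mean_reward[a] + 0.1 * rng.standard_normal() # noisy reward
    # gradient of log pi(a) w.r.t. theta for a softmax policy:
    grad_log = -probs
    grad_log[a] += 1.0
    theta += 0.05 * r * grad_log                     # REINFORCE update

print(softmax(theta))  # probability mass should concentrate on arm 1
```

With all-positive rewards and no baseline the update still works in expectation, but subtracting a baseline (as in the actor-critic methods the tutorial also covers) would reduce its variance.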

Biography

Taesup Moon received the B.S. degree in electrical engineering from Seoul National University in 2002 and the M.S. and Ph.D. degrees in electrical engineering from Stanford University in 2004 and 2008, respectively. From 2008 to 2012, he was a Research Scientist with Yahoo! Labs, Sunnyvale, CA, and he held a Post-Doctoral Researcher appointment with the Department of Statistics, UC Berkeley, from 2012 to 2013. From 2013 to 2015, he was a Research Staff Member with Samsung Advanced Institute of Technology, Samsung Electronics, Inc. From 2015 to 2017, he was an Assistant Professor at the Department of Information and Communication Engineering, Daegu-Gyeongbuk Institute of Science and Technology (DGIST). Since March 2017, he has been an Assistant Professor at the School of Electronic and Electrical Engineering, College of Information and Communication Engineering, Sungkyunkwan University (SKKU).

His research interests include diverse areas such as machine learning (deep learning), information theory, signal processing, speech recognition, and remote sensing. He is a recipient of the GE Scholarship and the Samsung Scholarship.


Deep Reinforcement Learning and AlphaGo

Presenter

Prof. Sae-Young Chung

  • KAIST

Abstract

Deep reinforcement learning can solve seemingly intractable reinforcement learning problems such as learning to play arcade games directly from video and learning to play the game of Go. In this tutorial, I will first cover some basics of deep reinforcement learning, including value-based methods, deep Q-network (DQN), policy-based methods, actor-critic methods, A3C, hierarchical reinforcement learning, issues with exploration and learning speed, and model-based reinforcement learning. I will then talk about AlphaGo and AlphaGo Zero, the principles behind their operation, and how they are trained.
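As a hedged illustration of the value-based methods listed above, here is a minimal tabular Q-learning sketch; DQN replaces the table below with a neural network. The chain environment and hyperparameters are illustrative assumptions, not taken from the tutorial.

```python
import numpy as np

# Tabular Q-learning on a tiny deterministic chain:
# states 0..3, actions {0: left, 1: right}; reaching state 3
# yields reward 1 and ends the episode.
rng = np.random.default_rng(1)
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(3, s + 1)
    r = 1.0 if s2 == 3 else 0.0
    return s2, r, s2 == 3

for episode in range(200):
    s, done = 0, False
    while not done:
        # behavior policy: uniform random (Q-learning is off-policy)
        a = int(rng.integers(n_actions))
        s2, r, done = step(s, a)
        # bootstrap from the greedy value of the next state
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

print(Q.argmax(axis=1))  # states 0-2 prefer action 1 (right);
                         # terminal state 3 is never updated
```

Because Q-learning is off-policy, the uniform-random behavior policy above still recovers the greedy (optimal) action values; DQN adds a replay buffer and a target network on top of this same update.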

Biography

Sae-Young Chung received the B.S. and M.S. degrees in electrical engineering from Seoul National University, Seoul, South Korea, in 1990 and 1992, respectively and the Ph.D. degree in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 2000. From September 2000 to December 2004, he was with Airvana, Inc., Chelmsford, MA, USA. Since January 2005, he has been with the School of Electrical Engineering, KAIST, Daejeon, South Korea, where he is currently Professor. He served as Associate Editor for the IEEE Transactions on Communications from 2009 to 2013 and for the IEEE Transactions on Information Theory from 2014 to 2016. He served as the Technical Program Co-Chair of the 2014 IEEE International Symposium on Information Theory and as the Technical Program Co-Chair of the 2015 IEEE Information Theory Workshop. His research interests include information theory, deep learning, and reinforcement learning.


Bayesian analysis and its applications in AI

Presenter

Prof. Gwangsu Kim

  • KAIST

Abstract

This tutorial will introduce the fundamental concepts of probability, the maximum likelihood estimator, and Bayesian inference, and the process of performing data analysis using the Bayesian approach. It will cover properties of and problems to be aware of in Bayesian analysis, and show examples used in clustering, deep neural networks, and online learning.
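As a worked instance of the contrast between the maximum likelihood estimator and the Bayesian posterior that the tutorial starts from (the Beta-Bernoulli model here is an illustrative assumption, not tutorial material):

```latex
% Bayes' rule for a parameter \theta given data x:
p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}
                        {\int p(x \mid \theta')\, p(\theta')\, \mathrm{d}\theta'}

% Beta-Bernoulli example: prior \theta \sim \mathrm{Beta}(a, b),
% data x: k successes in n Bernoulli trials.
p(\theta \mid x) \propto \theta^{k}(1-\theta)^{n-k}\,
                         \theta^{a-1}(1-\theta)^{b-1}
  \;\Rightarrow\; \theta \mid x \sim \mathrm{Beta}(a+k,\; b+n-k)

% Point estimates to compare:
\hat{\theta}_{\mathrm{MLE}} = \frac{k}{n}, \qquad
\mathbb{E}[\theta \mid x] = \frac{a+k}{a+b+n}
```

For instance, with a uniform prior (a = b = 1), n = 10 trials and k = 7 successes give an MLE of 0.7 but a posterior mean of 8/12 ≈ 0.667, illustrating the regularizing pull of the prior.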

Biography

Gwangsu Kim is a research associate professor in the School of Electrical Engineering at KAIST and a member of the Bayesian research group of the Korean Statistical Society.

His research is concerned with statistical inference and machine learning using Bayesian methods. Bayesian nonparametrics, such as the Dirichlet process, beta process, and Gaussian process (basis expansion), are his main research interests. He has applied Bayesian and machine learning methodologies in various areas and models, such as signal analysis via Gabor frames, structured regression in medical research, detection of abrupt changes in hydrology, nonparametric Bayesian survival analysis, and random-effects analysis in medical research and sociology. His recent work includes submodular sampling and the statistical properties of deep neural networks.

He was a research assistant professor at Seoul National University from March 2016 to August 2017 (and previously from March 2012 to August 2013) and a research professor at Korea University from September 2014 to February 2016. He received his Ph.D. in Statistics from Seoul National University in 2012 and his bachelor's degree from Seoul National University in 2006. He has served as an advisor for projects commissioned by the Ministry of Unification, the Korea Maritime Institute, and other organizations.


Interface between data collection and statistical modeling — an industrial statistician's perspective

Presenter

Prof. Youngdeok Hwang

  • Sungkyunkwan University

Abstract

Statistics has advanced by answering urgent challenges posed by advances in industry, from Fisher's pioneering work on agricultural experiments and Box's contributions to the manufacturing revolution to the recent rise of "data science." This talk briefly reviews the history of statistics from an industrial statistician's perspective. It introduces the interactive nature of data collection and modeling in industrial statistical research with some examples. Also introduced are experiences of interdisciplinary collaboration to solve challenging problems that require effort from diverse backgrounds.

Biography

Youngdeok Hwang is an assistant professor in the Department of Statistics, Sungkyunkwan University (SKKU). Before joining SKKU, he worked as a statistician at the IBM T. J. Watson Research Center from 2012 to 2017. He received his master's and doctoral degrees from the University of Wisconsin in 2010 and 2012, respectively. His research focuses on developing new statistical methodologies for solving problems in industry and engineering, including the design and analysis of computer experiments, remote-sensing technology, wireless sensor networks, and the IoT.


TensorFlow Recipes for Deep Learning Methods

Presenter

Prof. Gil-Jin Jang

  • Kyungpook National University

Abstract

This tutorial is for beginners in deep learning and TensorFlow. Its main objective is to help beginners write their own TensorFlow programs by introducing reference program examples, from basic concepts to advanced network architectures. Topics include primitive TensorFlow data structures such as variables, placeholders, tensors, and matrices; implementation skills for graph building, network layers, loss function design, and training algorithms; basic neural network architectures; convolutional neural networks (CNNs) with MNIST and CIFAR-10 examples; advanced architectures including AlexNet and VGGNet; recurrent neural networks (RNNs) with text processing examples; and extra implementation tips.

Biography

Dr. Gil-Jin Jang is an associate professor at Kyungpook National University, South Korea. He received his B.S. and M.S. degrees in computer science from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 1997 and 1999, respectively, and his Ph.D. degree from the same department in February 2004. From 2004 to 2006 he was a research staff member at the Samsung Advanced Institute of Technology, and from 2006 to 2007 he worked as a research engineer at Softmax, Inc. in San Diego. From 2008 to 2009 he was a postdoctoral scholar at the Shiley Eye Center, University of California, San Diego. From November 2009 to February 2014 he was an assistant professor in the School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST). Dr. Jang's research interests include deep learning and machine learning theory, acoustic signal processing, speech recognition and enhancement, computer vision, multimedia data analysis, and biomedical signal engineering.


Visually-Grounded Question and Answering: from VisualQA to MovieQA

Presenter

  • Jung-Woo Ha (NAVER Corp.)

    Jung-Woo Ha is the leader of the Clova machine learning team and the Clova AI Research Director of NAVER & LINE. He received his B.S. and Ph.D. from the Department of Computer Science at Seoul National University in 2004 and 2015, respectively, and received the department's best Ph.D. thesis award. His interests include deep learning-based computer vision, natural language processing, audio signal modeling, and recommendation.

  • Jin-Hwa Kim (Seoul National University)

    Jin-Hwa Kim is a Ph.D. candidate in the Biointelligence Lab at Seoul National University, studying machine learning and artificial intelligence under the supervision of Professor Byoung-Tak Zhang. In September 2017, he received a 2017 Google Ph.D. Fellowship in Machine Learning. He was a research intern at Facebook AI Research (Menlo Park, CA), mentored by Research Scientist Yuandong Tian, from January to May 2017. He received his Master of Science in Engineering degree from Seoul National University in 2015 and his Bachelor of Engineering degree from Kwangwoon University (summa cum laude) in 2011. He was a software engineer in the Search Infra Development Team at SK Communications (Seoul, Republic of Korea) from 2011 to 2012.

  • Kyung Min Kim (Seoul National University)

    Kyung Min Kim received his B.S. from Hongik University, Seoul, Korea, in 2013. He is currently pursuing a Ph.D. degree in Computer Science in the field of machine learning at Seoul National University while working for Surromind Robotics. He won the MovieQA Challenge in 2017 and has held first place to date. He was awarded the Naver Ph.D. Fellowship in 2017.

Abstract

The advancement of computer vision and natural language processing based on deep learning has led to remarkable progress in visually-grounded language learning. In particular, visual question answering (VQA) has become one of the most popular research topics in computer vision since the VQA challenge started in 2016. Beyond VQA, MovieQA has also emerged as a challenging new multimodal semantic learning task. In this talk, we introduce two visually-grounded question answering tasks, VQA and MovieQA, covering the task definitions, data preparation, and various state-of-the-art approaches for the two multimodal QAs. In addition, we explain the challenging issues of these tasks and discuss their future directions.

