Participation in ACL 2019 (the Annual Meeting of the Association for Computational Linguistics)

2019.08.13

SK telecom’s AI Center participated in ACL 2019, a major conference in the field of natural language processing, held from July 28 to August 2 in Florence, Italy. To keep up with the latest research trends, researchers from T-Brain, a team within the AI Center, attended tutorials on recent research, topic-specific workshops, the main conference sessions, and the keynote speeches, and also hosted a networking session with conference attendees.

At ACL, T-Brain presented three papers and demonstrated the results in the fields of natural language processing and multimodal learning.

First, in the Dialogue and Interactive Systems session, we presented SUMBT: Slot-Utterance Matching for Universal and Scalable Belief Tracking. This paper proposes an architecture that tracks the dialogue state of a goal-oriented dialogue system. The model is designed to respond flexibly to domain as well as scenario expansion, which we demonstrated through experiments. The proposed model applies an attention mechanism to contextual semantic vectors from the Bidirectional Encoder Representations from Transformers (BERT) model and achieved state-of-the-art (SOTA) performance on the public datasets WOZ 2.0 and MultiWOZ. In addition, T-Brain released the source code and resource files used in the experiments, so that other researchers can reproduce the experimental results and use them in comparative studies.

◆ Source Code & Resource: https://github.com/SKTBrain/SUMBT
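For intuition, here is a minimal sketch of the slot-utterance matching idea in Python using the HuggingFace transformers package. It is not the released SUMBT implementation (see the link above): the real model uses trained multi-head attention and an RNN over dialogue turns, and the helper names below (`slot_utterance_matching`, `score_values`) are hypothetical simplifications.

```python
import torch
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

def encode(text):
    """Return BERT token vectors, shape (seq_len, 768), for a piece of text."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        return bert(**inputs).last_hidden_state[0]

def slot_utterance_matching(slot_name, utterance):
    """Attend over utterance tokens, using the slot name as the query."""
    slot_query = encode(slot_name).mean(dim=0)    # crude slot vector (768,)
    word_vecs = encode(utterance)                 # (seq_len, 768)
    attn = F.softmax(word_vecs @ slot_query / 768 ** 0.5, dim=0)
    return attn @ word_vecs                       # slot-conditioned summary

def score_values(summary, candidate_values):
    """Distance-based matching: return the candidate closest to the summary."""
    return min(candidate_values,
               key=lambda v: torch.dist(summary, encode(v).mean(dim=0)).item())

summary = slot_utterance_matching(
    "restaurant price range",
    "I am looking for a cheap place to eat in the centre")
print(score_values(summary, ["cheap", "moderate", "expensive"]))
```

The key design point this sketch preserves is that the slot acts as an attention query over utterance words, and slot values are scored by distance to the attended summary rather than by a fixed classifier, which is what lets the approach scale to new slots and values.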

Second, a paper on Soft Representation Learning for Sparse Transfer was presented in the Machine Learning session. This study proposes a transfer learning method that simultaneously improves the performance of highly related tasks, and applies it to multi-task learning and cross-lingual learning. The proposed method “soft-codes” the shared and private spaces, using adversarial training to prevent the shared space from becoming too sparse. In particular, we confirmed a performance gain by resolving the negative transfer between weakly related tasks that is observed with the hard parameter sharing approach.
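To make the contrast with hard parameter sharing concrete, below is a minimal PyTorch sketch of soft-coded shared/private spaces with an adversarial task discriminator. This illustrates the general soft-sharing principle under our own simplifying assumptions, not the paper’s exact architecture; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward
    pass, giving the shared encoder an adversarial training signal."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

class SoftSharingModel(nn.Module):
    def __init__(self, in_dim=64, hid=32, n_tasks=2, n_classes=2):
        super().__init__()
        self.shared = nn.Linear(in_dim, hid)                      # shared space
        self.private = nn.ModuleList(
            [nn.Linear(in_dim, hid) for _ in range(n_tasks)])     # private spaces
        self.gates = nn.ParameterList(
            [nn.Parameter(torch.zeros(hid)) for _ in range(n_tasks)])
        self.heads = nn.ModuleList(
            [nn.Linear(hid, n_classes) for _ in range(n_tasks)])  # task outputs
        self.task_disc = nn.Linear(hid, n_tasks)                  # adversary

    def forward(self, x, task):
        s = torch.relu(self.shared(x))
        p = torch.relu(self.private[task](x))
        gate = torch.sigmoid(self.gates[task])    # per-dimension mix in (0, 1)
        h = gate * s + (1 - gate) * p             # "soft-coded" representation
        # The task discriminator sees the shared space through gradient
        # reversal, so the shared encoder learns task-invariant features.
        task_logits = self.task_disc(GradReverse.apply(s))
        return self.heads[task](h), task_logits

model = SoftSharingModel()
task_output, task_logits = model(torch.randn(4, 64), task=0)
print(task_output.shape, task_logits.shape)
```

Hard parameter sharing would force every task through the same encoder; here each task instead learns a soft, per-dimension mix of shared and private features, which is one way to avoid the negative transfer described above.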

Finally, CoDraw: Collaborative Drawing as a Testbed for Grounded Goal-driven Communication was presented in the Vision, Robotics, Multimodal, Grounding and Speech session. In this work, we propose CoDraw, a collaborative image-drawing task for building an AI that simultaneously learns language, visual perception, and action by carrying out a goal-oriented collaborative task. The game has two roles: Teller and Drawer. The Teller sees a preconfigured scene composed of various pieces of clip art and describes its content to the Drawer in natural language through a chat interface. The Drawer aims to reconstruct the scene by placing clip art on a blank canvas based on the dialogue. We not only collected ~10K game dialogues consisting of ~138K messages from qualified participants but also proposed a performance metric to quantitatively evaluate the reconstructed scenes. The study revealed that when training AI agents for the two roles, it is important to use an evaluation protocol called crosstalk, in which the Teller and Drawer are trained on non-overlapping data. The results can also be checked through live games with humans. This research was conducted in collaboration with UC Berkeley, Facebook AI Research, and Seoul National University, and the dataset and trained models can be found via the links below.

◆ Dataset: https://github.com/facebookresearch/CoDraw
◆ Trained Models: https://github.com/facebookresearch/codraw-models
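As an illustration of the crosstalk protocol, the sketch below pairs a Teller and a Drawer trained on disjoint halves of the dialogue data. The helpers `train_teller`, `train_drawer`, and `play_game` are hypothetical placeholders; the actual training pipelines live in the repositories linked above.

```python
import random

def crosstalk_split(dialogues, seed=0):
    """Split the game dialogues into two disjoint halves, one per agent."""
    shuffled = dialogues[:]
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def crosstalk_eval(dialogues, scenes, train_teller, train_drawer, play_game):
    """Train the Teller and Drawer on non-overlapping data, then score them."""
    teller_data, drawer_data = crosstalk_split(dialogues)
    teller = train_teller(teller_data)   # never sees drawer_data
    drawer = train_drawer(drawer_data)   # never sees teller_data
    # Agents trained on disjoint data cannot score well by exploiting a
    # shared private "codebook"; they must communicate in language that
    # generalizes, which is what the evaluation is meant to measure.
    scores = [play_game(teller, drawer, scene) for scene in scenes]
    return sum(scores) / len(scores)
```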