📝 Selected Publications

(* equal contribution)

arXiv

AV-DiT: Efficient Audio-Visual Diffusion Transformer for Joint Audio and Video Generation

Kai Wang, Shijian Deng, Jing Shi, Dimitrios Hatzinakos, Yapeng Tian.

Under Review

  • We design an efficient audio-visual diffusion transformer to generate high-quality, realistic videos with aligned visual and audio tracks.
NeurIPS 2024

MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark

Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, Wenhu Chen.

NeurIPS 2024 (Spotlight)

EMNLP 2024

VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation

Xuan He, Dongfu Jiang, Ge Zhang, Max Ku, Achint Soni, Sherman Siu, Haonan Chen, Abhranil Chandra, Ziyan Jiang, Aaran Arulraj, Kai Wang, Quy Duc Do, Yuansheng Ni, Bohan Lyu, Yaswanth Narsupalli, Rongqi Fan, Zhiheng Lyu, Yuchen Lin, Wenhu Chen.

EMNLP 2024 (Main)

CVPR 2024

Towards Efficient Audio-Visual Learners via Empowering Pre-trained Vision Transformers with Cross-Modal Adaptation

Kai Wang, Yapeng Tian, Dimitrios Hatzinakos.

CVPR 2024 Workshop

  • We propose Spatial-Temporal-Global Cross-Modal Adaptation (STG-CMA) to gradually equip frozen ViTs with the capability to learn audio-visual representations.
Pattern Recognition Letters

HARWE: A multi-modal large-scale dataset for context-aware human activity recognition in smart working environments

Alireza Esmaeilzehi*, Ensieh Khazaei*, Kai Wang*, Navjot Kaur Kalsi, Pai Chet Ng, Huan Liu, Yuanhao Yu, Dimitrios Hatzinakos, Konstantinos Plataniotis.

Pattern Recognition Letters

  • We introduce a novel multi-modal dataset for human activity recognition, with labels tailored to smart working environments.
ICASSP 2024

MoMA: Mixture-of-Modality-Adaptations for Transferring Knowledge from Image Models Towards Efficient Audio-Visual Action Recognition

Kai Wang, Dimitrios Hatzinakos.

ICASSP 2024 (Oral)

  • We propose a novel parameter-efficient scheme called Mixture-of-Modality-Adaptations (MoMA) for audio-visual action recognition.
APSIPA 2023

SEformer: Dual-Path Conformer Neural Network is a Good Speech Denoiser

Kai Wang, Dimitrios Hatzinakos.

APSIPA 2023 (Oral)

  • We propose the SEformer, an efficient dual-path conformer neural network for speech enhancement.
IWAENC 2022
ISCAS 2021

CAUNet: Context-Aware U-Net for Speech Enhancement in Time Domain

Kai Wang, Bengbeng He, Wei-Ping Zhu.

ISCAS 2021

ICASSP 2021