Posts by Collection

publications

Matching Neural Network for Extreme Multi-Label Learning

Published in 4th International Conference on Artificial Intelligence Applications and Technologies (AIAAT 2020), 2020

We propose MNN, a neural matching framework for extreme multi-label learning that maps instance features and labels into a shared representation space via contrastive learning, mitigating the data scarcity that makes tail labels hard to predict.

Recommended citation: Zhao, Z., Li, F., Zuo, Y., & Wu, J. (2020, September). Matching Neural Network for Extreme Multi-Label Learning. In Journal of Physics: Conference Series (Vol. 1642, No. 1, p. 012013). IOP Publishing. https://iopscience.iop.org/article/10.1088/1742-6596/1642/1/012013/meta
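The core matching idea can be sketched in a few lines. Below is a minimal, illustrative PyTorch version, not the paper's code: a feature encoder and a label embedding table are trained with an InfoNCE-style contrastive loss so that each instance lands near its positive labels; the architecture, dimensions, and temperature are assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchingNet(nn.Module):
    """Encodes instances and labels into one space and scores them by dot product."""
    def __init__(self, feat_dim, num_labels, embed_dim=128):
        super().__init__()
        self.feat_encoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))
        self.label_embed = nn.Embedding(num_labels, embed_dim)

    def forward(self, x):
        z = F.normalize(self.feat_encoder(x), dim=-1)     # instance embeddings
        l = F.normalize(self.label_embed.weight, dim=-1)  # label embeddings
        return z @ l.t()                                  # matching scores [B, L]

def contrastive_loss(scores, y, tau=0.1):
    # InfoNCE-style loss (an assumption): push each instance toward its
    # positive labels and away from the rest via a softmax over all labels.
    logp = F.log_softmax(scores / tau, dim=-1)
    return -(y * logp).sum(-1).div(y.sum(-1).clamp(min=1)).mean()

model = MatchingNet(feat_dim=500, num_labels=1000)
x = torch.randn(8, 500)                        # stand-in instance features
y = (torch.rand(8, 1000) < 0.01).float()       # sparse multi-hot label matrix
contrastive_loss(model(x), y).backward()
```

Because tail labels share the embedding space with head labels, a matching formulation of this kind lets them benefit from geometry learned on better-represented labels.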

Where to go? Predicting next location in IoT environment

Published in Frontiers of Computer Science, 2021

We propose TSIS, a neural model that jointly learns from trajectory transitions and IoT signal sequences via gated GNNs and GRUs for accurate next-location prediction in IoT environments.

Recommended citation: Lin, H., Liu, G., Li, F., & Zuo, Y. (2021). Where to go? Predicting next location in IoT environment. Frontiers of Computer Science, 15(1), 151306. https://link.springer.com/article/10.1007/s11704-019-9118-9
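As a rough illustration of the two ingredients named above, the sketch below combines one gated-GNN propagation step over a location-transition graph with a GRU over the visit sequence. The fusion scheme, the class names (GatedGraphLayer, TSISLike), and all dimensions are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GatedGraphLayer(nn.Module):
    """One gated-GNN step: aggregate neighbor messages, then a GRU-style update."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.update = nn.GRUCell(dim, dim)

    def forward(self, h, adj):        # h: [N, d] location states, adj: [N, N]
        return self.update(adj @ self.msg(h), h)

class TSISLike(nn.Module):
    def __init__(self, num_locs, dim=64):
        super().__init__()
        self.loc_embed = nn.Embedding(num_locs, dim)
        self.gnn = GatedGraphLayer(dim)                    # transition-graph view
        self.seq_rnn = nn.GRU(dim, dim, batch_first=True)  # sequence view
        self.out = nn.Linear(2 * dim, num_locs)

    def forward(self, traj, adj):     # traj: [B, T] visited location ids
        h = self.gnn(self.loc_embed.weight, adj)  # graph-refined embeddings
        seq = h[traj]                             # look up the visit sequence
        _, last = self.seq_rnn(seq)               # recurrent summary of the trip
        ctx = seq.mean(dim=1)                     # simple second view (assumption)
        return self.out(torch.cat([last[-1], ctx], dim=-1))  # next-location logits

model = TSISLike(num_locs=50)
adj = torch.rand(50, 50)                           # stand-in transition weights
logits = model(torch.randint(0, 50, (4, 6)), adj)  # [4, 50]
```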

BoostXML: Gradient Boosting for Extreme Multilabel Text Classification With Tail Labels

Published in IEEE Transactions on Neural Networks and Learning Systems, 2023

We present BoostXML, an extreme multilabel text classification method that enhances a deep text encoder with gradient boosting to improve tail-label prediction. It combines a Boosting Step that fits the residuals of unfitted tail-label instances, a Corrective Step that avoids optimization mismatches, and a Pretraining Step that balances the model's focus across labels.

Recommended citation: Li, F., Zuo, Y., Lin, H., & Wu, J. (2023). BoostXML: Gradient boosting for extreme multilabel text classification with tail labels. IEEE Transactions on Neural Networks and Learning Systems, 35(11), 15292-15305. https://ieeexplore.ieee.org/abstract/document/10161991
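The boosting mechanics can be illustrated with a toy residual-fitting loop. The sketch below uses linear weak learners on fixed features and a logistic-loss residual; it shows only the additive update of the Boosting Step, marks where a Corrective Step would go, and every choice here (learner type, shrinkage, round counts) is a simplifying assumption.

```python
import torch
import torch.nn as nn

def boost(x, y, num_rounds=3, shrinkage=0.5):
    """Toy additive boosting on multi-label scores with logistic-loss residuals."""
    B, D = x.shape
    L = y.shape[1]
    learners, F_scores = nn.ModuleList(), torch.zeros(B, L)
    for _ in range(num_rounds):
        weak = nn.Linear(D, L)                       # weak learner (assumption)
        opt = torch.optim.Adam(weak.parameters(), lr=1e-2)
        for _ in range(200):                         # Boosting Step: fit the residual,
            residual = y - torch.sigmoid(F_scores)   # i.e. the loss's negative gradient
            opt.zero_grad()
            ((weak(x) - residual) ** 2).mean().backward()
            opt.step()
        with torch.no_grad():
            F_scores = F_scores + shrinkage * weak(x)  # additive ensemble update
        learners.append(weak)
        # A Corrective Step would re-tune all learners jointly here so the additive
        # updates stay consistent with the end-to-end objective (omitted in this toy).
    return learners, F_scores

x = torch.randn(64, 20)                              # stand-in document features
y = (torch.rand(64, 10) < 0.1).float()               # sparse multi-hot labels
learners, scores = boost(x, y)
```

Because later rounds see only what earlier rounds failed to fit, instances dominated by tail labels naturally receive increasing attention as boosting proceeds.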

LLMs as Zero-shot Graph Learners: Alignment of GNN Representations with LLM Token Embeddings

Published in 38th Conference on Neural Information Processing Systems (NeurIPS 2024), 2024

We propose TEA-GLM, a zero-shot graph learning framework that aligns GNN representations with LLM token embeddings via a fixed linear projector, enabling cross-task and cross-dataset generalization without LLM fine-tuning.

Recommended citation: Wang, D., Zuo, Y., Li, F., & Wu, J. (2024). LLMs as zero-shot graph learners: Alignment of GNN representations with LLM token embeddings. Advances in Neural Information Processing Systems, 37, 5950-5973. https://proceedings.neurips.cc/paper_files/paper/2024/file/0b77d3a82b59e9d9899370b378087faf-Paper-Conference.pdf
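A minimal sketch of the alignment idea, under stated assumptions: a single linear projector maps frozen GNN outputs to a fixed number of soft "graph tokens" in the LLM's token-embedding space, trained with a contrastive objective. The dimensions, token count, and loss below are illustrative, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

gnn_dim, llm_dim, n_graph_tokens = 256, 512, 8    # llm_dim is e.g. 4096 for a 7B model
projector = nn.Linear(gnn_dim, n_graph_tokens * llm_dim)  # the only trained module

def graph_tokens(node_emb):
    # Map a frozen GNN representation to soft "graph tokens" that live in the
    # LLM's token-embedding space and can be prepended to its input sequence.
    return projector(node_emb).view(-1, n_graph_tokens, llm_dim)

def alignment_loss(node_emb, token_emb, tau=0.07):
    # Contrastive alignment (an assumption): match each graph's pooled tokens
    # with the token embedding of its paired text, against in-batch negatives.
    g = F.normalize(graph_tokens(node_emb).mean(dim=1), dim=-1)
    t = F.normalize(token_emb, dim=-1)
    logits = (g @ t.t()) / tau
    return F.cross_entropy(logits, torch.arange(g.size(0)))

node_emb = torch.randn(4, gnn_dim)                # stand-in frozen GNN outputs
token_emb = torch.randn(4, llm_dim)               # stand-in LLM token embeddings
alignment_loss(node_emb, token_emb).backward()    # only the projector gets gradients
```

Keeping both the GNN and the LLM frozen is what makes the approach zero-shot: at inference, new tasks and datasets only pass through the fixed projector.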

Memory-Tuning: A Unified Parameter-Efficient Tuning Method for Pre-Trained Language Models

Published in IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024

We propose Memory-Tuning, a parameter-efficient method that unifies task-specific knowledge learning for both the multi-head attention and the feed-forward network in Transformers. We establish a theoretical link to prefix tuning, and Memory-Tuning outperforms full fine-tuning on eight benchmarks covering both sentence- and token-level tasks.

Recommended citation: Qi, W., Liu, R., Zuo, Y., Li, F., Chen, Y., & Wu, J. (2024). Memory-Tuning: A Unified Parameter-Efficient Tuning Method for Pre-trained Language Models. IEEE/ACM Transactions on Audio, Speech, and Language Processing. https://ieeexplore.ieee.org/abstract/document/10769026
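To make the "unified" aspect concrete, here is an illustrative block that injects trainable memory slots into both sub-layers of a frozen Transformer layer: prefix-like key/value memory in the attention, and extra key/value pairs in the feed-forward network (reading the FFN itself as a key/value memory). All names, sizes, and initializations are assumptions for the sketch, not the paper's formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_mem=16, d_ff=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff1, self.ff2 = nn.Linear(d_model, d_ff), nn.Linear(d_ff, d_model)
        for p in self.parameters():
            p.requires_grad_(False)              # freeze the "pre-trained" weights
        # Trainable memories, the only parameters that are updated:
        self.attn_mem = nn.Parameter(torch.randn(n_mem, d_model) * 0.02)   # attention K/V
        self.ffn_mem_k = nn.Parameter(torch.randn(n_mem, d_model) * 0.02)  # FFN keys
        self.ffn_mem_v = nn.Parameter(torch.randn(n_mem, d_model) * 0.02)  # FFN values

    def forward(self, x):                        # x: [B, T, d_model]
        mem = self.attn_mem.unsqueeze(0).expand(x.size(0), -1, -1)
        kv = torch.cat([mem, x], dim=1)          # prefix-like memory joins keys/values
        h, _ = self.attn(x, kv, kv)
        x = x + h
        inner = self.ff2(F.relu(self.ff1(x)))    # frozen FFN path
        extra = F.relu(x @ self.ffn_mem_k.t()) @ self.ffn_mem_v  # new FFN memory slots
        return x + inner + extra

block = MemoryBlock()
out = block(torch.randn(2, 10, 64))
trainable = [n for n, p in block.named_parameters() if p.requires_grad]  # memories only
```

Treating the attention prefix and the FFN slots as the same kind of object (rows of a key/value memory) is what allows one mechanism to cover both sub-layers, which is also where the connection to prefix tuning comes from.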
