- 04-03 赵迪: Distant Supervision for Relation Extraction
- 04-02 张桐瑄: Solving Multi-Label Relation Extraction with an Attention-Based Capsule Network
- 03-28 任玉琪: Applications of Multi-Task Learning in Text Classification and Named Entity Recognition
- 03-28 李智恒: An Introduction to Methods for Complex Named Entity Recognition
- 03-28丁泽源 Glyce: Glyph-vectors for Chinese Character Representations
- 03-28陈彦光 A Hierarchical Framework for Relation Extraction with Reinforcement Learning
Recently, we received notice from the editorial office of the journal *Neurocomputing* that the lab's paper "Incorporating query constraints for autoencoder enhanced ranking", by 徐博 and colleagues, has been accepted.
Abstract: Learning to rank has been widely used in information retrieval to construct ranking models for document retrieval. Existing learning-to-rank methods adopt supervised machine learning as the core technique and classical retrieval models as document features. Since the quality of document features can significantly affect the effectiveness of ranking models, it is necessary to generate effective document features that extend the feature space of learning to rank and better model the relevance between queries and their corresponding documents. Recently, deep neural network models have been used to generate effective features for various text mining tasks. Autoencoders, as one type of neural network building block, capture semantic information as effective features through an encoder-decoder framework. In this paper, we incorporate autoencoders into the construction of learning-to-rank models: autoencoders generate effective document features that capture the semantic information of documents. We propose a query-level semi-supervised autoencoder that considers three types of query constraints based on Bregman divergence. We evaluate our model on datasets from LETOR 3.0 and LETOR 4.0, and show that it significantly outperforms competing methods in retrieval performance.
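To illustrate the encoder-decoder idea behind the abstract, here is a toy numpy sketch of a single-hidden-layer autoencoder whose encoder output serves as a learned document feature vector. This is not the paper's query-level semi-supervised model: the dimensions, random data, and the plain squared-error reconstruction loss (one instance of a Bregman divergence) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and data (assumed): 20 documents, 50 terms, 8 latent features.
n_docs, n_terms, n_features = 20, 50, 8
X = rng.random((n_docs, n_terms))  # stand-in for tf-idf-like document vectors

W_enc = rng.normal(0.0, 0.1, (n_terms, n_features))
W_dec = rng.normal(0.0, 0.1, (n_features, n_terms))
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def recon_loss(W_e, W_d):
    """Mean squared reconstruction error, a Bregman divergence with F(x)=||x||^2."""
    H = sigmoid(X @ W_e)
    return float(np.mean((H @ W_d - X) ** 2))

loss_before = recon_loss(W_enc, W_dec)

for _ in range(500):
    H = sigmoid(X @ W_enc)   # encoder: documents -> latent features
    X_hat = H @ W_dec        # decoder: latent features -> reconstruction
    err = X_hat - X
    # Gradient descent on the squared-error reconstruction loss.
    grad_dec = H.T @ err / n_docs
    grad_enc = X.T @ ((err @ W_dec.T) * H * (1.0 - H)) / n_docs
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss_after = recon_loss(W_enc, W_dec)
# The encoder output is the learned document feature matrix used in ranking.
features = sigmoid(X @ W_enc)  # shape: (n_docs, n_features)
```

In the paper's setting these learned features would be appended to the classical retrieval-model features before training the ranking model; the query constraints would add penalty terms to the reconstruction objective, which the sketch above omits.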