    Lab Research on Fairness-Aware Recommendation Accepted by KBS
    2021-12-24 10:49  

      Recently, a paper by Ph.D. student Liu Haifeng was accepted by Knowledge-Based Systems (KBS), a leading computer-science journal published by Elsevier. KBS is ranked in Zone 1 of the Chinese Academy of Sciences journal partition and is a CCF-recommended Class C journal.

      Title: Dual constraints and adversarial learning for fair recommenders

      Abstract: Recommender systems, which are built on common artificial-intelligence techniques, have a profound impact on people's lifestyles. However, recent studies have demonstrated that recommender systems suffer from fairness problems, meaning that users with certain attributes are treated unfairly. A fair recommender is one in which users with different attributes achieve the same recommendation accuracy. In particular, recommender systems rely entirely on users' behavior data to learn preferences, which makes unfairness highly likely because behavior data usually contains users' sensitive information. Unfortunately, only a few studies have explored unfairness in recommender systems. To alleviate this problem, we present a novel fairness-aware recommender with dual fairness constraints (FRFC) that improves fairness in recommendations and protects users' sensitive information from being exposed. The model has two main advantages: first, an adversarial graph neural network (GNN) is proposed to prevent the target user from being affected by the sensitive features of neighboring users; second, two fairness constraints are proposed to address the failure of the adversarial classifier on the whole dataset and the problem of unfair ranking losses. With this design, the FRFC model can effectively filter out users' sensitive information and give users with different attributes the same training opportunities, which helps produce fair recommendations. Finally, extensive experiments demonstrate that the proposed model significantly improves the fairness of recommendation results.
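      The abstract only outlines the adversarial design at a high level. As an illustration of the general technique it names (adversarial learning that strips sensitive information from user representations), below is a minimal PyTorch sketch using the standard gradient-reversal trick; all class names and the simple embedding encoder are hypothetical stand-ins and do not reproduce the paper's actual GNN architecture or constraints.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on the
    backward pass, so the encoder learns to FOOL the sensitive-attribute
    adversary while the adversary learns to detect the attribute."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class FairRecommender(nn.Module):
    """Hypothetical sketch: an embedding encoder stands in for the paper's GNN."""
    def __init__(self, num_users, num_items, dim=64, lambd=1.0):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        # Adversary tries to recover the sensitive attribute from user embeddings.
        self.adversary = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )
        self.lambd = lambd

    def forward(self, users, items):
        u = self.user_emb(users)
        i = self.item_emb(items)
        score = (u * i).sum(-1)                       # preference score
        rev = GradientReversal.apply(u, self.lambd)   # flip gradients here
        sens_logit = self.adversary(rev).squeeze(-1)  # adversary's prediction
        return score, sens_logit

# Training sketch (hypothetical):
#   score, sens_logit = model(users, items)
#   loss = rec_loss(score, labels) + bce(sens_logit, sensitive_attr.float())
#   loss.backward()   # reversal makes user_emb resist the adversary
```

      In this setup the joint loss is the recommendation loss plus the adversary's classification loss; because of the gradient reversal, minimizing it trains the adversary to detect the sensitive attribute while simultaneously training the user embeddings to hide it.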

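      The abstract states that two fairness constraints handle adversarial-classifier failures and unfair ranking losses, but does not define them. Purely as an illustration of the stated goal of giving users with different attributes the same training opportunities, here is a hypothetical group-parity penalty that equalizes the average per-group loss; it is not taken from the paper.

```python
import torch

def group_balanced_loss(per_sample_loss, group_ids):
    """Hypothetical group-parity penalty (not the paper's constraints).

    per_sample_loss: tensor of shape (batch,), e.g. per-user ranking losses.
    group_ids:       tensor of shape (batch,) giving each sample's
                     sensitive-attribute group (e.g. 0/1 for a binary attribute).

    Penalizes the gap between each group's mean loss and the overall mean,
    so that no attribute group dominates or is starved of training signal.
    """
    base = per_sample_loss.mean()
    group_means = torch.stack([
        per_sample_loss[group_ids == g].mean() for g in group_ids.unique()
    ])
    parity_penalty = (group_means - base).abs().mean()
    return base + parity_penalty
```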
