Research Interests: Artificial Intelligence, Computer Vision, Multimedia Computing, Social Multimedia
Selected Publications:
1. Junyi Wang, Bing-Kun Bao, Changsheng Xu, "Sentiment-Aware Multi-modal Recommendation on Tourist Attractions", MMM, 2019. [Best Paper Runner-Up]
2. Weiqing Min, Bing-Kun Bao, Shuhuan Mei, Yaohui Zhu, Yong Rui, Shuqiang Jiang, "You Are What You Eat: Exploring Rich Recipe Information for Cross-Region Food Analysis", IEEE Transactions on Multimedia (IEEE TMM), 2017.
3. Fudong Nian, Bing-Kun Bao, Teng Li, Changsheng Xu, "Multi-Modal Knowledge Representation Learning via Webly-Supervised Relationships Mining", ACM Multimedia, pp. 411-419, 2017.
4. Bing-Kun Bao, Congyan Lang, Tao Mei, Alberto Del Bimbo, "Learning Multimedia for Real World Applications", Multimedia Tools and Applications (MTAP), vol. 75, no. 5, pp. 2413-2417, 2016.
5. Weiqing Min, Bing-Kun Bao, Changsheng Xu, "An Incremental Probabilistic Model for Temporal Theme Analysis of Landmarks", Multimedia Systems (MMST), vol. 22, no. 4, pp. 465-477, 2016.
6. Bing-Kun Bao, Weiqing Min, Teng Li, Changsheng Xu, "Joint Local and Global Consistency on Inter-document and Inter-word Relationships for Co-clustering", IEEE Transactions on Cybernetics (IEEE TCYB), vol. 45, no. 1, pp. 15-28, 2015.
7. Bing-Kun Bao, Weiqing Min, Changsheng Xu, "Cross-Platform Emerging Topic Detection and Elaboration from Multimedia Streams", ACM Transactions on Multimedia Computing, Communications, and Applications (ACM TOMM), vol. 11, no. 4, pp. 1-21, 2015. [Best Paper]
8. Weiqing Min, Bing-Kun Bao, Changsheng Xu, M. Shamim Hossain, "Cross-Platform Multi-Modal Topic Modeling for Personalized Inter-Platform Recommendation", IEEE Transactions on Multimedia (IEEE TMM), vol. 17, no. 10, pp. 1787-1801, 2015.
9. Chao Sun, Bing-Kun Bao, Changsheng Xu, "Knowing Verb From Object: Retagging With Transfer Learning on Verb-Object Concept Images", IEEE Transactions on Multimedia (IEEE TMM), vol. 17, no. 10, pp. 1747-1759, 2015.
10. Weiqing Min, Bing-Kun Bao, Changsheng Xu, "Multimodal Spatio-Temporal Theme Modeling for Landmark Analysis", IEEE MultiMedia, vol. 21, no. 3, pp. 20-29, 2014. [Best Paper]
11. Bing-Kun Bao, Guangyu Zhu, Jialie Shen, Shuicheng Yan, "Robust Image Analysis with Sparse Representation on Quantized Visual Features", IEEE Transactions on Image Processing (IEEE TIP), vol. 22, no. 3, pp. 860-871, 2013.
12. Bing-Kun Bao, Guangcan Liu, Richang Hong, Shuicheng Yan, Changsheng Xu, "General Subspace Learning with Corrupted Training Data", IEEE Transactions on Image Processing (IEEE TIP), vol. 22, no. 11, pp. 4380-4393, 2013.
13. Bing-Kun Bao, Guangcan Liu, Changsheng Xu, Shuicheng Yan, "Inductive Robust Principal Component Analysis", IEEE Transactions on Image Processing (IEEE TIP), vol. 21, no. 8, pp. 3794-3800, 2012.
14. Bing-Kun Bao, Teng Li, Shuicheng Yan, "Hidden-concept Driven Multi-label Image Annotation and Label Ranking", IEEE Transactions on Multimedia (IEEE TMM), vol. 14, no. 1, pp. 199-210, 2012.
15. Bing-Kun Bao, Bingbing Ni, Yadong Mu, Shuicheng Yan, "Efficient Region-aware Large Graph Construction towards Scalable Multi-label Propagation", Pattern Recognition (PR), vol. 44, no. 3, pp. 589-606, 2011.
Research Projects:
1. National Science Fund for Distinguished Young Scholars (NSFC), Cross Visual-Language Mutual Generation, 2024.01-2028.12, Principal Investigator
2. National Key R&D Program of China, Sci-Tech Innovation 2030 "New Generation Artificial Intelligence" Major Project, Fundamental Theories and Methods of Cognitive Computing, 2020.11-2023.10, Principal Investigator
3. NSFC Key Program, Deep Analysis and Decision-Making on Cross-Modal Social Media, 2020.01-2024.12, Principal Investigator
4. Jiangsu Province Industry-Foresight Technology R&D Project, Key Technologies for Efficient Learning of Large Decision-Making Models, 2023.09-2027.09, Principal Investigator
5. Jiangsu Provincial Natural Science Foundation for Distinguished Young Scholars, Multimedia Data Perception and Analysis, 2021.01-2023.12, Principal Investigator
6. NSFC General Program, Cross-Modal Knowledge Construction, Evolution, and Reasoning for Social Events, 2019.01-2022.12, Principal Investigator
7. NSFC General Program, Analysis and Processing of Geo-Located Internet Big Data for Tourism, 2016-2019, Principal Investigator
8. NSFC Young Scientists Fund, Social Event Analysis Based on Physical and Cyber Spaces, 2013-2015, Principal Investigator
9. Beijing Natural Science Foundation General Program, Social Event Perception, Analysis, and Processing for Social Networks, 2015-2017, Principal Investigator [selected as an outstanding outcome upon completion]
10. Tencent industry collaboration project, Automatic Generation and Evaluation of Advertising Images Based on Visual Cognition, 2018-2019, Principal Investigator
11. China Postdoctoral Science Foundation Special Program, Automatic Detection and Tracking of Hot Events in Physical and Cyber Spaces, 2013-2014, Principal Investigator
12. China Postdoctoral Science Foundation General Program, Multi-Label Image Annotation Based on Internet Resources, 2012-2013, Principal Investigator
Academic Service:
1. MM 2019 Area Chair, MMM 2019 Publicity Chair, MM Asia 2019 Publication Chair, ICME 2019 Workshop Organizer, ICIMCS 2018 Publication Chair, MMM 2018 Special Session Organizer, PCM 2017 Special Session Chair, ICIMCS 2017/2016 Special Session Chair
2. Guest Editor, Multimedia Systems; Guest Editor, Multimedia Tools and Applications