{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T10:07:02Z","timestamp":1775815622924,"version":"3.50.1"},"reference-count":42,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2022,5,3]],"date-time":"2022-05-03T00:00:00Z","timestamp":1651536000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"National Key Research and Development Program of China","award":["2018AAA0101100"],"award-info":[{"award-number":["2018AAA0101100"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Intell. Syst. Technol."],"published-print":{"date-parts":[[2022,8,31]]},"abstract":"<jats:p>Federated learning allows multiple parties to build machine learning models collaboratively without exposing data. In particular, vertical federated learning (VFL) enables participating parties to build a joint machine learning model based upon distributed features of aligned samples. However, VFL requires all parties to share a sufficient amount of aligned samples. In reality, the set of aligned samples may be small, leaving the majority of the non-aligned data unused. In this article, we propose Federated Cross-view Training (FedCVT), a semi-supervised learning approach that improves the performance of the VFL model with limited aligned samples. More specifically, FedCVT estimates representations for missing features, predicts pseudo-labels for unlabeled samples to expand the training set, and trains three classifiers jointly based upon different views of the expanded training set to improve the VFL model\u2019s performance. FedCVT does not require parties to share their original data and model parameters, thus preserving data privacy. We conduct experiments on NUS-WIDE, Vehicle, and CIFAR10 datasets. 
The experimental results demonstrate that FedCVT significantly outperforms vanilla VFL that only utilizes aligned samples. Finally, we perform ablation studies to investigate the contribution of each component of FedCVT to the performance of FedCVT.<\/jats:p>","DOI":"10.1145\/3510031","type":"journal-article","created":{"date-parts":[[2022,2,4]],"date-time":"2022-02-04T22:33:18Z","timestamp":1644013998000},"page":"1-16","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":50,"title":["FedCVT: Semi-supervised Vertical Federated Learning with Cross-view Training"],"prefix":"10.1145","volume":"13","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2016-9503","authenticated-orcid":false,"given":"Yan","family":"Kang","sequence":"first","affiliation":[{"name":"WeBank, Shenzhen, China"}]},{"given":"Yang","family":"Liu","sequence":"additional","affiliation":[{"name":"Institute for AI Industry Research, Tsinghua University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2288-601X","authenticated-orcid":false,"given":"Xinle","family":"Liang","sequence":"additional","affiliation":[{"name":"WeBank, Shenzhen, China"}]}],"member":"320","published-online":{"date-parts":[[2022,5,3]]},"reference":[{"key":"e_1_3_2_2_2","unstructured":"David Berthelot Nicholas Carlini Ian J. Goodfellow Nicolas Papernot Avital Oliver and Colin Raffel. 2019. MixMatch: A holistic approach to semi-supervised learning. Retrieved from http:\/\/arxiv.org\/abs\/1905.02249."},{"key":"e_1_3_2_3_2","first-page":"343","volume-title":"Advances in Neural Information Processing Systems 29","author":"Bousmalis Konstantinos","year":"2016","unstructured":"Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.). Curran Associates, 343\u2013351. 
Retrieved from http:\/\/papers.nips.cc\/paper\/6254-domain-separation-networks.pdf."},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/MIS.2021.3082561"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/1646396.1646452"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D18-1217"},{"key":"e_1_3_2_7_2","volume-title":"Data Protection Laws of the World: Full Handbook","author":"Piper DLA","year":"2018","unstructured":"DLA Piper. 2018. Data Protection Laws of the World: Full Handbook. https:\/\/www.dlapiperdataprotection.com\/."},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jpdc.2004.03.020"},{"key":"e_1_3_2_9_2","first-page":"892","article-title":"Secure linear regression on vertically partitioned datasets","author":"Gasc\u00f3n Adri\u00e0","year":"2016","unstructured":"Adri\u00e0 Gasc\u00f3n, Phillipp Schoppmann, Borja Balle, Mariana Raykova, Jack Doerner, Samee Zahur, and David Evans. 2016. Secure linear regression on vertically partitioned datasets. IACR Cryptol. ePrint Arch. (2016), 892.","journal-title":"IACR Cryptol. ePrint Arch."},{"key":"e_1_3_2_10_2","volume-title":"General Data Protection Regulation","year":"2018","unstructured":"GDPR. 2018. General Data Protection Regulation. https:\/\/gdpr.eu\/."},{"key":"e_1_3_2_11_2","first-page":"16937","volume-title":"Advances in Neural Information Processing Systems","author":"Geiping Jonas","year":"2020","unstructured":"Jonas Geiping, Hartmut Bauermeister, Hannah Dr\u00f6ge, and Michael Moeller. 2020. Inverting gradients - how easy is it to break privacy in federated learning? In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, 16937\u201316947. 
Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/c4ede56bbd98819ae6112b20ac6bf145-Paper.pdf."},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01480"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.5555\/2976040.2976107"},{"key":"e_1_3_2_14_2","doi-asserted-by":"crossref","unstructured":"Otkrist Gupta and Ramesh Raskar. 2018. Distributed learning of deep neural network over multiple agents. Retrieved from http:\/\/arxiv.org\/abs\/1810.06060.","DOI":"10.1016\/j.jnca.2018.05.003"},{"key":"e_1_3_2_15_2","unstructured":"Stephen Hardy Wilko Henecka Hamish Ivey-Law Richard Nock Giorgio Patrini Guillaume Smith and Brian Thorne. 2017. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. Retrieved from http:\/\/arxiv.org\/abs\/1711.10677."},{"key":"e_1_3_2_16_2","volume-title":"Proceedings of the ICML Workshop on Challenges in Representation Learning","author":"Lee Dong Hyun","year":"2013","unstructured":"Dong Hyun Lee. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Proceedings of the ICML Workshop on Challenges in Representation Learning."},{"key":"e_1_3_2_17_2","volume-title":"Advances in Neural Information Processing Systems","author":"Jeon Jinwoo","year":"2021","unstructured":"Jinwoo Jeon, Jaechang Kim, Kangwook Lee, Sewoong Oh, and Jungseul Ok. 2021. Gradient inversion with generative image prior. In Advances in Neural Information Processing Systems, Vol. 34. Curran Associates. 
Retrieved from https:\/\/papers.nips.cc\/paper\/2021\/file\/fa84632d742f2729dc32ce8cb5d49733-Paper.pdf."},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1561\/2200000083"},{"key":"e_1_3_2_19_2","volume-title":"Proceedings of the International Workshop on Federated and Transfer Learning for Data Sparsity and Confidentiality in Conjunction with IJCAI","author":"Kang Yan","year":"2021","unstructured":"Yan Kang, Yang Liu, Yuezhou Wu, Guoqiang Ma, and Qiang Yang. 2021. Privacy-preserving federated adversarial domain adaptation over feature groups for interpretability. In Proceedings of the International Workshop on Federated and Transfer Learning for Data Sparsity and Confidentiality in Conjunction with IJCAI."},{"key":"e_1_3_2_20_2","volume-title":"Proceedings of the 3rd International Conference on Learning Representations (ICLR\u201915)","author":"Kingma Diederik P.","year":"2015","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR\u201915), Yoshua Bengio and Yann LeCun (Eds.). Retrieved from http:\/\/arxiv.org\/abs\/1412.6980."},{"key":"e_1_3_2_21_2","volume-title":"Learning Multiple Layers of Features from Tiny Images","author":"Krizhevsky Alex","year":"2009","unstructured":"Alex Krizhevsky. 2009. Learning Multiple Layers of Features from Tiny Images. Technical Report. University of Toronto."},{"key":"e_1_3_2_22_2","volume-title":"Proceedings of the 5th International Conference on Learning Representations (ICLR\u201917)","author":"Laine Samuli","year":"2017","unstructured":"Samuli Laine and Timo Aila. 2017. Temporal ensembling for semi-supervised learning. In Proceedings of the 5th International Conference on Learning Representations (ICLR\u201917). OpenReview.net. 
Retrieved from http:\/\/dblp.uni-trier.de\/db\/conf\/iclr\/iclr2017.html#LaineA17"},{"key":"e_1_3_2_23_2","volume-title":"Proceedings of the NeurIPS Workshop on Federated Learning for Data Privacy and Confidentiality","author":"Li Daliang","year":"2019","unstructured":"Daliang Li and Junpu Wang. 2019. FedMD: Heterogenous federated learning via model distillation. In Proceedings of the NeurIPS Workshop on Federated Learning for Data Privacy and Confidentiality. Retrieved from http:\/\/arxiv.org\/abs\/1910.03581."},{"key":"e_1_3_2_24_2","unstructured":"Oscar Li Jiankai Sun Xin Yang Weihao Gao Hongyi Zhang Junyuan Xie Virginia Smith and Chong Wang. 2021. Label Leakage and Protection in Two-party Split Learning. Retrieved from http:\/\/arxiv.org\/abs\/2102.08504."},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00107"},{"issue":"226","key":"e_1_3_2_26_2","first-page":"1","article-title":"FATE: An industrial grade platform for collaborative learning with data protection","volume":"22","author":"Liu Yang","year":"2021","unstructured":"Yang Liu, Tao Fan, Tianjian Chen, Qian Xu, and Qiang Yang. 2021. FATE: An industrial grade platform for collaborative learning with data protection. J. Mach. Learn. Res. 22, 226 (2021), 1\u20136. Retrieved from http:\/\/jmlr.org\/papers\/v22\/20-815.html.","journal-title":"J. Mach. Learn. Res."},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/MIS.2020.2988525"},{"key":"e_1_3_2_28_2","volume-title":"Proceedings of the International Conference on Machine Learning Workshop (ICML\u201920)","author":"Liu Yang","year":"2020","unstructured":"Yang Liu, Zhihao Yi, and Tianjian Chen. 2020. Backdoor attacks and defenses in feature-partitioned collaborative learning. In Proceedings of the International Conference on Machine Learning Workshop (ICML\u201920). Retrieved from http:\/\/arxiv.org\/abs\/2007.03608."},{"key":"e_1_3_2_29_2","unstructured":"H. Brendan McMahan Eider Moore Daniel Ramage and Blaise Ag\u00fcera y Arcas. 
2016. Federated learning of deep networks using model averaging. Retrieved from http:\/\/arxiv.org\/abs\/1602.05629."},{"key":"e_1_3_2_30_2","first-page":"396","article-title":"SecureML: A system for scalable privacy-preserving machine learning","author":"Mohassel Payman","year":"2017","unstructured":"Payman Mohassel and Yupeng Zhang. 2017. SecureML: A system for scalable privacy-preserving machine learning. IACR Cryptol. ePrint Arch. (2017), 396.","journal-title":"IACR Cryptol. ePrint Arch."},{"key":"e_1_3_2_31_2","unstructured":"Richard Nock Stephen Hardy Wilko Henecka Hamish Ivey-Law Giorgio Patrini Guillaume Smith and Brian Thorne. 2018. Entity resolution and federated learning get a federated resolution. Retrieved from http:\/\/arxiv.org\/abs\/1803.04035."},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.222"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/1772690.1772767"},{"key":"e_1_3_2_34_2","unstructured":"Xingchao Peng Zijun Huang Yizhe Zhu and Kate Saenko. 2019. Federated Adversarial Domain Adaptation. Retrieved from http:\/\/arxiv.org\/abs\/1911.02054."},{"key":"e_1_3_2_35_2","volume-title":"Proceedings of the NeurIPS Workshop on Federated Learning for Data Privacy and Confidentiality","author":"Peterson Daniel","year":"2019","unstructured":"Daniel Peterson, Pallika Kanani, and Virendra J. Marathe. 2019. Private federated learning with domain adaptation. In Proceedings of the NeurIPS Workshop on Federated Learning for Data Privacy and Confidentiality. Retrieved from http:\/\/arxiv.org\/abs\/1912.06733."},{"key":"e_1_3_2_36_2","first-page":"169","article-title":"On data banks and privacy homomorphisms","author":"Rivest R. L.","year":"1978","unstructured":"R. L. Rivest, L. Adleman, and M. L. Dertouzos. 1978. On data banks and privacy homomorphisms. Found. Secure Comput. (1978), 169\u2013179.","journal-title":"Found. 
Secure Comput."},{"key":"e_1_3_2_37_2","volume-title":"Advances in Neural Information Processing Systems","author":"Tarvainen Antti","year":"2017","unstructured":"Antti Tarvainen and Harri Valpola. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2017\/file\/68053af2923e00204c3ca7c6a3150cf7-Paper.pdf."},{"key":"e_1_3_2_38_2","volume-title":"Advances in Neural Information Processing Systems","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2017\/file\/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf."},{"key":"e_1_3_2_39_2","unstructured":"Praneeth Vepakomma Otkrist Gupta Tristan Swedish and Ramesh Raskar. 2018. Split learning for health: Distributed deep learning without sharing raw patient data. Retrieved from http:\/\/arxiv.org\/abs\/1812.00564."},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3298981"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.5555\/3386107"},{"key":"e_1_3_2_42_2","unstructured":"Bo Zhao Konda Reddy Mopuri and Hakan Bilen. 2020. iDLG: Improved deep leakage from gradients. 
Retrieved from http:\/\/arxiv.org\/abs\/2001.02610."},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-63076-8_2"}],"container-title":["ACM Transactions on Intelligent Systems and Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3510031","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3510031","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:12:24Z","timestamp":1750191144000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3510031"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,5,3]]},"references-count":42,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2022,8,31]]}},"alternative-id":["10.1145\/3510031"],"URL":"https:\/\/doi.org\/10.1145\/3510031","relation":{},"ISSN":["2157-6904","2157-6912"],"issn-type":[{"value":"2157-6904","type":"print"},{"value":"2157-6912","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,5,3]]},"assertion":[{"value":"2021-03-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-12-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-05-03","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}