{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T04:12:57Z","timestamp":1760242377044,"version":"build-2065373602"},"reference-count":54,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2017,5,24]],"date-time":"2017-05-24T00:00:00Z","timestamp":1495584000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"the General Program of National Science Foundation of China","award":["61370164"],"award-info":[{"award-number":["61370164"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],"abstract":"<jats:p>Contradiction detection is a task to recognize contradiction relations between a pair of sentences. Despite the effectiveness of traditional context-based word embedding learning algorithms in many natural language processing tasks, such algorithms are not powerful enough for contradiction detection. Contrasting words such as \u201coverfull\u201d and \u201cempty\u201d are mostly mapped into close vectors in such embedding space. To solve this problem, we develop a tailored neural network to learn contradiction-specific word embedding (CWE). The method can separate antonyms in the opposite ends of a spectrum. CWE is learned from a training corpus which is automatically generated from the paraphrase database, and is naturally applied as features to carry out contradiction detection in SemEval 2014 benchmark dataset. Experimental results show that CWE outperforms traditional context-based word embedding in contradiction detection. 
The proposed model for contradiction detection performs comparably to the top-performing system in three-category classification accuracy and improves the accuracy in the contradiction category from 75.97% to 82.08%.<\/jats:p>","DOI":"10.3390\/a10020059","type":"journal-article","created":{"date-parts":[[2017,5,24]],"date-time":"2017-05-24T12:13:11Z","timestamp":1495627991000},"page":"59","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":19,"title":["Contradiction Detection with Contradiction-Specific Word Embedding"],"prefix":"10.3390","volume":"10","author":[{"given":"Luyang","family":"Li","sequence":"first","affiliation":[{"name":"Research Center for Social Computing and Information Retrieval, School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China"}]},{"given":"Bing","family":"Qin","sequence":"additional","affiliation":[{"name":"Research Center for Social Computing and Information Retrieval, School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China"}]},{"given":"Ting","family":"Liu","sequence":"additional","affiliation":[{"name":"Research Center for Social Computing and Information Retrieval, School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China"}]}],"member":"1968","published-online":{"date-parts":[[2017,5,24]]},"reference":[{"key":"ref_1","unstructured":"De Marneffe, M.C., Rafferty, A.N., and Manning, C.D. (2008). Finding Contradictions in Text, ACL."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Condoravdi, C., Crouch, D., De Paiva, V., Stolle, R., and Bobrow, D.G. (June, January 27). Entailment, intensionality and text understanding. Proceedings of the HLT-NAACL 2003 Workshop on Text Meaning, Edmonton, AB, Canada.","DOI":"10.3115\/1119239.1119245"},{"key":"ref_3","unstructured":"Kawahara, D., Inui, K., and Kurohashi, S. (2010, January 23\u201327). 
Identifying contradictory and contrastive relations between statements to outline web information on a given topic. Proceedings of the 23rd International Conference on Computational Linguistics: Posters, Beijing, China."},{"key":"ref_4","unstructured":"Xu, L., Yumoto, T., Aoki, S., Ma, Q., and Yoshikawa, M. (2011, January 4\u20137). Discovering Inconsistency in Multimedia News Based on a Material-Opinion Model. Proceedings of the 2011 44th Hawaii International Conference on System Sciences, Koloa, HI, USA."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Tsytsarau, M., Palpanas, T., and Denecke, K. (2010, January 26\u201330). Scalable discovery of contradictions on the web. Proceedings of the 19th International Conference on World Wide Web, Raleigh, NC, USA.","DOI":"10.1145\/1772690.1772871"},{"key":"ref_6","unstructured":"Poria, S., Cambria, E., Hazarika, D., and Vij, P. (2016). A Deeper Look into Sarcastic Tweets Using Deep Convolutional Neural Networks. arXiv."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Chen, Z., Lin, W., Chen, Q., Chen, X., Wei, S., Zhu, X., and Jiang, H. (2015, January 26\u201331). Revisiting word embedding for contrasting meaning. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015), Beijing, China.","DOI":"10.3115\/v1\/P15-1011"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Mrksic, N., Seaghdha, D., Thomson, B., Gasic, M., Rojasbarahona, L., Su, P.H., Vandyke, D., Wen, T.H., and Young, S. (2016). Counter-Fitting Word Vectors to Linguistic Constraints. arXiv.","DOI":"10.18653\/v1\/N16-1018"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Schwartz, R., Reichart, R., and Rappoport, A. (2015, January 30\u201331). Symmetric Pattern Based Word Embeddings for Improved Word Similarity Prediction. 
Proceedings of the Nineteenth Conference on Computational Natural Language Learning, Beijing, China.","DOI":"10.18653\/v1\/K15-1026"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Liu, Q., Jiang, H., Wei, S., Ling, Z.H., and Hu, Y. (2015, January 26\u201331). Learning semantic word embeddings based on ordinal knowledge constraints. Proceedings of the ACL, Beijing, China.","DOI":"10.3115\/v1\/P15-1145"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"235","DOI":"10.1093\/ijl\/3.4.235","article-title":"WordNet: An on-line lexical database","volume":"3","author":"Miller","year":"1990","journal-title":"Int. J. Lexicogr."},{"key":"ref_12","unstructured":"Ganitkevitch, J., Van Durme, B., and Callison-Burch, C. (2013, January 9\u201315). PPDB: The Paraphrase Database. Proceedings of the NAACL-HLT, Atlanta, GA, USA."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Marelli, M., Bentivogli, L., Baroni, M., Bernardi, R., Menini, S., and Zamparelli, R. (2014, January 23\u201324). Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. Proceedings of the SemEval-2014, Dublin, Ireland.","DOI":"10.3115\/v1\/S14-2001"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Pennington, J., Socher, R., and Manning, C.D. (2014, January 25\u201329). Glove: Global vectors for word representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar.","DOI":"10.3115\/v1\/D14-1162"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Lai, A., and Hockenmaier, J. (2014, January 23\u201324). Illinois-lh: A denotational and distributional approach to semantics. 
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland.","DOI":"10.3115\/v1\/S14-2055"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Bowman, S.R., Potts, C., and Manning, C.D. (2015). Recursive Neural Networks Can Learn Logical Semantics. arXiv.","DOI":"10.18653\/v1\/W15-4002"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"157","DOI":"10.1109\/72.279181","article-title":"Learning long-term dependencies with gradient descent is difficult","volume":"5","author":"Bengio","year":"1994","journal-title":"IEEE Trans. Neural Netw."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","article-title":"Long Short-Term Memory","volume":"9","author":"Hochreiter","year":"1997","journal-title":"Neural Comput."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"2451","DOI":"10.1162\/089976600300015015","article-title":"Learning to Forget: Continual Prediction with LSTM","volume":"12","author":"Gers","year":"2000","journal-title":"Neural Comput."},{"key":"ref_20","first-page":"3104","article-title":"Sequence to Sequence Learning with Neural Networks","volume":"4","author":"Sutskever","year":"2014","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Sak, H., Senior, A., and Beaufays, F. (2014). Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition. arXiv.","DOI":"10.21437\/Interspeech.2014-80"},{"key":"ref_22","unstructured":"Palangi, H., Deng, L., Shen, Y., Gao, J., He, X., Chen, J., Song, X., and Ward, R. (2015). Deep Sentence Embedding Using the Long Short-Term Memory Networks. arXiv."},{"key":"ref_23","unstructured":"Greff, K., Srivastava, R.K., Koutn\u00edk, J., Steunebrink, B.R., and Schmidhuber, J. (2015). LSTM: A Search Space Odyssey. IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Dagan, I., Glickman, O., and Magnini, B. 
(2005, January 11\u201313). The PASCAL recognising textual entailment challenge. Proceedings of the Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, Southampton, UK.","DOI":"10.1007\/11736790_9"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Giampiccolo, D., Magnini, B., Dagan, I., and Dolan, B. (2007, January 28\u201329). The third pascal recognizing textual entailment challenge. Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, Prague, Czech Republic.","DOI":"10.3115\/1654536.1654538"},{"key":"ref_26","first-page":"177","article-title":"The fourth pascal recognising textual entailment challenge","volume":"3944","author":"Giampiccolo","year":"2009","journal-title":"J. Nat. Lang. Eng."},{"key":"ref_27","unstructured":"Voorhees, E.M. (2008, January 15\u201320). Contradictions and Justifications: Extensions to the Textual Entailment Task. Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, Columbus, OH, USA."},{"key":"ref_28","unstructured":"Bentivogli, L., Dagan, I., Dang, H.T., Giampiccolo, D., and Magnini, B. (2009, January 16\u201317). The fifth pascal recognizing textual entailment challenge. Proceedings of the TAC, Gaithersburg, MD, USA."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Dagan, I., Dolan, B., Magnini, B., and Roth, D. (2010). Recognizing textual entailment: Rational, evaluation and approaches\u2014erratum. Nat. Lang. Eng., 15.","DOI":"10.1017\/S1351324909990209"},{"key":"ref_30","unstructured":"Harabagiu, S., Hickl, A., and Lacatusu, F. (2006, January 16\u201320). Negation, contrast and contradiction in text processing. Proceedings of the 21st National Conference on Artificial Intelligence, Boston, MA, USA."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Ritter, A., Downey, D., Soderland, S., and Etzioni, O. (2008, January 25\u201327). 
It\u2019s a contradiction\u2014No, it\u2019s not: A case study using functional relations. Proceedings of the Conference on Empirical Methods in Natural Language Processing, Honolulu, HI, USA.","DOI":"10.3115\/1613715.1613718"},{"key":"ref_32","unstructured":"Magnini, B., and Cabrio, E. (2010, January 10). Contradiction-focused qualitative evaluation of textual entailment. Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, Uppsala, Sweden."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"17","DOI":"10.1145\/2382593.2382599","article-title":"Validating Contradiction in Texts Using Online Co-Mention Pattern Checking","volume":"11","author":"Shih","year":"2012","journal-title":"ACM Trans. Asian Lang. Inf. Process."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Kloetzer, J., Saeger, S.D., Torisawa, K., Hashimoto, C., Oh, J.H., Sanok, M., and Ohtake, K. (2013, January 18\u201321). Two-stage Method for Large-Scale Acquisition of Contradiction Pattern Pairs using Entailment. Proceedings of the EMNLP 2013, Seattle, WA, USA.","DOI":"10.18653\/v1\/D13-1065"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Liu, L.L.Q. (2015). Generating Triples Based on Dependency Parsing for Contradiction Detection. SMP.","DOI":"10.1007\/978-981-10-0080-5_19"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"555","DOI":"10.1162\/COLI_a_00143","article-title":"Computing Lexical Contrast","volume":"39","author":"Mohammad","year":"2013","journal-title":"Comput. Linguist."},{"key":"ref_37","unstructured":"Lin, D., Zhao, S., Qin, L., and Zhou, M. (2003, January 9\u201315). Identifying synonyms among distributionally similar words. Proceedings of the 18th International Joint Conference on Artificial Intelligence, Acapulco, Mexico."},{"key":"ref_38","unstructured":"Hashimoto, C., Torisawa, K., De Saeger, S., Oh, J.H., and Kazama, J. (2012, January 12\u201314). 
Excitatory or inhibitory: A new semantic orientation extracts contradiction and causality from the web. Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Korea."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Chang, K.W., Yih, W.T., and Meek, C. (2013, January 18\u201321). Multi-Relational Latent Semantic Analysis. Proceedings of the EMNLP.","DOI":"10.18653\/v1\/D13-1167"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"146","DOI":"10.1080\/00437956.1954.11659520","article-title":"Distributional structure","volume":"10","author":"Harris","year":"1954","journal-title":"Word"},{"key":"ref_41","first-page":"467","article-title":"Class-based n-gram models of natural language","volume":"18","author":"Brown","year":"1992","journal-title":"Comput. Linguist."},{"key":"ref_42","unstructured":"Uszkoreit, J., and Brants, T. (2008, January 15\u201320). Distributed Word Clustering for Large Scale Class-Based Language Modeling in Machine Translation. Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, Columbus, OH, USA."},{"key":"ref_43","first-page":"1137","article-title":"A neural probabilistic language model","volume":"3","author":"Bengio","year":"2003","journal-title":"J. Mach. Learn. Res."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Collobert, R., and Weston, J. (2008, January 5\u20139). A unified architecture for natural language processing: Deep neural networks with multitask learning. Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland.","DOI":"10.1145\/1390156.1390177"},{"key":"ref_45","first-page":"1081","article-title":"A scalable hierarchical distributed language model","volume":"1","author":"Mnih","year":"2009","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Mikolov, T., Karafi\u00e1t, M., Burget, L., Cernock\u1ef3, J., and Khudanpur, S. (2010, January 26\u201330). Recurrent neural network based language model. Proceedings of the 11th Annual Conference of the International Speech Communication Association, Makuhari, Japan.","DOI":"10.21437\/Interspeech.2010-343"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"42","DOI":"10.1016\/j.knosys.2016.06.009","article-title":"Aspect extraction for opinion mining with a deep convolutional neural network","volume":"108","author":"Poria","year":"2016","journal-title":"Knowl.-Based Syst."},{"key":"ref_48","first-page":"833","article-title":"Stochastic neighbor embedding","volume":"41","author":"Hinton","year":"2002","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"141","DOI":"10.1613\/jair.2934","article-title":"From frequency to meaning: Vector space models of semantics","volume":"37","author":"Turney","year":"2010","journal-title":"J. Artif. Intell. Res."},{"key":"ref_50","first-page":"2493","article-title":"Natural language processing (almost) from scratch","volume":"12","author":"Collobert","year":"2011","journal-title":"J. Mach. Learn. Res."},{"key":"ref_51","first-page":"3111","article-title":"Distributed representations of words and phrases and their compositionality","volume":"26","author":"Mikolov","year":"2013","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_52","unstructured":"Levy, O., and Goldberg, Y. (2014, January 22\u201327). Dependencybased word embeddings. Proceedings of the ACL, Baltimore, MD, USA."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Faruqui, M., Dodge, J., Jauhar, S.K., Dyer, C., Hovy, E., and Smith, N.A. (2014). Retrofitting word vectors to semantic lexicons. 
arXiv.","DOI":"10.3115\/v1\/N15-1184"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Tang, D., Wei, F., Yang, N., Zhou, M., Liu, T., and Qin, B. (2014, January 22\u201327). Learning sentiment-specific word embedding for twitter sentiment classification. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, Baltimore, MD, USA.","DOI":"10.3115\/v1\/P14-1146"}],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/10\/2\/59\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T18:36:46Z","timestamp":1760207806000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/10\/2\/59"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2017,5,24]]},"references-count":54,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2017,6]]}},"alternative-id":["a10020059"],"URL":"https:\/\/doi.org\/10.3390\/a10020059","relation":{},"ISSN":["1999-4893"],"issn-type":[{"type":"electronic","value":"1999-4893"}],"subject":[],"published":{"date-parts":[[2017,5,24]]}}}