{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T02:29:47Z","timestamp":1760236187977,"version":"build-2065373602"},"reference-count":40,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2021,10,31]],"date-time":"2021-10-31T00:00:00Z","timestamp":1635638400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"the Science and Technology Project of Jiangsu Province, China","award":["BK20200978"],"award-info":[{"award-number":["BK20200978"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>Neural auto-regressive sequence-to-sequence models have been dominant in text generation tasks, especially the question generation task. However, neural generation models suffer from the global and local semantic semantic drift problems. Hence, we propose the hierarchical encoding\u2013decoding mechanism that aims at encoding rich structure information of the input passages and reducing the variance in the decoding phase. In the encoder, we hierarchically encode the input passages according to its structure at four granularity-levels: [word, chunk, sentence, document]-level. Second, we progressively select the context vector from the document-level representations to the word-level representations at each decoding time step. At each time-step in the decoding phase, we progressively select the context vector from the document-level representations to word-level. We also propose the context switch mechanism that enables the decoder to use the context vector from the last step when generating the current word at each time-step.It provides a means of improving the stability of the text generation process during the decoding phase when generating a set of consecutive words. 
Additionally, we inject syntactic parsing knowledge to enrich the word representations. Experimental results show that our proposed model substantially improves performance and outperforms previous baselines according to both automatic and human evaluation. Furthermore, we conduct a deep and comprehensive analysis of the generated questions based on their types.<\/jats:p>","DOI":"10.3390\/e23111449","type":"journal-article","created":{"date-parts":[[2021,11,1]],"date-time":"2021-11-01T22:21:08Z","timestamp":1635805268000},"page":"1449","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Multi-Hop Question Generation Using Hierarchical Encoding-Decoding and Context Switch Mechanism"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0143-6220","authenticated-orcid":false,"given":"Tianbo","family":"Ji","sequence":"first","affiliation":[{"name":"School of Transportation and Civil Engineering, Nantong University, Nantong 226019, China"}]},{"given":"Chenyang","family":"Lyu","sequence":"additional","affiliation":[{"name":"Science Foundation Ireland Centre for Research Training in Machine Learning, School of Computing, Dublin City University, Dublin 9, Ireland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7815-3889","authenticated-orcid":false,"given":"Zhichao","family":"Cao","sequence":"additional","affiliation":[{"name":"School of Transportation and Civil Engineering, Nantong University, Nantong 226019, China"}]},{"given":"Peng","family":"Cheng","sequence":"additional","affiliation":[{"name":"Alibaba Group, Hangzhou 311121, China"}]}],"member":"1968","published-online":{"date-parts":[[2021,10,31]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1186\/s41039-021-00151-1","article-title":"Automatic question generation and answer assessment: A survey","volume":"16","author":"Das","year":"2021","journal-title":"Res. Pract. Technol. 
Enhanc. Learn."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"612","DOI":"10.1109\/TE.2005.856149","article-title":"AutoTutor: An intelligent tutoring system with mixed-initiative dialogue","volume":"48","author":"Graesser","year":"2005","journal-title":"IEEE Trans. Educ."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"121","DOI":"10.1007\/s40593-019-00186-y","article-title":"A systematic review of automatic question generation for educational purposes","volume":"30","author":"Kurdi","year":"2020","journal-title":"Int. J. Artif. Intell. Educ."},{"key":"ref_4","first-page":"43","article-title":"Question generation","volume":"12","author":"Room","year":"2020","journal-title":"Algorithms"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Yang, Z., Qi, P., Zhang, S., Bengio, Y., Cohen, W., Salakhutdinov, R., and Manning, C.D. (November, January 31). HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium.","DOI":"10.18653\/v1\/D18-1259"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Heilman, M., and Smith, N.A. (2009, January 01). Question Generation via Overgenerating Transformations and Ranking. Available online: https:\/\/apps.dtic.mil\/sti\/citations\/ADA531042.","DOI":"10.21236\/ADA531042"},{"key":"ref_7","unstructured":"Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., and Weinberger, K.Q. (2014). Sequence to Sequence Learning with Neural Networks. Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_8","unstructured":"Du, X., Shao, J., and Cardie, C. (August, January 30). Learning to Ask: Neural Question Generation for Reading Comprehension. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, BC, Canada."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Zhang, S., and Bansal, M. 
(2019, January 3\u20137). Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China.","DOI":"10.18653\/v1\/D19-1253"},{"key":"ref_10","unstructured":"Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C.D., Ng, A., and Potts, C. (2013, January 18\u201321). Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, WA, USA."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Tai, K.S., Socher, R., and Manning, C.D. (2015, January 26\u201331). Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Beijing, China.","DOI":"10.3115\/v1\/P15-1150"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., and Hovy, E. (2016, January 12\u201317). Hierarchical Attention Networks for Document Classification. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, CA, USA.","DOI":"10.18653\/v1\/N16-1174"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Papineni, K., Roukos, S., Ward, T., and Zhu, W.J. (2002, January 7\u201312). Bleu: A Method for Automatic Evaluation of Machine Translation. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, PA, USA.","DOI":"10.3115\/1073083.1073135"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Lin, C.Y., and Hovy, E. (June, January 27). 
Automatic evaluation of summaries using n-gram co-occurrence statistics. Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, AB, Canada.","DOI":"10.3115\/1073445.1073465"},{"key":"ref_15","unstructured":"Banerjee, S., and Lavie, A. (2005, January 9\u201310). METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and\/or Summarization, Ann Arbor, MI, USA."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"393","DOI":"10.1162\/coli_a_00322","article-title":"A Structured Review of the Validity of BLEU","volume":"44","author":"Reiter","year":"2018","journal-title":"Comput. Linguist."},{"key":"ref_17","unstructured":"Pan, L., Lei, W., Chua, T., and Kan, M. (2019). Recent Advances in Neural Question Generation. arXiv."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Sun, X., Liu, J., Lyu, Y., He, W., Ma, Y., and Wang, S. (November, January 31). Answer-focused and Position-aware Neural Question Generation. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium.","DOI":"10.18653\/v1\/D18-1427"},{"key":"ref_19","unstructured":"See, A., Liu, P.J., and Manning, C.D. (August, January 30). Get To The Point: Summarization with Pointer-Generator Networks. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, BC, Canada."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Ma, X., Zhu, Q., Zhou, Y., Li, X., and Wu, D. (2020). Improving Question Generation with Sentence-level Semantic Matching and Answer Position Inferring. arXiv.","DOI":"10.1609\/aaai.v34i05.6366"},{"key":"ref_21","unstructured":"Chen, Y., Wu, L., and Zaki, M.J. (2019, January 6\u20139). 
Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation. Proceedings of the 2019 International Conference on Learning Representations, New Orleans, LA, USA."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Dhole, K., and Manning, C.D. (2020, January 5\u201310). Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online Conference.","DOI":"10.18653\/v1\/2020.acl-main.69"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016, January 1\u20135). SQuAD: 100,000+ Questions for Machine Comprehension of Text. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, TX, USA.","DOI":"10.18653\/v1\/D16-1264"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Zhou, W., Zhang, M., and Wu, Y. (2019, January 3\u20137). Question-type Driven Question Generation. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China.","DOI":"10.18653\/v1\/D19-1622"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Tuan, L.A., Shah, D., and Barzilay, R. (2020, January 7\u201312). Capturing greater context for question generation. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i05.6440"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Duan, N., Tang, D., Chen, P., and Zhou, M. (2017, January 7\u201311). Question Generation for Question Answering. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark.","DOI":"10.18653\/v1\/D17-1090"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Gupta, D., Chauhan, H., Akella, R.T., Ekbal, A., and Bhattacharyya, P. 
(2020, January 13\u201318). Reinforced Multi-task Approach for Multi-hop Question Generation. Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain.","DOI":"10.18653\/v1\/2020.coling-main.249"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Pan, L., Xie, Y., Feng, Y., Chua, T.S., and Kan, M.Y. (2020, January 5\u201310). Semantic Graphs for Generating Deep Questions. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online.","DOI":"10.18653\/v1\/2020.acl-main.135"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Xie, Y., Pan, L., Wang, D., Kan, M.Y., and Feng, Y. (2020, January 13\u201318). Exploring Question-Specific Rewards for Generating Deep Questions. Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain.","DOI":"10.18653\/v1\/2020.coling-main.228"},{"key":"ref_30","unstructured":"Bahdanau, D., Cho, K., and Bengio, Y. (2015, January 7\u20139). Neural Machine Translation by Jointly Learning to Align and Translate. Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"270","DOI":"10.1162\/neco.1989.1.2.270","article-title":"A learning algorithm for continually running fully recurrent neural networks","volume":"1","author":"Williams","year":"1989","journal-title":"Neural Comput."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Cho, K., van Merri\u00ebnboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014, January 25\u201329). Learning Phrase Representations using RNN Encoder\u2013Decoder for Statistical Machine Translation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar.","DOI":"10.3115\/v1\/D14-1179"},{"key":"ref_33","unstructured":"Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. (2014, January 12). 
Empirical evaluation of gated recurrent neural networks on sequence modeling. Proceedings of the NIPS 2014 Workshop on Deep Learning, Montreal, QC, Canada."},{"key":"ref_34","unstructured":"Kingma, D.P., and Ba, J. (2015, January 7\u20139). Adam: A Method for Stochastic Optimization. Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Gardner, M., Grus, J., Neumann, M., Tafjord, O., Dasigi, P., Liu, N.F., Peters, M., Schmitz, M., and Zettlemoyer, L. (2018, January 15\u201320). AllenNLP: A Deep Semantic Natural Language Processing Platform. Proceedings of the Workshop for NLP Open Source Software (NLP-OSS), Melbourne, Australia.","DOI":"10.18653\/v1\/W18-2501"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Pennington, J., Socher, R., and Manning, C.D. (2014, January 25\u201329). Glove: Global vectors for word representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar.","DOI":"10.3115\/v1\/D14-1162"},{"key":"ref_37","unstructured":"Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2021, October 11). Language Models Are Unsupervised Multitask Learners. Available online: https:\/\/d4mucfpksywv.cloudfront.net\/better-language-models\/language_models_are_unsupervised_multitask_learners.pdf."},{"key":"ref_38","unstructured":"Lin, C.Y. (2004). ROUGE: A Package for Automatic Evaluation of Summaries. Text Summarization Branches Out, Association for Computational Linguistics."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3383465","article-title":"Visual Question Generation: The State of the Art","volume":"53","author":"Patil","year":"2020","journal-title":"ACM Comput. 
Surv."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"12263","DOI":"10.1007\/s00521-021-05893-z","article-title":"Deep learning approaches to pattern extraction and recognition in paintings and drawings: An overview","volume":"33","author":"Castellano","year":"2021","journal-title":"Neural Comput. Appl."}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/23\/11\/1449\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T07:23:59Z","timestamp":1760167439000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/23\/11\/1449"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,10,31]]},"references-count":40,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2021,11]]}},"alternative-id":["e23111449"],"URL":"https:\/\/doi.org\/10.3390\/e23111449","relation":{},"ISSN":["1099-4300"],"issn-type":[{"type":"electronic","value":"1099-4300"}],"subject":[],"published":{"date-parts":[[2021,10,31]]}}}