• Media type: E-article
  • Titel: A Novel Bi-Dual Inference Approach for Detecting Six-Element Emotions
  • Contributors: Huang, Xiaoping; Zhou, Yujian; Du, Yajun
  • Published: MDPI AG, 2023
  • Published in: Applied Sciences
  • Language: English
  • DOI: 10.3390/app13179957
  • ISSN: 2076-3417
  • Keywords: Fluid Flow and Transfer Processes ; Computer Science Applications ; Process Chemistry and Technology ; General Engineering ; Instrumentation ; General Materials Science
  • Description: In recent years, machine learning has developed rapidly for artificial intelligence tasks in fields such as translation, speech, and image processing. These AI tasks are often interconnected rather than independent; one specific type of relationship, known as structural duality, exists between many pairs of AI tasks. The concept of dual learning has gained significant attention in machine learning, computer vision, and natural language processing. Dual learning uses a primal task (mapping from domain X to Y) and a dual task (mapping from domain Y to X) to improve the performance of both. In this study, we propose a general framework called Bi-Dual Inference, which combines the principles of dual inference and dual learning. The framework generates multiple dual models and a primal model from two dual tasks: sentiment analysis of input text and sentence generation from sentiment labels. We create these model pairs (primal model f, dual model g) by using different initialization seeds and data-access orders, with each primal and dual model implemented as a distinct LSTM. By reasoning about a single task with multiple similar models in the same direction, the framework achieves improved classification results. To validate the proposed model, we conduct experiments on two datasets, NLPCC2013 and NLPCC2014; the results show that it outperforms the best baseline by approximately 5% in F1 score. We also report parameter settings for the proposed model, including analyses of model iterations, the α parameter, the λ parameter, batch size, training-sentence length, and hidden-layer size. These experimental results further confirm the effectiveness of the proposed model.
  • Access status: Open access
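The dual-inference idea summarized in the abstract (scoring a label both by the primal classifier f(y|x) and by the dual generator g(x|y) together with a label prior) can be sketched as follows. This is a minimal illustration of a standard dual-inference scoring rule, not the paper's implementation: the stand-in probability tables, the function names, and the equal α weighting are assumptions, and the real models in the paper are LSTMs.

```python
import math

# Candidate sentiment labels for the toy example.
LABELS = ["positive", "negative"]

def f_primal(sentence, label):
    # Stand-in for a classifier's softmax output P(label | sentence).
    # Values are assumed for illustration only.
    return {"positive": 0.55, "negative": 0.45}[label]

def g_dual(sentence, label):
    # Stand-in for a label-conditioned language model P(sentence | label).
    # Values are assumed for illustration only.
    return {"positive": 0.002, "negative": 0.0001}[label]

def label_prior(label):
    # Marginal P(label), e.g. estimated from training-data frequencies.
    return 0.5

def dual_inference(sentence, alpha=0.5):
    """Score each label by an alpha-weighted combination of the primal
    direction, log f(y|x), and the dual direction, log g(x|y) + log P(y),
    then return the best-scoring label."""
    def score(label):
        primal = math.log(f_primal(sentence, label))
        dual = math.log(g_dual(sentence, label)) + math.log(label_prior(label))
        return alpha * primal + (1 - alpha) * dual
    return max(LABELS, key=score)

print(dual_inference("the movie was wonderful"))  # → positive
```

With these toy numbers both directions favor "positive", so the combined score does too; in practice α trades off how much the dual direction is trusted relative to the primal classifier.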