An autonomous debate system

  • 1.

    Lawrence, J. & Reed, C. Argument mining: a survey. Comput. Linguist. 45, 765–818 (2019).

  • 2.

    Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. Preprint at https://arxiv.org/abs/1810.04805 (2018).

  • 3.

    Peters, M. et al. Deep contextualized word representations. In Proc. 2018 Conf. North Am. Ch. Assoc. for Computational Linguistics: Human Language Technologies Vol. 1, 2227–2237 (Association for Computational Linguistics, 2018); https://www.aclweb.org/anthology/N18-1202

  • 4.

    Radford, A. et al. Language models are unsupervised multitask learners. OpenAI Blog 1, http://www.persagen.com/files/misc/radford2019language.pdf (2019).

  • 5.

    Socher, R. et al. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. 2013 Conf. on Empirical Methods in Natural Language Processing (EMNLP) 1631–1642 (Association for Computational Linguistics, 2013).

  • 6.

    Yang, Z. et al. XLNet: generalized autoregressive pretraining for language understanding. In Adv. in Neural Information Processing Systems (NeurIPS) 5753–5763 (Curran Associates, 2019).

  • 7.

    Cho, K., van Merriënboer, B., Bahdanau, D. & Bengio, Y. On the properties of neural machine translation: encoder–decoder approaches. In Proc. 8th Workshop on Syntax, Semantics and Structure in Statistical Translation 103–111 (Association for Computational Linguistics, 2014).

  • 8.

    Gambhir, M. & Gupta, V. Recent automatic text summarization techniques: a survey. Artif. Intell. Rev. 47, 1–66 (2017).

  • 9.

    Young, S., Gašić, M., Thomson, B. & Williams, J. POMDP-based statistical spoken dialog systems: a review. Proc. IEEE 101, 1160–1179 (2013).

  • 10.

    Gurevych, I., Hovy, E. H., Slonim, N. & Stein, B. Debating technologies (Dagstuhl Seminar 15512). Dagstuhl Rep. 5 (2016).

  • 11.

    Levy, R., Bilu, Y., Hershcovich, D., Aharoni, E. & Slonim, N. Context dependent claim detection. In Proc. COLING 2014, the 25th Int. Conf. on Computational Linguistics: Technical Papers 1489–1500 (Dublin City University and Association for Computational Linguistics, 2014); https://www.aclweb.org/anthology/C14-1141

  • 12.

    Rinott, R. et al. Show me your evidence – an automatic method for context dependent evidence detection. In Proc. 2015 Conf. on Empirical Methods in Natural Language Processing 440–450 (Association for Computational Linguistics, 2015); https://www.aclweb.org/anthology/D15-1050

  • 13.

    Shnayderman, I. et al. Fast end-to-end wikification. Preprint at https://arxiv.org/abs/1908.06785 (2019).

  • 14.

    Borthwick, A. A maximum entropy approach to named entity recognition. PhD thesis, New York Univ. https://cs.nyu.edu/media/publications/borthwick_andrew.pdf (1999).

  • 15.

    Finkel, J. R., Grenager, T. & Manning, C. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proc. 43rd Ann. Meet. Assoc. for Computational Linguistics 363–370 (Association for Computational Linguistics, 2005).

  • 16.

    Levy, R., Bogin, B., Gretz, S., Aharonov, R. & Slonim, N. Towards an argumentative content search engine using weak supervision. In Proc. 27th Int. Conf. on Computational Linguistics (COLING 2018) 2066–2081 (International Committee on Computational Linguistics, 2018); https://www.aclweb.org/anthology/C18-1176.pdf

  • 17.

    Ein-Dor, L. et al. Corpus wide argument mining – a working solution. In Proc. Thirty-Fourth AAAI Conf. on Artificial Intelligence 7683–7691 (AAAI Press, 2020).

  • 18.

    Levy, R. et al. Unsupervised corpus-wide claim detection. In Proc. 4th Workshop on Argument Mining 79–84 (Association for Computational Linguistics, 2017); https://www.aclweb.org/anthology/W17-5110

  • 19.

    Shnarch, E. et al. Will it blend? Blending weak and strong labeled data in a neural network for argumentation mining. In Proc. 56th Ann. Meet. Assoc. for Computational Linguistics Vol. 2, 599–605 (Association for Computational Linguistics, 2018); https://www.aclweb.org/anthology/P18-2095

  • 20.

    Gleize, M. et al. Are you convinced? Choosing the more convincing evidence with a Siamese network. In Proc. 57th Ann. Meet. Assoc. for Computational Linguistics 967–976 (Association for Computational Linguistics, 2019).

  • 21.

    Bar-Haim, R., Bhattacharya, I., Dinuzzo, F., Saha, A. & Slonim, N. Stance classification of context-dependent claims. In Proc. 15th Conf. Eur. Ch. Assoc. for Computational Linguistics Vol. 1, 251–261 (Association for Computational Linguistics, 2017).

  • 22.

    Bar-Haim, R., Edelstein, L., Jochim, C. & Slonim, N. Improving claim stance classification with lexical knowledge expansion and context utilization. In Proc. 4th Workshop on Argument Mining 32–38 (Association for Computational Linguistics, 2017).

  • 23.

    Bar-Haim, R. et al. From surrogacy to adoption; from bitcoin to cryptocurrency: debate topic expansion. In Proc. 57th Ann. Meet. Assoc. for Computational Linguistics 977–990 (Association for Computational Linguistics, 2019).

  • 24.

    Bilu, Y. et al. Argument invention from first principles. In Proc. 57th Ann. Meet. Assoc. for Computational Linguistics 1013–1026 (Association for Computational Linguistics, 2019).

  • 25.

    Ein-Dor, L. et al. Semantic relatedness of Wikipedia concepts – benchmark data and a working solution. In Proc. Eleventh Int. Conf. on Language Resources and Evaluation (LREC 2018) 2571–2575 (European Language Resources Association, 2018).

  • 26.

    Pahuja, V. et al. Joint learning of correlated sequence labelling tasks using bidirectional recurrent neural networks. In Proc. Interspeech 2017 548–552 (International Speech Communication Association, 2017).

  • 27.

    Mirkin, S. et al. Listening comprehension over argumentative content. In Proc. 2018 Conf. on Empirical Methods in Natural Language Processing 719–724 (Association for Computational Linguistics, 2018).

  • 28.

    Lavee, T. et al. Listening for claims: listening comprehension using corpus-wide claim mining. In Proc. 6th Workshop on Argument Mining 58–66 (Association for Computational Linguistics, 2019).

  • 29.

    Orbach, M. et al. A dataset of general-purpose rebuttal. In Proc. 2019 Conf. on Empirical Methods in Natural Language Processing 5595–5605 (Association for Computational Linguistics, 2019).

  • 30.

    Slonim, N., Atwal, G. S., Tkačik, G. & Bialek, W. Information-based clustering. Proc. Natl Acad. Sci. USA 102, 18297–18302 (2005).

  • 31.

    Ein Dor, L. et al. Learning thematic similarity metric from article sections using triplet networks. In Proc. 56th Ann. Meet. Assoc. for Computational Linguistics Vol. 2, 49–54 (Association for Computational Linguistics, 2018); https://www.aclweb.org/anthology/P18-2009

  • 32.

    Shechtman, S. & Mordechay, M. Emphatic speech prosody prediction with deep LSTM networks. In 2018 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP) 5119–5123 (IEEE, 2018).

  • 33.

    Mass, Y. et al. Word emphasis prediction for expressive text to speech. In Proc. Interspeech 2018 2868–2872 (International Speech Communication Association, 2018).

  • 34.

    Feigenblat, G., Roitman, H., Boni, O. & Konopnicki, D. Unsupervised query-focused multi-document summarization using the cross entropy method. In Proc. 40th Int. ACM SIGIR Conf. on Research and Development in Information Retrieval 961–964 (Association for Computing Machinery, 2017).

  • 35.

    Daxenberger, J., Schiller, B., Stahlhut, C., Kaiser, E. & Gurevych, I. ArgumenText: argument classification and clustering in a generalized search scenario. Datenbank Spektrum 20, 115–121 (2020).

  • 36.

    Gretz, S. et al. A large-scale dataset for argument quality ranking: construction and analysis. In Proc. Thirty-Fourth AAAI Conf. on Artificial Intelligence 7805–7813 (AAAI Press, 2020); https://aaai.org/ojs/index.php/AAAI/article/view/6285

  • 37.

    Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016).

  • 38.

    Samuel, A. L. Some studies in machine learning using the game of checkers. IBM J. Res. Dev. 3, 210–229 (1959).

  • 39.

    Tesauro, G. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Comput. 6, 215–219 (1994).

  • 40.

    Campbell, M., Hoane, A. J., Jr & Hsu, F.-h. Deep Blue. Artif. Intell. 134, 57–83 (2002).

  • 41.

    Ferrucci, D. A. Introduction to “This is Watson”. IBM J. Res. Dev. 56, 235–249 (2012).

  • 42.

    Silver, D. et al. A general reinforcement learning algorithm that masters chess, shogi and Go through self-play. Science 362, 1140–1144 (2018).

  • 43.

    Coulom, R. Efficient selectivity and backup operators in Monte-Carlo tree search. In Proc. 5th Int. Conf. on Computers and Games inria-0011699 (Springer, 2006).

  • 44.

    Vinyals, O. et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575, 350–354 (2019).
