An autonomous debating system


  • 1.

    Lawrence, J. & Reed, C. Argument mining: a survey. Comput. Linguist. 45, 765–818 (2019).



  • 2.

Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. Preprint (2018).

  • 3.

Peters, M. et al. Deep contextualized word representations. In Proc. 2018 Conf. North Am. Ch. Assoc. for Computational Linguistics: Human Language Technologies Vol. 1, 2227–2237 (Association for Computational Linguistics, 2018).

  • 4.

Radford, A. et al. Language models are unsupervised multitask learners. OpenAI Blog 1 (2019).

  • 5.

    Socher, R. et al. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. Empirical Methods in Natural Language Processing (EMNLP) 1631–1642 (Association for Computational Linguistics, 2013).

  • 6.

Yang, Z. et al. XLNet: generalized autoregressive pretraining for language understanding. In Adv. in Neural Information Processing Systems (NeurIPS) 5753–5763 (Curran Associates, 2019).

  • 7.

    Cho, K., van Merriënboer, B., Bahdanau, D. & Bengio, Y. On the properties of neural machine translation: encoder–decoder approaches. In Proc. 8th Worksh. on Syntax, Semantics and Structure in Statistical Translation 103−111 (Association for Computational Linguistics, 2014).

  • 8.

    Gambhir, M. & Gupta, V. Recent automatic text summarization techniques: a survey. Artif. Intell. Rev. 47, 1–66 (2017).



  • 9.

Young, S., Gašić, M., Thomson, B. & Williams, J. POMDP-based statistical spoken dialog systems: a review. Proc. IEEE 101, 1160–1179 (2013).



  • 10.

    Gurevych, I., Hovy, E. H., Slonim, N. & Stein, B. Debating Technologies (Dagstuhl Seminar 15512) Dagstuhl Report 5 (2016).

  • 11.

Levy, R., Bilu, Y., Hershcovich, D., Aharoni, E. & Slonim, N. Context dependent claim detection. In Proc. COLING 2014, the 25th Int. Conf. on Computational Linguistics: Technical Papers 1489–1500 (Dublin City University and Association for Computational Linguistics, 2014).

  • 12.

Rinott, R. et al. Show me your evidence—an automatic method for context dependent evidence detection. In Proc. 2015 Conf. on Empirical Methods in Natural Language Processing 440–450 (Association for Computational Linguistics, 2015).

  • 13.

Shnayderman, I. et al. Fast end-to-end wikification. Preprint (2019).

  • 14.

    Borthwick, A. A Maximum Entropy Approach To Named Entity Recognition. PhD thesis, New York Univ. (1999).

  • 15.

    Finkel, J. R., Grenager, T. & Manning, C. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proc. 43rd Ann. Meet. Assoc. for Computational Linguistics 363–370 (Association for Computational Linguistics, 2005).

  • 16.

Levy, R., Bogin, B., Gretz, S., Aharonov, R. & Slonim, N. Towards an argumentative content search engine using weak supervision. In Proc. 27th Int. Conf. on Computational Linguistics (COLING 2018) 2066–2081 (International Committee on Computational Linguistics, 2018).

  • 17.

Ein-Dor, L. et al. Corpus wide argument mining—a working solution. In Proc. Thirty-Fourth AAAI Conf. on Artificial Intelligence 7683–7691 (AAAI Press, 2020).

  • 18.

Levy, R. et al. Unsupervised corpus-wide claim detection. In Proc. 4th Worksh. on Argument Mining 79–84 (Association for Computational Linguistics, 2017).

  • 19.

Shnarch, E. et al. Will it blend? Blending weak and strong labeled data in a neural network for argumentation mining. In Proc. 56th Ann. Meet. Assoc. for Computational Linguistics Vol. 2, 599–605 (Association for Computational Linguistics, 2018).

  • 20.

Gleize, M. et al. Are you convinced? Choosing the more convincing evidence with a Siamese network. In Proc. 57th Conf. Assoc. for Computational Linguistics 967–976 (Association for Computational Linguistics, 2019).

  • 21.

    Bar-Haim, R., Bhattacharya, I., Dinuzzo, F., Saha, A. & Slonim, N. Stance classification of context-dependent claims. In Proc. 15th Conf. Eur. Ch. Assoc. for Computational Linguistics Vol. 1, 251–261 (Association for Computational Linguistics, 2017).

  • 22.

    Bar-Haim, R., Edelstein, L., Jochim, C. & Slonim, N. Improving claim stance classification with lexical knowledge expansion and context utilization. In Proc. 4th Worksh. on Argument Mining 32–38 (Association for Computational Linguistics, 2017).

  • 23.

    Bar-Haim, R. et al. From surrogacy to adoption; from bitcoin to cryptocurrency: debate topic expansion. In Proc. 57th Conf. Assoc. for Computational Linguistics 977–990 (Association for Computational Linguistics, 2019).

  • 24.

    Bilu, Y. et al. Argument invention from first principles. In Proc. 57th Ann. Meet. Assoc. for Computational Linguistics 1013–1026 (Association for Computational Linguistics, 2019).

  • 25.

Ein-Dor, L. et al. Semantic relatedness of Wikipedia concepts—benchmark data and a working solution. In Proc. Eleventh Int. Conf. on Language Resources and Evaluation (LREC 2018) 2571–2575 (European Language Resources Association, 2018).

  • 26.

Pahuja, V. et al. Joint learning of correlated sequence labelling tasks using bidirectional recurrent neural networks. In Proc. Interspeech 548–552 (International Speech Communication Association, 2017).

  • 27.

    Mirkin, S. et al. Listening comprehension over argumentative content. In Proc. 2018 Conf. on Empirical Methods in Natural Language Processing 719–724 (Association for Computational Linguistics, 2018).

  • 28.

Lavee, T. et al. Listening for claims: listening comprehension using corpus-wide claim mining. In ArgMining Worksh. 58–66 (Association for Computational Linguistics, 2019).

  • 29.

Orbach, M. et al. A dataset of general-purpose rebuttal. In Proc. 2019 Conf. on Empirical Methods in Natural Language Processing 5595–5605 (Association for Computational Linguistics, 2019).

  • 30.

    Slonim, N., Atwal, G. S., Tkačik, G. & Bialek, W. Information-based clustering. Proc. Natl Acad. Sci. USA 102, 18297–18302 (2005).



  • 31.

Ein Dor, L. et al. Learning thematic similarity metric from article sections using triplet networks. In Proc. 56th Ann. Meet. Assoc. for Computational Linguistics Vol. 2, 49–54 (Association for Computational Linguistics, 2018).

  • 32.

Shechtman, S. & Mordechay, M. Emphatic speech prosody prediction with deep LSTM networks. In 2018 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP) 5119–5123 (IEEE, 2018).

  • 33.

    Mass, Y. et al. Word emphasis prediction for expressive text to speech. In Interspeech 2868–2872 (International Speech Communication Association, 2018).

  • 34.

    Feigenblat, G., Roitman, H., Boni, O. & Konopnicki, D. Unsupervised query-focused multi-document summarization using the cross entropy method. In Proc. 40th Int. ACM SIGIR Conf. on Research and Development in Information Retrieval 961–964 (Association for Computing Machinery, 2017).

  • 35.

    Daxenberger, J., Schiller, B., Stahlhut, C., Kaiser, E. & Gurevych, I. Argumentext: argument classification and clustering in a generalized search scenario. Datenbank-Spektrum 20, 115–121 (2020).

  • 36.

Gretz, S. et al. A large-scale dataset for argument quality ranking: construction and analysis. In Thirty-Fourth AAAI Conf. on Artificial Intelligence 7805–7813 (AAAI Press, 2020).

  • 37.

    Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016).

  • 38.

    Samuel, A. L. Some studies in machine learning using the game of checkers. IBM J. Res. Develop. 3, 210–229 (1959).



  • 39.

    Tesauro, G. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Comput. 6, 215–219 (1994).



  • 40.

    Campbell, M., Hoane, A. J., Jr & Hsu, F.-h. Deep Blue. Artif. Intell. 134, 57–83 (2002).



  • 41.

    Ferrucci, D. A. Introduction to “This is Watson”. IBM J. Res. Dev. 56, 235–249 (2012).



  • 42.

    Silver, D. et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 1140–1144 (2018).



  • 43.

Coulom, R. Efficient selectivity and backup operators in Monte-Carlo tree search. In 5th Int. Conf. on Computers and Games (Springer, 2006).

  • 44.

Vinyals, O. et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575, 350–354 (2019).


