%
% File ACL2016.tex
%
\documentclass[11pt]{article}
\usepackage{acl2016}
\usepackage{times}
\usepackage{latexsym}
%\aclfinalcopy % Uncomment this line for the final submission
%\def\aclpaperid{***} % Enter the acl Paper ID here
% To expand the titlebox for more authors, uncomment
% below and set accordingly.
% \addtolength\titlebox{.5in}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{url}
\usepackage{graphicx}
\usepackage{color}
\usepackage{xspace}
\newcommand{\txt}[1]{{\it ``#1''}}
\newcommand{\tool}[1]{{\sc #1}\xspace}
\newcommand{\bruno}[1]{{\textcolor{blue}{<BG> #1 </BG>}}}
\newcommand{\karen}[1]{{\textcolor{blue}{<KF> #1 </KF>}}}
\newcommand{\zl}{{\sc ZombiLingo}\xspace}%TODO BG why not define it with \tool??
\title{Zombies with a (Quality) Purpose:\\
Producing Dependency Syntax Annotations With \zl}
\author{
Bruno Guillaume\\
Inria Nancy Grand-Est/LORIA\\
{\tt Bruno.Guillaume@loria.fr}
\And
Karën Fort\\
Université Paris-Sorbonne/EA STIH\\
{\tt karen.fort@paris-sorbonne.fr}
}
%\Keywords{Crowdsourcing, GWAP, Language Resources, Annotation, Dependency Syntax}}
\begin{document}
\maketitle
\begin{abstract}
We present \zl, a game with a purpose that allows for the large-scale annotation of dependency syntax relations for French. We detail in particular its training and evaluation mechanisms and discuss some current biases of the game that we are now addressing.
\end{abstract}
% ========================================================================================================================
\section{Games With A Purpose and (Quality) Language Resources Production}
Games With A Purpose (GWAPs) are crowdsourcing platforms with a more or less gamified interface. Although some categorize them as serious games, the main purpose of GWAPs is to produce data, not to train people (although players end up being well-trained on the task, they are not trained on the domain). %TODO KF add a reference here
The constantly growing need for language resources in natural language processing (NLP) has fostered the development of such platforms, with some impressive successes~\cite{Chamberlain2013}.
Some GWAPs, like \tool{JeuxDeMots}\footnote{See: \url{jeuxdemots.org}}, use the participants' basic knowledge of the language, in this particular case to find associations of ideas \cite{Lafourcade2008}. This is also the case for the word-sense disambiguation games described in \cite{Jurgens2014} or \cite{Venhuizen2013}. Other games, like \tool{Phrase Detectives}\footnote{See: \url{anawiki.essex.ac.uk/phrasedetectives}}, rely on the school knowledge of the players, for example to annotate co-reference~\cite{Chamberlain2008}. These GWAPs have demonstrated their efficiency in producing dynamic and reliable language resources. Although they involve some degree of training on the game (especially \tool{Phrase Detectives}), none of them relies as much on the learning capabilities of the participants as \tool{FoldIt} does \cite{Khatib2011a} (in a totally different domain). In this GWAP, designed to help solve protein-folding puzzles, the players, who have no prior knowledge of biochemistry, go through various training phases to be able to perform the increasingly complex tasks at hand.
\zl is an attempt at using the learning capabilities of the players to have them perform dependency annotation, a task that is considered complex and at which at least some gamification attempts have failed~\cite{Hana2012}. To ensure that the game produces quality annotations, we developed both a training procedure for the players and an evaluation of the annotations.
We first present the game, then give details on the training provided to the players, before explaining the means of evaluation used throughout the game to ensure annotation quality. Finally, we discuss the present limits of the system and propose ways to improve it.
% ========================================================================================================================
\section{Decomposing the Complexity of the Task}
\label{sec:zl}
\zl has been under development since the end of 2013. The design phase itself took around six months of work with a student trainee (we give details on this and on the gamification of \zl in \cite{Fort2014}). The development was done in several steps, by a student and a part-time engineer\footnote{The game is available at: \url{zombilingo.org}.}. We were finally able to hire a full-time engineer, who will work on the game in the coming years.
\subsection{General Mechanism}
For these reasons, the game is for the moment limited to annotating (freely available) French corpora, using the Sequoia corpus~\cite{Candito2012} as a reference. The input corpus is pre-annotated using the \tool{Talismane} parser~\cite{Urieli2013}, then fed into the game, where the players correct the pre-annotations. The resulting annotated corpora are directly and freely available on the website of the game\footnote{Generally (depending on the input corpus) under a CC BY-NC-SA license, see: \url{zombilingo.org/information}.}.
%\karen{check the license: CC BY-NC-SA?}
It is important to note that the players are not aware of the fact that they are correcting annotations. Instead, they are presented with a sentence in which an item (lexical unit) is highlighted, and they are asked to select the item associated with it through the relation being processed. See Figure~\ref{fig:interface} for an example with the {\bf a\_obj} relation (in French, the relation {\bf a\_obj} links a verb to an argument introduced by the preposition ``à'' or to other realizations of this argument).
\begin{figure*}
\begin{center}
\includegraphics[width=12cm]{images/ZL_general.png}
\caption{Annotation interface of ZombiLingo}
\label{fig:interface}
\end{center}
\end{figure*}
As the full dependency syntax annotation of a sentence would be too complex for players to perform, we turned it into a sequence of tasks, each focusing on one relation, or one type of relation, at a time. This allows for a focused training of the players.
% ========================================================================================================================
\subsection{Training Mechanism}
Training has been proven to be one of the most effective ways to improve the quality and speed of annotation~\cite{Dandapat2009,Bayerl2011}, and it is all the more important in GWAPs, where the participants are not directly accessible for explanations.
In \zl, beginners are offered a limited list of phenomena (relations), each of which they can only start playing once they have finished the dedicated tutorial, very much as in \tool{Phrase Detectives} or \tool{FoldIt}. This phase includes the correct annotation of ten examples of the phenomenon (each one in a different sentence) selected among the reference sentences. Players can access instructions\footnote{These instructions are built from the annotation guide and adapted for non-linguists.} at any time (see Professor Frankenperrier in the top right corner of the interface in Figure~\ref{fig:interface}) and they are given feedback in case of error (see Figure~\ref{fig:error}).
The tutorial ends only when the player has correctly annotated ten examples. Therefore, the training of a bad player may cover many more than ten sentences. During this phase, the player cannot score any points.
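For illustration, the tutorial loop can be summarized by the following minimal Python sketch; the data layout and function names are ours and hypothetical, not those of the actual implementation:
\begin{verbatim}
import random

def run_tutorial(reference_items, get_answer,
                 target=10):
    # reference_items: dict mapping a sentence
    # (with one highlighted item) to its gold
    # answer; get_answer asks the player.
    correct = 0
    pool = list(reference_items.items())
    while correct < target:
        sentence, gold = random.choice(pool)
        if get_answer(sentence) == gold:
            correct += 1  # no points scored here
        # on an error, feedback is shown to the
        # player (cf. fig:error)
    return True  # the phenomenon is unlocked
\end{verbatim}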
% Depending on the kind of resources that are expected, training may be needed.
% Some tasks may just rely on the player's intuition; for these tasks, training will be meaningless.
% For instance, when a player is asked to tag pictures with open keywords, or to give a positive or negative judgment about any kind of item, no training is considered.
% On the other hand, when a task is more deterministic, it may be useful to train players.
% We call deterministic a task for which we may expect a given result;
% this includes tasks for which we have precise documentation (like an annotation guide), a gold standard set of annotated data or access to some experts who can decide in case of ambiguities.
% ========================================================================================================================
\section{Producing Quality Annotations}
\label{sec:eval}
\subsection{Evaluating Against a Reference}
\label{sec:ref}
We observed that the decomposition of the task to focus on one dependency relation (a phenomenon) at a time, coupled with an initial training of players on each phenomenon, is not enough to ensure quality in the long run.
A participant who went through the training phase some time ago may have forgotten parts of the rules or may have developed some kind of misunderstanding of the phenomenon. This can lead to errors in the annotation. Another typical case is that of players who may be tempted to play very quickly, ignoring the rules once trained.
%Obviously, this issue does not appear with all players and it would be annoying for them to be forced to retrain.
To deal with this, we added a specific mechanism that reuses the set of gold-standard annotations during the game to check that the players are still applying the rules. %TODO KF: check PD
For each player $p$ and each phenomenon $x$, we define a score $\alpha(p,x) \in [0;1]$.
This score is intended to record how confident we can be in player $p$ playing phenomenon $x$.
When an item is selected in the game, it is either a normal item, taken from the pre-annotated sentences, with probability $\alpha(p,x)$, or an item taken from the gold standard, with probability $1 - \alpha(p,x)$.
When a gold standard item is proposed to the player, the mechanism -- which we term ``control phase'' -- runs as follows:
\begin{itemize}
\item if the answer of the player is correct, $\alpha(p,x)$ is increased, meaning that we give more confidence to this player $p$ for this phenomenon $x$ (a maximum of $0.95$ is imposed on $\alpha(p,x)$ to ensure that all players will always be given gold-standard items at some future point in the game);
\item if the answer of the player is wrong, the player is warned about the error, $\alpha(p,x)$ is decreased, and we give less confidence to $p$ playing $x$;
\item if a player gives three wrong answers on the same phenomenon, his/her ability to play with this phenomenon is deactivated and s/he is invited to go back to the training phase like a new player.
\end{itemize}
The goal of the last point is to prevent players from using a strategy where they focus on the quantity of items rather than on the quality of the annotation.
If a participant plays too fast (or simply badly), s/he will probably give wrong answers and will have to go through the training phase again, losing time without scoring. A (very) bad player will end up retraining over and over again. Playing fast is therefore not a winning long-term strategy in \zl.
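The control phase can be sketched in Python as follows. The $0.95$ cap and the three-error rule come from the mechanism described above; the update step, the initial value of $\alpha$ and all names are assumptions of ours:
\begin{verbatim}
import random

A_MAX = 0.95  # cap on alpha(p,x)
STEP = 0.05   # update step (assumption)

def pick_item(alpha, normal_items, gold_items):
    # Serve a pre-annotated item with probability
    # alpha(p,x), a gold-standard item otherwise.
    if random.random() < alpha:
        return random.choice(normal_items), False
    return random.choice(gold_items), True

def control(alpha, errors, correct):
    # Update alpha(p,x) after a gold-standard item;
    # returns None when the phenomenon has to be
    # deactivated (back to the training phase).
    if correct:
        return min(alpha + STEP, A_MAX), errors
    errors += 1
    if errors >= 3:
        return None, errors
    return max(alpha - STEP, 0.0), errors
\end{verbatim}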
\begin{figure*}
\begin{center}
\includegraphics[width=12cm]{images/ZL_ErreurTuto2.png}
\caption{Feedback given to the player when annotating a reference sentence}
\label{fig:error}
\end{center}
\end{figure*}
\subsection{Selecting Annotations}
%TODO BG: vote?
%\karen{We need to discuss how the vote is taken into account}
%\karen{BG: discuss the output?}
%TODO BG discuss the output (?)
In \zl, the selection of the best annotations does not directly depend on the number of participants who ``voted'' for them. Instead, the system relies on the confidence it has in the player, which itself evolves according to the player's performance on reference annotations.
The pre-annotations are assigned an initial confidence score of 5. If a player confirms a pre-annotation, the confidence score of this annotation is increased by the confidence score of the player on this phenomenon (see Section~\ref{sec:ref}) at this point in the game.
If, on the contrary, a player corrects the pre-annotation, a new annotation is created and initialized with the confidence score of the player.
The best annotation at time $T$ is the one with the best confidence score at that time.
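A minimal Python sketch of this selection mechanism (the data layout is ours, and we assume that confirmations of player-created annotations are handled like confirmations of pre-annotations):
\begin{verbatim}
PRE_SCORE = 5  # initial score of a pre-annotation

def record(candidates, answer, player_alpha):
    # candidates: maps each proposed annotation of
    # an item to its current confidence score.
    if answer in candidates:
        # confirmation: reinforce the annotation
        candidates[answer] += player_alpha
    else:
        # correction: a new competing annotation
        candidates[answer] = player_alpha

def best(candidates):
    # the annotation exported at time T
    return max(candidates, key=candidates.get)

# usage: each item starts with its pre-annotation
cands = {"pre_annotated_dep": PRE_SCORE}
record(cands, "other_dep", 0.8)
\end{verbatim}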
%
% For the corpus export, we will have to decide what to do in case of equal scores on the same relation.
% ========================================================================================================================
\section{Discussion}
\label{sec:discuss}
The current version of the game heavily relies on the pre-annotation tool.
When a new sentence is added to the system, it is automatically annotated by a parser (currently the \tool{Talismane} parser), so the phenomena proposed to the player are determined by this pre-annotation.
If the parser has predicted a dependency labeled $x$ with the governor $t_g$ and the dependent $t_d$,
the player will be asked a question like \emph{``what is the dependent of governor $t_g$ with respect to relation $x$?''}\footnote{For some relations, the question is formulated the other way around: \emph{``what is the governor of dependent $t_d$ with respect to relation $x$?''}.}
This way of building the game items presented to the player raises two problems.
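The construction of game items from the parser output can be sketched as follows (a Python sketch; the data representation is hypothetical). It makes both problems discussed below visible: a spurious prediction yields a question with no correct answer, and a missing prediction yields no question at all:
\begin{verbatim}
def make_items(dependencies, x):
    # dependencies: parser output as a list of
    # (governor, label, dependent) triples.
    items = []
    for gov, label, dep in dependencies:
        if label == x:
            # one question per predicted dependency
            items.append((gov, x))
    # a dependency x missed by the parser produces
    # no item (silence); a wrongly predicted one
    # produces a misleading question (noise)
    return items
\end{verbatim}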
\subsection{Phenomena to Remove (Noise)}
The first problem appears when the parser is wrong and there is no relation $x$ with governor $t_g$. In this case, the player may be influenced by the system and may try to find an item (lexical unit) in the sentence that would answer the question. In the game interface, the player can click on two crossed bones, on the right of the sentence (see Figure~\ref{fig:interface}), to express that \emph{``There is no such phenomenon here''}. But this ``crossed bones'' answer is not integrated in the training phase or in the control phase. We therefore suspect that it is underused by most of the players, generating some noise in the produced data.
Indeed, looking at the database, we can observe that, among the top 10 players, the usage of the crossed bones ranges from 0.15\% to 2.83\%, i.e., very low rates, which shows that the players do not use them enough.
We focused, for instance, on the {\bf de\_obj} relation\footnote{This corresponds to arguments introduced by the preposition ``de'', but it also covers some other kinds of realizations of the same type of argument, like the clitic \emph{en} or the use of the complementizer \emph{que}.} and manually explored 50 cases of rejection (i.e., cases where a player clicked the crossed bones) for this phenomenon:
\begin{itemize}
\item 13 cases are wrongly corrected annotations (11 are reference sentences and 2 are correctly pre-annotated sentences). This indicates that the instructions given to the players are still not clear enough and that the training should be improved (5 of these 13 cases concern the usage of the complementizer \emph{que}, and 2 concern the realization with the clitic \emph{en});
\item in the 37 remaining cases, the players have correctly rejected wrong pre-annotations.
\end{itemize}
We believe that these results can be improved, at least partly, by modifying the training and control phases so that they include the absence of the phenomenon at hand. Another lever is the interface itself, as the crossed bones are not visible enough.
\subsection{Phenomena to Add (Silence)}
The second problem is more difficult to tackle. If some relation $x$ with the governor $t_g$ exists but was not predicted by the parser (not even with a wrong dependent), the player will never be asked the corresponding question (silence). We have introduced a first game mode to deal with this in the case where the parser predicts a relation from $t_g$ to $t_d$, but with some other label $x'$ that is often confused with $x$. In this game mode, the player is given $t_g$ and $t_d$ and has to choose between $x$ and $x'$ for the relation (see Figure~\ref{fig:alt}).
We have identified two pairs $x$/$x'$ of such relations in the \tool{Talismane} pre-annotation: tense auxiliary ({\bf aux.tps}) vs passive auxiliary ({\bf aux.pass}) and verb modifier ({\bf mod}) vs verb argument ({\bf p\_obj.o}).
\begin{figure*}
\begin{center}
\includegraphics[width=12cm]{images/ZL_alt.png}
\caption{Game mode where the user has to select the right relation}
\label{fig:alt}
\end{center}
\end{figure*}
One possible (partial) solution to this would be to use more than one pre-annotation tool, in particular parsers that provide probabilities on their output.
% In a future version, use more than one simple pre-annotation:
% - a parser with probabilities on its output
% - several parsers in competition
% ========================================================================================================================
\section{Obtained Results and Prospects}
As of mid-October 2015, the game has 422 registered players, who have produced nearly 59,000 annotations. However, only 8 players have participated intensively, producing more than 1,000 annotations each.
The first evaluation results show an average accuracy of 86\% on the reference sentences for the 10 most active players (best and worst omitted) and of more than 89\% for the 30 most active players (who played easier cases).
These numbers are very encouraging. However, the interface and the training phase have to be improved to take into account the fact that a phenomenon may not appear in the sentence at all, thereby reducing the noise produced. We also need to investigate the pre-annotation process in order to find more efficient ways to qualify the produced annotations and the potential ambiguities.
These issues will be addressed in the coming months, together with the integration of a new design and new game features, which will hopefully allow us to attract more players and produce better annotations.
Finally, we plan to develop versions of \zl for other languages, so as to provide freely available corpora with quality dependency syntax annotations.
% \section*{Acknowledgements}
% \zl is funded by Inria and the Délégation générale à la langue française et aux langues de France (DGLFLF). A number of students have participated in the project in the past, including Hadrien Chastant and Valentin Stern, and we want to thank them here. We also thank Antoine Chenardin, who participated in the development as an engineer.
\bibliographystyle{acl2016}
\bibliography{gwap}
\end{document}