==== Second Call for Participation ====
Germeval Task 2017 - Shared Task on Aspect-based Sentiment in Social Media Customer Feedback
Website: https://sites.google.com/view/germeval2017-absa/
We invite everyone from academia and industry to participate in the shared task on German Sentiment Analysis!
*** June 2017 updates: evaluation script available for download *** Java baseline system available ***
---- Introduction ----
In the connected, modern world, customer feedback is a valuable source of insights into the quality of products or services. This feedback allows customers to benefit from the experiences of others and enables businesses to react to requests, complaints or recommendations. However, the more people use a product or service, the more feedback is generated, which results in the major challenge of analyzing huge amounts of feedback in an efficient, but still meaningful way.
We conduct a shared task on automatically analyzing customer reviews about 'Deutsche Bahn' - the German public train operator with about two billion passengers each year. The task is associated with the GSCL 2017 conference in Berlin and will take place there as a workshop on September 12, 2017.
---- Data ----
The data for the task has been annotated as part of a joint project between TU Darmstadt and Deutsche Bahn; see https://sites.google.com/view/germeval2017-absa/data for details. Altogether, it consists of 22,000 messages from various social media and web sources. The data is annotated with relevance, document-level sentiment, aspect-based sentiment and opinion target expressions, and is provided in both XML and TSV format.
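For a quick first look at the TSV release, a minimal Python sketch such as the following may be used; the file name is a placeholder and the exact column layout is documented on the task website.
    # Minimal sketch for inspecting the TSV release. "train.tsv" is a
    # placeholder file name; the actual file names and the column layout
    # are documented on the task website.
    with open("train.tsv", encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")  # one annotated message per row
            print(fields)
            break  # only inspect the first row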
---- Task description ----
To exploit the richness of the data, we subdivided the task into four subtasks. Participants can freely choose which and how many subtasks they participate in.
Subtask A) Relevance Classification: Determine whether a social media post contains feedback about the 'Deutsche Bahn' or whether the post is off-topic/contains no evaluation. Example: 'Ehrlich die männer in Der Bahn haben keine manieren?' (roughly: 'Honestly, the men on the train have no manners') In the given post, the task is to identify that the post is 'relevant'.
Subtask B) Document-level Polarity: Identify whether the customer evaluates the 'Deutsche Bahn' or the travel experience as positive, negative or neutral. Example: 'Ingo Lenßen Guten morgen Ingo...bei mir kein regen aber bahn fehr wieder nicht...' (roughly: 'Ingo Lenßen good morning Ingo... no rain here, but the train isn't running again...') In the given post, the task is to identify the post's polarity: negative
Subtask C) Aspect-level Polarity: Identify all aspects which are evaluated positively or negatively within the review. In order to increase comparability, the aspects are grouped into predefined categories. Consequently, the aim of this subtask is to identify all categories contained in a post and their associated polarity. Example: "Alle so 'Yeah, Streik beendet' Bahn so 'Okay, dafür werden dann natürlich die Tickets teurer' Alle so 'Können wir wieder Streik haben?'" (roughly: "Everyone's like 'Yeah, the strike is over', Bahn's like 'Okay, but of course tickets will get more expensive then', everyone's like 'Can we have the strike back?'") In the given post, the task is to identify the aspects and their polarity: <Ticketkauf#Haupt:negative> <allgemein#Haupt:positive>
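To make the expected output of subtask C concrete, the following minimal Python sketch compares a predicted set of (category, polarity) pairs against the gold annotation of the example above. It is only an illustration of the representation; the official evaluation script is linked further below, and the predicted pairs here are made up for the sake of the example.
    # Illustrative only -- not the official evaluation script (see the GitHub
    # links further below). Gold pairs are taken from the example above; the
    # predicted pairs are invented for this illustration.
    gold = {("Ticketkauf#Haupt", "negative"), ("allgemein#Haupt", "positive")}
    pred = {("Ticketkauf#Haupt", "negative"), ("allgemein#Haupt", "negative")}

    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")  # P=0.50 R=0.50 F1=0.50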
Subtask D) Opinion Target Extraction: Identify the linguistic expressions in the post which are used to express the aspect-based sentiment (subtask C). An opinion target expression is defined by its starting and ending offsets. Example: '@m_wabersich IC 2151? Der fährt nicht. Ich habe Ihnen die Alternative bereits genannt. /je' (roughly: 'IC 2151? It is not running. I have already told you the alternative.') In the given post, the task is to identify the target expression <fährt nicht> (from='26' to='37').
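Since opinion target expressions are defined by character offsets, the following minimal Python sketch shows how the annotated span can be recovered from the raw post; it assumes zero-based, end-exclusive character offsets, which is consistent with the example above.
    # Minimal sketch: recover an opinion target expression from its offsets.
    # Assumes zero-based, end-exclusive character offsets (consistent with
    # the example above).
    post = ("@m_wabersich IC 2151? Der fährt nicht. "
            "Ich habe Ihnen die Alternative bereits genannt. /je")
    start, end = 26, 37  # offsets from the gold annotation
    print(post[start:end])  # -> fährt nicht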
The organizers provide a baseline system and an evaluation method to lower the barrier to participation. For both the baseline system and the evaluation method, the organizers release executables and open-source code to enable easy integration into the participating systems:
- Baseline system + instructions: https://github.com/uhh-lt/GermEval2017-Baseline
- Evaluation method + instructions: https://github.com/muchafel/GermEval2017
Data, baseline system, evaluation method as well as the description of the tasks are distributed to the participants via the website https://sites.google.com/view/germeval2017-absa/.
Participating teams/participants may submit several runs.
Submissions consist of one TSV or XML file per subtask providing predictions for the test data, and a paper of 4 pages (excluding references) describing the chosen approach and analyzing the performance. Each participating team/participant should submit only one paper regardless of how many subtasks they participate in; per subtask, participants can add 1 page to the paper, up to a maximum of 7 pages. Papers should follow the GSCL 2017 style files. We expect authors to present summaries of their systems at the GSCL workshop and to participate in the discussions.
---- Important Dates ----
* March 2017: Release of Trial Data
* April 2017: Release of Training Data
* July 24, 2017: Release of Test Data
* August 14, 2017: Submission of System Runs
* August 21, 2017: Submission of System Description Papers
* September 1, 2017: Feedback on System Description Papers
* September 8, 2017: Final Submission of System Description Papers
* September 12, 2017: Workshop co-located with GSCL 2017
---- Organizing committee ----
* Chris Biemann, Language Technology, Uni Hamburg, biemann[at]informatik.uni-hamburg.de
* Eugen Ruppert, Language Technology, Uni Hamburg, ruppert[at]informatik.uni-hamburg.de
* Michael Wojatzki, Language Technology Lab, Uni Duisburg-Essen, michael.wojatzki[at]uni-due.de
* Torsten Zesch, Language Technology Lab, Uni Duisburg-Essen, torsten.zesch[at]uni-due.de
We encourage all interested teams to join the GermEval 2017 ABSA mailing list at https://groups.google.com/forum/#!forum/germeval2017-absa, which will be the primary source for updates on dates and test conditions. To contact the organizing committee, please post to this mailing list or send an e-mail to Michael Wojatzki for private communication.