ISCA - International Speech
Communication Association



ISCApad #171

Tuesday, September 04, 2012 by Chris Wellekens

7 Journals
7-1 Speech Communication: Special Issue on Processing Under-Resourced Languages

 

Correction: this is a special issue of Speech Communication, not of Signal Processing as previously announced.

Call for Papers

 Special Issue on Processing Under-Resourced Languages


 

The creation of language and acoustic resources for any given spoken language is typically a costly task. For example, a large amount of time and money is required to properly create annotated speech corpora for automatic speech recognition (ASR), domain-specific text corpora for language modeling (LM), etc. The development of speech technologies (ASR, text-to-speech) for high-resourced languages such as English, French or Mandarin is less constrained by this issue, and consequently high-performance commercial systems are already on the market. For under-resourced languages, by contrast, this issue is typically the main obstacle.

 

Given this, the scientific community's interest in porting, adapting, or creating language and acoustic resources, or even models, for low-resourced languages has been growing, and several adaptation algorithms and methods have been proposed and experimented with. Meanwhile, workshops and special sessions have been organized in this domain.

 

This special issue focuses on research and development of new tools based on speech technologies for less-resourced national languages, mainly those used in the following large geographical regions: Eastern Europe, South and Southeast Asia, West Asia, North Africa, Sub-Saharan Africa, South and Central America, and Oceania. The special issue is open to papers presenting the problems and peculiarities of the targeted languages as they apply to spoken language technologies, including automatic speech recognition, text-to-speech, speech-to-speech translation, and spoken dialogue systems in an internationalized context. When developing speech-based technologies, researchers face many new problems, from a lack of audio databases and linguistic resources (lexicons, grammars, text collections), to the inefficiency of existing methods for language and acoustic modeling, and limited infrastructure for the creation of relevant resources. They often have to deal with novel linguistic phenomena that are poorly studied from a speech technology perspective (for instance, clicks in southern African languages, tone in many languages of the world, language switching in multilingual systems, rich morphology, etc.).

 

Well-written papers on speech technologies for the targeted languages are encouraged, and papers describing original results (theoretical and/or experimental) obtained for under-resourced languages but also relevant to well-resourced languages are invited as well. Papers from any country and any authors may be accepted if they present new speech studies concerning the languages of interest to the special issue. Submissions from countries where issues related to under-resourced languages are a practical reality are strongly encouraged.

 


Important Dates:

Submission deadline:  1st August 2012

Notification of acceptance: 1st February 2013

Final manuscript due:  April 2013

Tentative publication date: Summer 2013


Editors

Etienne Barnard

North-West University, South Africa

Laurent Besacier

Laboratory of Informatics of Grenoble, France

Alexey Karpov

SPIIRAS, Saint-Petersburg, Russia

Tanja Schultz

University of Karlsruhe, Germany


 

Top

7-2 Special Issue of ACM Transactions on Speech and Language Processing on Multiword Expressions

Call for Papers

ACM Transactions on Speech and Language Processing

Special Issue on Multiword Expressions: from Theory to Practice and Use

multiword.sf.net/tslp2011si

Deadline for submissions: May 15th, 2012

Multiword expressions (MWEs) range over linguistic constructions like idioms (a frog in the throat, kill some time), fixed phrases (per se, by and large, rock'n roll), noun compounds (traffic light, cable car), compound verbs (draw a conclusion, go by [a name]), etc. While easily mastered by native speakers, their interpretation poses a major challenge for computational systems, due to their flexible and heterogeneous nature. Surprisingly enough, MWEs are not nearly as frequent in NLP resources (dictionaries, grammars) as they are in real-world text, where they have been reported to account for half of the entries in the lexicon of a speaker and over 70% of the terms in a domain. Thus, MWEs are a key issue and a current weakness for tasks like natural language parsing and generation, as well as real-life applications such as machine translation.

In spite of several proposals for MWE representation, ranging along the continuum from words-with-spaces to compositional approaches connecting lexicon and grammar, to date it remains unclear how MWEs should be represented in electronic dictionaries, thesauri and grammars. New methodologies that take into account the type of MWE and its properties are needed for efficiently handling manually and/or automatically acquired expressions in NLP systems. Moreover, we also need strategies to represent deep attributes and semantic properties for these multiword entries. While there is no unique definition or classification of MWEs, most researchers agree on some major classes such as named entities, collocations, multiword terminology and verbal expressions. These, though, are very heterogeneous in terms of syntactic and semantic properties, and should thus be treated differently by applications. Type-dependent analyses could shed some light on the best methodologies to integrate MWE knowledge in our analysis and generation systems.

Evaluation is also a crucial aspect for MWE research. Various evaluation techniques have been proposed, from manual inspection of top-n candidates to classic precision/recall measures. The use of tools and datasets freely available on the MWE community website (multiword.sf.net/PHITE.php?sitesig=FILES) is encouraged when evaluating MWE treatment. However, application-oriented techniques are needed to give a clear indication of whether the acquired MWEs are really useful. Research on the impact of MWE handling in applications such as parsing, generation, information extraction, machine translation and summarization can help to answer these questions.

We call for papers that present research on theoretical and practical aspects of the computational treatment of MWEs, specifically focusing on MWEs in applications such as machine translation, information retrieval and question answering. We also strongly encourage submissions on processing MWEs in the language of social media and micro-blogs.

The focus of the special issue thus includes, but is not limited to, the following topics:

* MWE treatment in applications such as the ones mentioned above;
* Lexical representation of MWEs in dictionaries and grammars;
* Corpus-based identification and extraction of MWEs;
* Application-oriented evaluation of MWE treatment;
* Type-dependent analysis of MWEs;
* Multilingual applications (e.g. machine translation, bilingual dictionaries);
* Parsing and generation of MWEs, especially processing of MWEs in the language of social media and micro-blogs;
* MWEs and user interaction;
* MWEs in linguistic theories like HPSG, LFG and minimalism and their contribution to applications;
* Relevance of research on first and second language acquisition of MWEs for applications;
* Crosslinguistic studies on MWEs.
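As a concrete illustration of the corpus-based identification and extraction listed above (our sketch, not part of the call), candidate MWEs are commonly ranked with lexical association measures. The snippet below scores adjacent word pairs in a toy corpus by pointwise mutual information (PMI), a standard baseline for MWE candidate extraction; the corpus and threshold are purely illustrative:

```python
import math
from collections import Counter

def pmi_bigrams(tokens, min_count=2):
    """Rank adjacent word pairs by pointwise mutual information.

    PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ); pairs that co-occur far
    more often than chance (high PMI) are MWE candidates such as fixed
    phrases or noun compounds.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni = len(tokens)
    n_bi = len(tokens) - 1
    scores = {}
    for (x, y), count in bigrams.items():
        if count < min_count:          # ignore rare, unreliable pairs
            continue
        p_xy = count / n_bi
        p_x = unigrams[x] / n_uni
        p_y = unigrams[y] / n_uni
        scores[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return sorted(scores.items(), key=lambda kv: -kv[1])

corpus = ("the traffic light turned red and the cable car stopped at "
          "the traffic light near the old cable car museum").split()
for pair, score in pmi_bigrams(corpus):
    print(pair, round(score, 2))
```

In this toy corpus the compounds "traffic light" and "cable car" outrank the frequent but uninformative pair "the traffic", since PMI penalizes pairs whose parts are individually common.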

 Submission Procedure

Authors should follow the ACM TSLP manuscript preparation guidelines described on the journal web site http://tslp.acm.org and submit an electronic copy of their complete manuscript through the journal manuscript submission site http://mc.manuscriptcentral.com/acm/tslp. Authors are required to specify that their submission is intended for this special issue by including on the first page of the manuscript and in the field 'Author's Cover Letter' the note 'Submitted for the special issue on Multiword Expressions'.

Schedule

Submission deadline: May 15th, 2012

Notification of acceptance: September 15th, 2012

Final manuscript due: November 31st, 2012

Program Committee

* Iñaki Alegria, University of the Basque Country (Spain)
* Dimitra Anastasiou, University of Bremen (Germany)
* Eleftherios Avramidis, DFKI GmbH (Germany)
* Timothy Baldwin, University of Melbourne (Australia)
* Francis Bond, Nanyang Technological University (Singapore)
* Aoife Cahill, ETS (USA)
* Helena Caseli, Federal University of Sao Carlos (Brazil)
* Yu Tracy Chen, DFKI GmbH (Germany)
* Paul Cook, University of Melbourne (Australia)
* Ann Copestake, University of Cambridge (UK)
* Béatrice Daille, Nantes University (France)
* Gaël Dias, University of Caen Basse-Normandie (France)
* Stefan Evert, University of Darmstadt (Germany)
* Roxana Girju, University of Illinois at Urbana-Champaign (USA)
* Chikara Hashimoto, National Institute of Information and Communications Technology (Japan)
* Kyo Kageura, University of Tokyo (Japan)
* Martin Kay, Stanford University and Saarland University (USA & Germany)
* Su Nam Kim, University of Melbourne (Australia)
* Dietrich Klakow, Saarland University (Germany)
* Philipp Koehn, University of Edinburgh (UK)
* Ioannis Korkontzelos, University of Manchester (UK)
* Brigitte Krenn, Austrian Research Institute for Artificial Intelligence (Austria)
* Evita Linardaki, Hellenic Open University (Greece)
* Takuya Matsuzaki, Tsujii Lab, University of Tokyo (Japan)
* Yusuke Miyao, Japan National Institute of Informatics (NII) (Japan)
* Preslav Nakov, Qatar Foundation (Qatar)
* Gertjan van Noord, University of Groningen (The Netherlands)
* Diarmuid Ó Séaghdha, University of Cambridge (UK)
* Jan Odijk, University of Utrecht (The Netherlands)
* Pavel Pecina, Charles University (Czech Republic)
* Scott Piao, Lancaster University (UK)
* Thierry Poibeau, CNRS and École Normale Supérieure (France)
* Maja Popovic, DFKI GmbH (Germany)
* Ivan Sag, Stanford University (USA)
* Agata Savary, Université François Rabelais Tours (France)
* Violeta Seretan, University of Geneva (Switzerland)
* Ekaterina Shutova, University of Cambridge (UK)
* Joaquim Ferreira da Silva, New University of Lisbon (Portugal)
* Lucia Specia, University of Wolverhampton (UK)
* Sara Stymne, Linköping University (Sweden)
* Stan Szpakowicz, University of Ottawa (Canada)
* Beata Trawinski, University of Vienna (Austria)
* Kyioko Uchiyama, National Institute of Informatics (Japan)
* Ruben Urizar, University of the Basque Country (Spain)
* Tony Veale, University College Dublin (Ireland)
* David Vilar, DFKI GmbH (Germany)
* Begoña Villada Moirón, RightNow (The Netherlands)
* Tom Wasow, Stanford University (USA)
* Shuly Wintner, University of Haifa (Israel)
* Yi Zhang, DFKI GmbH and Saarland University (Germany)

Guest Editors

* Valia Kordoni, DFKI GmbH and Saarland University (Germany)
* Carlos Ramisch, University of Grenoble (France) and Federal University of Rio Grande do Sul (Brazil)
* Aline Villavicencio, Federal University of Rio Grande do Sul (Brazil) and Massachusetts Institute of Technology (USA)

Contact

For any inquiries regarding the special issue, please send an email to mweguesteditor@gmail.com

Top

7-3 CfP Special Issue of EURASIP Journal on Audio, Speech, and Music Processing: Sparse Modeling for Speech and Audio Processing

Call for Papers

EURASIP Journal on Audio, Speech, and Music Processing Special Issue on Sparse Modeling for Speech and Audio Processing

Sparse modeling and compressive sensing are rapidly developing fields across a variety of signal processing and machine learning venues, focused on the problems of variable selection in high-dimensional datasets and signal reconstruction from few training examples. With the increasing amount of high-dimensional speech and audio data available, the need to efficiently represent and search through these data spaces is becoming of vital importance. The challenges arise from selecting highly predictive signal features and adaptively finding a dictionary which best represents the signal. Overcoming these challenges is likely to require efficient and effective algorithms, mainly focused on l1-regularized optimization, basis pursuit, Lasso sparse regression, missing-data problems, and various extensions.

Despite significant advances in these fields, a number of open issues remain when realizing sparse models in real-life applications, e.g. the stability and interpretability of sparse models, model selection, group/fused sparsity, and evaluation of the results. Furthermore, sparse modeling has ubiquitous applications in speech and audio processing, including dimensionality reduction, model regularization, speech/audio compression and reconstruction, acoustic/audio feature selection, acoustic modeling, speech recognition, blind source separation, and many others.

This special issue aims to gather a set of new algorithms and applications and to advance the state of the art in speech and audio processing. In light of the growing research activity in this area and its importance, we openly invite papers describing various aspects of sparse modeling and related techniques, as well as their successful applications. Submissions must not have been previously published and must have a specific connection to audio, speech, and music processing.
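To make the l1-regularized optimization mentioned above concrete, here is a minimal, self-contained sketch (ours, not part of the call) of the iterative soft-thresholding algorithm (ISTA) for the Lasso problem min_w ||Xw - y||^2/(2n) + alpha*||w||_1; the synthetic data, step-size choice and iteration count are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, alpha=0.1, n_iter=500):
    """Solve min_w ||Xw - y||^2 / (2n) + alpha * ||w||_1 via ISTA."""
    n, d = X.shape
    # Step size = 1 / Lipschitz constant of the smooth part's gradient.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n      # gradient of the squared-error term
        w = soft_threshold(w - step * grad, step * alpha)
    return w

# Synthetic sparse-recovery example: 3 active features out of 20.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [3.0, -2.0, 1.5]
y = X @ w_true + 0.01 * rng.standard_normal(100)
w_hat = lasso_ista(X, y, alpha=0.05)
print("recovered support:", np.nonzero(np.abs(w_hat) > 0.5)[0])
```

The l1 penalty drives most coefficients exactly to zero, which is why sparse regression doubles as the feature/dictionary selection mechanism the paragraph above describes.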
The topics of particular interest include, but are not limited to:

• Sparse representation and compressive sensing
• Sparse modeling and regression
• Sparse modeling for model regularization
• Sparse modeling for speech recognition
• Sparse modeling for language processing
• Sparse modeling for source separation
• Sparse modeling for music processing
• Deep learning for sparse models
• Practical applications of sparse modeling
• Machine learning algorithms, techniques and applications

Before submission, authors should carefully read the journal's Instructions for Authors, located at http://asmp.eurasipjournals.com/authors/instructions. Prospective authors should submit an electronic copy of their complete manuscript through the SpringerOpen submission system at http://asmp.eurasipjournals.com/manuscript, according to the following timetable:

Manuscript due: October 15, 2012 (extended from June 15, 2012)

First Round of Reviews: September 1, 2012

Publication Date: December 1, 2012

Guest editors:

Jen-Tzung Chien (jtchien@mail.ncku.edu.tw), National Cheng Kung University, Tainan, Taiwan

Bhuvana Ramabhadran (bhuvana@us.ibm.com), IBM T. J. Watson Research Center, Yorktown Heights, NY, USA

Tomoko Matsui (tmatsui@ism.ac.jp), The Institute of Statistical Mathematics, Tokyo, Japan

Top

7-4 Phonetica: Speech Production and Perception across the Segment-Prosody Divide: Data – Theory – Modelling

Call for Papers
Phonetica 2011;68:117–119
DOI: 10.1159/000334686
Speech Production and Perception across the Segment-Prosody Divide: Data – Theory – Modelling
The concept of sound segments has traditionally played a central role in the phonetic representation of words. It underlies the development of alphabetic writing systems, of phonetic transcription, and of phonemic theory. Other sound aspects, especially pitch, but also energy, voice quality and rhythm, have been conceptualized as being superposed on segments in a broader frame of syllables and utterances. The segment is associated with the short-time window of opening and closing movements of the vocal tract and, simultaneously, with the differentiation of lexical and propositional meaning, whereas prosodies are generally associated with long-time windows of pitch, energy and voice quality control, and predominantly with attitudinal and expressive utterance meaning, including the functions of attention seeking, intensity signalling, and syntagmatic phrasing. This differing substance-meaning duality in sound segments and prosodies accounts for the current dichotomous mainstream research paradigms of sounds and prosodies. The sound-prosody dichotomy has, however, been repeatedly called into question, among others in the Firthian School of Prosodic Analysis, following Firth's seminal 1948 article 'Sounds and Prosodies', for example in the study of such suprasegmental phenomena as vowel harmony or long articulatory components of, e.g., palatalization, velarization, nasalization and glottalization in the linguistic function of distinctively marking words and morphological structures. Moreover, the dichotomy has always been bridged in the analysis of lexical stress, where segmental aspects of vowel duration and vowel spectrum, and prosodic aspects of fundamental frequency and energy, have jointly been taken into account in a vast array of experimental investigations.
The reliance on linguistic form and phonetic substance in the analysis of sound segments and prosodies reflects the tenets of 20th-century structural linguistics, as it relegates the functional aspect of speech communication to a post hoc level. Such a dichotomous formal approach is a useful heuristic for coming to grips with the enormous complexity of speech, especially in the initial stages of the investigation of a language. Yet the formal manifestations, analytically separated as sounds and prosodies, are the joint expression of the manifold communicative functions in speech: semantic, information-structural, expressive and attitudinal. If these functions are taken as the superordinate control variable, the axiomatic formal dichotomy of sounds and prosodies fades away, because they interact, with varying weights, in the coding of specific communicative functions. This functional approach to phonetic detail in segment-prosody interaction was the empirical and theoretical theme of two recent plenary talks at the 17th ICPhS in Hong Kong: 'Does Phonetic Detail Guide Situation-Specific Speech Recognition?' by Sarah Hawkins and 'On the Interdependence of Sounds and Prosodies in Communicative Functions' by Klaus Kohler. They were preceded by papers
in the Journal of the Acoustical Society of America, the Journal of Phonetics, and Phonetica:

Niebuhr, O.: Coding of intonational meanings beyond F0: evidence from utterance-final /t/ aspiration in German. J. Acoust. Soc. Am. 124: 1252–1263 (2008).

Hawkins, S.: Roles and representations of systematic fine phonetic detail in speech understanding. J. Phonet. 31: 373–405 (2003).

Local, J.: Variable domains and variable relevance: interpreting phonetic exponents. J. Phonet. 31: 321–339 (2003).

Kohler, K.: Communicative functions integrate segments in prosodies and prosodies in segments. Phonetica 68: 26–56 (2011).

Kohler, K.; Niebuhr, O.: On the role of articulatory prosodies in German message decoding. Phonetica 68: 57–87 (2011).
On the one hand, these investigations showed systematic phonetic detail in talk-in-interaction, as well as acoustic effects of segments on pitch patterns and of pitch patterns on segments in the perceptual identification of semantic functions; on the other hand, they demonstrated the perceptual importance of long phonetic components of, e.g., palatalization that are not linked to a segmentable sound unit but are superimposed as an articulatory prosody on a wider stretch of speech.

We would like to make this sound-prosody relationship the theme of a special issue of Phonetica and raise the central question:

How are sounds and prosodies intertwined, mutually shaping each other, as a reflection of different communicative functions in speech interaction?
The papers we solicit are to take a renewed look, in greater breadth and detail, at this interweaving of the threads of sounds and prosodies in a tapestry of speech communication in a variety of languages, incorporating all forms of meaning: propositional, attitudinal and expressive. The guiding principles for submissions are as follows:

• Papers present single-language or comparative analyses of new data in a variety of languages that highlight the interdependence of short- and long-time windows of speech production and/or perception in relation to specific communicative functions; or they discuss aspects of the theory of segment-prosody interdependence based on language-specific, typological or universal relations between communicative function and phonetic substance; or they attempt to model segment-prosody interaction in these function-substance relations, for example in developing algorithms for contextually and situationally adequate high-quality speech synthesis.

• Data can be either experimental or from corpora, unscripted ones in particular, and experimentally collected items of speech need to be functionally and situationally anchored, which rules out the widespread metalinguistic sentence frame of the type 'Say X again.', commonly used in EMMA and EPG data acquisition.

• Potential topics may include:
– segment-prosody interdependence in talk-in-interaction,
– prosodic and segmental properties in the manifestation of speech functions, for example different types of emphasis, in production and perception,
– contribution of vowel spectrum to lexical stress perception,
– spectral shaping of segments, for example fricatives and plosive releases, in falling or rising f0 contours, and perceptual effects,
– articulatory prosodies in speech reduction, especially of function words, and their importance in speech decoding,
– creation of rhythmic flow by preferred segmental patterns, such as high versus low vowels, rather than the reverse, in flip-flop, sing-song, ping-pong, zig-zag, wishy-washy, or avoidance of phrase-internal obstruent breaks in sonorant stretches, as in thunder and lightning against the semantically obvious *lightning and thunder, or mum and dad, German Mama und Papa, Oma und Opa,
– signalling of tone and intonation in whispered speech,
– contribution of segments and prosody to the generation of high-quality speech synthesis and to (online) spoken word recognition.
Editorial Guidelines and Schedule

The total space available will be a double issue of the journal. We expect to publish approximately 12 contributions of 12 printed pages each on average. Submissions need to follow the Phonetica style sheet (cf. 'Instructions to Authors' in any recent issue and www.karger.com/electronic_submission) and should include Word and pdf files. The dates of the editing schedule are as follows:

By 28 January, 2012: Submission by e-mail attachment to kjk@ipds.uni-kiel.de of an 800-word abstract, giving title, author(s), affiliation(s), and the e-mail address of the main author.

29 February, 2012: Notification of authors whether the proposed papers have been recommended as potential contributions to the theme by the Editorial Team, and, if so, invitation to submit full versions for review.

By 31 May, 2012: Electronic submission of pdf files as e-mail attachments to kjk@ipds.uni-kiel.de, to be sent out for review.

31 July, 2012: Intimation of the final decision about acceptance for publication in the special issue, including reviewers' comments and suggestions for revision. Due to the tight publication schedule, only papers requiring minor or moderate revision can be included in the special issue. If major revision is necessary, authors will be encouraged to resubmit for publication in an ordinary issue of Phonetica.

By 20 August, 2012: Submission of final versions in Word and pdf by e-mail attachment to kjk@ipds.uni-kiel.de.

End of 2012: Publication.

Top

7-5ACM Transactions on Speech and Language Processing/Special Issue on Multiword Expressions: from Theory to Practice and Use

ACM Transactions on Speech and Language Processing

  Special Issue on Multiword Expressions:

           from Theory to Practice and Use

                 multiword.sf.net/tslp2011si

Deadline for Submissions: May, 15th, 2012
--------------------------------------------------------------------------

Call for Papers

Multiword expressions (MWEs) range over linguistic constructions like
idioms (a frog in the throat, kill some time), fixed phrases (per se,
by and large, rock'n roll), noun compounds (traffic light, cable car),
compound verbs (draw a conclusion, go by [a name]), etc. While easily
mastered by native speakers, their interpretation poses a major challenge
for computational systems, due to their flexible and heterogeneous nature.
Surprisingly enough, MWEs are not nearly as frequent in NLP resources
(dictionaries, grammars) as they are in real-word text, where they have
been reported to account for half of the entries in the lexicon of a speaker
and over 70% of the terms in a domain. Thus, MWEs are a key issue and
a current weakness for tasks like natural language parsing and generation,
as well as real-life applications such as machine translation.

In spite of several proposals for MWE representation ranging along the
continuum from words-with-spaces to compositional approaches connecting
lexicon and grammar, to date, it remains unclear how MWEs should be
represented in electronic dictionaries, thesauri and grammars. New
methodologies that take into account the type of MWE and its properties
are needed for efficiently handling manually and/or automatically acquired
expressions in NLP systems. Moreover, we also need strategies to represent
deep attributes and semantic properties for these multiword entries. While
there is no unique definition or classification of MWEs, most researchers
agree on some major classes such as named entities, collocations, multiword
terminology and verbal expressions. These, though, are very heterogeneous
in terms of syntactic and semantic properties, and should thus be treated
differently by applications. Type-dependent analyses could shed some light
on the best methodologies to integrate MWE knowledge in our analysis and
generation systems.

Evaluation is also a crucial aspect for MWE research. Various evaluation
techniques have been proposed, from manual inspection of top-n candidates
to classic precision/recall measures. The use of tools and datasets freely
available on the MWE community website (multiword.sf.net/PHITE.php?sitesig=FILES)
is encouraged when evaluating MWE treatment. However, application-oriented
techniques are needed to give a clear indication of whether the acquired MWEs
are really useful. Research on the impact of MWE handling in applications such
as parsing, generation, information extraction, machine translation, summarization
can help to answer these questions.

We call for papers that present research on theoretical and practical aspects
of the computational treatment of MWEs, specifically focusing on MWEs in
applications such as machine translation, information retrieval and question
answering. We also strongly encourage submissions on processing MWEs in
the language of social media and micro-blogs. The focus of the special issue,
thus, includes, but is not limited to the following topics:

* MWE treatment in applications such as the ones mentioned above;
* Lexical representation of MWEs in dictionaries and grammars;
* Corpus-based identification and extraction of MWEs;
* Application-oriented evaluation of MWE treatment;
* Type-dependent analysis of MWEs;
* Multilingual applications (e.g. machine translation, bilingual dictionaries);
* Parsing and generation of MWEs, especially, processing of MWEs in the
 language of social media and micro-blogs;
* MWEs and user interaction;
* MWEs in linguistic theories like HPSG, LFG and minimalism and their
 contribution to applications;
* Relevance of research on first and second language acquisition of MWEs for
 applications;
* Crosslinguistic studies on MWEs.

Submission Procedure

Authors should follow the ACM TSLP manuscript preparation guidelines
described on the journal web site http://tslp.acm.org and submit an
electronic copy of their complete manuscript through the journal manuscript
submission site http://mc.manuscriptcentral.com/acm/tslp. Authors are required
to specify that their submission is intended for this special issue by including
on the first page of the manuscript and in the field 'Author's Cover Letter' the
note 'Submitted for the special issue on Multiword Expressions'.

Schedule

Submission deadline: May, 15th, 2012
Notification of acceptance: September, 15th , 2012
Final manuscript due: November, 31st, 2012

Program Committee

*      Iñaki Alegria, University of the Basque Country (Spain)
*      Dimitra Anastasiou, University of Bremen (Germany)
*      Eleftherios Avramidis, DFKI GmbH (Germany)
*      Timothy Baldwin, University of Melbourne (Australia)
*      Francis Bond, Nanyang Technological University  (Singapore)
*      Aoife Cahill, ETS (USA)
*      Helena Caseli, Federal University of Sao Carlos (Brazil)
*      Yu Tracy Chen, DFKI GmbH (Germany)
*      Paul Cook, University of Melbourne (Australia)
*      Ann Copestake, University of Cambridge (UK)
*      Béatrice Daille, Nantes University (France)
*      Gaël Dias, University of Caen Basse-Normandie (France)
*      Stefan Evert, University of Darmstadt (Germany)
*      Roxana Girju, University of Illinois at Urbana-Champaign (USA)
*      Chikara Hashimoto, National Institute of Information and Communications Technology (Japan)
*      Kyo Kageura, University of Tokyo (Japan)
*      Martin Kay, Stanford University and Saarland University (USA & Germany)
*      Su Nam Kim, University of Melbourne (Australia)
*      Dietrich Klakow, Saarland University (Germany)
*      Philipp Koehn, University of Edinburgh (UK)
*      Ioannis Korkontzelos, University of Manchester (UK)
*      Brigitte Krenn, Austrian Research Institute for Artificial Intelligence (Austria)
*      Evita Linardaki, Hellenic Open University (Greece)
*      Takuya Matsuzaki, Tsujii Lab, University of Tokyo (Japan)
*      Yusuke Miyao, Japan National Institute of Informatics (NII) (Japan)
*      Preslav Nakov , Qatar Foundation (Qatar)
*      Gertjan van Noord, University of Groningen (The Netherlands)
*      Diarmuid Ó Séaghdha, University of Cambridge (UK)
*      Jan Odijk, University of Utrecht (The Netherlands)
*      Pavel Pecina, Charles University (Czech Republic)
*      Scott Piao, Lancaster University (UK)
*      Thierry Poibeau, CNRS and École Normale Supérieure (France)
*      Maja Popovic,  DFKI GmbH  (Germany)
*      Ivan Sag, Stanford University (USA)
*      Agata Savary, Université François Rabelais Tours (France)
*      Violeta Seretan, University of Geneva (Switzerland)
*      Ekaterina Shutova, University of Cambridge (UK)
*      Joaquim Ferreira da Silva, New University of Lisbon (Portugal)
*      Lucia Specia, University of Wolverhampton (UK)
*      Sara Stymne, Linköping University (Sweden)
*      Stan Szpakowicz, University of Ottawa (Canada)
*      Beata Trawinski, University of Vienna (Austria)
*      Kyioko Uchiyama, National Institute of Informatics (Japan)
*      Ruben Urizar, University of the Basque Country (Spain)
*      Tony Veale, University College Dublin (Ireland)
*      David Vilar,  DFKI GmbH  (Germany)
*      Begoña Villada Moirón, RightNow  (The Netherlands)
*      Tom Wasow, Stanford University (USA)
*      Shuly Wintner,  University of Haifa (Israel)
*      Yi Zhang, DFKI GmbH and Saarland University (Germany)


Guest Editors

*      Valia Kordoni, DFKI GmbH and Saarland University (Germany)
*      Carlos Ramisch, University of Grenoble (France) and Federal University of Rio Grande do Sul (Brazil)
*      Aline Villavicencio, Federal University of Rio Grande do Sul (Brazil) and Massachusetts Institute of Technology (USA)


Contact

For any inquiries regarding the special issue, please send an email
to mweguesteditor@gmail.com


7-6CfP CSL on Information Extraction and Retrieval from Spoken Documents

Call for Papers
Special Issue of
Computer Speech and Language
on
Information Extraction and Retrieval from Spoken Documents

Advances in computing power, together with increased storage space and connection bandwidth, have made extremely large amounts of multimedia data available. Beyond text- and metadata-based search, content-based retrieval is emerging as a means of providing easy access to multimedia, especially to data containing speech. Moreover, as research in speech processing has matured, the research focus has shifted from speech recognition itself to its applications, including retrieval of spoken content.

Over the last decade, significant progress has resulted from a combination of advances in automatic speech recognition (ASR) and information retrieval (IR). In addition, tighter coupling of the component technologies, together with novel work on query-by-example, has been crucial for robustness to recognition errors in speech retrieval. The growing number of heterogeneous sources makes open-vocabulary search necessary; this in turn requires the detection and recovery of out-of-vocabulary (OOV) terms as a first step toward discovering their relations to known terms and topics in information extraction.
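As a toy illustration of the OOV problem mentioned above, the sketch below flags query terms that fall outside a recognizer's vocabulary, the usual first step before any subword or phonetic fallback search; the vocabulary, query, and function name are invented for the example.

```python
# Hypothetical sketch: detecting out-of-vocabulary (OOV) query terms
# against an ASR system's recognition vocabulary. Terms flagged here
# would need a subword or phonetic matching fallback.

def find_oov_terms(query, vocabulary):
    """Return the query terms absent from the recognizer's vocabulary."""
    return [term for term in query.lower().split() if term not in vocabulary]

# Illustrative (tiny) recognition vocabulary and query.
asr_vocabulary = {"find", "the", "meeting", "about", "budget"}
query = "find the meeting about Fukushima"

print(find_oov_terms(query, asr_vocabulary))  # ['fukushima']
```

Real systems would of course work with vocabularies of tens of thousands of entries and normalize text far more carefully; the point is only that OOV detection reduces to a membership test before any recovery step.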

The main objective of this Special Issue is to bring together current advances in the field and provide a glimpse of the future of the field. Topics of interest include, but are not limited to:
        - Novel techniques for coupling speech recognition and information retrieval
        - Multilingual/cross-lingual retrieval
        - Query-by-example techniques
        - Relevance feedback
        - Large heterogeneous archives
        - Multimedia archives
        - Voice queries
        - Audio indexing
        - Spoken term discovery
        - Spoken term detection
        - Open vocabulary search
        - OOV detection and recovery
        - Phonetic approaches
        - Metadata
        - Information extraction

Authors should follow the Elsevier Computer Speech and Language manuscript format described at the journal site. Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at http://ees.elsevier.com/csl/ according to the following timetable:
Extended Submission Deadline:          August 1, 2012
First Round of Reviews:                November 15, 2012
Final Version of Manuscripts:        January 15, 2013
Target Publication Date:                May 1, 2013
During submission, authors must select 'Special Issue: Info Extraction & Retrieval'.

Guest Editors
Murat Saraçlar, Boğaziçi University, Turkey, murat.saraclar@boun.edu.tr
Bhuvana Ramabhadran, IBM, USA, bhuvana@us.ibm.com
Ciprian Chelba, Google, USA, ciprianchelba@google.com


7-7CfP Revue TAL

FIRST CALL FOR CONTRIBUTIONS

NOISE IN THE SIGNAL: HANDLING ERRORS IN NATURAL LANGUAGE PROCESSING

A SPECIAL ISSUE OF THE JOURNAL 'TRAITEMENT AUTOMATIQUE DES LANGUES' (TAL)

The language that natural language processing applications must handle
bears little resemblance to the perfectly grammatical examples found
in grammar books. In everyday use, the utterances to be processed come
in imperfect form: typed texts contain typing errors as well as
spelling and grammar mistakes; spoken utterances often correspond to
incomplete sentences and contain disfluencies; the output of OCR
systems contains many character confusions, and that of speech
recognition systems contains inaccurate transcriptions of what was
actually said.

Noise is therefore inherent in language data, and ignoring this
reality can only harm the quality of our processing systems. For some
applications, the challenge is to develop mechanisms that are robust
to these errors. For example, a dialogue system may use confidence
measures on speech recognition hypotheses to decide whether to ask
the user to repeat. Other applications require automatic
error-correction techniques; an OCR system, for instance, may
post-process texts with contextual correction models to validate the
spelling of words.
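The call above mentions dialogue systems that use confidence measures on recognition hypotheses to decide whether to ask the user to repeat. A minimal sketch of such a threshold-based policy, with entirely illustrative thresholds and function names:

```python
# Hypothetical sketch of a confidence-driven dialogue policy: the system
# inspects the ASR confidence of its best hypothesis and decides whether
# to accept it, ask for confirmation, or ask the user to repeat.
# Thresholds are illustrative, not taken from any published system.

def dialogue_action(hypothesis, confidence,
                    accept_above=0.8, confirm_above=0.5):
    if confidence >= accept_above:
        return ("accept", hypothesis)
    if confidence >= confirm_above:
        return ("confirm", f"Did you say: {hypothesis}?")
    return ("repeat", "Sorry, could you repeat that?")

print(dialogue_action("book a table for two", 0.92))  # accepted outright
print(dialogue_action("book a table for two", 0.35))  # asks to repeat
```

Deployed systems typically learn such decision boundaries rather than hand-tune them, but the underlying trade-off between false accepts and needless re-prompts is the same.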

This special issue aims to bring together contributions on error
handling in language processing. Many subfields of NLP need to take
into account the noise and errors in the linguistic signals they deal
with, yet researchers from these various communities rarely have the
opportunity to compare their methods and results. Our ambition is to
put work from these different areas into perspective so as to
encourage the cross-fertilization of ideas.

For this special issue, we therefore consider relevant any work on
the automatic processing of noisy data. The most developed subfields
are probably spelling correction and, to a lesser extent, grammar
correction; yet neither of these problems is fully solved, and the
situation is even less satisfactory for deeper errors, concerning for
example style or discourse organization. Robust processing, which
aims to extract as much useful information as possible from
potentially erroneous input, whether written or spoken, will also be
welcome; more generally, studies of error-repair strategies, for
example in dialogue systems and similar applications, are also
relevant to this issue.

We therefore invite contributions on any aspect of error handling in
NLP, including in particular (non-exclusive list):
* automatic spelling and grammar correction
* semantic and logical errors
* correction of errors in style or discourse organization
* correction of 'artificial' errors (OCR, speech recognition, etc.)
* automatic correction of search-engine queries
* acquisition, annotation and analysis of errors in real texts
* error corpora
* error handling in controlled languages
* errors in language learning
* performance errors
* normalization of non-standard writing
* robust NLP
* processing of disfluent speech
* error handling in speech recognition
* learning from noisy data
* measures of error severity
* confidence measures
* error mining and analysis
* self-assessment and error diagnosis

GUEST EDITORS
- Robert Dale (Macquarie University, Australia)
- François Yvon (LIMSI/CNRS and Univ. Paris Sud, France)

SCIENTIFIC COMMITTEE
(TBA)

IMPORTANT DATES
- submission deadline: 15 October 2012
- first notification to authors: 15 December 2012
- deadline for revised versions: 1 February 2013
- final decisions: 15 April 2013
- final versions: 15 June 2013
- publication: summer 2013

THE JOURNAL

For 40 years, TAL (Traitement Automatique des Langues) has been an
international journal published by ATALA (Association pour le
Traitement Automatique des Langues) with the support of the CNRS. In
recent years it has become an online journal, with paper versions
available on demand. This does not affect the reviewing and selection
process in any way.

PRACTICAL INFORMATION

Articles (about 25 pages, PDF format) must be submitted via the
platform http://tal-53-3.sciencesconf.org/ Style sheets are available
on the journal's website (http://www.atala.org/-Revue-TAL). The
journal only publishes original contributions, in French or in
English.


7-8IEEE Journal of Selected Topics in Signal Processing (JSTSP) Special Issue on 'Advances in Spoken Dialogue Systems and Mobile Interfaces: Theory and Applications'
Call For Papers: IEEE Journal of Selected Topics in Signal Processing (JSTSP) Special Issue on 'Advances in Spoken Dialogue Systems and Mobile Interfaces: Theory and Applications'
http://www.signalprocessingsociety.org/uploads/special_issues_deadlines/spoken_dialogue.pdf

Recently, there has been an array of advances in both the theory and practice of spoken dialogue systems, especially on mobile devices. On the theory side, foundational models and algorithms (e.g., Partially Observable Markov Decision Processes (POMDPs), reinforcement learning, and Gaussian process models) have advanced the state of the art on a number of fronts. For example, techniques have been presented that improve system robustness, enabling systems to achieve high performance even when faced with recognition inaccuracies. Other methods have been proposed for learning from interactions, to improve performance automatically. Still other methods have shown how systems can make use of speech input and output incrementally and in real time, raising levels of naturalness and responsiveness.

On the applications side, interesting new results on spoken dialogue systems are becoming available from both research settings and deployments 'in the wild', for example on deployed services such as Apple's Siri, Google's voice actions, Bing voice search, Nuance's Dragon Go!, and Vlingo. Speech input is now commonplace on smartphones and well established as a convenient alternative to keyboard input for tasks such as control of phone functionality, dictation of messages, and web search. Recently, intelligent personal assistants have begun to appear, both as applications and as features of the operating system. Many of these new assistants are much more than a straightforward keyboard replacement: they are first-class multi-modal dialogue systems that support sustained interactions, using spoken language, over multiple turns.
New system architectures and engineering algorithms have also been investigated in research labs, leading to more forward-looking spoken dialogue systems. This special issue seeks to draw together advances in spoken dialogue systems from both research and industry. Submissions covering any aspect of spoken dialogue systems are welcome. Specific (but not exhaustive) topics of interest include all of the following in relation to spoken dialogue systems and mobile interfaces:
- theoretical foundations of spoken dialogue system design, learning, evaluation, and simulation
- dialogue tracking, including explicit representations of uncertainty in dialogue systems, such as Bayesian networks; domain representation and detection
- dialogue control, including reinforcement learning, (PO)MDPs, decision theory, utility functions, and personalization for dialogue systems
- foundational technologies for dialogue systems, including acoustic models, language models, language understanding, text-to-speech, and language generation; incremental approaches to input and output; usage of affect
- applications, settings and practical evaluations, such as voice search, text message dictation, multi-modal interfaces, and usage while driving

Papers must be submitted online. The bulk of the issue will be original research papers, and priority will be given to papers with high novelty and originality. Tutorial/overview/survey papers are also welcome, although space is available for only a limited number of papers of this type; they will be evaluated on the basis of overall impact.
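As a toy illustration of dialogue tracking with an explicit representation of uncertainty, one of the topics listed above, the sketch below performs a single discrete Bayesian belief update over hypothetical user goals; the goals and probabilities are invented for the example.

```python
# Toy discrete Bayesian dialogue-state update: maintain a probability
# distribution (belief) over user goals and update it with the
# observation likelihood of a noisy ASR result.

def update_belief(belief, likelihood):
    """One Bayes update: posterior is proportional to prior x likelihood."""
    posterior = {goal: belief[goal] * likelihood.get(goal, 1e-9)
                 for goal in belief}
    z = sum(posterior.values())          # normalization constant
    return {goal: p / z for goal, p in posterior.items()}

belief = {"restaurant": 0.5, "hotel": 0.5}   # uniform prior over goals
obs = {"restaurant": 0.7, "hotel": 0.2}      # P(ASR observation | goal)

belief = update_belief(belief, obs)
print(belief)  # probability mass shifts toward "restaurant"
```

POMDP-based systems generalize this idea to structured state spaces and couple it with learned control policies, but the belief update itself is the same product-and-normalize step.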
- Manuscript submission: http://mc.manuscriptcentral.com/jstsp-ieee
- Information for authors: http://www.signalprocessingsociety.org/publications/periodicals/jstsp/jstsp-author-info/

Dates
- Extended submission date of papers: July 22, 2012
- First review: September 15, 2012
- Revised submission: October 10, 2012
- Second review: November 10, 2012
- Submission of final material: November 20, 2012

Guest editors:
- Kai Yu, co-lead guest editor (Shanghai Jiao Tong University)
- Jason Williams, co-lead guest editor (Microsoft Research)
- Brahim Chaib-draa (Laval University)
- Oliver Lemon (Heriot-Watt University)
- Roberto Pieraccini (ICSI)
- Olivier Pietquin (SUPELEC)
- Pascal Poupart (University of Waterloo)
- Steve Young (University of Cambridge)

7-9Journal of Speech Sciences (JoSS)
Call for Papers
Journal of Speech Sciences (JoSS)
ISSN: 2236-9740
<http://journalofspeechsciences.org/>
indexed in Linguistics Abstracts and the Directory of Open Access Journals
 
Volume 2, number 2 (special issue)
 
This is the CFP for the fourth issue of the Journal of Speech Sciences (JoSS). The JoSS covers experimental research on scientific aspects of speech, language, and linguistic communication processes. Coverage also includes articles on pathological topics and articles of an interdisciplinary nature, provided that experimental and linguistic principles underlie the work reported. Experimental approaches are emphasized in order to stimulate the development of new methodologies, new annotated corpora, and new techniques aimed at fully testing current theories of speech production and perception, as well as phonetic and phonological theories and their interfaces.
For this issue, the journal team will receive original, previously unpublished contributions on corpus building for experimental prosody research and related themes. The purpose of this special issue is to present new architectures, challenges, and databases related to experimental prosody, contributing to the exchange of data, the elaboration of parallel corpora, and the proposal of common environments for cross-linguistic research.
Contributions should be submitted through the journal website (www.journalofspeechsciences.org) by August 15th, 2012. The primary language of the journal is English. Contributions in Portuguese, Spanish (Castilian), and French are also accepted, provided a one-page (500-600 word) abstract in English is included. The goal of this policy is to ensure wide dissemination of quality research written in these three Romance languages. Contributions will be reviewed by at least two independent reviewers, though the final decision on publication rests with the two editors. For preparing the manuscript, please follow the instructions on the JoSS webpage. If accepted, authors must use the template provided on the website to prepare the paper for publication.
Important Dates
Submission deadline:                August 15th, 2012*
Notification of acceptance:       September, 2012
Final manuscript due:               November, 2012
Publication date:                      December, 2012
 
* Papers arriving after that date will follow the schedule for the next issue.
 
About the JoSS
The Journal of Speech Sciences (JoSS) is an open access journal which follows the principles of the Directory of Open Access Journals (DOAJ), meaning that its readers can freely read, download, copy, distribute, print, search, or link to the full texts of any article electronically published in the journal. It is accessible at <http://www.journalofspeechsciences.org>.
The JoSS covers experimental research on scientific aspects of speech, language, and linguistic communication processes. The JoSS is supported by the initiative of the Luso-Brazilian Association of Speech Sciences (LBASS), <http://www.lbass.org>. Founded on 16 February 2007, the LBASS aims at promoting, stimulating, and disseminating research and teaching in the speech sciences in Brazil and Portugal, as well as establishing a channel with sister associations abroad.
 
Editors
Plinio A. Barbosa (Speech Prosody Studies Group/State University of Campinas, Brazil)
Sandra Madureira (LIACC/Catholic University of São Paulo, Brazil)
 
E-mail: {pabarbosa, smadureira}@journalofspeechsciences.org

7-10CfP Revue TAL DU BRUIT DANS LE SIGNAL : GESTION DES ERREURS EN TRAITEMENT AUTOMATIQUE DES LANGUES

SECOND CALL FOR CONTRIBUTIONS

NOISE IN THE SIGNAL: HANDLING ERRORS IN NATURAL LANGUAGE PROCESSING

A SPECIAL ISSUE OF THE JOURNAL 'TRAITEMENT AUTOMATIQUE DES LANGUES' (TAL)

The text of this second call (scope, topics, guest editors, important dates, and submission instructions) is identical to the first call in section 7-7 above.




© Copyright 2024 - ISCA International Speech Communication Association - All rights reserved.
