ISCA - International Speech Communication Association



ISCApad #221

Friday, November 11, 2016 by Chris Wellekens

3 Events
3-1 ISCA Events
3-1-1(2017-08-20) Interspeech 2017, Stockholm, Sweden

                                  INTERSPEECH 2017

                                   Situated interaction

                                                                   20-24 August, 2017

                                           Stockholm, Sweden

                                 http://www.interspeech2017.org








Welcome

INTERSPEECH 2017 will be held August 20–24, 2017 in Stockholm, Sweden on the campus of Stockholm University!

Theme

The theme of INTERSPEECH 2017 is Situated interaction.
Face-to-face interaction is the primary use of speech, and arguably also the richest, most effective and most natural kind of speech communication. The situation, context and human behaviors are intrinsic parts that continuously shape and form the interaction. In attempting to understand this situated interaction, we face a fruitful and real challenge for the speech communication community: To investigate what kinds of situational awareness, social sensibilities and conversational abilities we should endow machines with for them to engage in conversations with humans on human terms.

INTERSPEECH 2017 emphasizes an interdisciplinary approach covering all aspects of speech science and technology, spanning basic theories to applications. In addition to regular oral and poster sessions, the conference will also feature plenary talks by internationally renowned experts, tutorials, special sessions, show & tell sessions, and exhibits. A number of satellite events will take place immediately before and after the conference. Details will be published on the INTERSPEECH website, interspeech2017.org.

Organizers

INTERSPEECH 2017 is jointly organized by:

  • Department of Linguistics, Stockholm University

  • Department of Speech, Music and Hearing, KTH Royal Institute of Technology

  • Division for Speech and Language Pathology, Karolinska Institutet

  • PCO: Akademikonferens

  • with support from the entire Swedish speech community!

  • General chair: Francisco Lacerda (Stockholm University)

  • General co-chair: David House (KTH Royal Institute of Technology)

  • Technical program chairs: Mattias Heldner (Stockholm University), Joakim

 


3-1-2(2017-08-25) 7th ISCA (International Speech Communication Association) Workshop on Speech and Language Technology in Education (SLaTE), Stockholm, Sweden

The Seventh ISCA (International Speech Communication Association) Workshop on Speech and Language Technology in Education (SLaTE) will be held in Stockholm, Sweden.
See http://www.slate2017.org/ for details.


3-1-3(2018-09-02) Interspeech 2018, Hyderabad, India

                                                           INTERSPEECH 2018

               Speech Research for Emerging Markets in Multilingual Societies

                                                2-6 September 2018

                                                    Hyderabad, India

                                           http://www.interspeech2018.org

 

Gen. Chair: B. Yegnanarayana


3-1-4INTERSPEECH 2017, Stockholm, Sweden: CALL FOR PAPERS

INTERSPEECH 2017 CALL FOR PAPERS

 

Apologies for cross-postings

 

The 18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017) will be held August 20–24, 2017 on the campus of Stockholm University, Stockholm, Sweden. INTERSPEECH is the world’s largest and most comprehensive conference on the science and technology of spoken language processing. INTERSPEECH emphasizes an interdisciplinary approach covering all aspects of speech science and technology, spanning basic theories to clinical and technological applications.

 

INTERSPEECH 2017 will be organized around the theme SITUATED INTERACTION. Face-to-face interaction is the primary use of speech, and arguably also the richest, most effective and most prevalent kind of speech communication. The situation, context and human behaviors are intrinsic parts that continuously shape and form the interaction. In attempting to understand this situated interaction, we face a fruitful and real challenge for the speech communication community: To investigate what kinds of situational awareness, social sensibilities and conversational abilities we should endow machines with for them to engage in conversations with humans on human terms. Contributions to all other aspects of speech science and technology are also welcome.

 

In addition to regular oral and poster sessions, the conference will also feature plenary talks by internationally renowned experts, tutorials, special sessions, show & tell sessions, and exhibits. A number of satellite events will take place immediately before and after the conference. INTERSPEECH papers receive digital object identifiers (DOIs) and are indexed in ISI, Engineering Index, Scopus, and Google Scholar.

 

The Calls for INTERSPEECH 2017 papers, special sessions & challenges, tutorials, and satellite workshops are now open. Please follow the details of these and other news at the conference website www.interspeech2017.org.

 

IMPORTANT DATES

 

Thursday, 1 December 2016 Satellite Workshops proposals deadline

Thursday, 8 December 2016 Special Sessions and Challenges proposals deadline

Friday, 16 December 2016 Notification of pre-selection of Special Sessions

Wednesday, 1 February 2017 Tutorial proposals submission deadline

Wednesday, 1 March 2017 Tutorial notification of acceptance

Tuesday, 14 March 2017 Paper submission deadline

Tuesday, 21 March 2017 Final paper submission PDF upload

Monday, 22 May 2017 Paper notification of acceptance

Monday, 5 June 2017 Camera-ready paper due

Wednesday, 21 June 2017 Early registration deadline

20-24 August 2017 Conference in Stockholm, Sweden

 

CALL FOR PAPERS

 

Prospective authors are invited to submit full-length original papers in any related area, including but not limited to:

1 Speech Perception, Production and Acquisition

2 Phonetics, Phonology, and Prosody

3 Analysis of Paralinguistics in Speech and Language

4 Speaker and Language Identification

5 Analysis of Speech and Audio Signals

6 Speech Coding and Enhancement

7 Speech Synthesis and Spoken Language Generation

8 Speech Recognition: Signal Processing, Acoustic Modeling, Robustness, Adaptation

9 Speech Recognition: Architecture, Search, and Linguistic Components

10 Speech Recognition: Technologies and Systems for New Applications

11 Spoken Dialog Systems and Conversational Analysis

12 Spoken Language Processing: Translation, Information Retrieval, Summarization, Resources and Evaluation

 

Papers for the INTERSPEECH 2017 proceedings should be up to 4 pages of text, plus one page (maximum) for references only. Paper submissions must conform to the format defined in the INTERSPEECH 2017 Author’s Kit (available now). Paper submission will open in late January 2017, and the Paper submission deadline is Tuesday, 14 March 2017. See http://interspeech2017.org/papers for further details.

 

CALL FOR TUTORIALS

 

We encourage proposals for tutorials addressing introductory topics or advanced topics in an introductory style, and tutorials targeting experienced researchers who want to dig deeper into a given new topic. Tutorials may introduce an emerging area of speech-related research, or present an overview of an established area of research. Tutorials will be held on Sunday, 20 August 2017. Brief proposals should be submitted by Wednesday, 1 February 2017. See http://interspeech2017.org/tutorials for further details.

 

CALL FOR SPECIAL SESSIONS & CHALLENGES

 

We encourage proposals for special sessions & challenges, covering interdisciplinary topics and/or important new emerging areas of interest related to the main conference topics. Submissions related to the special focus of the conference, Situated Interaction, are particularly welcome. Apart from supporting a particular theme, special sessions may also have a format that is different from a regular session. Brief proposals should be submitted by Thursday, 8 December 2016. See http://interspeech2017.org/special-sessions for further details.

 

CALL FOR SATELLITE WORKSHOPS

We encourage proposals for Satellite Workshops, to be held in proximity to the main conference. The Organizing Committee will work to facilitate the organization of such satellite workshops, to stimulate discussion in research areas related to speech and language, at locations in Europe, and around the same time as INTERSPEECH 2017. We are particularly looking forward to proposals from neighboring countries. The (extended) deadline for Satellite Workshops proposals is Thursday, 1 December 2016. See http://interspeech2017.org/satellites for further details.

 

 

We look forward to welcoming you in Stockholm next summer!

 

The Organizing Committee of INTERSPEECH 2017


3-1-5Interspeech 2019, Graz, Austria

Interspeech 2019 will be held in Graz, Austria.


3-2 ISCA Supported Events
3-2-1Forthcoming ISCA Supported Events

Forthcoming ISCA Supported Events

to be updated


3-3 Other Events
3-3-1(2016-11-12) CfP 18th International Conference on Multimodal Interaction (ICMI 2016), Tokyo, Japan

ICMI 2016 call for Long and Short Papers

 

ICMI 2016, Tokyo, Japan (November 12-16, 2016)

http://icmi.acm.org/2016/

 

The 18th International Conference on Multimodal Interaction (ICMI 2016) will be held in Tokyo, Japan.  ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.

 

This year, ICMI welcomes contributions on machine learning for multimodal interaction as a special topic of interest. ICMI 2016 will feature a single-track main conference which includes keynote speakers, technical full and short papers (with oral and poster presentations), demonstrations, exhibits and doctoral spotlight papers. The conference will also feature workshops and grand challenges. The proceedings of ICMI 2016 will be published by ACM as part of their series of International Conference Proceedings and in the ACM Digital Library. Topics of interest include but are not limited to:

 

- Affective Computing and interaction

- Cognitive modeling and multimodal interaction

- Gesture, touch and haptics

- Healthcare, assistive technologies

- Human communication dynamics

- Human-robot/agent multimodal interaction

- Interaction with smart environment

- Machine learning for multimodal interaction

- Mobile multimodal systems

- Multimodal behavior generation

- Multimodal datasets and validation

- Multimodal dialogue modeling

- Multimodal fusion and representation

- Multimodal interactive applications

- Speech behaviors in social interaction

- System components and multimodal platforms

- Visual behaviors in social interaction

- Virtual/augmented reality and multimodal interaction

 

Important dates

Long and short paper submission: May 6th, 2016

Reviews available for rebuttal: July 21st, 2016

Paper notification: August 24th, 2016

Main Conference: November 13-15, 2016


3-3-2(2016-11-23) ALBAYZIN 2016 SEARCH ON SPEECH EVALUATION, Lisbon, Portugal


======================================================================
ALBAYZIN 2016 SEARCH ON SPEECH EVALUATION

The Spanish Thematic Network on Speech Technology (RTTH) and the ISCA Special Interest Group on Iberian Languages (SIG-IL) are pleased to announce the ALBAYZIN 2016 Search on Speech Evaluation, which will be carried out as part of Iberspeech 2016, a biennial event gathering Spanish researchers in speech technology. This year's event will take place in Lisbon (Portugal), on November 23-25, 2016 (see http://iberspeech2016.inesc-id.pt/ for details).

Research groups worldwide are invited to participate in this evaluation.
Here we just provide the key points. The full evaluation plan can be found attached.

**TASKS**

The ALBAYZIN 2016 Search on Speech evaluation involves searching audio content for a list of terms/queries. The evaluation focuses on retrieving the appropriate audio files that contain any of those terms/queries. Two different tasks are defined:

1) SPOKEN TERM DETECTION (STD), where the input to the system is a list of terms that is not known when the audio is processed. This is the same task as in the NIST STD 2006 evaluation [2] and the Open Keyword Search evaluations in 2013 [3], 2014 [4], 2015 [5], and 2016 [6].

2) QUERY-BY-EXAMPLE SPOKEN TERM DETECTION (QbE STD), where the input to the system is an acoustic example per query, so no prior knowledge of the correct word/phone transcription of each query can be assumed. As in the STD task, the system must generate a set of occurrences for each query detected in the audio files, along with their timestamps. QbE STD is the same task as those proposed in MediaEval 2011, 2012 and 2013 [1].
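
For illustration only, the minimal Python sketch below shows the kind of detection record both tasks must produce; the field names are hypothetical, and the attached evaluation plan defines the official submission format:

    from dataclasses import dataclass

    @dataclass
    class Detection:
        """One hypothesized occurrence of a term/query in an audio file."""
        term_id: str     # identifier of the searched term or spoken query
        audio_file: str  # audio file in which the occurrence was detected
        t_start: float   # start time of the occurrence, in seconds
        t_end: float     # end time of the occurrence, in seconds
        score: float     # detection confidence, used for thresholding

    # Example: the query 'hotel' hypothesized between 12.34 s and 12.91 s.
    hit = Detection('hotel', 'session_01.wav', 12.34, 12.91, 0.87)
    print(hit)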

**REGISTRATION**

Interested groups must register for the evaluation before July 15th 2016, by contacting the organizing team at:

javiertejedornoguerales@gmail.com

with CC to Iberspeech 2016 Evaluation organizers at:

luisjavier.rodriguez@ehu.eus
lapiz@die.upm.es
alberto.abad@l2f.inesc-id.pt
ortega@unizar.es
ajst@ua.pt

and providing the following information:

  Research group (name and acronym)
  Institution (university, research center, etc.)
  Contact person (name)
  Email

**SCHEDULE**

- June 30, 2016: Release of training and development data

- July 15, 2016: Registration deadline

- September 15, 2016: Release of evaluation data

- October 15, 2016: Deadline for the submission of system outputs and description papers

- October 31, 2016: Results distributed to participants

- November 23-25, 2016: Evaluation Workshop at Iberspeech 2016


**REFERENCES**

[1] Metze, F., Anguera, X., Barnard, E., Davel, M., Gravier, G.: Language independent search in MediaEval's Spoken Web Search task. Computer Speech and Language (2014).

[2] NIST: The spoken term detection (STD) 2006 evaluation plan. National Institute of Standards
and Technology (NIST), Gaithersburg, MD, USA, 10 edn. (September 2006), http://www.nist.gov/speech/tests/std

[3] NIST: NIST Open Keyword Search 2013 Evaluation (OpenKWS13). National Institute of Standards and Technology (NIST), Washington DC, USA, 1 edn. (July 2013), http://www.nist.gov/itl/iad/mig/openkws13.cfm

[4] NIST: NIST Open Keyword Search 2014 Evaluation (OpenKWS14). National Institute of Standards and Technology (NIST), Washington DC, USA, 1 edn. (July 2014), http://www.nist.gov/itl/iad/mig/openkws14.cfm

[5] NIST: NIST Open Keyword Search 2015 Evaluation (OpenKWS15). National Institute of Standards and Technology (NIST), Washington DC, USA, 1 edn. (July 2015), http://www.nist.gov/itl/iad/mig/openkws15.cfm

[6] NIST: NIST Open Keyword Search 2016 Evaluation (OpenKWS16). National Institute of Standards and Technology (NIST), Washington DC, USA, 1 edn. (July 2016), http://www.nist.gov/itl/iad/mig/openkws16.cfm


3-3-3(2016-12-10) NIPS 2016 Workshop: Let's Discuss: Learning Methods for Dialogue, Barcelona, Spain

Let's Discuss: Learning Methods for Dialogue

NIPS 2016 Workshop

10 December 2016

Barcelona, Spain

http://letsdiscussnips2016.weebly.com/

 

Overview

 

Humans conversing naturally with machines is a staple of science fiction. Building agents capable of mutually coordinating their states and actions via communication, in conjunction with human agents, would be one of the greatest engineering feats of human history. In addition to the tremendous economic potential of this technology, the ability to converse appears intimately related to the overall goal of AI. 

 

Although dialogue has been an active area within the linguistics and NLP communities for decades, the wave of optimism in the machine learning community has inspired increased interest from researchers, companies, and foundations.  The NLP community has enthusiastically embraced and innovated neural information processing systems, resulting in substantial relevant activity published outside of NIPS.  The goal of this forum is increased interaction (dialogue!) between these communities at NIPS to accelerate creativity and progress.

 

Call For Papers

 

The workshop will consist of a mixture of invited talks and contributed talks, with panel sessions.  To avoid the 'mini-conference effect', there is no poster session.

We anticipate a total of six contributed talks of 20 minutes each, distributed evenly over the following three high-level areas:

 

-          Being data-driven.

  • What can and cannot be done with offline evaluation on fixed data sets? How can we facilitate development of these offline evaluation tasks in the public domain?
  • What is the role of online evaluation (e.g. as a benchmark?), and how would we make it accessible to the general community?
  • What can be done with simulated environments, or tasks where machines communicate solely with each other?

 

-          Build complete applications.

  • Do we need to build an irreducible end-to-end system, or can we define modules with abstractions that do not leak?
  • How do we ease the burden on the human designer of specifying or bootstrapping the system?

 

-          Model innovation.

  • What are the requisite capabilities for learning architectures, and where are the deficiencies in our current architectures?
  • How can we beneficially incorporate linguistic knowledge into our architectures?

 

The papers should be typeset according to NIPS format.

Papers should not exceed 4 pages (including references).

The authors of all the accepted papers will be expected to give a 20 minute talk (15 for the talk + 5 min for questions) and participate in a panel session.

Accepted papers will be displayed on the website.

There will be no posters.

 

Key Dates

 

-          10/09/2016: Submissions Due

-          10/23/2016: Acceptance Notification

 

Organizers:

 

-          Hal Daume III

-          Paul Mineiro

-          Amanda Stent

-          Jason Weston

 

Paper submission and more information : http://letsdiscussnips2016.weebly.com/


3-3-4(2016-12-10) Symposium on the role of predictability in shaping human language sound patterns, Sydney, Australia
Symposium on the role of predictability in shaping human language sound patterns
Date: 10-11 Dec, 2016
Place: Sydney, Australia

3-3-5(2016-12-11) CfP 1st Workshop on 'Computational Linguistics for Linguistic Complexity' (CL4LC), Osaka (Japan)

 

First Call for Papers

 

=============================================================================================

 

1st Workshop on 'Computational Linguistics for Linguistic Complexity' (CL4LC).

 

Co-located with COLING 2016 in Osaka (Japan) on Sunday, 11 December 2016.

 

https://sites.google.com/site/cl4lc2016/home

 

=============================================================================================

 

====================

Workshop Description

====================

 

CL4LC aims at investigating 'processing' aspects of linguistic complexity both from a machine point of view and from the perspective of the human subject to promote a common reflection on approaches for the detection, evaluation and modelling of linguistic complexity.

 

The term linguistic complexity is highly polysemous and several definitions have been advanced according to different standpoint theories. One major standpoint considers the 'theoretical' distinction between absolute complexity (i.e. the formal properties of linguistic systems) and relative complexity (i.e. covering issues such as cognitive cost, difficulty, level of demand for a user/learner).

CL4LC aims at investigating a complementary standpoint which has long attracted great interest in the Computational Linguistics community. This is focused on 'processing' aspects related to linguistic complexity both from a machine point of view and from the perspective of the human subject.

 

The objective of the workshop is to promote a common reflection on approaches for the detection, evaluation and modeling of linguistic complexity, with a particular emphasis on  research questions such as:

- whether, and to what extent, a machine and a human subject perspective can be combined or share commonalities;

- whether, and to what extent, linguistic complexity metrics specific to the human subject perspective can be extended to handle complexity for machines, and vice versa;

- whether, and to what extent, linguistic phenomena hampering human processing correlate with difficulties in the automatic processing of language.

Although the two perspectives have been treated separately, the interest in the "processing" aspects of linguistic complexity is shared by several initiatives and workshops within the NLP community, where the emphasis has been put more on the achievement of specific tasks than on an overt reflection on the linguistic complexity underlying the phenomena treated. From the machine point of view, this is the case, for instance, of initiatives focusing on the linguistic complexity raised by the automatic processing of typologically different languages, or of language varieties that deviate from the standard language, or by the challenges of parsing languages with richer morphology than English and non-canonical varieties of language (e.g. spoken language, the language of social media, historical data, etc.).

 

From the human subject perspective, attention is directed to what is complex (i.e. difficult) for a speaker, hearer, reader or learner, with the aim of both modeling the cognitive processing underlying language use and developing human-oriented applications. This is the case, e.g., of computational linguistic methods devoted to unraveling the difficulties in online language processing, or to building applications that improve text accessibility in different scenarios, e.g. education and social inclusion.

 

==============

List of Topics

==============

 

We encourage the submission of long and short research papers including, but not limited to the following topics:

 

Detection and Measurement of Linguistic Complexity:

- methods to measure and model human comprehension difficulty, in terms of e.g. Dependency Locality and Surprisal frameworks;

- methods to measure complexity in linguistic systems with respect to different linguistic dimensions (e.g. morphology, syntax);

- methods to measure the distance between texts and learners' competences, according to their literacy skills, native language or language impairments;

- methods and models to measure text quality, in terms e.g. of grammaticality, style, accessibility, readability;

- methods to measure the distance between training corpora and texts from a machine learning perspective;

- approaches to compute the processing perplexity of machine learning systems.

Processing of Linguistic Complexity:

- models of human language acquisition in specific linguistic environments, e.g. atypical language acquisition scenarios, Second Language Acquisition (SLA), learning of domain specific sub-languages;

- methods to reduce linguistic complexity for improving human understanding, e.g. text simplification and normalization to improve human comprehension;

- methods to reduce linguistic complexity for improving machine processing, e.g. text simplification for machine translation, word reordering to improve semantic and syntactic parsing;

- experimental approaches to CL4LC: experimental platforms and designs, experimental methods, resources;

- automatic processing of non-canonical languages and cross-lingual model transfer approaches;

- NLP tools and resources for CL4LC;

- vision papers discussing the link between human- and machine-oriented perspectives on linguistic complexity.

 

 

===========

Submissions

===========

 

We invite submissions of both long and short papers, including opinion statements. All of the papers will be included in conference proceedings, this time in electronic form only.

 

Long papers may consist of up to eight pages (A4), plus two extra pages for references. Short papers may consist of up to four pages (A4), plus two extra pages for references. Authors of accepted papers will be given additional space in the camera-ready version to accommodate changes stemming from reviewers' comments.

 

Papers shall be submitted in English, anonymised with regard to the authors and/or their institution (no author-identifying information on the title page nor anywhere in the paper), including referencing style as usual. Authors should also ensure that identifying meta-information is removed from files submitted for review.

 

Papers must conform to the official COLING 2016 style guidelines, which are available in coling2016.zip (LaTeX style files, a Microsoft Word template, and a sample PDF).

 

Submission and reviewing will be managed online by the START system. The only accepted format for submitted papers is Adobe PDF. Submissions must be uploaded to the START system (to be announced soon) by the submission deadlines.

 

 

===============

Important Dates

===============

 

 

June 2016: First call for workshop papers
September 25, 2016: Workshop paper due
October 16, 2016: Notification of acceptance
October 30, 2016: Camera-ready due
November 30, 2016: Official proceedings publication date
December 11, 2016: Workshop date

 

 

=================

Program committee

=================

 

Delphine Bernhard (LilPa, France)

Nicoletta Calzolari (European Language Resources Association (ELRA), France)
Angelo Cangelosi (Centre for Robotics and Neural Systems, University of Plymouth, UK)
Benoît Crabbé (Université Paris 7, INRIA, France)
Matthew Crocker (Department of Computational Linguistics, Saarland University, Germany)
Scott Crossley (Georgia State University, USA)
Rodolfo Delmonte (Department of Computer Science, Università Ca’ Foscari, Italy)
Piet Desmet (KULeuven, Belgium)
Arantza Díaz de Ilarraza (IXA NLP Group, University of the Basque Country)
Cédrick Fairon (Université catholique de Louvain, Belgium)
Marcello Ferro (Istituto di Linguistica Computazionale “Antonio Zampolli”, ILC-CNR, Italy)
Nuria Gala (Aix-Marseille Université, France)
Ted Gibson (MIT, USA)
Itziar Gonzalez-Dios (IXA NLP Group, University of the Basque Country)
Alex Housen (Vrije Universiteit Brussel, Belgium)
Frank Keller (University of Edinburgh, UK)
Kristopher Kyle (Georgia State University, USA)
Alessandro Lenci (Università di Pisa, Italy)
Annie Louis (University of Essex, UK)
Xiaofei Lu (Pennsylvania State University, USA)
Ryan McDonald (Google)
Detmar Meurers (University of Tübingen, Germany)
Simonetta Montemagni (Istituto di Linguistica Computazionale “Antonio Zampolli”, ILC-CNR, Italy)
Frederick J. Newmeyer (University of Washington, USA; University of British Columbia and Simon Fraser University, Canada)
Joakim Nivre (Uppsala University, Sweden)
Gabriele Pallotti (Università di Modena e Reggio Emilia, Italy)
Magali Paquot (Université catholique de Louvain, Belgium)
Katerina Pastra (Cognitive Systems Research Institute, Greece)
Vito Pirrelli (Istituto di Linguistica Computazionale “Antonio Zampolli”, ILC-CNR, Italy)
Barbara Plank (University of Groningen, Netherlands)
Massimo Poesio (University of Essex, UK)
Horacio Saggion (Universitat Pompeu Fabra, Spain)
Advaith Siddharthan (University of Aberdeen, UK)
Paul Smolensky (Johns Hopkins University, USA)
Benedikt Szmrecsanyi (KULeuven, Belgium)
Kumiko Tanaka-Ishii (University of Tokyo, Japan)
Joel Tetreault (Yahoo! Labs)
Sara Tonelli (FBK, Trento, Italy)
Sowmya Vajjala (Iowa State University, USA)
Aline Villavicencio (Institute of Informatics, Federal University of Rio Grande do Sul, Brazil)
Elena Volodina (University of Gothenburg, Sweden)
Daniel Wiechmann (University of Amsterdam, Netherlands)
Victoria Yaneva (University of Wolverhampton, UK)

 

 

==========

Organisers

==========

 

Dominique Brunato,  Felice Dell'Orletta,  Giulia Venturi

 

    ItaliaNLP Lab @ Istituto di Linguistica Computazionale 'A. Zampolli', Pisa (Italy)

 

Thomas François

 

    CENTAL, IL&C, Université catholique de Louvain, Louvain-la-Neuve (Belgium)

 

Philippe Blache

 

    Laboratoire Parole et Langage, CNRS & Université de Provence, Aix-en-Provence (France)

 

 

=======

Contact

=======

 

For any inquiries regarding the workshop please send an email to: cl4lc.ws@gmail.com


3-3-6(2016-12-12) Cognitive Aspects of the Lexicon (CogALex-v), Osaka, Japan

Cognitive Aspects of the Lexicon (CogALex-v)

https://sites.google.com/site/cogalex2016/home

 

Workshop co-located with COLING (the 26th International Conference on Computational Linguistics, Osaka, Japan), December 12, 2016

 

Invited speaker: Chris Biemann (Technische Universität, Darmstadt)

 

We are pleased to announce the 5th Workshop on ‘Cognitive Aspects of the Lexicon’ (CogALex-V), taking place just before COLING (Osaka, Japan), December 12, 2016.

1        Context and background

The way we look at the lexicon (its creation and use) has changed dramatically over the past 30 years. While in the past it was considered an appendix to grammar, the lexicon has now moved to centre stage. Indeed, there is hardly any task in NLP which can be conducted without it. Also, rather than being considered static entities (the database view), dictionaries are now viewed as dynamic networks, akin to the human brain, whose nodes and links (connection strengths) may change over time.

Linguists work on products, while psychologists and computer scientists deal with processes. They decompose the task into a set of subtasks, i.e. modules between  which information flows. There are inputs, outputs and processes in between. A typical task in language processing is to go from meanings to sound or vice versa, the two extremes of language production and language understanding. Since this mapping is hardly ever direct, various intermediate steps or layers (syntax, morphology) are necessary.

Most of the work done by psycholinguists has dealt with the information flow from meaning (or concepts) to sound or the other way around. What has not been addressed, though, is the creation of a map of the mental lexicon, that is, a representation of the way words are organized or connected.

In this respect WordNet and Roget's Thesaurus are probably closest to what one can expect these days. This being said, to find a word in a resource one has to reduce the search space (entire lexicon) and this is done via the knowledge one has at the onset of search. While the information stored in the lexicon is a product, its access is clearly a (cognitive, i.e. knowledge-based) process.

1.1      Goal

The goal of Cogalex is to provide a forum for researchers in NLP, psychologists, computational lexicographers and users of lexical resources to share their knowledge and needs concerning the construction, organization and use of a lexicon by people (lexical access) and machines (NLP, IR, data-mining).

 

Like in the past (2004, 2008, 2010, 2012 and 2014), we will invite researchers to address various unsolved problems, this time putting a stronger emphasis on distributional semantics (DS). Indeed, we would like to see work showing the relevance of DS as a cognitive model of the lexicon. The interest in distributional approaches has grown considerably over the last few years, both in computational linguistics and the cognitive sciences. A further boost has been provided by the recent hype around deep learning and neural embeddings. While all these approaches seem to have great potential, their added value for addressing cognitive and semantic aspects of the lexicon still needs to be shown.

 

This workshop is about possible enhancements of lexical resources and electronic dictionaries, as well as any aspect relevant to achieving a better understanding of the mental lexicon and semantic memory. We solicit contributions including but not limited to the topics listed below, which can be considered from any of the following points of view:

  • (computational, corpus) linguistics,
  • neuro- or psycholinguistics (tip of the tongue problem, associations),
  • network related sciences (sociology, economy, biology),
  • mathematics (vector-based approaches, graph theory, small-world problem), etc.

We also plan to organize a “friendly competition” for corpus-based models of lexical networks and navigation, i.e. lexical access (see below).

1.2      Possible Topics

1.2.1   Analysis of the conceptual input of a dictionary user

  • What does a language producer start out with and how does this input relate to the target form? (meaning, collocation, topically related, etc.)
  • What is in the authors' minds when they are generating a message and looking for a word?
  • What does it take to bridge the gap between this input and the desired output (target word)?
  • Lexical representation (holistic, decomposed)
  • Meaning representation (concept based, primitives)
  • Distributional semantics (count models, neural embeddings, etc. )
  • Neurocomputational theories of content representation.
  • Discovering structures in the lexicon: formal and semantic point of view (clustering, topical structure)
  • Evolution, i.e. dynamic aspects of the lexicon (changes of weights)
  • Neural models of the mental lexicon (distribution of information concerning words, organization of words)
  • Manual, automatic or collaborative building of dictionaries and indexes (crowd-sourcing, serious games, etc.)
  • Impact and use of social networks (Facebook, Twitter) for building dictionaries, for organizing and indexing the  data (clustering of words), and for allowing to track navigational strategies, etc.
  • (Semi-) automatic induction of the link type (e.g. synonym, hypernym, meronym, association, collocation, ...)
  • Use of corpora and patterns (data-mining) for getting access to words, their uses, combinations and associations
  • Search based on sound, meaning or associations
  • Search (simple query vs. multiple words)
  • Search-space determination based on user's knowledge, meta-knowledge and cognitive state (information available at the onset, knowledge concerning the relationship between the input and the target word, ...)
  • Context-dependent search (modification of users’ goals during search)
  • Navigation (frequent navigational patterns or search strategies used by people)
  • Interface problems, data-visualization
  • Creative ways of getting access to and using word associations (reading between the lines, subliminal communication).

1.2.2   The meaning of words

1.2.3    Structure of the lexicon

1.2.4     Methods for crafting dictionaries or indexes

1.2.5     Dictionary access (navigation and search strategies), interface issues,

2        Description of the shared tasks associated with the workshop.

We plan to organize a “friendly competition” of corpus-based models of lexical access and semantic/associative relations between words.  This competition will be based on an existing, publicly available data set.  We provide an official separation of the data set into training, development and test data as well as a detailed specification of the task and evaluation metrics (implemented as easy-to-use scripts), so that the results obtained by different participants can be compared directly.

 

The precise design of the task has not been finalized yet, but it will be based on one or more of the following data sets:

  • free association norms from the Edinburgh Associative Thesaurus (EAT)
  • free association norms from the University of South Florida (USF)
  • prime-target pairs from the Semantic Priming Project (SPP)
  • semantically related word pairs from EVALution 1.0 (https://github.com/esantus/EVALution)

3        INVITED SPEAKER

Chris Biemann, leader of the LT research group in Darmstadt and well known for his work on graph-based approaches for NLP, has kindly agreed to give the invited presentation.

4        Deadlines.

  • September 25:   Submission deadline
  • October 16:       Author notification
  • October 30:       Camera ready due by Authors
  • November 6:     Proceedings due by Workshop Organisers to Workshop & Publication Chairs.
  • December 12 :   Workshop

5        Submission

The submissions should be written in English and be anonymized for review. They must comply with the style sheets provided by COLING: http://coling2016.anlp.jp/#instructions

  • Long papers may consist of 8 pages of content, plus 2 pages for references;
  • Short paper may consist of up to 4 pages of content, plus 2 pages for references
  • The respective final versions may be up to 9 pages for long papers and 5 pages for short ones. In both cases the number of pages for references is limited to 3 pages

 

Papers should be in PDF format and have to be submitted electronically via the START submission system (https://www.softconf.com/coling2016/CogALex-V/). You probably have to register first, and then choose 'submission', i.e. https://www.softconf.com/coling2016/CogALex-V/user/scmd.cgi?scmd=submitPaperCustom&pageid=0.

6       Organizers.

  • Michael Zock (LIF, CNRS, Aix-Marseille University, Marseille, France)
  • Alessandro Lenci (Computational Linguistics Laboratory, University of Pisa, Italy)
  • Stefan Evert (FAU, Erlangen-Nürnberg, Germany)

7       Contact persons

For general questions, please get in touch with Michael Zock (michael.zock@lif.univ-mrs.fr), for questions concerning the shared task, send an e-mail to Stefan Evert (stefan.evert@fau.de)

8       Program committee

Biemann Chris (Technische Universität, Darmstadt, Germany)

Babych, Bogdan (University of Leeds, UK)

Brysbaert, Marc (Experimental Psychology, Ghent University, Belgium)

Cristea Dan ('Al. I. Cuza' University, Iasi, Romania)

De Deyne Simon (University of Adelaide, Australia)

de Melo Gerard (IIIS, Tsinghua University, Beijing, China)

Evert, Stefan (University of Erlangen, Germany)

Ferret Olivier (CEA LIST, France)

Fontenelle Thierry (CDT, Luxemburg)

Gala Nuria (University of Aix-Marseille, France)

Geeraerts Dirk (University of Leuven, Belgium)

Granger Sylviane (Université Catholique de Louvain, Belgium)

Grefenstette Gregory (Inria, Paris, France)

Hirst Graeme (University of Toronto, Canada)

Hovy Ed (CMU, Pittsburgh, USA)

Hsieh, Shu-Kai (National Taiwan University, Taipei, Taiwan)

Joyce Terry (Tama University, Kanagawa-ken, Japan)

Lafourcade, Matthieu (LIRMM, Université de Montpellier, France)

Lapalme Guy (RALI, University of Montreal, Canada)

Lebani Gianluca (University of Pisa, Italy)

Lenci Alessandro (University of Pisa, Italy)

L'Homme Marie Claude (University of Montreal, Canada)

Mititelu Verginica (RACAI, Bucharest, Romania)

Navigli, Roberto  (Sapienza, Rome, Italy)

Paradis Carita (Centre for Languages and Literature Lund University, Sweden)

Pilehvar, Taher (University of Cambridge, UK)

Pirrelli, Vito (ILC, Pisa, Italy)

Polguère Alain (ATILF-CNRS, Nancy, France)

Purver, Matthew (King's College, London, UK)

Ramisch Carlos (AMU, Marseille, France)

Rayson Paul (UCREL, University of Lancaster, UK)

Rosso, Paolo (NLEL, Universitat Politècnica de València, Spain)

Sahlgren, Magnus (Gavagai Inc. & SICS, Sweden)

Schulte im Walde Sabine (University of Stuttgart, Germany)

Schwab Didier (LIG, Grenoble, France)

Sharoff Serge (University of Leeds, UK)

Stella Massimo (Institute for Complex Systems Simulation, University of Southampton, UK)

Tokunaga Takenobu (TITECH, Tokyo, Japan)

Tufis Dan (RACAI, Bucharest, Romania)

Zarcone, Alessandra (Saarland University, Germany)

Zock Michael (LIF-CNRS, Marseille, France)


3-3-7(2016-12-13) 6th IEEE Workshop on Spoken Language Technology (SLT), San Juan, Puerto Rico

Call For Papers

The Sixth IEEE Workshop on Spoken Language Technology (SLT) will be held from December 13-16, 2016 in San Juan, Puerto Rico. The theme for this year will be 'machine learning from signal to concepts'. The workshop is expected to provide researchers around the world the opportunity to interact and present their newest and most advanced research in the fields of speech and language processing. The program for SLT 2016 will include oral and poster sessions, keynotes and invited speakers in the field of spoken language, as well as tutorials and multiple special sessions.

Topics

Submission of papers is desired on a large variety of areas of spoken language technology, with emphasis on the following topics on previous workshops:

  • Speech recognition and synthesis
  • Spoken language understanding
  • Spoken document retrieval
  • Question answering from speech
  • Assistive technologies
  • Natural language processing
  • Educational and healthcare applications
  • Human/computer interaction
  • Spoken dialog systems
  • Speech data mining
  • Spoken document summarization
  • Spoken language databases
  • Speaker/language recognition
  • Multimodal processing

Venue

IEEE SLT 2016 will take place in San Juan, Puerto Rico, at the InterContinental Hotel in the tourist area of Isla Verde. The area features beautiful beaches and a vibrant nightlife, as well as a large number of dining options. Additionally, the Old San Juan area is just a few miles away.

Important Dates

Paper submission: July 22, 2016
Notifications: September 14, 2016
Demo submission: September 16, 2016
Demo notification: September 25, 2016
Special Session proposals: June 8, 2016
Special Session notification: June 17, 2016
Early Registration deadline: October 14, 2016
Workshop: December 13-16, 2016

Submission Details

Authors are invited to submit a full-length manuscript of 4-6 pages, including reference materials and figures, via the SLT 2016 website: www.slt2016.org


3-3-8(2016-12-xx) CfP Dialog State Tracking Challenge 5 (DSTC5)
Dialog State Tracking Challenge 5 (DSTC5) @ SLT 2016, San Juan, Puerto Rico
Call for Participation
=================================
 
* MOTIVATION
Dialog state tracking is one of the key sub-tasks of dialog management: it defines the representation of dialog states and updates them at each moment of a given ongoing conversation. To provide a common testbed for this task, the first Dialog State Tracking Challenge (DSTC) was initiated [1], and two more challenges (DSTC 2&3) [2][3] were then organized, keeping the focus on human-machine conversations. The most recently completed fourth challenge (DSTC 4) [4], on the other hand, shifted the target of state tracking to human-human dialogs. In that challenge, a dialog state was defined at the sub-dialog segment level as a frame structure filled with slot-value pairs representing the main subject of the segment. Trackers were then required to fill out the frame considering all dialog history prior to each turn in a given segment.
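
For illustration only, such a sub-dialog state can be pictured as a small nested structure; the minimal Python sketch below uses purely hypothetical topic, slot and value names (the actual ontology is defined in the challenge handbook):

    # A dialog state for one sub-dialog segment, as a frame of slot-value pairs.
    # Topic, slots and values here are illustrative, not the official ontology.
    segment_state = {
        'topic': 'ACCOMMODATION',
        'frame': {
            'INFO': ['Pricerange'],
            'PLACE': ['Chinatown'],
            'TYPE_OF_PLACE': ['Hostel'],
        },
    }

    # A tracker re-estimates this frame after every turn, using the full
    # dialog history of the segment observed so far.
    print(segment_state)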
The previous DSTCs have contributed to the spoken dialog research community by providing opportunities for sharing resources, comparing results among the proposed algorithms, and improving the state of the art. However, the impact of the outcomes of these challenges has been restricted to English dialogs, because all the resources, including the corpora, ontologies, and databases, were collected in monolingual English settings.
 
In the fifth challenge, we introduce a cross-lingual dialog state tracking task addressing the problem of adaptation to a new language. The goal of this task is to build a tracker in the target language, given the existing resources in the source language and their translations to the target language generated automatically by machine translation technologies. In addition to this main task, we propose a series of pilot tracks for the core components of end-to-end dialog systems, also in the same cross-lingual setting. We expect that these shared efforts on cross-lingual tasks will contribute to improving the language portability of state-of-the-art monolingual technologies and to reducing the cost of building resources from scratch when developing dialog systems in a resource-poor target language.
 
 
* DATASETS
At the beginning of the challenge, the TourSG corpus, which was used in DSTC 4, will be provided as a training set in the source language, English. TourSG consists of 35 dialog sessions on touristic information for Singapore, collected from Skype calls between three tour guides and 35 tourists. All recorded dialogs have been manually transcribed and annotated with various labels.
 
In addition to the original dialogs in English, their translations into Chinese (the target language of the challenge), generated by a machine translation system, will also be provided along with word alignment information, so that participants will not need to run their own systems to generate the translated dialog pairs.
 
A test set will then be released to evaluate the trackers developed in the first phase. It consists of Chinese dialogs collected and annotated under conditions equivalent to those of the English TourSG dataset. At the beginning of the test phase, only the unlabelled set will be provided, together with English translations that were also generated by machine translation. The full annotations for the test set will be made available after the challenge period.
 
 
* PROPOSED TASKS
Main task:
- Dialog state tracking at sub-dialog level: Fill out the frame of slot-value pairs for the current sub-dialog considering all dialog history prior to the turn.
 
Pilot tasks (optional):
- Spoken language understanding: Tag a given utterance with speech acts and semantic slots.
- Speech act prediction: Predict the speech act of the next turn imitating the policy of one speaker.
- Spoken language generation: Generate a response utterance for one of the participants.
- End-to-end system: Develop an end-to-end system playing the part of a guide or a tourist.
 
Open track (optional):
- Proposed by teams willing to work on any task of their interest over the provided dataset.
 
 
* IMPORTANT DATES
- 01 Apr 2016: Registration opens
- 14 Apr 2016: Training set is released
- 18 Jul 2016: Registration closes
- 21 Jul 2016: Test set is released
- 27 Jul 2016: Entry submission deadline
- 29 Jul 2016: Evaluation results are released
- 19 Aug 2016: Paper submission deadline
- December 2016: Workshop is held @ SLT 2016
 
 
* ORGANIZING COMMITTEE
Seokhwan Kim (I2R, Singapore)
Luis Fernando D'Haro (I2R, Singapore)
Rafael E. Banchs (I2R, Singapore)
Matthew Henderson (Google, USA)
Jason D. Williams (Microsoft, USA)
Koichiro Yoshino (NAIST, Japan)
 
 
* CONTACT DETAILS
Seokhwan Kim: kims AT i2r.a-star.edu.sg
Luis Fernando D'Haro: luisdhe AT i2r.a-star.edu.sg
1 Fusionopolis Way, #21-01, Singapore 138632
Fax: (+65) 6776 1378
 
 
* REFERENCES
[1] Jason D. Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The Dialog State Tracking Challenge. In Proceedings of the 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Metz, France.
[2] Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014. "The Second Dialog State Tracking Challenge". In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Philadelphia, USA.
[3] Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014. "The Third Dialog State Tracking Challenge". In Proceedings of the IEEE Spoken Language Technology Workshop, South Lake Tahoe, USA.
[4] Seokhwan Kim, Luis Fernando D'Haro, Rafael E. Banchs, Jason D. Williams, Matthew Henderson. 2016. "The Fourth Dialog State Tracking Challenge". In Proceedings of the 7th International Workshop on Spoken Dialogue Systems (IWSDS 2016), Saariselkä, Finland.

3-3-9(2017-01-03) 3rd Conference on New Advances in Acoustics (NAA 2017), Bangkok, Thailand

The 3rd Conference on New Advances in Acoustics (NAA 2017)
January 3-5, 2017 Bangkok, Thailand
Conference Website

Dear Colleagues,

Greetings from the 3rd Conference on New Advances in Acoustics (NAA 2017), which will be held in Bangkok, Thailand, during January 3-5, 2017. Please submit a contribution and register through the registration system.

Topics include, but not limited to:

  • Bioacoustics
  • Computational Acoustics
  • Communication Acoustics
  • Environmental Acoustics
  • Electro-acoustic and Audio Engineering
  • Musical Acoustics
  • Noise: Sources and Control
  • Physical Acoustics
  • Physiological and Psychological Acoustics
  • Room and Building Acoustics
  • Speech
  • Structural Acoustics and Vibration
  • Ultrasonics
  • Underwater Acoustics
  • Other Related Topics

 

About Bangkok

Bangkok, the capital of Thailand (a country once known as Siam), is known for being exotic and rich in culture, and is the country's cultural, economic and political centre. The city features both old-world charm and modern convenience. Amidst the gleaming skyscrapers of Bangkok, one can still see traditional architecture such as temples, illustrating how the city retains its identity while being cosmopolitan.

Contact Us

Tel: 86 132 6470 2250.
If you are interested, you can also reply directly to this email.

We would like to extend our highest appreciation and warmest welcome, and we look forward to your attendance.


3-3-10(2017-01-09) Winter School: Speech Perception and Production; Learning and Memory, Chorin, Germany (extended deadline)

Deadline for submission extended to November 18th


Winter School: Speech Perception and Production

This fifth Winter School on Speech Perception and Production focuses on Learning and Memory. We invite PhD students and researchers in phonetics, linguistics, psychology, speech and language therapy and related disciplines to present their own work or work in progress.

FOLLOW US ON TWITTER @SPP_LM2017

 

Context and objectives

Spoken communication is part of being human, yet its apparent ease in everyday life contrasts with its complexity as a scientific object. Although most people learn to speak effortlessly, spoken communication is often altered in atypical development, aging or degenerative pathologies, or even in typical development by specific disorders such as stuttering. This complexity is related to the fact that speech is a perceptuo-motor activity whose aims are defined in linguistic systems and social interactions. Learning and memory are important aspects of speech that have so far been poorly connected to the representations of speech. Students, but also scientists and professionals in linguistics and psychology, will benefit from a better knowledge of recent scientific developments in long-term memory and learning.

The winter school will cover a diverse range of topics related to learning and memory in speech production and perception in both children and adults. In the last decades these topics have been of key interest in the field of language and psychology, motivated by:

  • A shift of theoretical conception in psychology and linguistics toward embodied and situated cognition: speakers may store not only words in memory, but also context-specific details covered under the term “fine phonetic details” (i.e. speaker information, situational contexts, reduced word forms etc.). This paradigm shift challenges more traditional models of linguistic representations.
  • A growing interest in studying not only speech production and perception on its own, but a move to considering meaningful face-to-face communication as a multifaceted process, integrating oral communication with manual, or other body gestures.
  • Recent evidence in the literature that learning is speaker-specific, and that multimodal approaches make an important contribution to the understanding of language acquisition or adaptation in typical and disordered speakers (clinical populations).
  • A global tendency for human migration that goes hand in hand with a growing number of multilingual children and adults, who may be more or less proficient in learning to speak a new language. The invited scholars have been working on these topics from various perspectives including neuroscience, psychology, multimodality, speech acquisition, speech pathology and linguistics.

They have been chosen to address the following issues:

 
  1. Which linguistic units are stored in the lexicon? Which linguistic units do children acquire during speech acquisition or adults during second language learning?
  2. How do children or adults with sensorimotor or anatomical difficulties learn speech /language or compensate for their problems? Which techniques and tools could be used in the rehabilitation process to improve the learning process?
  3. How can different modalities and their combination contribute to learning and memorizing a language?
  4. What are the neurophysiological substrates of short and long term memory, and how are they taken into account during learning?

The school combines perspectives from researchers and lecturers working interdisciplinarily: it will be beneficial to a broad audience and in particular to students in linguistics and psychology who want to extend their knowledge and discuss their own ideas about learning and memory in speech production and perception. The school will involve tutorials, which will provide a broader overview of a selected research area, but will also go into specific questions and recent developments. The program, the size of the group, and the location are intended to allow for extensive exchange, in particular between student and senior researchers.

 

Financial support


3-3-11(2017-02-15) (Dis)Fluency2017: Fluency and disfluency across languages and language varieties, Catholic University of Louvain, Louvain-la-Neuve, Belgique

(Dis)Fluency2017: Fluency and disfluency across languages and language varieties

15-17 February 2017
Catholic University of Louvain, Louvain-la-Neuve (Belgium)

Fluency and disfluency have attracted a great deal of attention in different areas of linguistics such as language acquisition or psycholinguistics. They have been investigated through a wide range of methodological and theoretical frameworks, including corpus linguistics, experimental pragmatics, perception studies and natural language processing, with applications in the domains of language learning, teaching and testing, human/machine communication and business communication.

Spoken and signed languages are produced and comprehended online, with typically very little time to plan ahead. As a result, they are often characterized by features such as (filled and unfilled) pauses, discourse markers, repeats and self-repairs, which can be said to reflect on-going mechanisms of processing and monitoring. The role of these items is ambivalent, as they can both be a symptom of encoding difficulties and a sign that the speaker is trying to help the hearer decode the message. They should thus be interpreted in context to identify their contribution to fluency and/or disfluency, which can be viewed as two faces of the same phenomenon.

Within the frame of a research project entitled 'Fluency and disfluency markers. A multimodal contrastive perspective' (see http://www.uclouvain.be/en-415256.html), the universities of Louvain and Namur have been involved in a large-scale usage-based study of (dis)fluency markers in spoken French, L1 and L2 English, and French Belgian Sign Language (LSFB), with a focus on variation according to language, speaker and genre. To close this five-year research project, an international conference will be organized in Louvain-la-Neuve on the subject of fluency and disfluency across languages and language varieties.

The conference aims at bringing together scholars and researchers from different disciplines in order to discuss and confront different conceptions and perspectives on fluency and disfluency, in both spoken and sign languages. We particularly welcome abstracts for oral or poster presentations on the following topics:

        -  theoretical insights gained from the study of fluency and disfluency;

        -  methodological issues raised by the investigation of (dis)fluency markers;

        -  acquisitional perspectives on (dis)fluency and pedagogical implications;

        -  contrastive analyses of (dis)fluency markers;

        -  variationist approaches to fluency and disfluency;

        -  (dis)fluency in the Sign Language of native, near-native and late signers;

        -  applications of fluency research (NLP, testing, etc.)

Keynote Speakers:

Martin Corley, University of Edinburgh
Sandra Götz, Justus Liebig University, Giessen
Helena Moniz, Institute for Systems and Computer Engineering: Research and Development, Lisbon
David Quinto-Pozos, The University of Texas at Austin

Abstracts (1000 words, excluding references) should be submitted via EasyChair at the following address: https://easychair.org/conferences/?conf=disfluency2017

Important dates:

        • Abstract submission deadline: 15 September 2016

        • Notification of acceptance: 31 October 2016

        • Early-bird registration: 30 November 2016

        • Registration deadline: 15 January 2017

Scientific committee

Nicolas Ballier (Université Paris Diderot)
Roxane Bertrand (Université Aix-Marseille)
Philippe Blache (Université Aix-Marseille)
Catherine Bolly (Universität zu Köln)
Hans Rutger Bosker (Max Planck Institute for Psycholinguistics, Nijmegen)
Maria Candéa (Université Sorbonne Nouvelle Paris)
Sylvie De Cock (Université catholique de Louvain)
Nivja de Jong (Utrecht University)
Robert Eklund (Linköping University)
Kerstin Fischer (University of Southern Denmark)
Thomas François (Université catholique de Louvain)
Lorenzo Garcia-Amaya (University of Michigan, USA)
Jonathan Ginzburg (Université Paris Diderot)
Pascale Goutéraux (Université Paris Diderot)
Heather Hilton (Université de Lyon 2)
Judit Kormos (University of Lancaster)
Anne Lacheret (Université Paris Ouest)
Bertille Pallaud (Université Aix-Marseille)
Laurent Prévot (Université Aix-Marseille)
Helmer Strik (Radboud Universiteit)
Parvaneh Tavakoli (University of Reading, UK)
Gunnel Tottie (University of Zurich)
Mieke Van Herreweghe (Universiteit Gent)
Ioana Vasilescu (Université Sorbonne Nouvelle Paris)
Myriam Vermeerbergen (Katholieke Universiteit Leuven)

Organizing committee

Liesbeth Degand (UCL)
Cédrick Fairon (UCL)
Gaëtanelle Gilquin (UCL)
Sylviane Granger (UCL)
Laurence Meurant (UNamur)
Anne Catherine Simon (UCL)
George Christodoulides (UCL)
Ludivine Crible (UCL)
Amandine Dumont (UCL)
Iulia Grosman (UCL)
Ingrid Notarrigo (UNamur)
Lucie Rousier-Vercruyssen (UCL & Université de Neuchâtel)

Back  Top

3-3-12(2017-03-01) HSCMA Hands Free Communication and Microphone Arrays, San Francisco, CA, USA

HSCMA Hands Free Communication and Microphone Arrays

March 1–3, 2017 • San Francisco, CA, USA

Call for Papers

The Fifth Joint Workshop on Hands-free Speech Communication and Microphone Arrays will be held on March 1-3, 2017 at the Google Offices in downtown San Francisco, California. The workshop is devoted to presenting recent advances in distant-talking speech communication and human/machine interaction with an emphasis on multi-microphone systems. It will bring together researchers and practitioners from universities and industry working in distant speech and speaker recognition, speech enhancement, high-quality sound capture, and multiple-input/multiple-output (MIMO) acoustic signal processing. Demonstrations of experimental systems, applications, and prototypes are especially welcome.

HSCMA 2017 is being held with technical sponsorship by the IEEE Signal Processing Society and will immediately precede ICASSP 2017.

Workshop Topics

Papers in all areas of distant-talking human/human and human/machine interaction are encouraged, including:

• Multi-channel and single-channel approaches for speech acquisition, noise suppression, source localization and separation, dereverberation, echo cancellation, and acoustic event detection

• Speech and speaker recognition technology for hands-free scenarios, including robust acoustic modeling, novel features, feature enhancement, dereverberation, and model adaptation

• Microphone array technology and architectures, especially for distant-talking speech recognition and acoustic scene analysis

• Multi-channel rendering, including spatial audio for immersive environments, improvements to intelligibility in noisy environments, and privacy of speech communications

• Speech corpora for training and evaluation of distant-talking speech systems

• Applications based on microphone arrays and hands-free speech systems.

Special Sessions

The program will also feature special sessions on new or emerging topics of interest. Proposals for special sessions must include the session title, rationale, outline, and a list of four invited papers.

Paper & demo submission

The workshop technical program will consist of oral presentations, poster sessions, and demonstrations. Prospective authors are invited to submit full-length papers up to four pages, with a fifth page permitted for references only. Submissions for proposed demonstrations may be up to two pages in length. Manuscripts should be prepared using the same format as for ICASSP submissions using the ICASSP author kit for LaTeX or Word. Accepted papers will be published in IEEE Xplore.


Committee

General Co-chairs: Jerome Bellegarda, Apple; Malcolm Slaney, Google; Ivan Tashev, Microsoft
Technical Program Chairs: Shoko Araki, NTT; Jacob Benesty, INRS-EMT, University of Quebec; Bastiaan Kleijn, Victoria University of Wellington; Mike Seltzer, Microsoft
Finance Chair: Mark Thomas, Dolby
Publicity/Publications Chair: Ozlem Kalinli, Sony
Demo/Special Sessions Chair: Shiva Sundaram, Amazon
Local Arrangements Chair: Horacio Franco, SRI

Important Dates

Special Session Proposals Due: September 1, 2016
Notification for Special Sessions: September 15, 2016
Paper Submission Deadline: December 13, 2016
Final Upload of Submitted Papers: December 18, 2016
Notification of Paper Decisions: January 20, 2017
Camera Ready Papers Due: January 23, 2017


Back  Top

3-3-13(2017-03-05) 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2017), New Orleans, USA

2017 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2017)
New Orleans, USA
March 5-9, 2017

http://www.ieee-icassp2017.org/

CALL FOR PAPERS

Just as music and rhythm are the heartbeats of life, signal and information processing are the heartbeats of IT development. Having both of them capture the attendees' hearts and souls is the goal of the 42nd International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2017), which will be held at the Hilton Conference Center in New Orleans, USA, the jazz capital of the world, on March 5-9, 2017. ICASSP is the world's largest and most comprehensive technical conference focused on signal processing and its applications. The conference provides an engaging forum for both researchers and developers to exchange ideas and propose new developments in this field. The theme of ICASSP 2017 is 'The Internet of Signals', the technology behind the Internet of Things. The conference will feature world-class international speakers, tutorials, exhibits, lectures and poster sessions from around the world. Topics include but are not limited to:
+ Audio and acoustic signal processing
+ Sensor array & multichannel signal processing
+ Bio-imaging and biomedical signal processing
+ Signal processing education
+ Design & implementation of signal processing systems
+ Signal processing for communication & networking
+ Image, video & multidimensional signal processing
+ Signal processing theory & methods
+ Industry technology tracks
+ Signal processing for Big Data
+ Information forensics and security
+ Internet of Things and RFID
+ Machine learning for signal processing
+ Speech processing
+ Multimedia signal processing
+ Spoken language processing
+ Remote Sensing and signal processing
+ Signal Processing for Brain Machine Interface
+ Signal Processing for Smart Systems
+ Signal Processing for Cyber Security

SUBMISSION OF PAPERS
Prospective authors are invited to submit full-length papers, with up to four pages for technical content including figures and possible references, and with one additional optional 5th page containing only references. A selection of best papers will be made by the ICASSP 2017 committee upon recommendations from Technical Committees.

SPECIAL SESSIONS
Special-session proposals should be submitted by July 11, 2016. Proposals for special sessions must include a topical title, rationale, session outline, contact information for the session chair, a list of authors, and a tentative title and abstract. Additional information can be found at the ICASSP 2017 website (http://www.ieee-icassp2017.org).

TUTORIALS
Will be held on March 5, 2017. Brief proposals should be submitted by September 15, 2016. Proposal for tutorials must include a title, an outline of the tutorial and its motivation, a two-page CV of the presenter(s), and a short description of the material to be covered.

SIGNAL PROCESSING LETTERS
Authors of IEEE Signal Processing Letters (SPL) papers will be given the opportunity to present their work at ICASSP 2017, subject to space availability and approval by the ICASSP Technical Program Chairs. SPL papers published between January 1, 2016 and December 31, 2016 are eligible for presentation at ICASSP 2017. Because they are already peer-reviewed and published, SPL papers presented at ICASSP 2017 will neither be reviewed nor included in the ICASSP proceedings. Requests for presentation of SPL papers should be made through the ICASSP 2017 website on or before December 12, 2016. Approved requests for presentation must have one author/presenter register for the conference.

DEMOS
Offers a perfect stage to showcase innovative ideas in all technical areas of interest at ICASSP. All demo sessions are highly interactive and visible. Please refer to the ICASSP 2017 website for additional information regarding demo submission.

IMPORTANT DEADLINES
  Special-Session proposals:                    July 11, 2016
  Tutorials proposals:                          September 15, 2016
  Notification of Special Session acceptance:   August 15, 2016
  Notification of Tutorial acceptance:          October 15, 2016
  Submission of regular papers:                 September 12, 2016
  Signal Processing Letters:                    November 21, 2016
  Notification of paper acceptance:             December 12, 2016
  Author registration:                          January 9, 2017

Back  Top

3-3-14(2017-03-06) 11th INTERN. CONF. ON LANGUAGE AND AUTOMATA THEORY AND APPLICATIONS, Umea, Sweden

11th INTERNATIONAL CONFERENCE ON LANGUAGE AND AUTOMATA THEORY AND APPLICATIONS
 

LATA 2017
 
Umeå, Sweden
 
March 6-10, 2017
 
Organized by:
           
Department of Computing Science
Umeå University
 
Research Group on Mathematical Linguistics (GRLMC)
Rovira i Virgili University
 
http://grammars.grlmc.com/LATA2017/
*************************************************************************
 
AIMS:
 
LATA is a conference series on theoretical computer science and its applications. Following the tradition of the diverse PhD training events in the field organized by Rovira i Virgili University since 2002, LATA 2017 will reserve significant room for young scholars at the beginning of their career. It will aim at attracting contributions from classical theory fields as well as application areas.
 
VENUE:
 
LATA 2017 will take place in Umeå, a university town in northern Sweden that was European Capital of Culture in 2014. The venue will be the Faculty of Science and Technology.
 
SCOPE:
 
Topics of either theoretical or applied interest include, but are not limited to:
 
algebraic language theory
algorithms for semi-structured data mining
algorithms on automata and words
automata and logic
automata for system analysis and programme verification
automata networks
automatic structures
codes
combinatorics on words
computational complexity
concurrency and Petri nets
data and image compression
descriptional complexity
foundations of finite state technology
foundations of XML
grammars (Chomsky hierarchy, contextual, unification, categorial, etc.)
grammatical inference and algorithmic learning
graphs and graph transformation
language varieties and semigroups
language-based cryptography
mathematical and logical foundations of programming methodologies
parallel and regulated rewriting
parsing
patterns
power series
string processing algorithms
symbolic dynamics
term rewriting
transducers
trees, tree languages and tree automata
weighted automata
 
STRUCTURE:
 
LATA 2017 will consist of:
 
invited talks
invited tutorials
peer-reviewed contributions
 
INVITED SPEAKERS:
 
tba
 
PROGRAMME COMMITTEE (to be completed):
 
Eric Allender (Rutgers University, Piscataway, US)
Christel Baier (Technical University of Dresden, DE)
Armin Biere (Johannes Kepler University Linz, AT)
Avrim Blum (Carnegie Mellon University, Pittsburgh, US)
Liming Cai (University of Georgia, Athens, US)
Alessandro Cimatti (Bruno Kessler Foundation, Trento, IT)
Rocco De Nicola (IMT School for Advanced Studies Lucca, IT)
Rodney Downey (Victoria University of Wellington, NZ)
Frank Drewes (Umeå University, SE)
Zoltán Fülöp (University of Szeged, HU)
Gregory Z. Gutin (Royal Holloway, University of London, UK)
Lane A. Hemaspaandra (University of Rochester, US)
Dorit S. Hochbaum (University of California, Berkeley, US)
Marek Karpinski (University of Bonn, DE)
Joost-Pieter Katoen (RWTH Aachen University, DE)
Evangelos Kranakis (Carleton University, Ottawa, CA)
Lars M. Kristensen (Bergen University College, NO)
Kim G. Larsen (Aalborg University, DK)
Axel Legay (INRIA, Rennes, FR)
Leonid Libkin (University of Edinburgh, UK)
Carsten Lutz (University of Bremen, DE)
João Marques Silva (University of Lisbon, PT)
Carlos Martín-Vide (Rovira i Virgili University, Tarragona, ES, chair)
Mitsunori Ogihara (University of Miami, Coral Gables, US)
Arlindo Oliveira (Instituto Superior Técnico, Lisbon, PT)
David Parker (University of Birmingham, UK)
Madhusudan Parthasarathy (University of Illinois, Urbana-Champaign, US)
Doron A. Peled (Bar-Ilan University, Ramat Gan, IL)
Jean-Éric Pin (Paris Diderot University, FR)
Wojciech Rytter (University of Warsaw, PL)
Kunihiko Sadakane (University of Tokyo, JP)
Jens Stoye (Bielefeld University, DE)
Wing-Kin Sung (National University of Singapore, SG)
Dimitrios M. Thilikos (National and Kapodistrian University of Athens, GR)
Ioannis G. Tollis (University of Crete, Heraklion, GR)
Bianca Truthe (University of Giessen, DE)
Frits Vaandrager (Radboud University, Nijmegen, NL)
 
ORGANIZING COMMITTEE:
 
Yonas Demeke (Umeå)
Frank Drewes (Umeå, co-chair)
Petter Ericson (Umeå)
Anna Jonsson (Umeå)
Carlos Martín-Vide (Tarragona, co-chair)
Manuel Jesús Parra Royón (Granada)
Bianca Truthe (Giessen)
Florentina Lilica Voicu (Tarragona)
Niklas Zechner (Umeå)
 
SUBMISSIONS:
 
Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including any appendices, references, proofs, etc.) and should be prepared according to the standard format for Springer Verlag's LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0).
 
Submissions have to be uploaded to:
 
https://easychair.org/conferences/?conf=lata2017
 
PUBLICATIONS:
 
A volume of proceedings published by Springer in the LNCS series will be available by the time of the conference.
 
A special issue of a major journal will be later published containing peer-reviewed substantially extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.
 
REGISTRATION:
 
The registration form can be found at:
 
http://grammars.grlmc.com/LATA2017/Registration.php
 
DEADLINES (all at 23:59 CET):
 
Paper submission: October 21, 2016
Notification of paper acceptance or rejection: November 25, 2016
Final version of the paper for the LNCS proceedings: December 5, 2016
Early registration: December 5, 2016
Late registration: February 20, 2017
Submission to the journal special issue: June 10, 2017
 
QUESTIONS AND FURTHER INFORMATION:
 
florentinalilica.voicu (at) urv.cat
 
POSTAL ADDRESS:
 
LATA 2017
Research Group on Mathematical Linguistics (GRLMC)
Rovira i Virgili University
Av. Catalunya, 35
43002 Tarragona, Spain
 
Phone: +34 977 559 543
Fax: +34 977 558 386
 
ACKNOWLEDGEMENTS:
 
Umeå universitet
Universitat Rovira i Virgili

Back  Top

3-3-15(2017-03-30) CfP Workshop (In)Coherence of Discourse 4, LORIA, Nancy, France


 ************************************
Call for Papers
************************************
Workshop (In)Coherence of Discourse 4

March 30th and 31st, 2017
LORIA - Nancy

http://discours.loria.fr
************************************

The fourth (In)Coherence of Discourse workshop will be held at the University of Lorraine on March 30th and 31st, 2017. The objective of the workshop is to discuss the latest advances in the modelling of discourse, in particular dialogues involving patients with pathologies (e.g. schizophrenia). The adopted modelling paradigm is that of formal semantics, which falls within the scope of both linguistics and logic while also making ties to the philosophy of language.

Topics of interest include (but are not limited to):
discourse comprehension and representation
experimental studies of non standard dialogues
formal accounts of dialogues
logic and reasoning
semantics / pragmatics interfaces
goals, intentions and commitments in dialogues
Cognitive Psychology
Psycholinguistics
mental illness and cognitive (in)coherence
radical interpretation
logicality and cognitive (in)coherence

Like the previous (In)Coherence of discourse workshops, the fourth
edition is organised by the SLAM (Schizophrenia and Language: Analysis
and Modelling) project. The SLAM project aims to systematize the study
of pathological conversations as part of an interdisciplinary approach
combining Psychology, Linguistics, Computer Science and Philosophy. It
focuses particularly on conversations involving people with psychiatric
disorders (schizophrenia, bipolar disorder).

Important dates:
        * January 9th, 2017: Submission deadline
        * February 6th, 2017: Notification
        * March 30th-31th, 2017: Workshop

Submission:
Authors are invited to submit a two-page PDF abstract (including
references), anonymously prepared for review, in English or French,
using easychair:
https://easychair.org/conferences/?conf=incoherence4

Keynote speakers:
Alain Lecomte, Professor Emeritus, Université Paris 8
Ellen Breitholtz and Christine Howes, University of Gothenburg

Scientific committee:
Maxime Amblard - Université de Lorraine
Nicholas Asher - CNRS Toulouse
Valérie Aucouturier - Université Paris Descartes
Patrick Blackburn - University of Roskilde
Mathilde Dargnat - Université de Lorraine
Felicity Deamer - Durham University
Hans van Ditmarsch - CNRS Nancy
Bart Geurts - University of Nijmegen
Philippe de Groote - INRIA Nancy
Klaus von Heusinger - Universität zu Köln
Michel Musiol - Université de Lorraine
Denis Paperno - CNRS Nancy
Sylvain Pogodalla - INRIA Nancy
Manuel Rebuschi - Université de Lorraine
Christian Retoré - Université de Montpellier
Laure Vieu - CNRS Toulouse
Sam Wilkinson - Durham University


Organisation committee:
Maxime Amblard (Loria, INRIA, CNRS, Université de Lorraine)
Stefan Jokulsson (LHSP-Archives Poincaré, CNRS, Université de
 Lorraine)
Michel Musiol (ATILF, UFR SHS Nancy, CNRS, Université de Lorraine)
Marie-Hélène Pierre (ATILF, UFR SHS Nancy, CNRS, Université de
 Lorraine)
Manuel Rebuschi (LHSP-Archives Poincaré, CNRS, Université de
 Lorraine)

Supported by: CNRS, Université de Lorraine, INRIA, MSH Lorraine,
LORIA, LHSP - Archives Henri-Poincaré and ATILF

Back  Top

3-3-16(2017-04-26) Workshop on Speech perception and production across the lifespan (SPPL 2017), London, UK

Workshop on Speech perception and production across the lifespan (SPPL 2017)

26/27 April 2017, UCL, London, UK

Workshop website: www.sppl2017.org

Contact email: sppl2017@pals.ucl.ac.uk

Abstract submission deadline: 15 January 2017


Workshop description

Although the focus of much research into speech development has been to establish when 'adult-like' performance is reached (with young adult speakers taken as a 'norm'), it is increasingly clear that speech perception and production abilities are undergoing constant change across the lifespan as a result of physical changes, exposure to language variation, and cognitive changes at various periods of our lives. Few studies have examined changes in speech production or perception measures across the lifespan using common materials and experimental designs. Lifespan studies can further our understanding of the extent and direction of these changes for key measures of speech communication and of how these changes interact with cognitive, social or sensory factors. Such knowledge is essential to refine and extend models of speech perception and production.

 

The workshop will provide an opportunity for interactions between researchers from areas of speech and language sciences research that may be focused on different developmental stages, e.g. early development and ageing. It will also discuss methodological issues, such as how to overcome the difficulty of developing tests that are equally appropriate for children, younger and older adults, and will consider 'missing gaps' in the developmental trajectory, e.g. data for older teenagers and middle-aged adults.


Invited speakers

Paul FOULKES (University of York)

Sandra GORDON-SALANT (University of Maryland)

Mitchell SOMMERS (Washington University)

Hayo TERBAND (University of Utrecht)


Call for papers

We invite submissions for oral and poster presentations. Presentations can include or consist of demonstrations of tests and software. We expect submitted papers to report experimental and modelling studies relating to more than one age group or longitudinal work. See further detail of topics at http://sppl2017.org/call-for-papers

Back  Top

3-3-17(2017-06-05) ACM International Conference on Multimedia Retrieval (ICMR 2017), Bucharest, Romania

ACM ICMR 2017, June 5-8, Bucharest, Romania
International Conference on Multimedia Retrieval
http://www.icmr2017.ro/
https://www.facebook.com/icmr2017/
https://twitter.com/icmr17


*** CALL FOR CONTRIBUTIONS ***

ACM ICMR 2017 is the premier conference for multimedia information
retrieval. We are calling for papers presenting significant and
innovative research in multimedia retrieval and related fields. Papers
should extend the state of the art by addressing new problems or
proposing insightful solutions. The scope of the conference includes
core topics in multimedia retrieval and recommendation, as well as the
broader set of topics that must be addressed to ensure that multimedia
retrieval technologies are of practical use in real-world use cases.
Special emphasis is placed on topics related to large-scale indexing,
user interaction, exploiting diverse and multimodal data, and
domain-specific challenges.

Topics of interest include (but are not limited to):
• Multimedia content-based search and retrieval
• Multimedia-content-based (or hybrid) recommender systems
• Large-scale and web-scale multimedia retrieval
• Multimedia content extraction, analysis, and indexing
• Multimedia analytics and knowledge discovery
• Multimedia machine learning, deep learning, and neural nets
• Relevance feedback, active learning, and transfer learning
• Zero-shot learning and fine-grained retrieval for multimedia
• Event-based indexing and multimedia understanding
• Semantic descriptors and novel high- or mid-level features
• Crowdsourcing, community contributions, and social multimedia
• Multimedia retrieval leveraging quality, production cues, style, framing, affect
• Narrative generation and narrative analysis
• User intent and human perception in multimedia retrieval
• Query processing and relevance feedback
• Multimedia browsing, summarization, and visualization
• Multimedia beyond video, including 3D data and sensor data
• Mobile multimedia browsing and search
• Multimedia analysis/search acceleration, e.g., GPU, FPGA
• Benchmarks and evaluation methodologies for multimedia analysis/search
• Applications of multimedia retrieval, e.g., medicine, sports, commerce, lifelogs, travel, security, environment.


*IMPORTANT DATES*

-Full/short papers-
Paper Submission: January 27, 2017
Notification of Acceptance: March 29, 2017

-Open software papers-
Paper Submission: January 27, 2017
Notification of Acceptance: March 29, 2017

-Demo papers-
Paper Submission: January 27, 2017
Notification of Acceptance: March 29, 2017

-Brave new ideas papers-
Paper Submission: February 10, 2017
Notification of Acceptance: March 29, 2017

-Doctoral symposium papers-
Paper Submission: February 17, 2017
Notification of Acceptance: March 29, 2017

-Special session proposals-
Proposals due: November 30, 2016
Notification of Acceptance: December 12, 2016

-Tutorial proposals-
Proposals due: February 12, 2017
Notification of Acceptance: February 26, 2017

-Workshop proposals-
Proposals due: November 30, 2016
Notification of Acceptance: December 12, 2016


Details of each track are available on the conference website:
http://www.icmr2017.ro/call-for-contributions.php

Looking forward to hosting you in Bucharest!

Back  Top

3-3-18(2017-06-12) CfP Phonetics and Phonology in Europe 2017, Cologne, Germany

*First Call for Papers and Workshops: Phonetics and Phonology in Europe 2017*

University of Cologne, 12-14 June 2017

The Phonetics and Phonology in Europe (PaPE) conference is an interdisciplinary forum
bringing together researchers interested in all areas of phonetics and phonology, both
theoretical and applied, with a special focus on Laboratory Phonology. The series covers
a wide variety of topics including tone and intonation, phonological theory, language
acquisition, linguistic typology, and methodologies from fields as diverse as
psycholinguistics, neurolinguistics and speech technology.

The Cologne PaPE conference, scheduled for June 12-14 2017, complements this broad
mission with a more specific scientific aim, namely to contribute towards a fundamental
integration of the fields of phonetics and phonology, highlighting the intrinsic
relationship between the two. Submissions in any area of phonetics and phonology are
welcome with special consideration given to papers addressing the conference's
integrative goal.

Confirmed keynote speakers

Jonathan Barnes (Boston University)

Bettina Braun (Universität Konstanz)

Mirjam Ernestus (Max Planck Institute for Psycholinguistics, Nijmegen)

Maria Josep Solé (Universitat Autònoma de Barcelona)

Satellite workshops will be held on June 11 (whole day) and June 14 (afternoon) 2017.

Conference web site: http://pape2017.uni-koeln.de/


Important Dates
Abstract submission deadline: December 9, 2016
Notification of acceptance: January 21, 2017
Submission of revised abstracts: February 25, 2017
PaPE 2017 Conference: June 12-14, 2017


Submission Information
Abstracts should be written in English and not exceed one page of text (A4). In addition,
references, examples and/or figures can optionally be included on a second page.
Abstracts can be submitted from November 1 until December 9, 2016, using the EasyChair
link that will be provided on the conference website. Abstracts may be submitted either
for a 'talk/poster', or as a 'poster only'. Authors may submit one abstract as first
author and up to three abstracts as a co-author.


Call for Satellite Workshop proposals

University of Cologne, June 11 (whole day) and June 14 (afternoon) 2017

The PaPE 2017 Organizing Committee invites proposals for workshops to be held in
conjunction with the conference. There is no restriction regarding topics, as long as
there is a clear relevance to phonetics and phonology. Rooms for the workshops will be
provided.
The workshop proposal should be sent to pape-2017@uni-koeln.de by October 1 and should
not exceed two A4 pages and include information on the topic, the organizers (including
affiliation and e-mail address) and the paper selection process.


PaPE Organization
Martine Grice, Stefan Baumann, Francesco Cangemi, Anna Bruggeman (University of Cologne)


Contact
For enquiries please contact us at pape-2017@uni-koeln.de.

Back  Top

3-3-19(2017-06-21) International Conference Subsidia: Tools and Resources for Speech Sciences, Málaga (Costa del Sol, Spain).

The Phonetics Laboratory of the Spanish National Research Council (CSIC) and the University of Málaga are pleased to announce the International Conference Subsidia: Tools and Resources for Speech Sciences, which will take place on June 21-23, 2017, in the city of Málaga (Costa del Sol, Spain).
Further information concerning the conference, including its website address, will be circulated during this quarter of 2016.
If you have any questions, please contact Juana Gil (juana.gil@cchs.csic.es) or José Villa (jovillavilla@hotmail.com).

Back  Top

3-3-20(2017-06-29) 7èmes Journées de Phonétique clinique, Paris, France
7èmes Journées de Phonétique clinique

Paris, 29-30 June 2017

 

First held in Paris in 2005 and subsequently organised in Grenoble (2007), Aix-en-Provence (2009), Strasbourg (2011), Liège (2013) and Montpellier (2015), the Journées de Phonétique Clinique (JPC) return to Paris in 2017.

They bring together researchers and engineers as well as clinicians (ENT specialists, phoniatricians, surgeons, etc.) and speech-language therapists, all interested in questions related to pathologies of voice, speech and language.

The 7th Journées de Phonétique Clinique will take place in Paris from 29 to 30 June 2017, organised by the Laboratoire de Phonétique et de Phonologie (LPP-UMR7018), Université Sorbonne Nouvelle Paris 3, Université Paris Descartes, the Hôpital Européen Georges Pompidou (HEGP; ENT department and functional assessment unit for voice, speech and swallowing disorders), and the Département Universitaire d'Orthophonie (Université Pierre et Marie Curie, UPMC).

 

The themes of these 7th Journées de Phonétique Clinique will include, non-exhaustively, the following topics:

- phonetic / phonological disorders

- production / perception disorders

- voice / speech disorders

- verbal / non-verbal communication

- motor speech disorders

- instrumentation, resources and modelling in clinical phonetics

 

More information coming soon...

Back  Top

3-3-21(2017-07-06) 9th Conference on Speech Technology and Human-Computer Dialogue, at Bucharest, Romania.

http://www.sped2017.ro

 

The SpeD 2017 Organizing Committee invites you to attend the 9th Conference on Speech Technology and Human-Computer Dialogue, at Bucharest, Romania. SpeD 2017 will bring together academics and industry professionals from universities, government agencies and companies to present their achievements in speech technology and related fields.

'SpeD 2017' is a conference and international forum that will reflect some of the latest trends in spoken language technology and human-computer dialogue research, as well as some of the most recent applications in this area.

'SpeD 2017' is intended to be an IEEE and EURASIP sponsored conference. As with all previous editions since 2009, the proceedings are intended to be indexed in the IEEE Xplore database and the Thomson Conference Proceedings Citation Index.

Organized by

  • University POLITEHNICA of Bucharest
    Faculty of Electronics, Telecommunications and Information Technology
  • Institute for Computer Science - Romanian Academy, Iași Branch
  • Research Institute for Artificial Intelligence 'Mihai Draganescu' - Romanian Academy, Bucharest

Under the Aegis of

  • Romanian Academy - Section of Information Science and Technology

Technical sponsorship

  • IEEE
  • The European Association for Signal and Image Processing (EURASIP)

Topics

  • Speech Analysis, Representations and Models
  • Spoken Language Recognition and Understanding
  • Text-to-Speech Synthesis
  • Machine Translation for Speech
  • Affective Speech Recognition, Interpretation and Synthesis
  • Speaker Identification and Verification in Biometric Systems and Security
  • Audio Based Solutions for Intruder Detection
  • Algorithms for Acoustic Echo Cancellation
  • Filtering and Transforms for Speech Technology
  • Spoken Language Based Systems
  • Speech Interface Design and Human Factors Engineering
  • Speech Interface Implementation for Embedded / Network-Based Applications
  • Natural Language Processing
  • Speech Data Mining
  • Spoken Dialogue Systems
  • Educational/Healthcare Applications
  • Assistive Technologies
  • Multimodal Processing
  • Spoken Language Databases
  • Speech Analysis for Linguistics and Phonetics

Preliminary Schedule

  • Submission of camera-ready papers (information for authors is provided on the Conference WEB site): February 13, 2017.
  • Notification of acceptance and reviewers' comments: April 10, 2017.
  • Submission of final papers: April 24, 2017.
  • Conference: July 6-9, 2017.
 
 
Back  Top

3-3-22(2017-09-20) 11th Disfluency Conference, Oxford, UK
Save the Date! 20-23 September 2017

Organised by Elsevier

 

We are delighted to announce that the 11th Oxford Dysfluency Conference (ODC), under the theme 'Challenge and Change', is to be held at St Catherine's College, Oxford, from 20-23 September 2017.

ODC has a reputation as one of the leading international scientific conferences in the field of dysfluency.

The conference brings together researchers and clinicians, providing a showcase and forum for discussion and collegial debate about the most current and innovative research and clinical practices. Throughout the history of ODC, the primary aim has been to bridge the gap between research and clinical practice.

The conference seeks to promote research that informs management, with interventions that are supported by sound theory and which inform future research.

In 2017, the goal is to encourage discussion and debate that will challenge and enhance our perspectives and understanding of research; the nature of stuttering and / or cluttering; and management across the ages.

The 2017 conference will enable delegates to:

  1. Present and learn from the latest research developments and findings
  2. Explore issues relating to the nature of stuttering and / or cluttering and its treatment
  3. Develop knowledge and clinical skills for working with children and adults who stutter and / or clutter
  4. Advance research in the field of dysfluency
  5. Consider ways to integrate research into clinical practice
  6. Support and encourage new researchers in the field
  7. Develop collaborations with researchers working in dysfluency
  8. Provide informal opportunities to meet and discuss ideas with leading experts in the field in a friendly environment

We invite you to visit the conference website, sign up for updates and don't forget to add the dates to your calendar.
We look forward to meeting you in Oxford!

Regards,

Conference Chairs
Sharon Millard, The Michael Palin Centre for Stammering, UK
Shelley B. Brundage, George Washington University, USA


To receive email updates for this event please sign up now! For further information and to register for email updates visit: www.dysfluencyconference.com
Important Dates
31 March 2017
Abstract Submission Deadline
26 May 2017
Author notification deadline
16 June 2017
Author registration deadline
Co-sponsor
Stuttering Foundation


Back  Top

3-3-23(2017-09-28) Workshop at BICLCE2017 (7th Biennial International Conference on the Linguistics of Contemporary English), Vigo, Spain

Speech Rhythm in L1, L2 and Learner Varieties of English

Workshop at BICLCE2017 (7th Biennial International Conference on the Linguistics of Contemporary English) in Vigo, 28-30 September 2017

https://sites.google.com/site/rflinguistics/workshops/rhythm2017

Convenor: Robert Fuchs (Hong Kong Baptist University)

 

Speech rhythm has long been recognised as an important supra-segmental category of speech, yet its measurement, relevance and the theoretical soundness of the concept continue to be hotly debated. The arguably most widely supported approach considers speech rhythm to consist of a continuum ranging from (1) a syllable-timed pole, with relatively small differences in prominence between syllables, to (2) a stress-timed pole, with relatively large differences in prominence between syllables. Most L1 varieties of English are widely regarded to be more stress-timed than most L2 and learner varieties, and this is supported by a considerable amount of empirical evidence (e.g. Deterding 1994, 2001, Fuchs 2016, Gut 2005, Gut and Milde 2002, Low 1998).

Yet, upon closer inspection, many of the concepts underlying this research appear to be contested. For one, L1 varieties of English are themselves heterogeneous in their rhythm. There is, for example, regional variation, with some dialects spoken in the British Isles being more syllable-timed than others (Ferragne 2008, Ferragne and Pellegrino 2004, White and Mattys 2007a, 2007b, White et al. 2007). Similarly, in L2 varieties, sociolinguistic differences such as that between acrolect and basilect might go hand in hand with a difference in speech rhythm. As for learner Englishes, while there is good evidence of the transfer of rhythmic characteristics from L1 to L2 (e.g. Dellwo et al. 2009, Gut 2009, Jang 2008, Sarmah et al. 2009), more research is needed to show that this has consequences in terms of foreign accent and accent recognition. More generally, research on speech rhythm would benefit from studies showing that quantitative measures of speech rhythm (so-called rhythm metrics) are perceptually relevant and psychologically 'real' in the sense that what is measured is reflected in a certain kind of percept. Finally, the very nature and reliability of these rhythm metrics have been discussed extensively, but arguably inconclusively, in the past years, with some researchers attempting to identify those duration-based metrics that are most reliable (White and Mattys 2007a, White et al. 2007, Wiget et al. 2010), others concluding that none of them are reliable (Arvaniti 2009, 2012, Arvaniti et al. 2008), and yet others suggesting metrics that focus on acoustic correlates of prominence other than duration, such as intensity (Fuchs 2016, He 2012, Low 1998), loudness (Fuchs 2014a), f0 (Cumming 2010, 2011, Fuchs 2014b) and sonority (Galves et al. 2002).
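
For readers less familiar with duration-based rhythm metrics, the short Python sketch below illustrates one widely used measure of this family, the normalised Pairwise Variability Index (nPVI), computed over successive interval durations. It is a minimal illustration only: the function, variable names and duration values are invented for this example and are not taken from any of the studies cited in this call.

def npvi(durations):
    # Normalised Pairwise Variability Index over successive interval durations
    # (e.g. vocalic intervals), expressed as a percentage. Larger alternations
    # between neighbouring intervals yield higher values, conventionally read
    # as a more 'stress-timed' rhythm; more even durations yield lower values.
    if len(durations) < 2:
        raise ValueError('need at least two interval durations')
    diffs = [abs(a - b) / ((a + b) / 2.0)
             for a, b in zip(durations[:-1], durations[1:])]
    return 100.0 * sum(diffs) / len(diffs)

# Invented example durations in seconds, for illustration only
speaker_a = [0.08, 0.21, 0.07, 0.19, 0.06]   # strongly alternating intervals
speaker_b = [0.11, 0.13, 0.10, 0.12, 0.11]   # relatively even intervals
print('nPVI, speaker A: %.1f' % npvi(speaker_a))   # higher value
print('nPVI, speaker B: %.1f' % npvi(speaker_b))   # lower value

Analogous indices that replace duration with intensity, loudness or f0, as discussed above, follow the same pairwise scheme.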

 

In order to address these issues, this workshop aims to bring together researchers working on one or more of the following aspects:

  • Applications of rhythm metrics that measure speech rhythm based on acoustic correlates of prominence other than duration
  • Comparative tests of the validity and reliability of existing rhythm metrics
  • Perceptual relevance and psychological reality of speech rhythm
  • Relevance of speech rhythm in Second Language Acquisition/learner Englishes, e.g. its contribution to foreign accent as well as pedagogical approaches
  • Differences in speech rhythm between varieties previously thought to be in the same 'rhythm class'
  • Sociolinguistic relevance of speech rhythm in indexing e.g. lectal differences or ethnic subvarieties within the same national variety of English

Apart from addressing one or more of the issues above, papers need be concerned with (a variety of) English or a language contact situation involving English (in keeping with the scope of the conference).

 

The workshop will consist of full papers and work in progress reports, which will be allotted 20 minutes for presentation (plus 10 minutes for discussion). The deadline for submission of abstracts (ca. 500 words, excluding title, references and keywords) is 15 December 2016. Notification of acceptance will be sent out by the end of January 2017. Abstracts should be sent to rfuchs@hkbu.edu.hk .

 

References

 

Arvaniti, Amalia. 2009. Rhythm, timing and the timing of rhythm. Phonetica 66(1/2): 46-63.

Arvaniti, Amalia. 2012. The usefulness of metrics in the quantification of speech rhythm. Journal of Phonetics 40: 351-373.

Arvaniti, Amalia, Tristie Ross, and Naja Ferjan. 2008. On the reliability of rhythm metrics. Journal of the Acoustical Society of America 124(4): 2495.

Dellwo, Volker, Francisco Gutiérrez Diez, and Nuria Gavalda. 2009. The development of measurable speech rhythm in Spanish speakers of English. In Actas de XI Simposio Internacional de Comunicacion Social, Santiago de Cuba, 594-597.

Cumming, Ruth E. 2010. The language-specific integration of pitch and duration. PhD thesis. University of Cambridge.

Cumming, Ruth E. 2011. Perceptually informed quantification of speech rhythm in pairwise variability indices. Phonetica 68(4): 256-277.

Deterding, David. 1994. The rhythm of Singapore English. In Proceedings of the fifth Australian international conference on speech science and technology, ed. Roberto Togneri, 316-321. Perth: Uniprint.

Deterding, David. 2001. The measurement of rhythm: A comparison of Singapore and British English. Journal of Phonetics 29: 217-230.

Ferragne, Emmanuel. 2008. Etude Phonétique des Dialectes Modernes de l'Anglais des Iles Britanniques: Vers l'Identification Automatique du Dialecte. PhD thesis. Université Lumière Lyon 2.

Ferragne, Emmanuel, and François Pellegrino. 2004. A comparative account of the suprasegmental and rhythmic features of British English dialects. Actes de Modelisations pour l'Identification des Langues, Paris, 121-126.

Fuchs, Robert. 2014a. Integrating variability in loudness and duration in a multidimensional model of speech rhythm: Evidence from Indian English and British English. In Proceedings of speech prosody 7, Dublin, ed. Nick Campbell, Dafydd Gibbon, and Daniel Hirst, 290-294.

Fuchs, Robert. 2014b. Towards a perceptual model of speech rhythm: Integrating the influence of f0 on perceived duration. In Proceedings of interspeech 2014, ed. Haizhou Li, Helen Meng, Bin Ma, Eng Siong Chng, and Lei Xie, Singapore, 1949-1953.

Fuchs, Robert. 2016. Speech Rhythm in Varieties of English: Evidence from Educated Indian English and British English. Singapore: Springer.

Galves, Antonio, Jesus Garcia, Denise Duarte, and Charlotte Galves. 2002. Sonority as a basis for rhythmic class discrimination. In Proceedings of speech prosody 2002, Aix-en-Provence, 323-326.

Gut, Ulrike. 2005. Nigerian English prosody. English World-Wide 26(2): 153-177.

Gut, Ulrike. 2009. Non-native speech. A corpus-based analysis of phonological and phonetic properties of L2 English and German. Frankfurt: Peter Lang.

Gut, Ulrike, and Jan-Torsten Milde. 2002. The prosody of Nigerian English. In Proceedings of the speech prosody 2002 conference, ed. Bernard Bel and Isabelle Marlien, 367-370. Aix-en-Provence: Laboratoire Parole et Langage.

He, Lei. 2012. Syllabic intensity variations as quantification of speech rhythm: Evidence from both L1 and L2. In Proceedings of the 6th international conference on speech prosody, Shanghai, 22-26 May 2012, ed. Qiuwu Ma, Hongwei Ding, and Daniel Hirst, 466-469. Shanghai: Tongji University Press.

Jang, Tae-Yeoub. 2008. Speech rhythm metrics for automatic scoring of English speech by Korean EFL learners. Malsori Speech Sounds 66: 41-59.

Low, Ee Ling. 1998. Prosodic Prominence in Singapore English. PhD thesis. University of Cambridge.

Sarmah, Priyankoo, Divya Verma Gogoi, and Caroline Wiltshire. 2009. Thai English. Rhythm and vowels. English World-Wide 30(2): 196-217.

White, Laurence, and Sven L. Mattys. 2007a. Calibrating rhythm: First language and second language studies. Journal of Phonetics 35(4): 501-522.

White, Laurence, and Sven L. Mattys. 2007b. Rhythmic typology and variation in first and second languages. Segmental and Prosodic Issues in Romance Phonology 282: 237-257.

White, Laurence, Sven L. Mattys, Lucy Series, and Suzi Gage. 2007. Rhythm metrics predict rhythmic discrimination. In Proceedings of the 16th international congress of phonetic sciences, Saarbrücken, 1009-1012.

Wiget, Klaus, Laurence White, Barbara Schuppler, Izabelle Grenon, Oleysa Rauch, and Sven L. Mattys. 2010. How stable are acoustic metrics of contrastive speech rhythm? Journal of the Acoustical Society of America 127(3): 1559-1569.

 

Back  Top


