ISCA - International Speech Communication Association


ISCApad #206

Thursday, August 20, 2015 by Chris Wellekens

3 Events
3-1 ISCA Events
3-1-1(2015-09-06) Doctoral Consortium at Interspeech 2015

Call for Applications: Doctoral Consortium at Interspeech 2015

Prospective and current doctoral students from all speech-related disciplines are invited to apply for admission to the Doctoral Consortium to be held at Interspeech 2015 in Dresden, Germany. The Doctoral Consortium aims to provide students working on speech-related topics with an opportunity to discuss their doctoral research with experts from their fields, and to receive feedback from experts and peers on their PhD projects. It will take the form of a one-day workshop prior to the main conference (6th September), involving short presentations by participants summarizing their projects, followed by an intensive discussion of these presentations. The Doctoral Consortium is a new format at Interspeech, held this year in place of the 'Students Meet Experts' lunch event of previous Interspeech conferences. It is organized by the Student Advisory Committee of the International Speech Communication Association (ISCA-SAC).

How to apply:
Students who are interested in participating are asked to submit an extended abstract of their thesis plan, 2-4 pages long and following the Interspeech template format, to:

Who should apply:
While we encourage applications from students at any stage of doctoral training, the doctoral consortium will benefit most those students who are in the middle of their PhD, i.e. those who have already received some initial results but would still benefit from feedback.

Important dates:
• Application deadline: June 20th, 2015
• Notification of acceptance: July 20th, 2015
• Date of the Doctoral Consortium: Sunday, 6th September 2015

For further questions please contact:


3-1-2(2015-09-06) INTERSPEECH 2015 Special Session on Synergies of Speech and Multimedia Technologies

Call for paper: submission for INTERSPEECH 2015 Special Session on
Synergies of Speech and Multimedia Technologies

Paper submission deadline: March 20, 2015
Special Session page:

Growing amounts of multimedia content are being shared or stored in
online archives. Separate research directions in the speech
processing and multimedia analysis communities are developing and
improving speech or multimedia processing technologies in parallel,
often using each other's work as 'black boxes'. However, a genuine
combination would appear to be a better strategy to exploit the
synergies between the modalities of content containing multiple
potential sources of information.

This session seeks to bring together the speech and multimedia research
communities to report on current work and to explore potential synergies
and opportunities for creative research collaborations between speech
and multimedia technologies. From the speech perspective, the session
aims to explore how the fundamentals of speech technology can benefit
multimedia applications, and from the multimedia perspective, the
crucial role that speech can play in multimedia analysis.

The list of topics of interest includes (but is not limited to):

- Navigation in multimedia content using advanced speech analysis features;
- Large-scale speech and video analysis;
- Multimedia content segmentation and structuring using audio and visual cues;
- Multimedia content hyperlinking and summarization;
- Natural language processing for multimedia;
- Multimodality-enhanced metadata extraction, e.g. entity extraction,
keyword extraction, etc.;
- Generation of descriptive text for multimedia;
- Multimedia applications and services using speech analysis features;
- Affective and behavioural analytics based on multimodal cues;
- Audio event detection and video classification;
- Multimodal speaker identification and clustering.

Important dates:

20 Mar 2015 paper submission deadline
01 Jun 2015 paper notification of acceptance/rejection
10 Jun 2015 paper camera-ready
20 Jun 2015 early registration deadline
6-10 Sept 2015 Interspeech 2015, Dresden, Germany

Submission takes place via the general Interspeech submission
system. Paper contributions must comply with the INTERSPEECH paper
submission guidelines, cf.
There will be no extension to the full paper submission deadline.
We look forward to receiving your contributions!


- Maria Eskevich, Communications Multimedia Group, EURECOM, France
- Robin Aly, Database Management Group, University of Twente, The Netherlands
- Roeland Ordelman, Human Media Interaction Group, University of Twente, The Netherlands
- Gareth J.F. Jones, CNGL Centre for Global Intelligent Content, Dublin City University, Ireland


3-1-3(2015-09-06) Announcement of the Plenary sessions at Interspeech 2015

  Dear Colleagues,


Interspeech-2015 will start in about 3 months and it is time to announce our plenary speakers.


Following the tradition of most past Interspeech conferences, the organizing committee of Interspeech-2015 has decided to present four keynote talks in Dresden, one on each day of the conference. It is also a tradition that the first keynote talk is presented by the ISCA medallist at the end of the opening ceremony on Monday morning, Sept. 7, 2015. Information on this year's ISCA medallist will be published later in June.


Information on the other three plenary speakers will, however, be available very soon on the Interspeech-2015 website; here we already provide a brief introduction of the speakers:


Prof. Klaus Scherer from the University of Geneva will present a keynote talk about vocal communication as a major carrier of information about a person's physique, enduring dispositions, strategic intentions and current emotional state. He will also discuss the importance of voice quality in comparison to other modalities, e.g. facial expressions, in this context. Prof. Scherer is one of the most prominent researchers in the area of emotion psychology and held an ERC Advanced Grant covering these research topics.


Prof. Katrin Amunts from the Institute of Neuroscience and Medicine at the Research Centre Juelich, Germany, will deliver a talk about her research within the European FET Flagship "The Human Brain Project". The expectations for this presentation are twofold. Firstly, many Interspeech attendees may have heard about the huge EU Flagship projects, which are funded with more than $1 billion each, but may not know what exactly goes on in such a project and how it is organized and structured. This talk will introduce the FET Flagship project that is most relevant to the Interspeech community, the "Human Brain Project". Prof. Amunts is not only the leader of Subproject 2, "Strategic Human Brain Data", but is also very well known for her work on how language is mapped to regions of the human brain and on creating a 3D atlas of the human brain. From her talk we can expect a perfect combination of speech and language research with neural brain research within probably the most prominent project in this area.


Everybody in the Speech Community knows the popular „Personal Digital Assistants“, such as Siri, Cortana, or Google Now. However, many of those people might not know exactly what detailed technology is behind these highly commercially successful systems. The answer to this question will be given by Dr. Ruhi Sarikaya from Microsoft in his keynote address. His group has been building the language understanding and dialog management capabilities of both Cortana and Xbox One. In his talk, he will give an overview of personal digital assistants and describe the system design, architecture and the key components behind them. He will highlight challenges and describe best practices related to bringing personal assistants from laboratories to the real-world.


I hope you agree that we will have truly exciting plenary talks at this year’s Interspeech and the Interspeech-2015 team is looking forward to sharing this experience with you soon in September in Dresden.



Gerhard Rigoll

Plenary Chair, Interspeech-2015


3-1-5(2016-09-08) INTERSPEECH 2016, San Francisco, CA, USA

Interspeech 2016 will take place

from September 8-12 2016 in San Francisco, CA, USA

General Chair is Nelson Morgan.

You may already be tempted by the nice pictures on the cover page of its tentative website



3-1-6INTERSPEECH 2015 Update 1 (December 2014)

Updates from INTERSPEECH 2015

Dear colleague,

Interspeech 2015 in Dresden is approaching at an increasing pace, and the entire team of organizers is trying to ensure that you will get a conference which meets all, and hopefully surpasses some, of your expectations. Regarding the usual program of oral and poster sessions, special sessions and challenges, keynotes, tutorials and satellite workshops, the responsible team is working hard to ensure that you will get a program which is not only of respectable breadth and depth, but which also tackles a couple of innovative topics, some of them centered around the special topic of the conference “Speech beyond speech: Towards a better understanding of our most important biosignal”, some of them also addressing other emergent topics.

We would particularly like to draw your attention to the approaching deadlines:
          30 Nov 2014: Special sessions submission deadline (passed)
          15 Dec 2014: Notification of pre-selected special sessions
          20 Mar 2015: Tutorial submission deadline
          20 Mar 2015: Paper submission deadline (not extensible)
          17 Apr 2015: Show and tell paper submission deadline.

In addition to regular papers, we will also experiment with a virtual attendance format for persons who are – mainly for visa or health reasons – not able to come to Dresden to present their paper. For these persons, a limited number of technology-equipped poster boards will be available where online presentations can be held. The number of virtual attendance slots is strictly limited (thus potentially leading to a lower acceptance rate). The corresponding papers have to pass the normal review process, but the deadline will most probably be around 14 days before the normal paper submission deadline. More details on this format will be announced soon.

In the upcoming months, we will keep you updated via this thread, and we will present some historical instruments and techniques related to speech technology which nicely illustrate that Dresden has a rich history in speech science and technology. Interspeech 2015 will hopefully contribute to this history with the latest scientific and technological advances. The entire organizing team is looking forward to welcoming you in Dresden.

On behalf of the organizing team,

Sebastian Möller (General Chair)


3-1-7INTERSPEECH 2015 Update 2 (January 2015)


+++ INTERSPEECH 2015 Update – and a look back! +++

A View from Dresden onto the History of Speech Communication

Part 1: The historic acoustic-phonetic collection

Information technology at the TU Dresden goes back to Heinrich Barkhausen (1881–1956), the 'father of the electron valve', who taught there from 1911 to 1953. Speech research in a narrower sense started with the development of a vocoder in the 1950s. Walter Tscheschner (1927–2004) performed his extensive investigations of the speech signal using components of the vocoder. In 1969, a scientific unit for Communication and Measurement was founded in Dresden; it is the main root of the present Institute of Acoustics and Speech Communication. W. Tscheschner was appointed Professor of Speech Communication and started research in speech synthesis and recognition, which continues today.

Numerous objects from the history of speech communication in Dresden, but also from other parts of Germany, are preserved in the historic acoustic-phonetic collection of the TU Dresden. Until the opening of Interspeech 2015, we will present interesting exhibits from the collection in this newsletter each month. Today, we give an introduction.

The historic acoustic-phonetic collection of the TU Dresden consists of three parts:

• Objects that illustrate the development of acoustics and speech technology at the TU Dresden. The most interesting devices are speech synthesizers of various technologies.

• Objects illustrating the development of experimental phonetics from 1900 until the introduction of the computer. The items of this part were collected by D. Mehnert from different phonetics laboratories and rehabilitation units throughout Germany.

• Objects formerly collected at the Phonetics Institute of Hamburg University. This important collection, founded by Giulio Panconcelli-Calzia, was transferred to Dresden in 2005 under a contract following the closing of the Hamburg institute.

The collection is presented in the Barkhausenbau on the main campus of the TU Dresden. It is currently moving to new rooms that are more suitable for presentation. The newly installed collection will be re-opened on the occasion of Interspeech 2015.

For this purpose, we cordially invite you to a workshop on the history of speech communication, called HSCR 2015, which will be held as a satellite event of Interspeech 2015 on September 4-5, 2015, in the Technical Museum of the City of Dresden. It is organized by the special interest group (SIG) on 'The History of Speech Communication Sciences', which is supported by the International Speech Communication Association (ISCA) and the International Phonetic Association (IPA). More information on the workshop is presented on

Rüdiger Hoffmann (Local Chair)



3-1-8INTERSPEECH 2015 Update 3 (February 2015)

+++ INTERSPEECH 2015 – February Update ++

A View from Dresden onto the History of Speech Communication

Part 2: Von Kempelen's 'Sprachmaschine' and the beginning of speech synthesis

Complete article including figures available at:


The speaking machine of Wolfgang von Kempelen (1734-1804) can be considered the first successful attempt at a mechanical speech synthesiser. The Austrian-Hungarian engineer is still famous for his 'chess Turk', but it was his 'Sprachmaschine' that counts as a milestone in (speech) technology. In his book 'Mechanismus der menschlichen Sprache nebst der Beschreibung einer sprechenden Maschine' (published in 1791; no English translation yet), he described the function of the machine, which was intended to give a voice to deaf people. Contemporary personalities like Goethe confirmed the authenticity of a child's voice when the speaking machine was played.


How does the machine work?

The machine consists of a bellows connected by a tube to a wooden wind chest. On the other side of the wind chest, a round wooden block forms the interface to an open rubber funnel (serving as the vocal tract). The wind chest contains two modified recorders to produce the fricatives [s] and [S]. The voice generator is located inside the wooden block. The artificial voice is generated with the help of a reed pipe borrowed from the pipe organ. It has an ivory reed vibrating against a hollow wooden shallot (as in a clarinet). A trained human operator plays the machine like a musical instrument: the right elbow controls the air pressure by pressing on the bellows, two fingers of the right hand open or close the passages for stops and nasals, and two other fingers of the right hand those for the fricatives. Vowels are performed in different ways with the palm of the left hand.



Apart from parts of one of the originals, which are kept at the Deutsches Museum in Munich, there are several reconstructions based on Kempelen's quite detailed descriptions. The replicas built in Budapest, Vienna, York and Saarbrücken allow lively demonstrations of the mechanical generation of speech, as well as its acoustic analysis and perception tests with today's listeners. Interestingly, the art of constructing artificial voices led to the profession of 'voice makers' in the Eastern-German region of Thuringia (more information in one of the next newsletters). Original products of the Thuringian 'Stimmenmacher', as well as one of the replicas located at TU Dresden, will be on display for ears, eyes (and hands) at the re-opening of the HAPS (Historische Akustisch-Phonetische Sammlung) on 4 September, which is also the start of the Interspeech satellite Workshop on the History of Speech Communication Research (HSCR 2015).


Jürgen Trouvain and Fabian Brackhane


3-1-9INTERSPEECH 2015 Update 4 (March 2015)

A View from Dresden onto the History of Speech Communication

Part 3: Voices for toys - First commercial spin-offs in speech synthesis


Complete article including figures available at:


When Wolfgang von Kempelen died in 1804, his automata (including the speaking machine) came into the ownership of Johann Nepomuk Maelzel (1772 - 1838), who demonstrated them on many tours through Europe and America. He was a clever mechanic and applied Kempelen's ideas in a mechanical voice for puppets, which could pronounce "Mama" and "Papa". He received a patent for it in 1824 (Figure 1).


The idea of speaking puppets and toys was continued mainly in the area of Sonneberg in Thuringia, Germany. This small town was the world capital of puppet and toy manufacturing in the 19th century. The voices consist of a bellows, a metal tongue for voicing, and a resonator. There are three reasons why we regard the mechanical voices as a milestone in the development of speech technology:


1. The mechanical voices established the first commercial spin-off of speech research. The toy manufacturers in Sonneberg recognized the importance of Maelzel's invention and produced speaking puppets from 1852 onwards. The "Stimmenmacher" (voice maker) was a specific profession, and in 1911 we find eight manufacturers of human and animal voices in Sonneberg alone. The most important of them was Hugo Hölbe (1844 - 1931), who developed mechanisms that were able to speak not only Mama/Papa (Figure 2), but also words like Emma, Hurrah, etc.


2. The mechanical voices were applied in the first book with multimodal properties. The bookseller Theodor Brand from Sonneberg received a patent for his “speaking picture book” in 1878. This book shows different animals. Pulling a knob, which corresponds to a picture, activates the voice of the animal (Figure 3). The picture book was published in several languages and was a huge commercial success all over the world.


3. The mechanical voices represent the first attempt to support the rehabilitation of hard-of-hearing people by means of speech technology. The German otologist Johannes Kessel (1839 - 1907) demonstrated Hölbe's voices as a training tool for speech therapy at a conference in 1899. However, the quality of this kind of synthetic speech proved insufficient for this purpose.


The samples from Kessel came to the phonetic laboratory of Panconcelli-Calzia in Hamburg, who mentioned them in his historical essays. Owing to the transfer of the phonetic exhibits from Hamburg to Dresden in 2005, you can now see the mechanical voices in the HAPS of the TU Dresden.



Rüdiger Hoffmann

Photographs Copyright TU Dresden / HAPS



3-1-10INTERSPEECH 2015 Update 5 (April 2015)


A View from Dresden onto the History of Speech Communication

Part 4: Helmholtz Resonators Complete article including figures available at:

Hermann von Helmholtz (1821–1894) was a German physician and physicist who made important contributions in many areas of science. One of these areas was acoustics, where he published the famous book 'On the sensations of tone as a physiological basis for the theory of music' in 1863. There he described his invention of a special type of resonator, which is now known as the Helmholtz resonator. These resonators were devised as highly sensitive devices to identify the harmonic components (partial tones) of sounds and allowed significant advances in the acoustic analysis of vowel sounds and musical instruments.

Before the invention of Helmholtz resonators, strong partial tones in a sound wave were typically identified by very thin, elastic membranes spanned on circular rings, similar to drums. Such a membrane has certain resonance frequencies that depend on its material, tension, and radius. If the sound field around the membrane contains energy at one of these frequencies, the membrane is excited and starts to oscillate. The tiny amplitudes of this oscillation can be detected visually when fine-grained sand is distributed over its surface: the sand accumulates at the rim of the membrane when it is excited at its lowest resonance frequency, or along specific nodal lines on its surface when higher-order modes are excited. With a set of membranes tuned to different frequencies, a rough spectral analysis can be conducted.

It was also known that the sensitivity of this method could be improved when the membrane was spanned over the (removed) bottom of a bottle with an open neck. The key idea of Helmholtz was to replace this bottle by a hollow sphere with an open neck at one 'end' and a small spiky opening at the opposite 'end'. The spiky opening was inserted into one ear canal. In this way, the eardrum was excited similarly to the sand-covered membrane of the previous technique; however, due to the high sensitivity of the ear, partial tones could be detected much more easily. A further advantage of these resonators was that their resonance frequency can be expressed analytically in terms of the volume of the sphere and the diameter and length of the neck. Hence these resonators became important experimental tools for subjective sound analysis in the late 19th and early 20th centuries.
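The analytical expression alluded to here is the classical Helmholtz resonance formula, f = (c / 2π) · √(A / (V · L)), with neck cross-section A, cavity volume V, and neck length L. The following sketch evaluates it for hypothetical dimensions (not those of the HAPS instruments, which are not documented here):

```python
import math

def helmholtz_frequency(volume, neck_length, neck_diameter, c=343.0):
    """Classical Helmholtz resonance: f = (c / 2*pi) * sqrt(A / (V * L)).

    volume        -- cavity volume in m^3
    neck_length   -- neck length in m (end corrections ignored for simplicity)
    neck_diameter -- neck diameter in m
    c             -- speed of sound in m/s
    """
    area = math.pi * (neck_diameter / 2.0) ** 2  # neck cross-section A
    return (c / (2.0 * math.pi)) * math.sqrt(area / (volume * neck_length))

# A hypothetical 1-litre sphere with a 2 cm long, 2 cm wide neck
# resonates at roughly 216 Hz -- within the 128-768 Hz range covered
# by the largest HAPS resonator set.
print(round(helmholtz_frequency(1e-3, 0.02, 0.02)))  # -> 216
```

Note how the formula captures the qualitative behaviour: a larger cavity volume or a longer neck lowers the pitch, while a wider neck raises it.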

The HAPS at the TU Dresden contains three sets of Helmholtz resonators. The biggest of these sets contains 11 resonators, which are tuned to frequencies between 128 Hz and 768 Hz. The HAPS also contains a related kind of resonator, invented by Schaefer (1902). These resonators are tubes with one open end and one closed end; the closed end has a small spiky opening to be inserted into the ear canal. They respond maximally to frequencies whose wavelength is four times the length of the tube.
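The quarter-wavelength rule for the Schaefer tubes translates directly into a resonance frequency f = c / (4L). A minimal sketch, with the tube length chosen arbitrarily for illustration:

```python
def quarter_wave_frequency(tube_length, c=343.0):
    """Lowest resonance of a tube closed at one end.

    The tube responds maximally when the wavelength is four times its
    length, i.e. f = c / (4 * L).
    """
    return c / (4.0 * tube_length)

# A hypothetical 20 cm closed tube resonates near 429 Hz.
print(round(quarter_wave_frequency(0.20)))  # -> 429
```

Halving the tube length doubles the resonance frequency, so a graded set of tubes spans a frequency range in the same way as a graded set of Helmholtz spheres.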

Helmholtz used his resonators not only for sound analysis, but also for the synthesis of vowels. To this end, he first had to analyze the resonances of the vocal tract for different vowels. He did this by means of a set of tuning forks, which he placed and excited directly in front of his open mouth while he silently articulated the different vowels. When the frequency of a tuning fork was close to a resonance of the vocal tract, the resulting sound became much louder than for the other frequencies. For each of the vowels /u/, /o/, and /a/, he was only able to detect a single resonance of the vocal tract, at the frequencies 175 Hz (note f), 494 Hz (note b’), and 988 Hz (note b’’), respectively. For each of the other German vowels investigated, he detected two resonances. The single resonances detected for /u/, /o/, and /a/ probably correspond to the closely spaced first and second resonances of these vowels; his method of analysis was evidently not sensitive enough to separate the two individual resonances.

To synthesize the vowels /u/, /o/, and /a/ with a single resonance, he simply connected a reed pipe to Helmholtz resonators tuned to the corresponding frequencies. For the vowels with two resonances, he selected a Helmholtz resonator for one of the resonances and attached a 6-10 cm long glass tube to the outer opening of the resonator to create the second resonance. These experiments show that Helmholtz had surprising insight into the source-filter principle of speech production, which was fully elaborated by Gunnar Fant and others 100 years later.

Peter Birkholz


3-1-11INTERSPEECH 2015 Update 6 (May 2015)

A View from Dresden onto the History of Speech Communication


Part 5: Artificial vocal fold models – The investigation of phonation

Complete article including figures available at:

The investigation of the larynx was (and is) one of the predominant topics in phonetic research. In the early days of experimental phonetics, mechanical models of the larynx or, at least, of the vocal folds were utilized according to the paradigm of analysis-by-synthesis.


The first models used flat parallel elastic membranes or other simple elements to simulate the function of the vocal folds (Fig. 1). However, the geometry of these models was rather different from that of real human vocal folds. Substantial progress was made by Franz Wethlo (1877 - 1960), who worked at the University of Berlin as an educationalist and special-needs pedagogue. He realized that the vocal folds should not be modelled by flat parallel membranes, but that their three-dimensional shape should be taken into account. Hence, he proposed a three-dimensional model formed by two elastic cushions (Fig. 2). The cushions were filled with pressurized air, whose pressure could be varied for experimental purposes, in particular to adjust the tension of the artificial vocal folds. The whole model became known as the “Polsterpfeife” (cushion pipe). Wethlo described it in 1913.


The historical collection (HAPS) at the TU Dresden owns several of Wethlo's cushion pipes in different sizes, modelling male, female, and children's voices. A team from the TU Dresden repeated Wethlo's experiments with his original equipment in 2004. For this purpose, the cushion pipes were connected to a historical “vocal tract model”. This vocal tract model was actually a stack of wooden plates with holes of different diameters that modelled the varying cross-sectional area of the vocal tract between the glottis and the lips (Fig. 3). This “configurable” vocal tract model came to the HAPS collection from the Institute of Phonetics in Cologne. The artificial vocal folds were used to excite vocal tract configurations for the vowels /a/, /i/ and /u/, but listening experiments showed that these artificial vowels were rather difficult to discriminate.


Today, there is renewed interest in mechanical models of the vocal folds. Such models can be used in physical 3D robotic models of the speech apparatus (e.g., the Waseda talker series of talking robots), to evaluate the accuracy of low-dimensional digital vocal fold models, or to examine pathological voice production.



Rüdiger Hoffmann & Peter Birkholz



3-1-12INTERSPEECH 2015 Update 7 (June 2015)


A View from Dresden onto the History of Speech Communication

Part 6: Measuring speech respiration

Complete article including figures available at:

The respiratory behaviour of humans provides important bio-signals; in speech communication we are interested in respiration while speaking. The investigation of speech respiration mainly entails the observation of i) the activity of muscles relevant for in- and exhalation, ii) lung volume, iii) airflow, iv) sub-glottal air pressure, and v) the kinematic movements of the thorax and the abdomen.

In the 'early days' of experimental phonetics the measurements were mainly focused on lung volume and the kinematic behaviour of the rib cage and the belly. We present here three devices that are also part of the historic acoustic-phonetic collection (HAPS) which will be re-opened during the International Workshop on the History of Speech Communication Research.

The Atemvolumenmesser (Figure 1) is an instrument for measuring the vital capacity of the lung and the phonatory airflow, respectively. The subject inhales maximally through a mask placed over mouth and nose; the subsequently exhaled air reaches the bellows via a mouthpiece and a rubber tube. The resulting volume can be read on a vertical scale. A stylus mounted on a small tube at the scale makes it possible to register the temporal dynamics of speech breathing with the help of a kymograph.

Figure 2 shows a Gürtel-Pneumograph ('belt pneumograph'), which serves to investigate respiratory movements. Rubber tubes with a wave-like surface are fixed around the upper body of the subject in order to measure thoracic and abdominal respiration. Changes in the kinematics result in changes of air pressure, which are transmitted via tubes to so-called Marey capsules and onto a stylus, to be registered with a kymograph.

The kymograph was the core instrument of experimental phonetic research until the 1930s. The 'wave-writer' graphically represents changes over time: a revolving drum was wrapped with a sheet of paper coated with soot (impure carbon), on which a fine stylus wrote the measured changes.

A clockwork motor was responsible for the constant revolution of the drum. Time-relevant parameters like speech wave forms, air pressure changes of the pneumograph or air volume changes of the Atemvolumenmesser were transduced into kinematic parameters via the Marey capsules and registered on the time axis.

The drum in Figure 3 has a height of 180 mm and a circumference of 500 mm. At the beginning of the registration process the drum is at the top, and it sinks continuously downwards during registration. This spiral movement allows the graphical recording of longer curves on the paper. The speed of revolution of the drum could be set between 0.1 and 250 mm per second.

Jürgen Trouvain and Dieter Mehnert


3-1-13INTERSPEECH 2015 Update 8 (July 2015)



A View from Dresden onto the History of Speech Communication


Part 7: Early electronic demonstrators for speech synthesis


Complete article including figures available at:

The development of early vocoders had a large impact on speech research, starting with the patents of Karl Otto Schmidt in Berlin and with Homer Dudley’s vocoder at Bell Labs in the 1930s. In some other places, vocoder prototypes had been designed during and after World War II. In Dresden, the development of a prototype was performed in the framework of the Dr.-Ing. thesis of Eberhard Krocker in the 1950s.

The first channel vocoders were large and expensive due to their use of electronic valves, and there was some doubt whether they could be widely used in commercial applications. Krocker summarized: “The importance of the vocoder is less the frequency band compression than the potential for essential investigations on speech. The analyzer can be combined with registration equipment for the analysis of sounds, whereas the synthesizer can be combined with a control mechanism for the synthetic production of speech.”

This prognosis proved exact. The analysis-synthesis technology became a very powerful tool in speech research. The synthesizer part of a channel vocoder could be used for early attempts at electronic speech synthesis, with the analyzer part replaced by a control unit. This was a manual/pedal control in the case of Dudley's famous Voder. Another way of controlling the synthesizer was the optoelectronic reading of a spectrogram; the first of these so-called pattern-playback devices were developed at Bell Labs and Haskins Labs.
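The analysis-synthesis split can be illustrated with a toy channel vocoder in Python. This is only a sketch with arbitrarily chosen bands and a synthetic test signal, not the historical circuit: the analyzer extracts per-band amplitude envelopes, and the synthesizer re-imposes them on a band-filtered pulse carrier.

```python
import math

def bandpass(x, f, bw, fs):
    # Crude band-pass filter: a second-order digital resonator
    # centred at f Hz with bandwidth bw Hz.
    r = math.exp(-math.pi * bw / fs)
    c1, c2 = 2 * r * math.cos(2 * math.pi * f / fs), -r * r
    y1 = y2 = 0.0
    out = []
    for s in x:
        y = s + c1 * y1 + c2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def envelope(x, win):
    # Rectification plus a moving average approximates the slowly
    # varying channel envelope measured by the analyzer.
    out, acc, buf = [], 0.0, [0.0] * win
    for i, s in enumerate(x):
        a = abs(s)
        acc += a - buf[i % win]
        buf[i % win] = a
        out.append(acc / win)
    return out

fs = 8000
# Stand-in for a speech signal: a 300 Hz tone with a slow amplitude wobble.
speech = [math.sin(2 * math.pi * 300 * t / fs) * (0.5 + 0.5 * math.sin(2 * math.pi * 3 * t / fs))
          for t in range(fs)]
# Synthesizer excitation: a 100 Hz pulse train (the 'buzz' source).
carrier = [1.0 if t % 80 == 0 else 0.0 for t in range(fs)]

bands = [(250, 100), (500, 100), (1000, 200)]  # a few channels for brevity
out = [0.0] * fs
for f, bw in bands:
    env = envelope(bandpass(speech, f, bw, fs), 160)  # analyzer channel
    exc = bandpass(carrier, f, bw, fs)                # synthesizer channel
    for i in range(fs):
        out[i] += env[i] * exc[i]                     # re-impose the envelope
```

Replacing the envelope stage with values read optically from a spectrogram gives exactly the pattern-playback idea.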

It became clear that more effective parameterizations of the speech signal exist, and vocoder types other than the channel vocoder arose. Formant coding proved to be a very effective approach. Consequently, the early speech synthesis terminals followed the principle of formant synthesis. This development was strongly influenced by the work of Gunnar Fant, who developed the vowel synthesizer OVE, which was controlled by moving a pointer across the formant plane.
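The formant-synthesis principle can be sketched in a few lines of Python: a pulse-train excitation is passed through a cascade of second-order resonators, one per formant. The frequencies and bandwidths below are illustrative textbook values for an /a/-like vowel, not parameters of OVE or of the Dresden synthesizer.

```python
import math

def resonator(x, f, bw, fs):
    # Second-order digital resonator acting as one formant filter.
    r = math.exp(-math.pi * bw / fs)
    c1, c2 = 2 * r * math.cos(2 * math.pi * f / fs), -r * r
    y1 = y2 = 0.0
    out = []
    for s in x:
        y = s + c1 * y1 + c2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

fs, f0, dur = 16000, 120, 0.2
n = int(fs * dur)
# Pulse train at f0 models the glottal excitation.
src = [1.0 if i % (fs // f0) == 0 else 0.0 for i in range(n)]

# Cascade two formant resonators; F1=700 Hz, F2=1200 Hz are /a/-like values.
vowel = resonator(resonator(src, 700, 80, fs), 1200, 90, fs)
print(len(vowel))  # 3200 samples of a crude vowel-like waveform
```

Moving a pointer across the formant plane, as in OVE, amounts to continuously changing the (F1, F2) pair fed to the two resonators.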

Both devices, the vowel synthesizer and the playback device, had great didactic value. When Walter Tscheschner (1927-2004) was appointed to the chair for Electronic Speech Communication at the former Technische Hochschule (later Technische Universität) Dresden, he wanted his own versions of these devices as demonstrators for his lectures. The vowel synthesizer (Figure 1) was built in 1962. It uses electron-valve circuitry to produce three formants; the two lower formants can be adjusted with a pointer (Figure 2).

The playback device was constructed in 1973 as an optical spectrogram reader, which controlled the 19 channels of the abovementioned vocoder. Because the device was very attractive as a demonstrator, a portable version (PBG 2, Figure 3) was implemented in 1982.

Both devices are now exhibits of the historic acoustic-phonetic collection (HAPS) of the TU Dresden. The most remarkable fact is that they are still working.


Rüdiger Hoffmann and Ulrich Kordon


3-2 ISCA Supported Events
3-2-1(2015-09-02) SIGDIAL 2015 CONFERENCE
Preliminary Call for Papers

SIGDIAL 2015 CONFERENCE
Wednesday, September 2 to Friday, September 4, 2015

The 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2015) will be held in Prague, Czech Republic, September 2-4. The SIGDIAL venue provides a regular forum for the presentation of cutting-edge research in discourse and dialogue to both academic and industry researchers. Continuing a series of fifteen successful previous meetings, this conference spans the research interest areas of discourse and dialogue. The conference is sponsored by the SIGdial organization, which serves as the Special Interest Group on discourse and dialogue for both ACL and ISCA.

TOPICS OF INTEREST
We welcome formal, corpus-based, system-building or analytical work on discourse and dialogue, including but not restricted to the following themes and topics:
- Discourse Processing and Dialogue Systems
- Corpora, Tools and Methodology
- Pragmatic and/or Semantic Modeling
- Computational Sociolinguistics
- Collaborative Process Analysis
- Dimensions of Interaction
- Open Domain Dialogue
- Style, Voice and Personality in Spoken Dialogue and Written Text
- Applications of Dialogue and Discourse Processing Technology
- Novel Methods for Generation Within Dialogue

SUBMISSIONS

Special Session Proposals
The SIGDIAL organizers welcome the submission of special session proposals. A SIGDIAL special session is the length of a regular session at the conference; it may be organized as a poster session, a poster session with panel discussion, or an oral presentation session. Special sessions may, at the discretion of the SIGDIAL organizers, be held as parallel sessions. Those wishing to organize a special session should prepare a two-page proposal containing: a summary of the topic of the special session; a list of organizers and sponsors; a list of people who may submit and participate; and a requested format (poster/panel/oral session). These proposals should be sent to conference[at] by the special session proposal deadline. Special session proposals will be reviewed jointly by the general and program co-chairs.

Papers
The program committee welcomes the submission of long papers, short papers, and demonstration descriptions. All accepted submissions will be published in the conference proceedings.
- Long papers may, at the discretion of the technical program committee, be accepted for oral or poster presentation. They must be no longer than 8 pages, including title, content, and examples. Two additional pages are allowed for references and appendices, which may include extended example discourses or dialogues, algorithms, graphical representations, etc.
- Short papers will be presented as posters. They should be no longer than 4 pages, including title and content. One additional page is allowed for references and appendices.
- Demonstration papers should be no longer than 3 pages, including references. A separate one-page document should be provided to the program co-chairs for demonstration descriptions, specifying furniture and equipment needed for the demo.

Authors of a submission may designate their paper to be considered for a SIGDIAL special session, which would highlight a particular area or topic. All papers will undergo regular peer review. Papers that have been or will be submitted to other meetings or publications must provide this information (see submission format). A paper accepted for presentation at SIGDIAL 2015 must not have been presented at any other meeting with publicly available proceedings. Any questions regarding submissions can be sent to the program co-chairs at program-chairs[at]

Authors are encouraged to submit additional supportive material such as video clips or sound clips and examples of available resources for review purposes. Submission is electronic, using paper submission software.

FORMAT
All long, short, and demonstration submissions should follow the two-column ACL 2015 format. We strongly recommend the use of ACL LaTeX style files or Microsoft Word style files tailored for the ACL 2015 conference. Submissions must conform to the official ACL 2015 style guidelines, and they must be submitted electronically in PDF. As in most previous years, submissions will not be anonymous. Papers may include authors' names and affiliations, and self-references are allowed.

MENTORING SERVICE
For several years, the SIGDIAL conference has offered a mentoring service. Submissions with innovative core ideas that may need language (English) or organizational assistance will be flagged for 'mentoring' and conditionally accepted with a recommendation to revise with a mentor. An experienced mentor who has previously published in the SIGDIAL venue will then help the authors of these flagged papers prepare their submissions for publication. Any questions about the mentoring service can be addressed to the mentoring chair Svetlana Stoyanchev at mentoring[at]

STUDENT SUPPORT
SIGDIAL also offers a limited number of scholarships for students presenting a paper accepted to the conference. Application materials will be posted at the conference website.

BEST PAPER AWARDS
In order to recognize significant advancements in dialogue and discourse science and technology, SIGDIAL will present two best paper awards. A selection committee consisting of prominent researchers in the fields of interest will select the recipients of the awards.

SPONSOR THE CONFERENCE
SIGDIAL offers a number of opportunities for sponsors. For more information, email the sponsorships chair Kristy Boyer at sponsor-chair[at]

DIALOGUE AND DISCOURSE
SIGDIAL authors are encouraged to submit their research to the journal Dialogue and Discourse, which is endorsed by SIGdial.

IMPORTANT DATES
- Special Session Proposal Deadline: Sunday, 15 March 2015 (23:59, GMT-11)
- Special Session Notification: Monday, 30 March 2015
- Long, Short and Demonstration Paper Submission Deadline: Thursday, 30 April 2015 (23:59, GMT-11)
- Long, Short and Demonstration Paper Notification: Friday, 12 June 2015
- Final Paper Submission Deadline (mentored papers only): Monday, 13 July 2015
- Final Paper Submission Deadline (all types except mentored papers): Monday, 20 July 2015
- Conference: Wednesday, 2 September 2015 to Friday, 4 September 2015

ORGANIZING COMMITTEE
General Co-Chairs: Alexander Koller, University of Potsdam, Germany; Gabriel Skantze, KTH Royal Institute of Technology, Sweden
Technical Program Co-Chairs: Masahiro Araki, Kyoto Institute of Technology, Japan; Carolyn Penstein Rose, Carnegie Mellon University, USA
Mentoring Chair: Svetlana Stoyanchev, AT&T, USA
Local Chair: Filip Jurcicek, Charles University, Czech Republic
Sponsorships Chair: Kristy Boyer, North Carolina State University, USA
SIGdial President: Amanda Stent, Yahoo! Labs, USA
SIGdial Vice President: Jason Williams, Microsoft Research, USA
SIGdial Secretary/Treasurer: Kristiina Jokinen, University of Helsinki, Finland

3-2-2(2015-09-06) Workshop on Roadmap for Conversational Interaction Technologies (Rockit), Dresden, Germany

Dear colleagues,

Rockit ( ) is an EU coordinated support action with the goal of establishing a
roadmap for the future of Conversational Interaction Technologies. The resulting
roadmap will be communicated to the EC to influence the setting of future European
funding priorities, and it will coordinate and guide the efforts of the research
and business communities. To this end, the Conversational Interaction Technology
Innovation Alliance (CITIA, ) has been founded.

Last year, we collected input from experts in the area, which has been condensed
into a draft roadmap. You can get an overview from the white paper: .
For the scientific side, you can download screenshots of the roadmap from .

At this year's Interspeech, we plan to refine the roadmap. To this end, we will hold
a roadmapping workshop on Sunday, September 6th, from 14:00 to 17:00 in Dresden. The
plan is to introduce you to the present version of the roadmap. We will discuss in
more detail the links between commerce/social goods and the science, so that we can
make a better, more coherent case for future funding. To participate, you should be
willing to read the products/services part of the roadmap, look at the science plan
in one field, and critique it.

For planning purposes, a quick reply would be appreciated.

Hope to see you in Dresden

Steve Renals and Dietrich Klakow


3-2-3(2015-11-09) CFP Conference on multimodal interaction (ICMI 2015), Seattle, USA

Call for Contributions


ACM International Conference on Multimodal Interaction (ICMI 2015)

November 9-13, 2015, Seattle, WA, USA



ICMI is the premier international forum for multidisciplinary research on multimodal interaction and multimodal interfaces. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and integrative, multimodal system development.

ICMI'2015 will take place between November 9th and 13th at Motif Hotel in Seattle (USA). The main conference is single-track and includes: keynote speakers, technical full and short papers (including oral and poster presentations), special sessions, demonstrations, exhibits and doctoral spotlight papers. The conference will also feature workshops and grand challenges. The proceedings of ICMI'2015 will be published by ACM as part of their series of International Conference Proceedings.


Calls for Contributions (chronological order):


* Grand Challenge Proposals. Deadline: February 21, 2015


* Workshop Proposals. Deadline: April 11, 2015


* Long and Short Papers. Deadline: May 15, 2015


* Doctoral Consortium Papers. Deadline: July 14, 2015


* Demonstration Proposals. Deadline: August 14, 2015

* Exhibit Proposals. Deadline: September 4, 2015





* Multimodal signal and interaction processing technologies

                - Multimodal signal processing, inference, and input fusion

                - Combinations of signals and semantic interpretations

                - Multimodal output planning and coordination

                - Machine learning approaches for multimodal signals

* Multimodal models for human-human and human-machine interaction

                - Multimodal models for human communication dynamics

                - Models for physically situated human-robot/computer interaction

                - Models for multiparty, group and social interaction

                - Affective computing and interaction models

                - Models for long-term multimodal interaction

* Multimodal data, evaluation and tools

                - Multimodal corpora, resources and tools

                - Evaluation methodologies, assessment and metrics

                - Multimodal annotation methodologies and coding schemes

                - Design issues, principles and best practices for multimodal interfaces

* Multimodal systems and applications

                - Ambient intelligence and smart environments

                - Human-robot interaction and embodied conversational agents

                - Multimodal interfaces for internet-of-things

                - Meeting spaces and meeting analysis systems

                - Multimodal mobile applications


For more information, please visit the conference website:



3-2-4(2016-05-31) Speech Prosody 2016, Boston University, Boston,MA, USA

Speech Prosody 2016, the eighth international conference on speech prosody, will be held at Boston University from May 31st to June 3rd, 2016. It invites papers addressing any aspect of the science and technology of prosody. Speech Prosody, the biennial meeting of the Speech Prosody Special Interest Group (SProSIG) of the International Speech Communication Association (ISCA), is the only recurring international conference focused on prosody as an organizing principle for the social, psychological, linguistic, and technological aspects of spoken language. Past conferences in Aix-en-Provence, Nara, Dresden, Campinas, Chicago, Shanghai and Dublin have each attracted 300-400 delegates, including experts in the fields of Linguistics, Computer Science, Electrical Engineering, Speech and Hearing Science, Psychology, and related disciplines.



Important Deadlines

  • Submission for Proposals for Special Sessions: Sept 15, 2015
  • Announcement of Special Sessions: Oct 1, 2015
  • Submission of Regular Papers: November 15, 2015
  • Notification of Acceptance (by email): January 15, 2016
  • Early Registration Deadline: February 15, 2016
  • Author's Registration Deadline: March 1, 2016
  • Conference: May 31-June 3, 2016

Review Areas


  1. Phonology and phonetics of prosody
  2. Prosody of under-resourced languages  and dialects
  3. Signal processing
  4. Audiovisual prosody modeling and analysis
  5. Rhythm and timing
  6. Communicative situation and speaking style
  7. Prosody in computational linguistics
  8. Prosodic aspects of speech and language pathology
  9. Acquisition of first language prosody
  10. Psycholinguistic, cognitive, neural correlates of prosody
  11. Syntax, semantics, and pragmatics
  12. Meta-linguistic and para-linguistic communication
  13. Prosody in language and music
  14. Voice quality, phonation, and vocal dynamics
  15. Prosody in Tone Languages
  16. Prosody of sign language
  17. Prosody in language contact and second language acquisition
  18. Prosody in automatic speech synthesis, recognition and understanding

3-2-5Forthcoming ISCA Tutorial and Research Workshops (ITRWs) & Sponsored Events

Forthcoming ISCA Tutorial and Research Workshops (ITRWs) & Sponsored Events


  • SIGDIAL 2015 : 16th Annual SIGdial meeting on Discourse and Dialogue (Interspeech 2015 satellite)
    2-4 September 2015, Prague, Czech Republic

  • HSCR2015 : Workshop on the History of Speech Communication Research
    5 September 2015, Technische Sammlungen, Dresden, Germany

  • SLaTE 2015 : Workshop on speech and Language Technology for Education (Interspeech 2015 satellite)
    4-5 September 2015, Leipzig, Germany as a Satellite Workshop to INTERSPEECH 2015

  • SLPAT 2015 : Workshop on Speech and Language Processing for Assistive Technologies
    11 September 2015, Dresden, Germany, as a Satellite Workshop to INTERSPEECH 2015

  • IWSR 2015 : International Workshop on Speech Robotics
    11 September 2015, as a Satellite Workshop to INTERSPEECH 2015

  • SAT4DH 2015 : Speech and Audio Technologies for the Digital Humanities
    11 September 2015, Leipzig, Germany, as a Satellite Workshop to INTERSPEECH 2015

  • ERRARE 2015 : Errors by Humans and Machines in multimedia, multimodal and multilingual data processing
    11-13 September 2015, Sinaia (Romania), as a Satellite Workshop to INTERSPEECH 2015

  • AVSP 2015 (FAAVSP) : The 1st Joint Conference on Facial Analysis, Animation and Auditory-Visual Speech Processing
    11-13 September 2015, Vienna, Austria, as a Satellite Workshop to INTERSPEECH 2015

  • MediaEval 2015 : Multimedia Evaluation Benchmark
    14-15 September 2015, as a Satellite Workshop to INTERSPEECH 2015

  • SPECOM 2015 : 17th international conference on Speech and Computer
    20-24 September 2015, Athens, Greece


3-3 Other Events
3-3-1(2015-08-20) IVA 2015 Doctoral Consortium, Delft, The Netherlands

IVA 2015 Doctoral Consortium, August 24th in Delft, The Netherlands

The Intelligent Virtual Agents (IVA) 2015 Doctoral Consortium will be held on
Monday, the 24th of August, two days before the main IVA conference (26th of
August - 28th of August, 2015). We invite and encourage PhD students working in
the area of virtual agents to present and discuss their work. PhD students
working on human-human and human-robot interaction with a relevance to
intelligent virtual agents are also encouraged to apply.

The doctoral consortium offers PhD students the opportunity to discuss and
present their research plans and progress to peers and experts in an interactive
way. During the doctoral consortium, several distinguished researchers will
provide feedback and guidance on the students' current research and future
research directions. Students who would like to benefit from this feedback and
guidance are welcome to apply. Students who have a clear topic and have already
made some progress are especially encouraged to apply.

Based on the student's application, the organizing committee will select a group
of students that will be invited to participate. The students selected are
expected to attend and to give an oral presentation at the Doctoral Consortium.
Students will also be given the possibility to present a poster during the
poster session of the main conference.

Submission Guidelines and Instructions
In order to apply for the doctoral consortium, the PhD student should follow
these submission guidelines and instructions:

1. An application describing the student’s current and planned research should
be submitted in the IVA paper format (see
2. Applications should be submitted as PDF documents (not exceeding 6 pages)
through the IVA submission system (select the
doctoral consortium option).

Applications should be well written and organized. They should clearly describe:

* Aim(s) and objective(s) of the research
* The challenge(s) that the student's research is addressing
* The approach and method(s) used by the student to address the objective(s) and
* The current status of the student's research and future directions
* Optional: Specific issues or questions the student seeks feedback on at the

Important dates
* Submission deadline: 15th of March 2015
* Notification date: 6th of April 2015

* Cost: 25 EUR

Contact and questions
Khiet Truong, University of Twente,
Hannes Vilhjalmsson, Reykjavik University,


3-3-2(2015-08-24) 4th International Workshop on Cyber Crime (IWCC 2015) (extended deadline)

4th International Workshop on Cyber Crime (IWCC 2015), co-located with the 10th
International Conference on Availability, Reliability and Security (ARES 2015)

Université Paul Sabatier
Toulouse, France
August 24-28, 2015

All accepted papers will be published in a special issue of the Security and
Communication Networks,
Wiley or EURASIP Journal on Information Security.

IWCC Overview

Today's societies are becoming increasingly dependent on open networks such as the
Internet, where commercial activities, business transactions and government services
are realized.
This has led to
the fast development of new cyber threats and numerous information security issues which
are exploited
by cyber criminals. The inability to provide trusted secure services in contemporary
computer network
technologies has a tremendous socio-economic impact on global enterprises as well as

Moreover, the frequently occurring international frauds impose the necessity to conduct
investigation of facts spanning across multiple international borders. Such examination
is often subject
to different jurisdictions and legal systems. A good illustration of the above being the
Internet, which
has made it easier to perpetrate traditional crimes. It has acted as an alternate avenue
for the
criminals to conduct their activities, and launch attacks with relative anonymity. The
complexity of the communications and the networking infrastructure is making
investigation of the crimes
difficult. Traces of illegal digital activities are often buried in large volumes of
data, which are
hard to inspect with the aim of detecting offences and collecting evidence. Nowadays, the
digital crime
scene functions like any other network, with dedicated administrators functioning as the

This poses new challenges for law enforcement policies and forces the computer societies
to utilize
digital forensics to combat the increasing number of cybercrimes. Forensic professionals
must be fully
prepared in order to be able to provide court-admissible evidence. To achieve these
goals, forensic techniques should keep pace with new technologies.

The aim of the 4th International Workshop on Cyber Crime is to bring together
research from academia and industry. A further goal is to present research results
in the field of digital forensics, together with the development of tools and
techniques that assist the investigation of potentially illegal cyber activity. We
encourage prospective authors to submit distinguished research papers on the
subject, covering both theoretical approaches and practical case reviews.

The workshop will be accessible both to non-experts interested in learning about
this area and to experts interested in hearing about new research and approaches.

Topics of interest include, but are not limited to:
- Cyber crimes: evolution, new trends and detection
- Cyber crime related investigations
- Computer and network forensics
- Digital forensics tools and applications
- Digital forensics case studies and best practices
- Privacy issues in digital forensics
- Network traffic analysis, traceback and
- Incident response, investigation and evidence handling
- Integrity of digital evidence and investigations
- Identification, authentication and collection of digital evidence techniques and methods
- Watermarking and intellectual property theft
- Social networking
- Steganography/steganalysis and covert/subliminal channels
- Network anomalies detection
- Novel applications of information hiding in networks
- Political and business issues related to digital forensics and anti-forensic techniques


Papers will be accepted based on peer review (3 reviews per paper) and should
contain original, high-quality work. All papers must be written in English.
Authors are invited to submit their papers according to the following guidelines:
two columns, single-spaced, including figures and references, using 10 pt fonts,
with each page numbered.

Authors are invited to submit Regular Papers (maximum 8 pages) via EasyChair
( Papers accepted by the workshop will
be published
in the Conference Proceedings published by IEEE Computer Society Press. Failure to adhere
to the page
limit and formatting requirements will be grounds for rejection.

Submission of a paper implies that should the paper be accepted, at least one of the
authors will
register and present the paper in the conference.


April 20, 2015 (extended!): Regular Paper Submission [submission link:]
May 10, 2015: Notification Date
June 1, 2015: Camera-Ready Paper Deadline

Wojciech Mazurczyk, Warsaw University of Technology, Poland
Krzysztof Szczypiorski, Warsaw University of Technology, Poland
Artur Janicki, Warsaw University of Technology, Poland (Publicity and Publication Chair)

Contact IWCC 2015 Chair using this email address:


3-3-3(2015-08-24) LVA 2015 - 12th International Conference on Latent Variable Analysis and Signal Separation,Liberec, Czech Republic (Extended deadline)


LVA 2015 - 12th International Conference on Latent Variable Analysis and Signal Separation

August 24-26, 2015, Liberec, Czech Republic


 *About LVA*

LVA 2015 will be the 12th in a series of international conferences which have attracted hundreds of researchers and practitioners over the years. Since its start in 1999 under the banner of Independent Component Analysis and Blind Source Separation (ICA), the conference has continuously broadened its horizons. Today it encompasses a host of additional forms and models of general mixtures of latent variables. Theories and tools borrowed from the fields of signal processing, applied statistics, machine learning, linear and multilinear algebra, numerical analysis and optimization, and numerous application fields offer exciting interdisciplinary interactions.


The conference will be preceded by a Summer School on Latent Variable Analysis and Signal Separation, and it will feature the much-awaited results of the 5th Signal Separation Evaluation Campaign (SiSEC 2015). Keynote talks will be given by three leading researchers:

- Tülay Adali (University of Maryland, Baltimore County, USA)
- Rémi Gribonval (Inria, France)
- DeLiang Wang (Ohio State University, USA)

*Call for Papers*

The proceedings will be published in Springer-Verlag's Lecture Notes in Computer Science series (LNCS). Prospective authors are invited to submit original papers (up to 8 pages in LNCS format) in areas related to latent variable analysis and signal separation, including but not limited to:

- Theory: sparse coding, dictionary learning; statistical and probabilistic modeling; detection, estimation and performance criteria and bounds; causality measures; learning theory; convex/nonconvex optimization tools
- Models: general linear or nonlinear models of signals and data; discrete, continuous, flat, or hierarchical models; multilinear models; time-varying, instantaneous, convolutive, noiseless, noisy, over-complete, or under-complete mixtures
- Algorithms: estimation, separation, identification, detection, blind and semi-blind methods, non-negative matrix factorization, tensor decomposition, adaptive and recursive estimation; feature selection; time-frequency and wavelet based analysis; complexity analysis
- Applications: speech and audio separation, recognition, dereverberation and denoising; auditory scene analysis; image segmentation, separation, fusion, classification, texture analysis; biomedical signal analysis, imaging, genomic data analysis, brain-computer interface
- Emerging related topics: sparse learning; deep learning; social networks; data mining; artificial intelligence; objective and subjective performance evaluation

*Special Sessions*

The program will also feature special sessions on new or emerging topics of interest. Proposals for special sessions must include the session title, rationale, outline, and a list of 4 to 6 invited papers. To submit, see

*Important Dates*

Jan 16, 2015: Submission of special session proposals

Jan 30, 2015: Special session decisions announced

Apr 10, 2015: Paper submission deadline

May 22, 2015: Notification of acceptance

Jun 12, 2015: Submission of camera-ready papers

Aug 26-28, 2015: Conference dates


*Organizing Committee*

General chairs: Zbynek Koldovsky (Technical University of Liberec, Czech Republic), Petr Tichavsky (Academy of Sciences, Czech Republic)
Program chairs: Arie Yeredor (Tel-Aviv University, Israel), Emmanuel Vincent (Inria, France)
Special sessions: Shoji Makino (University of Tsukuba, Japan)
SiSEC chair: Nobutaka Ono (NII, Japan)
Overseas liaison: Andrzej Cichocki (RIKEN, Japan)



3-3-4(2015-08-27) 25th Annual Conference of the European Second Language Association (EUROSLA 2015),Université d'Aix-Marseille, France

** Second Call for Papers **


UMR 7309 Laboratoire Parole et Langage (Université d'Aix-Marseille), in association with the Département de français langue étrangère (Pôle LLC, UFR ALLSHS, Université d'Aix-Marseille), is pleased to announce that it will host EUROSLA 25, the 25th Annual Conference of the European Second Language Association. The general theme of the conference is 'Second Language Acquisition: Implications for language sciences'. You are kindly invited to submit abstracts for papers, posters, thematic colloquia and the doctoral workshop related to this theme or to any other domain and subdomain of second language research.


The conference will start in the morning of 27 August 2015 and close at 12 noon on 29 August 2015. Preceding the conference, there will be a doctoral workshop and a Language Learning roundtable, both on 26 August 2015. The theme of this year's roundtable is 'SLA and theories of pidginization/creolization'.


Plenary speakers


-          Camilla BARDEL (Stockholm University)

-          Sandra BENAZZO (Université Paris 8)

-          Christine DIMROTH (Westfälische Wilhelms-Universität Münster)

-          Scott H. JARVIS (Ohio University)

-          Gabriele PALLOTTI (Università degli Studi di Modena e Reggio Emilia (UNIMORE))


Key dates:


- 1 February 2015: Early bird registration

- 27 February 2015: Abstract submission deadline

- 24 April 2015: Notification of acceptance

- 1 June 2015:  Full fee registration starts

- 18 July 2015: End of registration


Language Policy


EUROSLA 25 will be a bilingual conference (English and French); presentations in one of these languages are particularly encouraged. However, following the EUROSLA constitution, any other European language may also be used.


Abstract submission policy


Each author may submit no more than one single-authored and one co-authored (i.e. not first-authored) abstract to be considered for oral presentations, including colloquia and doctoral workshops. More than one abstract can be submitted for poster presentations. Paper and poster proposals should not have been previously published. All submissions will be reviewed anonymously by the scientific committee and evaluated in terms of rigour, clarity and significance of the contribution, as well as its relevance to second language research. Abstracts should not exceed 500 words (excluding the title, but including optional references).


Individual papers and posters


Papers will be allocated 20 minutes for presentation plus 5 minutes for discussion.

Poster sessions will be held in two 90-minute slots. In order to foster interaction, all other sessions will be suspended during the poster sessions.


Thematic colloquia


The thematic colloquia will be organised in two-hour slots running in parallel with other sessions. Each colloquium will focus on one specific topic and bring together contributions on that topic. Each thematic colloquium should include a maximum of 4 presentations. Colloquium convenors should allocate time for opening and closing remarks, individual papers, discussants (if included) and general discussion.


Doctoral student workshop


The doctoral student workshop is intended to serve as a platform for discussion of ongoing PhD research within any aspect of second language research. PhD students are invited to submit an abstract for a 10-15-minute presentation. The Doctoral workshop focuses on problems of methodology with regard to either data analysis (interpretation of natural conversation, statistical data, interviews, etc.) or research design (experimental design, corpus design, issues of data collection, etc.). These sessions are not intended as opportunities to present research results, but to discuss future directions. Students whose abstracts are accepted will be required to send their paper to a discussant (a senior researcher). The discussant will lead a 10-15-minute feedback/discussion session on their work.


Student stipends


As in previous years, several student stipends will be available for doctoral students. If you wish to apply, please send the following information to before 27 February 2015:


1. Name, institution, and address of institution;

2. Curriculum vitae (attached);

3. Official confirmation of a PhD student status;

4. Statement (email) from supervisor or head of department that the applicant's institution cannot (fully) cover the conference-related expenses.


Publication of papers


A selection of papers presented at EUROSLA 2015 will be published in the EUROSLA 25 or 26 Yearbook following a peer-review process. There is an annual prize for the best EUROSLA Yearbook article. This includes a framed certificate presented at the EUROSLA General Assembly, a fee waiver for the following EUROSLA conference and conference dinner, and free EUROSLA membership for a year.   


To submit an abstract please visit


3-3-5(2015-08-31) EUSIPCO 2015, NICE, COTE D' AZUR, FRANCE

31st AUGUST-4th SEPTEMBER 2015

Advance registration deadline: July 31, 2015

EUSIPCO is the flagship conference of the European Association for Signal
Processing (EURASIP).
The 23rd edition will be held in Nice, on the French Riviera, from 31st August to 4th September 2015. EUSIPCO 2015 will feature a high-quality program with world-class speakers, oral and poster sessions (600 accepted papers), 5 exceptional plenary keynotes, 8 great tutorials, 29 special sessions, 4 publisher stands, 2 company exhibitors and 2 industry workshops, and is already attracting over 700 leading researchers and industry figures from all over the world.

Register online now and avoid the onsite surcharge:

End of August is peak season in Nice, and finding available hotel rooms may be difficult. If you have not booked your room yet, please do so as soon as possible. Our PCO can help you obtain negotiated rates (while stocks last).


EUSIPCO's Banquet is not to be missed, check it out:
The banquet ticket price is lower than the retail price thanks to EUSIPCO 2015's generous sponsors, and we still have a limited number of banquet tickets on sale. Don't delay, reserve yours now:

The welcome reception, to be held on Monday evening, is free but
pre-registration is required.

To register for the conference, or to add an additional item (banquet, welcome reception, tutorial, hotel) to an existing registration, please visit:


With 600 accepted papers presented in oral and poster sessions, EUSIPCO 2015 offers a comprehensive and high-grade technical program.

Two 'EUSIPCO Best Student Paper Awards' will be presented at the conference
banquet. Papers will be selected by a committee composed of area and
technical chairs.

The program will feature 5 great plenary talks.
This list may be complemented by talks from newly elected EURASIP Fellows.

There are still seats available for the 8 tutorials.
Be quick, add a tutorial to your registration and avoid disappointment.

The program will also feature 29 Special Sessions. A list of accepted
Special Sessions can be found here:

EUSIPCO features 2 industry workshops.
Registration is mandatory. Details on the workshop page.

Nestled between the foot of the Alps and the Mediterranean Sea, the location
can be accessed easily from the Nice Cote d'Azur international airport,
France's busiest outside of Paris, with direct connections to almost 100
European destinations and 14 international destinations including New York
(JFK) and Dubai. The conference will be held at the Nice Acropolis
Convention Centre, named 'Europe's number one convention centre' for three
consecutive years. The Acropolis is located in the heart of the city only
minutes away from the Promenade des Anglais and the Baie des Anges.

General chairs
Jean-Luc Dugelay (EURECOM)
Dirk Slock (EURECOM)

Technical chairs
Marc Antonini (I3S/UNS/CNRS)
Nicholas Evans (EURECOM)
Cedric Richard (UNS/OCA)

Plenary Talks
Sergios Theodoridis (UoA)
Josiane Zerubia (INRIA)

Special Sessions
Marco Carli (U. Roma)
Thierry Dutoit (UMONS)
Jean-Yves Tourneret (IRIT/ENSEEIHT)

Touradj Ebrahimi (EPFL)
Patrick Naylor (Imperial)

Bernard Merialdo (EURECOM)

Benoit Huet (EURECOM)
Nikos Nikolaidis (AUTH)

Patrizio Campisi (U. Roma Tre)
Claude Delpha (U. Paris Sud)

Student activities
Christophe Beaugeant (Intel)
Ana Perez (UPC/CTTC)

Marc Moonen (KU Leuven, Belgium)

Lionel Fillatre (I3S/UNS/CNRS)
Mounir Ghogho (U. Leeds)

International Liaisons
Thierry Blu (CUHK)
Mohamed Deriche (KFUPM)
Douglas O'Shaughnessy (INRS)
Kenneth Rose (UCSB)

Local Arrangements
Ludovic Apvrille (Telecom ParisTech)

EURECOM, Campus SophiaTech, 450 Route des Chappes, 06410 Biot, FRANCE


3-3-6(2015-08-31) Joint Conference PEVOC & MAVEBA 2015, Firenze, Italia

Joint Conference PEVOC & MAVEBA 2015:  August 31 - September 4, 2015, Palazzo degli Affari, Piazza Adua 1, Firenze, Italy


3-3-7(2015-08-31) Young Researchers Roundtable on Spoken Dialogue Systems (YRRSDS), Prague, Czech Republic

Deadline Extension / 2nd Call for Papers



Young Researchers Roundtable on Spoken Dialogue Systems (YRRSDS)

August 31st - September 1st, 2015, Prague, Czech Republic


Held in conjunction with SIGdial, 2015



The Young Researchers' Roundtable on Spoken Dialogue Systems (YRRSDS) is an annual 2-day workshop designed for graduate students, postdocs, and junior researchers working in research related to spoken dialogue systems in both academia and industry. YRRSDS provides an international forum where participants can get feedback on their work and ideas, get hands-on experience with new tools and systems, network with future employers, and hear from outstanding invited speakers. YRRSDS 2015 will be hosted by Charles University in Prague, Czech Republic, and will take place from August 31st - September 1st, directly before SIGdial.


YRRSDS 2015 will feature invited talks from senior researchers, a career panel representing both academia and industry, technical talks from sponsors, a demo and poster session, roundtable discussions, tutorials, and other exciting activities.





We invite broad participation. Example submission topics include, but are not limited to:


- Models of dialogue: statistical, symbolic, and hybrid approaches

- Evaluation methodology for dialogue systems

- Semantics, pragmatics, and context in dialogue systems

- Incremental spoken dialogue systems

- Situated interaction with virtual and robotic agents

- Psycholinguistic influences on dialogue system design

- Establishing social relationships and engagement with the user

- Data collection and dataset sharing for statistical models

- Industry development cycles, requirements, and applications


Important Dates


Submission deadline: July 10th

Author notification: July 17th


Submission Information



We invite any researcher at a relatively early stage of their career (no age limit) to submit a 2-page position paper by 10 July 2015 at the latest. This should include their past, present and future work, a short bio, and topic suggestions for discussions. Acceptance is on a first-come, first-served basis, and the number of participants is generally capped at 30. All participants are expected to present a poster; posters need only present current work and need not be based on a published paper.


Formatting directions are provided at:


Please submit via the EasyChair system:


If you experience any problems with the submission process, please contact the organizers at





There is a $60 (USD) fee covering organizational costs and proceedings.


3-3-8(2015-09-01) MultiLing 2015:Multilingual Summarization of Multiple Documents, Online Fora and Call Centre Conversations, Prague, Czech Republic

= Call for Participation =

= MultiLing 2015: Multilingual Summarization of Multiple Documents,
          Online Fora and Call Centre Conversations =

= Introduction =

From Caesar's `Veni, Vidi, Vici' to `What might be in a summary?'
(Karen Sparck-Jones, 1993) summarization techniques have been key to
successfully grasping the main points of large amounts of information,
and much research has been devoted to improving such techniques.  In
the past two decades, the progress of summarization research has been
supported by evaluation exercises and shared tasks such as DUC, TAC
and, more recently, MultiLing (2011, 2013). MultiLing is a
community-driven initiative for benchmarking multilingual
summarization systems, nurturing further research, and pushing the
state-of-the-art in the area.  The aim of MultiLing 2015 is to
continue this evolution and, in addition, to introduce new tasks
promoting research on summarizing free human interaction in online
fora and customer call centres. With this call we wish to invite the
summarization research community to participate in MultiLing 2015.

= The Tasks =

MultiLing 2015 will feature the Multilingual Multi-document Summarization
task familiar from previous editions and its predecessor, the Multilingual
Single-document Summarization. In addition, we will pilot two new tracks,
Online Forum Summarization (OnForumS) and Call Centre Conversation
Summarization (CCCS), in collaboration with the SENSEI EU project.
We describe each task in turn below.

== Multilingual Multi-document Summarization (MMS) ==

The multilingual multi-document summarization track aims to evaluate the
application of (partially or fully) language-independent summarization
algorithms on a variety of languages. Each system participating in the track
will be called to provide summaries for a range of different languages,
based on a news corpus. Participating systems will be required to
apply their methods to a minimum of two languages.
Evaluation will favor systems that apply their methods to more languages.

The corpus used in the multilingual multi-document summarization track
will be based on WikiNews texts. Source texts will be UTF-8, clean
texts (without any mark-up, images, etc.).

The task requires systems to generate a single, fluent, representative
summary from a set of documents describing an event sequence. The language of
the document set will be within a given range of languages and all documents
in a set share the same language. The output summary should be in the same
language as its source documents, and should be 250 words at most.

== Multilingual Single-document Summarization (MSS) ==

Following the pilot task of 2013, the multilingual single-document
task is to generate a single-document summary for each of the given
Wikipedia featured articles, in any of the roughly 40 languages provided.
The training data will be the Single-Document Summarization Pilot Task
data from MultiLing 2013. A new set of test data will be generated
based on additional Wikipedia featured articles.
For each language, 30 documents are given; the documents will be UTF-8
without mark-up or images. For each document of the training set, the
human-generated summary is provided. For MultiLing 2015, the character
length of the human summary for each document, called the target length,
will also be provided. Each machine summary should be as close to the
target length as possible. For the purpose of evaluation, all machine
summaries longer than the target length will be truncated to the target
length. The summaries will be evaluated via automatic methods, and
participants will be required to perform some limited summarization
evaluations.
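The truncation rule described above can be sketched in a few lines of Python. This is a hypothetical illustration only; the function name is invented and the official evaluation scripts may work differently:

```python
def truncate_to_target(summary: str, target_length: int) -> str:
    """Cut a machine summary down to the target character length
    used in MSS evaluation; shorter summaries pass through unchanged."""
    if len(summary) <= target_length:
        return summary
    return summary[:target_length]

# A 300-character summary evaluated against a 250-character target
# is truncated to exactly 250 characters.
print(len(truncate_to_target("x" * 300, 250)))  # 250
```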

The manual evaluation will consist of pairwise comparisons of
machine-generated summaries. Each evaluator will be presented with the
human-generated summary and two machine-generated summaries. The
evaluation task is to read the human summary and then judge whether one
machine-generated summary is significantly closer to the human-generated
summary's information content (e.g. system A > system B or
system B > system A), or whether the two machine-generated summaries
contain comparable quantities of information relative to the
human-generated summary.

== Online Forum Summarization (OnForumS) ==

Most major on-line news publishers, such as The Guardian or Le Monde,
publish articles on different topics and encourage reader engagement
through the provision of an on-line comment facility. A given news
article can often give rise to thousands of reader comments -- some
related to specific points within the article, others that are replies
to previous comments. The great volume of such user-supplied comments
suggests the need for automated methods to summarize this content,
which in turn poses an exciting and novel challenge for the
summarization community.

The purpose of the Online Forum Summarization (OnForumS) track at
MultiLing'15 is to set the ground for investigating how such a  mass
of comments can be summarised. We posit that a crucial initial step in
developing reader comment summarization systems is to determine what
comments relate to, be that either specific points within the text of
the article, the global topic of the article, or comments made by
other users. This constitutes a linking task. Furthermore, a set of
link types or labels may be articulated to capture whether, for
example, a comment agrees with, elaborates, disagrees with, etc., the
point made in the commented-upon text. Solving this labelled linking
problem should facilitate the creation of reader comment summaries by
allowing, for example, that comments relating to the same article
content can be clustered, points attracting the most comment can be
identified, representative comments can be chosen for each key point,
and the implications of labelled links can be digested (e.g., numbers
for or against a particular point), etc.

The OnForumS task at MultiLing'15 is a particular specification of the
linking task, in which systems will take as input a news article with
a reduced set of comments (sifted, according to predefined criteria,
from what could otherwise be thousands of comments) and are asked to
link and label each comment to sentences in the article (which, for
simplification, are assumed to be the appropriate units here), to the
article topic as a whole, or to preceding comments. Precise guidelines
for when to link and for the link types, will be released as part of
the formal task specification, but we anticipate the condition for
linking will require sentences addressing the same assertion, and that
link types will include at least agreement, disagreement,  and
sentiment indicators. The data will cover at least three
languages (English, Italian, and French); a small set of
link-labelled articles will be provided by the SENSEI project
for each of these languages for illustration and for
development. Additional languages may be covered if the data for these
are provided by the participants in the task. These data could be
either translations of the data for other languages, or comparable
articles *on the same topics*.
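To make the labelled linking output concrete, here is a minimal sketch of how one such link might be represented. The field and label names are illustrative assumptions, not part of the official task specification:

```python
from dataclasses import dataclass

@dataclass
class CommentLink:
    """One labelled link from a reader comment to its target: an
    article sentence, the article topic as a whole ("ARTICLE"),
    or a preceding comment."""
    comment_id: str
    target_id: str  # a sentence id, "ARTICLE", or another comment id
    label: str      # e.g. "agreement", "disagreement", "sentiment"

# A comment disagreeing with sentence 7 of the article
link = CommentLink(comment_id="c42", target_id="s7", label="disagreement")
print(link.label)  # disagreement
```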

Evaluation will be based on the results of a crowd-sourcing exercise,
in which crowd workers are asked to judge whether potential links, and
associated labels, are correct for each given test article plus
associated comments.

== Call Centre Conversation Summarization (CCCS) ==

Speech summarization has been of great interest to the community
because speech is the principal modality of human communications and
it is not as easy to skim, search or browse speech transcripts as it
is for textual messages. Speech recorded from call centers offers a
great opportunity to study goal-oriented and focused conversations
between an agent and a caller. The Call Centre Conversation
Summarization (CCCS) task consists in automatically generating
summaries of spoken conversations in the form of textual synopses that
shall inform on the content of a conversation and might be used for
browsing a large database of recordings. Compared to news
summarization where extractive approaches have been very successful,
the CCCS task's objective is to foster work on abstractive
summarization in order to depict what happened in a conversation
instead of what people actually said.

The MultiLing'15 CCCS track leverages conversations from the DECODA
and LUNA corpora of French and Italian call center recordings, both
with transcripts available in their original language as well as
English translation (both manual and automatic). Recording durations
range from a few minutes to 15 minutes, involving two or sometimes
more speakers. In the public transportation and help desk domains, the
dialogues offer a rich range of situations (with emotions such as anger
or frustration) while staying within a coherent domain.

Given transcripts, participants in the task shall generate abstractive summaries
informing a reader about the main events of the conversations, such as
the objective of the caller, whether and how it was solved by the
agent, and the attitude of both parties. Evaluation will be performed
by comparing submissions to reference synopses written by experts.
Both conversations and reference summaries are kindly provided by the
SENSEI project.

= How can I participate? =
For now you only need to fill in your contact details in the following form:
Make sure you also visit the MultiLing community website:

= Roadmap =
Finalization pending.
(PLEASE PROVIDE FEEDBACK on the submission dates, if you plan to participate,
by e-mailing: ggianna AT iit DOT demokritos DOT gr.)

* Training data ready: (date to be finalized per task) Dec 12th, 2014
* Test data available: Feb 15th, 2015
* System submissions due: Feb 28th, 2015
* Evaluation starts: Mar 1st, 2015
* Evaluation ends: Mar 31st, 2015
* Paper submission due: May 1st, 2015
* Paper reviews due: May 15th, 2015
* Camera-ready due: Jun 15th, 2015
* Workshop:  Sep 1st , 2015

*NOTE*: Individual task dates may differ. Please check the MultiLing
website for more information.

= Venue =
(Finalization pending)
Collocated with SIGDIAL, Prague, Czech Republic

= Program Committee Members =
(Full list of PC members pending)

The Program Committee members are:
George Giannakopoulos - NCSR Demokritos (overall chair, MMS Task chair)
Jeff Kubina, John Conroy - IDA Center for Computing Sciences (MSS Task chairs)
Mijail Kabadjov - University of Essex (OnForumS Task co-chair)
Josef Steinberger - University of West Bohemia, Czech Republic
(OnForumS Task co-chair)
Benoit Favre - University of Marseille (CCCS Task co-chair)
Udo Kruschwitz and Massimo Poesio - University of Essex
Emma Barker, Rob Gaizauskas and Mark Hepple - University of Sheffield
Vangelis Karkaletsis - NCSR Demokritos
Fabio Celli - University of Trento

Data Contributors (from MultiLing 2013)
Georgios Petasis, George Giannakopoulos - NCSR 'Demokritos', Greece
Josef Steinberger - University of West Bohemia, Czech Republic
Mahmoud El-Haj - Lancaster University, UK
Ahmad Alharthi - King Saud University, Saudi Arabia
Maha Althobaiti - Essex University, UK
Corina Forascu - Romanian Academy Research Institute for Artificial
         Intelligence (RACAI), and Alexandru Ioan Cuza University of
Iasi (UAIC), Romania
Jeff Kubina, John Conroy, Judith Shleshinger - IDA/Center for
Computing Sciences, USA
Lei Li - Beijing University of Posts and Telecommunications (BUPT), China
Marina Litvak - Sami Shamoon College of Engineering, Israel
Sabino Miranda - Center for Computing Research, Instituto Politécnico
Nacional, Mexico


3-3-9(2015-09-02) GESPIN 2015 Gesture and Speech in Interaction, Nantes, France (registration open)


Gesture and Speech in Interaction

2 - 4 September 2015

Universite de Nantes - FRANCE


First Call for Papers

After Poznań in Poland, Bielefeld in Germany and Tilburg in the Netherlands, the fourth edition of GESPIN will be held in Nantes, France. GESPIN is an international conference on how gesture and speech work together to achieve various goals. This edition will focus especially on “combined units of meaning in gesture and speech”. The following issues may be of particular interest:


· Mapping of units in different semiotic modes

· Overlapping of units across modalities

· Affordances and relevance of different unit types

· Multimodal models of cognition

· Transliteration of units

· Gesture and speech in development

· Gesture and speech in dialogue

· Multimodal language learning and teaching

Yet, papers on all other topics related to the combination of speech and gesture are welcome as well. We also invite proposals for tutorials and hands-on data sessions. Papers and tutorial reports will be published online.

Keynote speakers

· Alan Cienki (FU Amsterdam)

· Jean-Marc Colletta (U. Grenoble)

· Ellen Fricke (TU Chemnitz, Germany)

· Judith Holler (MPI, Nijmegen)

Important dates

· Deadline for full papers and workshop proposals: April 22, 2015 (extended)

Info on submission on

· Acceptance of papers & workshops: June 15, 2015

· Revised version of accepted papers: July 15, 2015

· Gespin conference: September 2-4, 2015


Faculte des Langues et Cultures Etrangeres (FLCE)

Universite de Nantes

Chemin de la Censive du Tertre

44312 Nantes


Registration fees

· Students: 80 €

· Academics: 150 €

The conference fee will cover the online publication cost of the proceedings, the conference package, snacks and drinks during breaks, as well as the conference dinner and social program.


Please submit full papers (6 pages maximum), written in English (see the submission link on the website for the submission procedure and paper template). Papers will be sent to two reviewers, and the final selection will be discussed collectively by the organizing committee.

Organizing committees

Local organizing committee

· Gaelle Ferre (principal organizer)

· Mark Tutton (principal organizer)

· Manon Lelandais (conference secretary)

· Benjamin Lourenco (conference secretary)

Scientific board

· Mats Andren (U. Lund, Sweden)

· Dominique Boutet (Evry, France)

· Jana Bressem (TU Chemnitz, Germany)

· Heather Brookes (U. Cape Town, South Africa)

· Alan Cienki (FU Amsterdam, The Netherlands)

· Doron Cohen (U. Manchester, UK)

· Jean-Marc Colletta (U. Grenoble, France)

· Gaelle Ferre (U. Nantes, France)

· Ellen Fricke (TU Chemnitz, Germany)

· Alexia Galati (U. Cyprus)

· Marianne Gullberg (U. Lund, Sweden)

· Daniel Gurney (U. Hertfordshire, UK)

· Simon Harrison (U. Nottingham Ningbo, China)

· Judith Holler (MPI, The Netherlands)

· Ewa Jarmołowicz (Adam Mickiewicz University, Poland)

· Konrad Juszczyk (Adam Mickiewicz University, Poland)

· Maciej Karpiński (Adam Mickiewicz University, Poland)

· Sotaro Kita (U. Warwick, UK)

· Stefan Kopp (U. Bielefeld, Germany)

· Emiel Krahmer (U. Tilburg, The Netherlands)

· Anna Kuhlen (U. Humboldt Berlin, Germany)

· Silva H. Ladewig (Europa-Universitat Frankfurt, Germany)

· Maarten Lemmens (U. Lille 3, France)

· Zofia Malisz (U. Bielefeld, Germany)

· Irene Mittelberg (HUMTEC Aachen, Germany)

· Asli Ozyurek (MPI, The Netherlands)

· Katharina J. Rohlfing (U. Bielefeld, Germany)

· Gale Stam (National Louis University, USA)

· Marc Swerts (U. Tilburg, The Netherlands)

· Michał Szczyszek (Adam Mickiewicz University, Poland)

· Marion Tellier (U. Aix en Provence, France)

· Mark Tutton (U. Nantes, France)

· Petra Wagner (U. Bielefeld, Germany)

GESPIN conference board

· Ewa Jarmołowicz (Adam Mickiewicz University, Poland)

· Konrad Juszczyk (Adam Mickiewicz University, Poland)

· Maciej Karpiński (Adam Mickiewicz University, Poland)

· Zofia Malisz (U. Bielefeld, Germany)

· Katharina J. Rohlfing (U. Bielefeld, Germany)

· Michał Szczyszek (Adam Mickiewicz University, Poland)

· Petra Wagner (U. Bielefeld, Germany)


3-3-10(2015-09-03) Satellite of SLaTE 2015: L1 Teaching, Learning and Technology, Leipzig, Germany

Deadline Extension: paper submission deadline is now May 25, 2015

Leipzig, Sept. 3: Satellite of SLaTE 2015: L1 Teaching, Learning and Technology

co-located with SLATE (Leipzig) and INTERSPEECH (Dresden)


The aim of this 1-day SoS (Satellite of a Satellite) workshop is to bridge the gap between researchers in education and researchers in speech and text processing technology by organising a joint event where researchers from one workshop are able to visit the other workshop to get an idea of the respective positions on the state of the art on the topic of language and technology in education.

The SoS workshop intends to join researchers across countries on the topic of language teaching/learning. In contrast to SLaTE, papers submitted here do not have to employ any technology yet. We are looking for contributions from users that may not be aware of all the possibilities that the technologies have to offer to solve educational research problems. What these papers bring to the table are problem statements and data collections that the speech and text processing community may in turn not be aware of. Thus we are looking for symbioses between the two disciplines in research about learning/teaching language. It is important for both areas to get to know each other's research questions and potential application for technologies.

Key to this is the collocation of the event with SLaTE (which focuses on technology for education), allowing you to meet people with similar interests, share your work, and forge new interactions across disciplines. In doing so, we are looking for a broad range of contributions from didactics, psychology and pedagogy, from researchers interested in bridging the current gap to automation. Demonstrations as well as samples of data collections and annotations are welcome.

In order to join the two communities of SLaTE (Spoken Language Technology for Education) and Education in discussions regarding the possibilities of applying this technology to educational questions and datasets, we invite SLaTE attendees to attend the discussions in our workshop, and our attendees to attend talks on the first morning of SLaTE. We thus hope to foster new connections and open up innovative links between technology and education.

Invited Speaker: 'Visualising Multiple Sources of Learning Data for Learners and Teachers in the Language Context', Susan Bull, University of Birmingham, UK


Topics of Interest:

Note: the maximum length is 8 pages, but papers need not reach this limit.

  • Data collection, methods, annotation, recognition, analysis, diagnostics, progression of skills, for example in:

  • Handwriting

  • Spoken interaction

  • Story telling

  • Text production

  • Spelling errors

  • Evaluation of L1/L2 teaching methods

  • Teaching L2 Kids in an L1 class environment

  • Models of learning

  • Applications for teaching, self-learning, classroom learning

  • Giving Feedback

  • Technology in the classroom

  • Games





3-3-11(2015-09-04) Workshop on Speech and Language Technology for Education (SLaTE 2015)

Workshop on Speech and Language Technology for Education (SLaTE 2015)

Satellite event of Interspeech 2015


The ISCA (International Speech Communication Association) Special Interest Group (SIG) on Speech and Language Technology in Education (SLaTE) promotes the use of speech and language technology for educational purposes, and provides a forum for exchanging information regarding recent developments and other matters of interest related to this topic. For further information please visit

The upcoming Sixth Workshop on Speech and Language Technologies for Education (SLaTE 2015) will be organized by the Pattern Recognition Lab of Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) in cooperation with Hochschule für Telekommunikation Leipzig (HfTL).

The workshop will be held in Leipzig, September 4–5, 2015. It is a satellite event of the 16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015), which will take place afterwards in Dresden, September 6–10, 2015. Dresden is only 120 km away from Leipzig and can be reached easily within 72 minutes by train (ICE).

If you are interested, please download our flyer or our posters (poster 1 and poster 2). We presented them at the INTERSPEECH 2015 booth at INTERSPEECH 2014 in Singapore.

We are looking forward to welcoming you in Leipzig!


3-3-12(2015-09-05) Workshop on the History of Speech Communication, Dresden, Germany

Workshop on the History of Speech Communication,  (Sig-Hist)

Technische Sammlungen,

Dresden, Germany

Organizers:  Rüdiger Hoffmann

                  Jürgen Trouvain


3-3-13(2015-09-06 ) INTERSPEECH 2015 Special Session on Synergies of Speech and Multimedia Technologies

Call for paper: submission for INTERSPEECH 2015 Special Session on
Synergies of Speech and Multimedia Technologies

Paper submission deadline: March 20, 2015
Special Session page:

Growing amounts of multimedia content are being shared or stored in
online archives. Alternative research directions in the speech
processing and multimedia analysis communities are developing and
improving speech or multimedia processing technologies in parallel,
often using each other's work as 'black boxes'. However, genuine
combination would appear to be a better strategy to exploit the
synergies between the modalities of content containing multiple
potential sources of information.

This session seeks to bring together the speech and multimedia research
communities to report on current work and to explore potential synergies
and opportunities for creative research collaborations between speech
and multimedia technologies. From the speech perspective the session
aims to explore how fundamentals of speech technology can benefit
multimedia applications, and from the multimedia perspective to explore
the crucial role that speech can play in multimedia analysis.

The list of topics of interest includes (but is not limited to):

- Navigation in multimedia content using advanced speech analysis features;
- Large scale speech and video analysis
- Multimedia content segmentation and structuring using audio and visual cues;
- Multimedia content hyperlinking and summarization;
- Natural language processing for multimedia;
- Multimodality-enhanced metadata extraction, e.g. entity extraction,
keyword extraction, etc;
- Generation of descriptive text for multimedia;
- Multimedia applications and services using speech analysis features;
- Affective and behavioural analytics based on multimodal cues;
- Audio event detection and video classification;
- Multimodal speaker identification and clustering.

Important dates:

20 Mar 2015 paper submission deadline
01 Jun 2015 paper notification of acceptance/rejection
10 Jun 2015 paper camera-ready
20 Jun 2015 early registration deadline
6-10 Sept 2015 Interspeech 2015, Dresden, Germany

Submission takes place via the general Interspeech submission
system. Paper contributions must comply with the INTERSPEECH paper
submission guidelines.
There will be no extension to the full paper submission deadline.
We look forward to receiving your contribution!


- Maria Eskevich, Communications Multimedia Group, EURECOM, France
- Robin Aly, Database Management Group, University of Twente, The
Netherlands
- Roeland Ordelman, Human Media Interaction Group, University of Twente,
The Netherlands
- Gareth J.F. Jones, CNGL Centre for Global Intelligent Content, Dublin
City University, Ireland


3-3-14(2015-09-11) FAAVSP - The 1st Joint Conference on Facial Analysis, Animation and Audio-Visual Speech Processing, Vienna, Austria

AVSP 2015 - FAA 2015 - Call for Participation


FAAVSP - The 1st Joint Conference on Facial Analysis, Animation and Audio-Visual Speech Processing
The International Symposium on Facial Analysis and Animation (FAA)
The International Conference on Auditory-Visual Speech Processing (AVSP)

The conference program is now available online at 
Early registration until 31.8.2015 

11-13 September, 2015 - Vienna, Austria 


* Invited Speakers 
Volker Helzle (Filmakademie Baden-Württemberg, Institute of Animation)
Veronica Orvalho (University of Porto, Department of Computer Science)
Jean-Luc Schwartz (GIPSA Lab, Grenoble)
Frank Soong (Microsoft Research Asia)
Lijuan Wang (Microsoft Research Asia)

* Social Events:
11 September 2015: Wine Tavern 'Zur Wildsau'
12 September 2015: Wambacher Restaurant


* Important Dates:
11-13 September 2015: Conference




* Description

This year the AVSP 2015 conference will be held in conjunction with the Facial Analysis and Animation (FAA) conference, i.e.,
AVSP + FAA = FAAVSP, 'The 1st Joint Conference on Facial Analysis, Animation and Audio-Visual Speech Processing'.

The first day will be devoted to FAA and the next two days to AVSP (we hope people attend both!). Keynotes will present topics relevant to both communities.

This conference brings together two established interdisciplinary conferences:

The International Symposium on Facial Analysis and Animation (FAA)
The International Conference on Auditory-Visual Speech Processing (AVSP)

Both conferences have a common focus on facial communication research.

FAA focuses on facial animation analysis and synthesis addressed in the fields of computer graphics, computer vision and psychology.

AVSP focuses on how auditory and visual speech information plays a role in human perception, machine recognition, and human-machine interaction.

The two conferences attract researchers from diverse fields, such as speech processing, computer graphics and computer vision, psychology, neuroscience, linguistics, robotics and electrical engineering.

The aim of this first joint conference is to bring together, from both academia and industry, the two communities of facial animation (FAA) and audiovisual speech (AVSP) to discuss research and exchange ideas, data and experiences.

* Topics
- Acquisition of Facial Shape, Motion and Texture
- Facial animation and rendering techniques
- Facial Model Based Coding and Compression
- Facial Analysis and Animation for Mobile Applications
- Embodied Virtual Agents
- Visual and Audiovisual Speech Synthesis
- Human and machine recognition of audio-visual speech
- Human and machine models of multimodal integration
- Multimodal and perceptual processing of facial animation and audiovisual events
- Cross-linguistic studies of audio-visual speech processing
- Developmental studies of audio-visual speech processing
- Audio-visual prosody
- Emotion and Expressivity modeling
- Gestures accompanying speech and non-linguistic behavior
- Neuropsychology and neurophysiology of audio-visual speech processing
- Scene analysis using audio and visual speech information
- Data collection and corpora for audio-visual speech processing

The conference will be held in Vienna, Austria, 11.-13. September 2015.
The session on September 11 will be devoted to FAA topics and those on September 12-13 to AVSP topics.
The keynotes will present topics relevant to both communities.

The FAAVSP organizers 


3-3-15(2015-09-11) Workshop on Bio-inspired Cyber Security & Networking (BCSN 2015), Ghent, Belgium


Workshop on Bio-inspired Cyber Security & Networking (BCSN 2015) co-located with 27th
International Teletraffic Congress (ITC 2015)

Workshop website:

Het Pand
Ghent, Belgium
September 11th, 2015


Nature is probably the most amazing and recognized invention machine on Earth. Its
ability to address complex and large-scale problems has developed over many years
of selection, genetic drift and mutation. As a result, it is not surprising that
natural systems continue to inspire inventors and researchers.

Nature's footprint is present in the world of Information Technology, where there is an
astounding number of computational bio-inspired techniques. These well-regarded
approaches include genetic algorithms, neural networks and ant algorithms, to name
just a few. For example, several network management and security technologies have
successfully adopted nature's approaches, which take the form of swarm intelligence,
artificial immune systems, sensor networks, etc.

Nature has also developed an outstanding ability to recognize individuals or foreign
objects in order to protect a group or a single individual. These abilities hold
great promise for improving security and networking.

The aim of this workshop is to bring together research accomplishments from
academia and industry, and to present the latest research results in the field of
bio-inspired security and networking.

The workshop will be accessible both to non-experts interested in learning about this
area and to experts interested in hearing about new research and approaches.

Topics of interest include, but are not limited to:
- Bio-inspired security and networking algorithms & technologies
- Moving-target techniques
- Bio-inspired anomaly & intrusion detection
- Adaptation algorithms
- Biometrics
- Biomimetics
- Artificial Immune Systems
- Adaptive and Evolvable Systems
- Machine Learning, neural networks, genetic algorithms for cyber security & networking
- Prediction techniques
- Expert systems
- Cognitive systems
- Sensor networks
- Information hiding solutions (steganography, watermarking) for network traffic
- Cooperative defense systems
- Theoretical development in heuristics
- Management of decentralized networks
- Bio-inspired algorithms for dependable networks


Submissions are limited to 6 pages in 2-column IEEE conference style with a minimum
font size of 10 pt. Papers will appear in the conference proceedings and will be
available on IEEE Xplore. Submissions are handled through the EDAS system. Upon
final camera-ready paper submission, authors will need to sign an IEEE copyright
form for each accepted paper to comply with IEEE policy.

Submission of a paper implies that, should the paper be accepted, at least one of
the authors will register and present the paper at the conference.

Extended versions of all papers accepted for BCSN will be published in a special
issue of a JCR-indexed journal (tentative).


May 22, 2015 (extended!): Regular Paper Submission
June 15, 2015: Notification Date
July 1, 2015: Camera-Ready Paper Deadline


Wojciech Mazurczyk, Warsaw University of Technology, Poland
Errin W. Fulp, Wake Forest University, USA
Hiroshi Wada, Unitrends, Australia
Krzysztof Szczypiorski, Warsaw University of Technology, Poland

Contact the BCSN 2015 chairs using this email address:



3-3-16(2015-09-11) Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), Dresden, Germany

SLPAT 2015

Workshop on Speech and Language Processing for Assistive Technologies (SLPAT)

11th September 2015, co-located with Interspeech 2015, Dresden, Germany

Submission deadline:  8th June 2015



We are pleased to announce the first call for papers for the sixth Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), to be held on Friday 11 September 2015, co-located with Interspeech 2015, Dresden, Germany. Full details on the workshop, topics of interest, timeline, and formatting of regular papers are here:



This workshop will bring together researchers from all areas of speech and language technology with a common interest in making everyday life more accessible for people with physical, cognitive, sensory, emotional, or developmental disabilities. The workshop will provide an opportunity for individuals from both research communities, and the individuals with whom they are working, to share research findings and to discuss present and future challenges, as well as the potential for collaboration and progress. General topics include but are not limited to:


         • Speech synthesis and speech recognition for physical or cognitive impairments

         • Speech transformation for improved intelligibility

         • Speech and language technologies for daily assisted living and Ambient Assisted Living (AAL)

         • Translation systems: to and from speech, text, symbols and sign language

         • Novel modeling and machine learning approaches for Augmentative and Alternative Communication (AAC) / Assistive Technologies (AT) applications

         • Text processing for improved comprehension, e.g., sentence simplification or TTS

         • Silent speech: speech technology based on sensors without audio

         • Symbol languages, sign languages, nonverbal communication

         • Dialogue systems and natural language generation for assistive technologies

         • Multimodal user interfaces and dialogue systems adapted to assistive technologies

         • NLP for cognitive assistance applications

         • Presentation of graphical information for people with visual impairments

         • Speech and NLP applied to typing interface applications

         • Brain-computer interfaces for language processing applications

         • Speech, natural language and multimodal interfaces to assistive technologies

         • Assessment of speech and language processing within the context of AT

         • Web accessibility: text simplification, summarization, and adapted presentation modes such as speech, signs or symbols

         • Deployment of speech and NLP tools in the clinic or in the field

         • Linguistic resources: corpora and annotation schemes

         • Evaluation of systems and components, including methodology

         • Other topics in AAC and AT


Please contact the conference organizers at with any questions.


Important dates:

  • 8 June: Papers due
  • 15 July: Notification of acceptance
  • 1 August: Camera-ready papers due
  • 11 September: SLPAT workshop



Frank Rudzicz, PhD

   Scientist, Toronto Rehabilitation Institute;

   Assistant professor, Department of Computer Science,

         University of Toronto;

   Founder and Chief Executive Officer, Thotra Incorporated

   Director, SPOClab (signal processing and oral communications)

|| Website:

|| Phone (office) : 416 597 3422 x7971

|| Fax : 416 597 3031



3-3-17(2015-09-12) Errors by Humans and Machines in multimedia, multimodal and multilingual data processing – ERRARE 2015, Sinaia (Romania), UPDATED

Errors by Humans and Machines in multimedia, multimodal and multilingual data processing – ERRARE 2015

12-13 September 2015, Sinaia (Romania)

The Research Institute for Artificial Intelligence “Mihai Draganescu” (ICIA) of the Romanian Academy, in collaboration with IMMI-CNRS, LIMSI-CNRS and the Labex EFL, organizes the second edition of the “Errare” workshop on September 12-13, 2015, as a satellite event of Interspeech 2015.

The workshop will be organized around the topic of errors produced and processed by humans and machines in multimedia, multimodal and multilingual data with a particular focus on spoken language. It distinguishes itself from other conferences addressing these issues by providing a forum for dialogue and exchange between researchers working in linguistics, including psycho- and neurolinguistics, on the one hand, and researchers in computer science, machine learning and multimedia speech and language processing, on the other hand. 
For this interdisciplinary workshop, we would like to gather these different communities around the issues of variation, ambiguity and errors in speech and language.  The purpose of this workshop is to share interdisciplinary expertise on a heterogeneous phenomenon referred to as “variation” and “ambiguity” in some domains and as “errors” in others. Researchers are invited to share their thoughts and observations through case studies run in the context of various initiatives.

A large panel of research areas shares a common object of study: human language. These areas encompass historically well-established research communities: classical humanities and social sciences (phonetics, phonology, psycholinguistics, etc.), and more recent domains of the sciences (brain and computer science). Research objectives include analyzing, modeling, understanding and theorizing the human processing of speech variation. For linguists and psycholinguists, variation in speech involves some matching process between variable surface forms and stable underlying forms: in such a framework, errors may naturally arise as mismatches occurring at the interface of surface and underlying representations. Yet by which mechanisms errors may arise, and how to interpret the patterning of errors within theoretical models of speech production and perception, has been a matter of controversy. Speech error research in recent years has particularly highlighted the fuzzy boundary between the concepts of 'variability', 'ambiguity' and 'error'.
Research activities most often include corpora consisting of various types of recorded speech from controlled (laboratory) speech to large scale data. Such corpora may be a result of a variety of capturing techniques from standard audio recordings to multi-sensor capturing of either articulation gestures or brain activities. Errors can also be envisioned as a result of noisy data capturing conditions.

Sharing experience with errors, variation and ambiguity is expected to produce beneficial insights for the different communities:

Concerning the humanities, variation and ambiguity are central to the different branches of linguistics. Furthermore, human production and perception errors challenge existing models of language acquisition, production and perception.

For automatic speech and language processing, residual errors indicate regions which escape current modeling capacities. In-depth analyses in collaboration with linguists, psycholinguists and speech scientists may contribute to a better understanding of these phenomena and to the proposal of innovative strategies.

Brain sciences, a recent and rapidly evolving research area, open new opportunities, and the study of errors can contribute to revealing the hidden organization of the brain.

We invite contributions focusing on errors produced by humans and/or machines from (but not limited to) the following areas:

Cognition and brain studies related to errors in speech
Speech production (e.g. slips of the tongue...)
Speech perception
First and second language acquisition
Bilingualism and code switching 
Voice pathologies / clinical phonetics

Natural language processing
Corpus linguistics
Automatic speech processing

Speech and multimodality
Speech and language translation
Spoken Interaction
Information retrieval
Evaluation methods

“Errare 2015” will welcome about 80 participants, with both invited and submitted papers.

Important dates:
15 May 2015: updated submission deadline
15 June 2015: notification of acceptance
29 June 2015: final papers
Workshop dates: 12-13 September 2015

Organizing committee:
Ioana Vasilescu (LIMSI-CNRS)
Gilles Adda (IMMI-LIMSI)
Joseph Mariani (IMMI-LIMSI)
Verginica Mititelu (ICIA, Romanian Academy)
Dan Tufis (ICIA, Romanian Academy)
Maria Candea (University Paris 3)
Ioana Chitoran (University Paris 7)
Sophie Rosset (LIMSI-CNRS)
Guillaume Wisniewski (LIMSI-CNRS)
Laurence Devillers (University Paris 4/LIMSI)

Program committee:

Gilles Adda (IMMI-LIMSI)
Martine Adda-Decker (University Paris 3/LIMSI)
Tiberiu Boros (ICIA, Romanian Academy)
Maria Candea (University Paris 3)
Ioana Chitoran (University Paris 7)
Laurence Devillers (University Paris 4/LIMSI)
Mirjam Ernestus (Radboud University & Max Planck Institute for Psycholinguistics)
Julia Hirschberg (Columbia University)
Lori Lamel (LIMSI-CNRS)
Mark Liberman (University of Pennsylvania)
Joseph Mariani (IMMI-LIMSI)
Verginica Mititelu (ICIA, Romanian Academy)
Bernd T. Meyer (Carl von Ossietzky Universität Oldenburg)
Marianne Pouplier (Institut für Phonetik und Sprachverarbeitung Munchen)
Sophie Rosset (LIMSI-CNRS)
Dan Tufis (ICIA, Romanian Academy)
Ioana Vasilescu (LIMSI-CNRS)
Guillaume Wisniewski (LIMSI-CNRS)

Scientific committee:

To be announced soon.

Website :

Ioana Vasilescu
Verginica Mititelu
Gilles Adda
Joseph Mariani


3-3-18(2015-09-14) Eighteenth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2015), Pilzen, Czech Republic


Eighteenth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2015)
            Plzen (Pilsen), Czech Republic, 14-17 September 2015


* Keynote speakers: Hermann Ney, Dan Roth, Björn W. Schuller,
  Peter D. Turney, and Alexander Waibel.
* TSD is traditionally published by Springer-Verlag and regularly listed
  in all major citation databases: Thomson Reuters Conference Proceedings
  Citation Index, DBLP, SCOPUS, EI, INSPEC, COMPENDEX, etc.
* TSD offers a high-standard, transparent review process: double-blind,
  with a final reviewers' discussion.
* TSD is officially recognized as an INTERSPEECH 2015 satellite event.
* TSD will take place in Pilsen, the European Capital of Culture 2015.
* TSD provides an all-service package (conference access and material,
  all meals, one social event, etc.) for an easily affordable fee starting
  at 270 EUR for students and 330 EUR for full participants.


March 31, 2015 ............ Submission of full papers
May 10, 2015 .............. Notification of acceptance
May 31, 2015 .............. Final papers (camera ready) and registration

September 14-17, 2015 ....... Conference date


The TSD series has evolved into a prime forum for interaction between
researchers in both spoken and written language processing from all over
the world. The TSD proceedings form a book published by Springer-Verlag in
its Lecture Notes in Artificial Intelligence (LNAI) series and are
regularly indexed by the Thomson Reuters Conference Proceedings Citation
Index. The LNAI series is listed in all major citation databases such as
DBLP, SCOPUS, EI, INSPEC, and COMPENDEX.

The contributions to the conference will be published in proceedings that
will be made available on a CD to participants at the time of the
conference.

Keynote topic:
    Challenges of Modern Era in Speech and Language Processing

Topics of the conference will include (but are not limited to):

    Corpora and Language Resources (monolingual, multilingual,
    text and spoken corpora, large web corpora, disambiguation,
    specialized lexicons, dictionaries)

    Speech Recognition (multilingual, continuous, emotional
    speech, handicapped speaker, out-of-vocabulary words,
    alternative way of feature extraction, new models for
    acoustic and language modelling)

    Tagging, Classification and Parsing of Text and Speech
    (multilingual processing, sentiment analysis, credibility
    analysis, automatic text labeling, summarization, authorship)

    Speech and Spoken Language Generation (multilingual, high
    fidelity speech synthesis, computer singing)

    Semantic Processing of Text and Speech (information
    extraction, information retrieval, data mining, semantic web,
    knowledge representation, inference, ontologies, sense
    disambiguation, plagiarism detection)

    Integrating Applications of Text and Speech Processing
    (machine translation, natural language understanding,
    question-answering strategies, assistive technologies)

    Automatic Dialogue Systems (self-learning, multilingual,
    question-answering systems, dialogue strategies, prosody in dialogue)

    Multimodal Techniques and Modelling (video processing, facial
    animation, visual speech synthesis, user modelling, emotions
    and personality modelling)


Program committee:
    Elmar Noeth, Germany (general chair)
    Eneko Agirre, Spain
    Genevieve Baudoin, France
    Vladimir Benko, Slovakia
    Paul Cook, Australia
    Jan Cernocky, Czech Republic
    Simon Dobrisek, Slovenia
    Kamil Ekstein, Czech Republic
    Karina Evgrafova, Russia
    Darja Fiser, Slovenia
    Eleni Galiotou, Greece
    Radovan Garabik, Slovakia
    Alexander Gelbukh, Mexico
    Louise Guthrie, United Kingdom
    Jan Hajic, Czech Republic
    Eva Hajicova, Czech Republic
    Yannis Haralambous, France
    Hynek Hermansky, USA
    Jaroslava Hlavacova, Czech Republic
    Ales Horak, Czech Republic
    Eduard Hovy, USA
    Maria Khokhlova, Russia
    Daniil Kocharov, Russia
    Miloslav Konopik, Czech Republic
    Ivan Kopecek, Czech Republic
    Valia Kordoni, Germany
    Siegfried Kunzmann, Germany
    Natalija Loukachevitch, Russia
    Bernardo Magnini, Italy
    Vaclav Matousek, Czech Republic
    France Mihelic, Slovenia
    Roman Moucek, Czech Republic
    Hermann Ney, Germany
    Karel Oliva, Czech Republic
    Karel Pala, Czech Republic
    Nikola Pavesic, Slovenia
    Maciej Piasecki, Poland
    Adam Przepiorkowski, Poland
    Josef Psutka, Czech Republic
    James Pustejovsky, USA
    German Rigau, Spain
    Leon Rothkrantz, The Netherlands
    Anna Rumshisky, USA
    Milan Rusko, Slovakia
    Mykola Sazhok, Ukraine
    Pavel Skrelin, Russia
    Pavel Smrz, Czech Republic
    Petr Sojka, Czech Republic
    Stefan Steidl, Germany
    Georg Stemmer, Germany
    Marko Tadic, Croatia
    Tamas Varadi, Hungary
    Zygmunt Vetulani, Poland
    Pascal Wiggers, The Netherlands
    Yorick Wilks, United Kingdom
    Marcin Wolinski, Poland
    Victor Zakharov, Russia


The official language of the event will be English. However, papers on
processing of languages other than English are strongly encouraged.


The conference fee depends on the date of payment and on your status. It
includes one copy of the conference proceedings, refreshments/coffee
breaks, opening dinner, welcome party, mid-conference social event
admissions, and organizing costs. In order to lower the fee as much as
possible, the accommodation and the conference trip are not included.

Full participant:
early registration by May 31, 2015 - CZK 9.000 (approx. 330 EUR)
late registration by August 1, 2015 - CZK 10.000 (approx. 370 EUR)
on-site registration - CZK 10.700 (approx. 390 EUR)

Student (reduced):
early registration by May 31, 2015 - CZK 7.400 (approx. 270 EUR)
late registration by August 1, 2015 - CZK 9.000 (approx. 330 EUR)
on-site registration - CZK 10.000 (approx. 370 EUR)


The city of Plzeň (Pilsen) is situated in Western Bohemia at the
confluence of four rivers. With its 170,000 inhabitants it is the fourth
largest city in the Czech Republic and an important industrial,
commercial, and administrative centre. It is also the capital of the
Pilsen Region. In addition, Pilsen won the title of the European Capital
of Culture for the upcoming year 2015.

Pilsen is well-known for its brewing tradition. The trademark
Pilsner-Urquell has a good reputation all over the world thanks to the
traditional recipe, high quality hops and good groundwater. Beer lovers
will also appreciate a visit to the Brewery Museum or the Brewery itself.

Apart from its delicious beer, Pilsen hides lots of treasures in its core.
The city can boast the second largest synagogue in Europe. The dominant
feature of the old city center is the 13th-century Gothic cathedral,
featuring the highest church tower in Bohemia (102.34 m). It is possible
to go up and admire the view of the city. Not far from the cathedral is
the splendid Renaissance Town Hall from 1558, and plenty of pleasant
cafes and pubs are situated on and around the main square.

There is also the beautiful Pilsen Historical Underground - under the city
center, a complex network of passageways and cellars can be found. They
are about 14 km long and visitors can see the most beautiful part of this
labyrinth during the tour. A visit to the City Zoological Garden is also
recommended: it has the second largest space for bears in Europe and
keeps several Komodo dragons, large lizards which exist only in a few
zoos in the world.

The University of West Bohemia in Pilsen provides a variety of courses for
both Czech and international students. It is the only institution of
higher education in this part of the country which prepares students for
careers in engineering (electrical and mechanical), science (computer
science, applied mathematics, physics, and mechanics), education (both
primary and secondary), economics, philosophy, politics, archeology,
anthropology, foreign languages, law and public administration, art and design.


The conference is organized by the Faculty of Applied Sciences, University
 of West Bohemia, Pilsen, and the Faculty of Informatics, Masaryk University,
Brno. The conference is supported by the International Speech
Communication Association (ISCA).

Venue: Plzeň (Pilsen), Parkhotel Congress Center Plzeň, Czech Republic


All correspondence regarding the conference should be addressed to:
Ms Anna Habernalová, TSD2015 Conference Secretary
Phone: (+420) 724 910 148
Fax: +420 377 632 402 - Please, mark the faxed material with capitals
                        'TSD' on top.
TSD 2015 conference web site:


3-3-19(2015-09-17) EMNLP 2015, Lisbon, Portugal


September 17-21, 2015

Lisbon, Portugal


Long paper submission deadline: May 31, 2015

Short paper submission deadline: June 15, 2015



SIGDAT, the Association for Computational Linguistics' special interest group on linguistic data and corpus-based approaches to NLP, invites submissions to EMNLP 2015.


The conference will be held on September 17-21 2015 in Lisbon, Portugal. The conference will consist of three days of full paper presentations with two days of workshops and tutorials.


Conference URL:


The conference web site will include updated information on workshops, tutorials, venue, traveling, etc. For helpful tips on visiting Lisbon, Portugal, please check the WikiTravel website



As in recent years, some of the presentations at the conference will be of papers accepted for the Transactions of the ACL journal.




EMNLP 2015 will have a large workshop program with 7 workshops and 8 tutorials. See and for more details.




We solicit papers on all areas of interest to the SIGDAT community and aligned fields, including but not limited to:


- Phonology, Morphology, and Segmentation

- Tagging, Chunking, Parsing and Syntax

- Discourse, Dialogue, and Pragmatics

- Semantics

- Summarization and Generation

- Statistical Models and Machine Learning Methods

- Machine Translation and Multilinguality

- Information Extraction

- Information Retrieval and Question Answering

- Sentiment Analysis and Opinion Mining

- Spoken Language Processing

- Computational Psycholinguistics

- NLP for Web and Social Media (including Computational Social Science)

- Language and Vision

- Text Mining and NLP Applications




- Long Paper submission deadline: May 31, 2015

- Short Paper submission deadline: June 15, 2015

- Author response period: July 7-10, 2015

- Acceptance notification: July 24, 2015

- Camera-ready submission deadline: August 14, 2015

- Workshops and tutorials: September 17-18, 2015

- Main conference: September 19-21, 2015


All deadlines are calculated at 11:59pm (UTC/GMT -11 hours)





Long papers


EMNLP 2015 submissions must describe substantial, original, completed and unpublished work. Wherever appropriate, concrete evaluation and analysis should be included. Each submission will be reviewed by at least three program committee members.


Each long paper submission consists of a paper of up to eight (8) pages of content, plus two pages for references; final versions of long papers will be given one additional page (up to 9 pages with 2 pages for references) so that reviewers' comments can be taken into account.


Short papers


EMNLP 2015 also solicits short papers. Short paper submissions must describe original and unpublished work. While a short paper is not a shortened long paper, the characteristics of short papers include:


- A small, focused contribution

- Work in progress

- A negative result

- An opinion piece

- An interesting application nugget


Each short paper submission consists of up to four (4) pages of content, plus 2 pages for references. Upon acceptance, short papers will be given five (5) pages in the proceedings and 2 pages for references. Authors are encouraged to use this additional page to address reviewers' comments in their final versions. Each short paper submission will be reviewed by at least three program committee members.


Both long and short papers


Papers may be accompanied by the resources (software and/or data) described in the papers. Papers that are submitted with accompanying software/data may receive additional credit toward the overall evaluation score, and the potential impact of the software and data will be taken into account when making the acceptance/rejection decisions.


Accepted papers will be presented orally or as a poster (at the discretion of the program chairs). There will be no distinction in the proceedings between papers presented orally or as posters.


Both long and short papers should follow the two-column format to be provided at the conference site. We reserve the right to reject submissions if the paper does not conform to these styles, including paper size and font size restrictions.


As the reviewing will be blind, papers should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g., 'We previously showed (Smith, 1991) ...', should be avoided. Instead, use citations such as 'Smith (1991) previously showed ...'. Submissions that do not conform to these requirements will be rejected without review. Separate author identification information is required as part of the on-line submission process.


Submission will be online, managed by the START system. The site will be open for accepting submissions one and a half months before the conference deadline. To minimize network congestion, we request that authors upload their submissions as early as possible.


EMNLP multiple submission policy


Papers that have been or will be submitted to other meetings or publications must indicate this at submission time, and must be withdrawn from the other venues if accepted by EMNLP 2015. We will not accept for publication or presentation papers that overlap significantly in content or results with papers that will be (or have been) published elsewhere.


Authors submitting more than one paper to EMNLP 2015 must ensure that submissions do not overlap significantly (>25%) with each other in content or results.


Preprint servers such as and ACL-related workshops that do not have published proceedings in the ACL Anthology are not considered archival for purposes of submission. Authors must state in the online submission form the name of the workshop or preprint server and title of the non-archival version. The submitted version should be suitably anonymized and not contain references to the prior non-archival version. Reviewers will be told: 'The author(s) have notified us that there exists a non-archival previous version of this paper with significantly overlapping text. We have approved submission under these circumstances, but to preserve the spirit of blind review, the current submission does not reference the non-archival version.' Reviewers are free to do what they like with this information.




All accepted papers must be presented at the conference to appear in the proceedings. At least one author of each accepted paper must register for EMNLP 2015.





General Chair

Lluís Márquez, Qatar Computing Research Institute


Program co-Chairs

Chris Callison-Burch, University of Pennsylvania

Jian Su, Institute for Infocomm Research (I2R)


Workshops co-Chairs

Zornitsa Kozareva, Yahoo! Labs

Jörg Tiedemann, Uppsala University


Tutorial co-Chairs

Maggie Li, Hong Kong Polytechnic University

Khalil Sima'an, University of Amsterdam


Publication co-Chairs

Daniele Pighin, Google Inc.

Yuval Marton, Microsoft Corp.


Publicity Chair

Barbara Plank, University of Copenhagen


Sponsorship Team

Hang Li, Huawei Technologies

João Graça, Unbabel Inc.


SIGDAT Liaison

Noah Smith, Carnegie Mellon University


Local co-Chairs

André Martins, Priberam

João Graça, Unbabel Inc.


Local Publicity Chair

Isabel Trancoso, University of Lisbon


Conference Handbook Chair

Fernando Batista, University Institute of Lisbon (ISCTE-IUL)


Website and App Chair

Bruno Martins, University of Lisbon





3-3-20(2015-09-17) 16th Science of Aphasia (SoA) Conference, University of Aveiro, Portugal

The University of Aveiro, Portugal is pleased to announce that it
will be hosting the 16th Science of Aphasia (SoA) Conference between
the 17th and 22nd of September 2015.

This year's program theme is Neuroplasticity and Language and includes
the following invited speakers: 
Argye Hillis (Johns Hopkins University, USA)

Hugues Duffau (CHU Montpellier, France)

Cathy Price (UCL, UK)

Alexandre Castro Caldas (UCP, Portugal)

Alexandra Reis (UAlg, Portugal)

Stanislas Dehaene (Collège de France, France)

Uri Hasson (Princeton University, USA)

Jenny Crinion (UCL, UK)

Brenda Rapp (Johns Hopkins University, USA)

David Poeppel (New York University, USA)

Dan Bub (University of Victoria, Canada)

David Caplan (Harvard Medical School, USA)

For the past 15 years, the SoA conferences have brought together
senior and junior scientists working in the multidisciplinary field
of the neurocognition of language, dealing with normal function as
well as disorders. The conference structure ensures direct and
informal interaction between all participants.

The conference program will include keynotes (mornings), and contributed
oral and poster presentations (afternoons) over 4 days (between the 18th and
the 21st of September 2015). Full/student registration will include the
conference proceedings, lunches, coffee breaks, a social program and the
conference dinner. Single-day registration will also be available.

Abstracts can be submitted until the 1st of April. Selected abstract
authors will then be invited to submit full-length papers.

Conference proceedings will be published as a special issue of the
journal Stem-, Spraak- en Taalpathologie. The papers are published
online at the journal's website, where previous conference proceedings
are also available, and a printed copy is distributed to all conference
participants.

The venue is the School of Health Sciences at the University of Aveiro's
Campus de Santiago, overlooking Aveiro's lagoon. The campus is renowned
internationally for its many buildings designed by famous Portuguese
architects, and it is only a short distance from the city centre
(a 5-minute walk).


Luis M. T. Jesus
(Local Chair)
University of Aveiro, Portugal                                                                


3-3-21(2015-09-20) 17th International Conference on Speech and Computer (SPECOM-2015), Athens, Greece


17th International Conference on Speech and Computer (SPECOM-2015)
Venue: Athens, Greece, September 20-24, 2015


The conference is organized by University of Patras (Patras, Greece), in cooperation with Moscow State Linguistic University (MSLU, Moscow, Russia), St. Petersburg Institute for Informatics and Automation of the Russian Academy of Science (SPIIRAS, St. Petersburg, Russia) and ITMO University (St. Petersburg, Russia).

SPECOM conferences

After ten years, the SPECOM conference returns to Greece. In recent years the SPECOM venue has varied considerably: Patras, Greece, 2005; St. Petersburg, Russia, 2006; Moscow, Russia, 2007; St. Petersburg, Russia, 2009; Kazan, Russia, 2011; Plzen, Czech Republic, 2013; Novi Sad, Serbia, 2014. The last two conferences were organized in parallel with TSD'2013 and DOGS'2014 and were a great success, benefiting from bringing the various research teams together.
SPECOM Proceedings will be published by Springer-Verlag as a book in the Lecture Notes in Artificial Intelligence (LNAI) series, which is indexed in all major citation databases such as DBLP, SCOPUS, EI, INSPEC and COMPENDEX. The SPECOM Proceedings are included in the list of forthcoming proceedings for September 2015.


The SPECOM conference is devoted to issues of human-machine interaction, particularly:
Applications for human-computer interaction
Audio-visual speech processing
Automatic language identification
Corpus linguistics and linguistic processing
Forensic speech investigations and security systems
Human-robot interaction
Multichannel signal processing
Multimedia processing
Multimodal analysis and synthesis
Signal processing and feature extraction
Speaker identification and diarization
Speaker verification systems
Speech and language resources
Speech driving systems in robotics
Speech enhancement
Speech perception and speech disorders
Speech recognition and understanding
Speech translation automatic systems
Spoken dialogue systems
Spoken language processing
Text-to-speech and Speech-to-text systems


Gerhard Rigoll - Institute for Human-Machine-Communication, TU Munich, Germany
Yannis Stylianou - Computer Science Dept. Univ. of Crete, and Toshiba, Cambridge Research Lab, Cambridge, UK
Murat Saraclar - Bogazici University, Istanbul, Turkey


The official language of the event is English. However, papers on processing of languages other than English are encouraged.


Etienne Barnard, North-West University, South Africa
Laurent Besacier, Laboratory of Informatics of Grenoble, France
Vlado Delic, University of Novi Sad, Serbia
Evangelos Dermatas, University of Patras, Greece
Christoph Draxler, Institute of Phonetics and Speech Communication, Germany
Thierry Dutoit, University of Mons, Belgium
Nikos Fakotakis, University of Patras, Greece
Peter French, University of York, UK
Hiroya Fujisaki, University of Tokyo, Japan
Todor Ganchev, Technical University of Varna, Bulgaria
Ruediger Hoffmann, Dresden University of Technology, Germany
Slobodan Jovicic, University of Belgrade, Serbia
Dimitri Kanevsky, IBM Thomas J. Watson Research Center, USA
Alexey Karpov, SPIIRAS, Saint-Petersburg, Russia
Walter Kellerman, Erlangen-Nurnberg University, Germany
George Kokkinakis, University of Patras, Greece
Steven Krauwer, Utrecht University, The Netherlands
Lin-shan Lee, National Taiwan University, Taiwan
Boris Lobanov, United Institute of Informatics Problems, Belarus
Benoit Macq, Université catholique de Louvain, Belgium
Yuri Matveev, ITMO University, Russia
Roger Moore, Sheffield University, UK
John Mourjopoulos, University of Patras, Greece
Konstantinos Moustakas, University of Patras, Greece
Geza Nemeth, Budapest University of Technology and Economics, Hungary
Heinrich Niemann, University of Erlangen-Nuremberg, Germany
Alexander Petrovsky, Belarusian State University of Informatics and Radioelectronics, Belarus
Elias Potamitis, University of Patras, Greece
Rodmonga Potapova, Moscow State Linguistic University, Russia
Dimitar Popov, Bologna University, Italy
Lawrence Rabiner, Rutgers University, USA
Gerhard Rigoll, Munich University of Technology, Germany
Andrey Ronzhin, SPIIRAS, Saint-Petersburg, Russia
Murat Saraclar, Bogazici University, Turkey
Jesus Savage, University of Mexico, Mexico
Tanja Schultz, University of Karlsruhe, Germany
Eberhard Stock, Halle, Germany
Milan Secujski, University of Novi Sad, Serbia
Pavel Skrelin, St. Petersburg State University, Russia
Viktor Sorokin, Institute for Information Transmission Problems, Russia
Yannis Stylianou, University of Crete, Greece
Christian Wellekens, EURECOM, France
Milos Zelezny, University of West Bohemia, Czech Republic


The conference program will include presentation of invited papers, oral presentations, and poster/demonstration sessions. Papers will be presented in plenary or topic oriented sessions.
Details about the social events will be available on the web page.


Authors are invited to submit a full paper not exceeding 8 pages formatted in the LNCS style (see below). Those accepted will be presented either orally or as posters. The decision on the presentation format will be based upon the recommendation of three independent reviewers. The authors are asked to submit their papers using the on-line submission form accessible from the conference web site.
Papers submitted to SPECOM 2015 must not be under review by any other conference or publication during the SPECOM review cycle, and must not be previously published or accepted for publication elsewhere.
As the reviewing is blind, the paper should not include authors' names and affiliations.


April 30, 2015 ............ Submission of full papers (extended deadline)
June 01, 2015 ............ Notification of acceptance
June 15, 2015 ............ Camera ready papers and registration
September 20-24, 2015 ..... Conference dates

The contributions to the conference will be published in proceedings that will be made available to participants at the time of the conference.


The conference fee depends on the date of payment and on your status. It includes one copy of the conference proceedings, refreshments/coffee breaks, opening dinner, welcome party, mid-conference social event admissions, and organizing costs. In order to lower the fee as much as possible, meals during the conference, the accommodation, and the conference trip are not included.

Full participant:
early registration by June 15, 2015 – 380 EUR
late registration by August 20, 2015 – 420 EUR
on-site registration – 470 EUR

Student (reduced):
early registration by June 15, 2015 – 300 EUR
late registration by August 20, 2015 – 330 EUR
on-site registration – 370 EUR

The payment may be refunded up until August 20, at the cost of 60 EUR. No refund is possible after this date.
At least one of the authors has to register and pay the registration fee by June 15, 2015 for their paper to be included in the conference proceedings. Only one paper of up to 8 pages is included in the regular registration fee. An author with more than one paper pays the additional paper rates unless a co-author has also registered and paid the full registration fee. In the case of uncertainty, feel free to contact the organising committee for clarification.


The conference will be organized in Athens, Greece.
Each year, more and more travelers are choosing Athens for their leisure and business travel, all year round. Athens offers a variety of things to see and do, most of the time under favorable weather conditions. Athens is considered one of Europe's safest capitals; its transportation network is user-friendly; and there are numerous museums and archeological sites, as well as hundreds of restaurants to satisfy every taste.


All correspondence regarding the conference should be addressed to:
SPECOM Secretariat
Phone/Fax: +7 812 328 7081
Fax: +7 812 328 4450 — Please, designate the faxed material with capitals 'SPECOM' on top.
SPECOM 2015 conference web site:


3-3-22(2015-09-21) International Workshop on Affective Social Multimedia Computing (ASMMC2015), Xi-an, China


CALL FOR PAPERS  International Workshop on Affective Social Multimedia computing (ASMMC2015)
Selected papers will be published in an SCI Journal in the Multimedia Area.
The International Workshop on Affective Social Multimedia Computing (ASMMC) 2015, Xi'an, China, 21 September, 2015
A one-day workshop of the 6th International Conference on Affective Computing and Intelligent Interaction (ACII2015).
Co-located with ACII 2015.
Important Dates
Full Paper Submission : 3 June 2015
Notification of Acceptance : 3 July 2015
Camera-Ready Submission : 24 July 2015
Workshop Scope
Social multimedia is fundamentally changing how we communicate, interact, and collaborate in our daily lives. Recent advances in multimedia computing have attracted increasing research on multimedia content analysis, indexing and retrieval based on subjective concepts such as emotion, aesthetics, and preference. Unlike traditional content-based retrieval methods, affective social multimedia computing aims to process affective content from social multimedia. Given the availability of massive and heterogeneous social media data, the problem is challenging because it requires a multidisciplinary understanding of content and perceptual cues from social multimedia. From the multimedia perspective, research relies on theoretical and technological findings in affective computing, machine learning, pattern recognition, signal/multimedia processing, computer vision, and behavioral and social psychology. Affective analysis of social multimedia is attracting growing attention from industry and businesses that provide social networking sites and content-sharing services, and that distribute and host the media.
This workshop focuses on the analysis of affective signals in social multimedia (e.g., Twitter, WeChat, Weibo, YouTube, Facebook). It seeks contributions on various aspects of affective computing in social multimedia, covering related theory, methodology, algorithms and techniques.
The workshop will address, but is not limited to, the following topics:
  • Affective/emotional content analysis of images, videos, music, and metadata (text, symbols, etc.)
  • Affective indexing, ranking, and retrieval on big social media data
  • Affective computing in social multimedia by multimodal integration (facial expression, gesture, posture, speech, text/language)
  • Emotional implicit tagging and interactive systems
  • User interest and behavior modeling in social multimedia
  • Video and image summarization based on affect
  • Affective analysis of social media and harvesting the affective response of crowds
  • Affective generation in social multimedia, expressive text-to-speech and expressive language translation
  • Applications of affective social multimedia computing
The best paper award(s) and journal special issue
The workshop will select the best paper(s) for a cash award from sponsors. In order to promote this emerging research area, we are currently seeking to publish a special issue or special section on affective social multimedia in a relevant SCI-indexed journal. If successful, extensions of the best paper(s) and honorable-mention papers will be prioritized for inclusion in the special issue or section.
Submission of papers
Papers should present original empirical work, theoretical work, or a well-argued but debatable position of the authors. Papers will be published in the proceedings of ACII 2015 by IEEE and should be limited to 6 pages.
Submitting a paper means that, if the paper is accepted, at least one author should attend the workshop and present the paper.
Please submit your paper via: 
Workshop Co-Chairs
Dong-Yan HUANG , Institute for Infocomm Research, Singapore
Lei XIE, Northwestern Polytechnical University, China
Shuicheng YAN, National University of Singapore, Singapore
Jie YANG, National Science Foundation, USA
ACII workshop chairs
Carlos BUSSO, The University of Texas at Dallas, USA
Hatice GUNES, Queen Mary University of London, UK
Programme Committee
Shih-Fu CHANG, Columbia University, U.S.A
Stephen COX , University of East Anglia, UK
Minghui DONG, Institute for Infocomm Research, Singapore
Wolfgang HUERST, Utrecht University, The Netherlands
Qiang JI , Rensselaer Polytechnic Institute, USA
Jia JIA , Tsinghua university, China
Qin JIN, Renmin University of China, China
Swee Lan SEE , Institute for Infocomm Research, Singapore
Haizhou LI , Institute for Infocomm Research, Singapore
Weisi LIN , Nanyang Technology University, Singapore
Jiebo LUO , University of Rochester, USA
Hichem SAHLI, Vrije Universiteit Brussel, Belgium
Bjoern SCHULLER, TUM, Germany
Vidhyasaharan SETHU , University of New South Wales, Sydney, Australia
Rainer STIEFELHAGEN, Karlsruhe Institute of Technology, Germany
Yan TONG , University of South Carolina, USA
Changsheng XU, Chinese Academy of Sciences, China
Zhongfei ZHANG, Binghamton University
Peng ZHANG, Northwestern Polytechnical University, China
Xuan ZHU , Samsung R&D Institute of China, China
Please email inquiries concerning ASMMC2015 to:
Dr. Huang Dong Yan, Email:

Prof. Lei Xie, Email:


3-3-23(2015-09-22)​ e-Infrastructures & RDA for data intensive science pre-RDA plenary workshops, Paris France

e-Infrastructures & RDA for data intensive science
pre-RDA plenary workshops
22 September 2015, Paris  France

Stream 5: Data & Computing infrastructures for Global Linguistic Resources

Objectives of the workshop: To gather leading initiatives in the area of language resources and tools to discuss data management trends and challenges, available and planned services, and opportunities for collaboration in the context of emerging digital single market promoted by the European Commission.
The workshop aims to identify opportunities for collaboration both at European and global level.

Background documents:

  • Data Management Trends, Principles and Components – What Needs to be Done Next? (Paris paper)
  • Future of Research Data and Computing Infrastructures Supporting Research and Innovation (Rome paper)


Three presentation sessions will be followed by a panel and a discussion round. Each panelist will introduce the initiative he/she represents, with a focus on services, collaboration, policy actions and directions for building advanced infrastructures.

Organizers: Dieter van Uytvanck, Khalid Choukri, Peter Wittenburg


 09:00 – 10:00


Opening Plenary

Session 1: Scientist Views on Rome and Paris Papers


Welcome and introduction

Dieter Van Uytvanck


presentation 1

Bridget Almas (Perseus, Open Philology)


presentation 2

Erhard Hinrichs (CLARIN)


presentation 3

John McNaught (NaCTeM, tbc)


presentation 4

Andrejs Vasiljevs, (Tilde, tbc)


Coffee break

Session 2: Data Center and Infrastructure Views on Rome and Paris Papers


presentation 1

Khalid Choukri (ELRA)


presentation 2

Chris Cieri (LDC, tbc)


presentation 3

Alastair Dunning (Europeana)





presentation 4

Ross Wilkinson (ANDS, tbc)



Session 3: Humanities on Rome and Paris Papers


presentation 1

Sandra Collins (Digital Repository of Ireland)


presentation 2

Beth Plale (Hathi Trust Research Center)


presentation 3

Stelios Piperidis (ILSP, tbc)


Coffee break




  • Presentation of Summary Messages
  • Panellist Interventions
  • Audience Interventions
  • Final Summary Messages



3-3-24(2015-09-28) 57th International Symposium ELMAR-2015 , Zadar, Croatia

 57th International Symposium ELMAR-2015
 September 28-30, 2015
 Zadar, Croatia
 Paper submission deadline: March 25, 2015
 --> Image and Video Processing
 --> Multimedia Communications
 --> Speech and Audio Processing
 --> Wireless Communications
 --> Telecommunications
 --> Antennas and Propagation
 --> e-Learning and m-Learning
 --> Navigation Systems
 --> Ship Electronic Systems
 --> Power Electronics and Automation
 --> Naval Architecture
 --> Sea Ecology
 --> Special Sessions:
 Deadline for submission of full papers: March 25, 2015
 Notification of acceptance mailed out by: May 6, 2015
 Submission of (final) camera-ready papers: June 3, 2015
 Preliminary program available online by: June 17, 2015
 Registration forms and payment deadline: June 17, 2015


3-3-25(2015-10-15) Young Researchers in Sciences of Language, Laboratory Praxiling, University of Montpellier, France

Young Researchers in Sciences of Language


Laboratory Praxiling, University of Montpellier, France


Call for papers: CJC2015

« Trace(s) »

15th-16th October 2015

The aim of this 9th edition is to bring together researchers interested in the notion of the trace, from theoretical and methodological perspectives in various disciplines. The term trace raises questions both through its multiple meanings and through its recurring presence in the scientific literature. While trace is a common term in everyday language, the apparent straightforwardness of its meaning hides a number of complex questions in the literature about the contextualization of the term. These questions are all the more relevant in the digital age, where the trace is playing an increasingly important role in IT environments (review Intellectica, No. 59). To begin with, an epistemological questioning calls for a multidisciplinary approach. In 2002, A. Serres drew up an inventory of possible meanings of the term trace (as a marker, as a clue) and discussed its presence in literature, linguistics and philosophy. His approach constitutes a solid basis for our thinking. Serres also reviewed the intrinsic links between trace and memory (Ricoeur) and between trace and writing (Derrida). Secondly, the notion of trace is omnipresent in the field of linguistics and can be found at all levels of research (epistemological, pragmatic and praxeological). It is therefore worth revisiting, at a methodological level, the practices of identification, creation, exploitation and conservation of objects of research, considered as traces of this research: what are the positions and choices of young researchers regarding data collection, corpus analysis and archiving?

Phonetics and phonology: If we consider sound as a trace in the elastic medium represented by the air, it is worthwhile discussing the notion of the trace in relation to the acoustic signal. Indeed, articulatory gestures leave their trace in the acoustic signal. These gestures can be altered by a communication disorder, which will in turn leave a number of traces in the speech. Finally, other traces can be observed in the voice, allowing one to identify the speaker's gender or emotions.

Language acquisition, didactics and language learning: In the learning process, acquisition of the target language is based on existing knowledge and skills that are progressively transferred from the source language. Therefore, various traces of the first language can be found in the second language, reflecting different levels of the language: linguistic, pragmatic or sociocultural.

Written communication: In written communication, the participants are not in a situation of co-presence; we can therefore speak of delayed communication, which seems an interesting subject for discussion. Indeed, written communication fits into the framework of the elaboration and conservation of traces. As this communication mode is not subject to the constraints tied to the speech flow, it allows backtracking, corrections or erasures, all of which may be studied by the researcher. Finally, the four basic rewriting operations (addition, removal, substitution and displacement) can also be detected thanks to their graphic traces.

Digital communication: When considering interactions within the computing environment, it is impossible not to include the traces that result from the usage of these devices. Indeed, every user or machine profile leaves a binary trail (internet identity). This binary trail constitutes a form of digital writing which contributes to synchronous and asynchronous communication. This raises several questions related to the trace: its acquisition, its development, its visualization, its archiving, its annotation, its suppression and its recovery.

Language processing: Language processing is essential when it comes to making use of the trace, recovering it, repairing it or rebuilding it. To capture the trace, researchers create algorithmic models in the form of procedures, using a software architecture that runs a program on one or more computers, provided those computers are connected via social networks or the internet. These models are developed with adjustable variables that allow the task to be specified through the gathered trace. We can then work with the trace: cut or label it, define its structure, evaluate its meaning, contextualize it or generate it.

Contributions from the following areas of linguistics will be considered with the utmost attention: Syntax, Morphology, Semantics, Pragmatics, Phonetics, Phonology, Neurolinguistics, Psycholinguistics, Language Acquisition, TAL, etc. Proposals combining theoretical reflections and naturally occurring data will be particularly appreciated.

Submission: Submitted abstracts should be 800 words long (excluding references and tables). The deadline for the call for papers is March 31st, 2015. Submissions must be made via EasyChair. Proposals will be reviewed anonymously by two members of the Scientific Committee. Notification of acceptance will be communicated in May.

Registration: Registration should be made via Azur Colloque.

Fees: Standard registration – early : 70 EUR (on or before September 1st, 2015) Standard registration – regular : 80 EUR (after September 1st, 2015) Visitor registration – early : 80 EUR (on or before September 1st, 2015) Visitor registration – regular : 90 EUR (after September 1st, 2015)

Registration fees include: Access to all sessions / Coffee breaks / Lunch

Scientific committee: forthcoming

Planning committee:

Ivana Didirkova

Nada Jonchère

Nathalie Matheu

Contact:


3-3-26(2015-10-18) 2015 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA15), New Paltz, NY, USA
2015 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA15)
Mohonk Mountain House, New Paltz, New York
October 18-21, 2015


Important Dates
Submission of papers: April 10, 2015
Notification of acceptance: June 26, 2015
Early registration until: August 14, 2015
Workshop: October 18-21, 2015

The 2015 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA'15) will be held at the Mohonk Mountain House in New Paltz, New York, and is supported by the Audio and Acoustic Signal Processing technical committee of the IEEE Signal Processing Society. The objective of this workshop is to provide an informal environment for the discussion of problems in audio and acoustics and signal processing techniques leading to novel solutions. Technical sessions will be scheduled throughout the day. Afternoons will be left free for informal meetings among workshop participants. Papers describing original research and new concepts are solicited for technical sessions on, but not limited to, the following topics:

Acoustic Signal Processing:

  • Source separation: Single- and multi-microphone techniques
  • Source localization
  • Signal enhancement: Dereverberation, noise reduction, echo reduction
  • Microphone and loudspeaker array processing
  • Acoustic sensor networks: Distributed algorithms, synchronization
  • Acoustic scene analysis: Event detection and classification
  • Room acoustics

Audio and Music Signal Processing:

  • Content-based music retrieval: Fingerprinting, matching, cover song retrieval
  • Musical signal analysis: Segmentation, classification, transcription
  • Music signal synthesis: Waveforms, instrument models, singing
  • Music separation: Direct-ambient decomposition, vocal and instruments
  • Audio effects: Artificial reverberation, guitar amplifier modeling
  • Upmixing and downmixing

Audio and Speech Coding:

  • Waveform coding and parameter coding
  • Spatial audio coding
  • Sparse representations
  • Low-delay audio and speech coding
  • Digital rights

Hearing and Perception:

  • Hearing aids
  • Computational auditory scene analysis
  • Auditory perception
  • Spatial hearing
  • Speech and audio quality assessment
Workshop Committee

General Chairs
Laurent Daudet
Université Paris Diderot
Gaël Richard
Telecom ParisTech

Technical Program Chair
Bryan Pardo
Northwestern University

Finance Chair
Dorothea Kolossa
Ruhr-Universität Bochum

Far East Liaison
Nobutaka Ono
National Institute of Informatics (Japan)

Publ. Chair & Industry Liaison
John Hershey
Mitsubishi Electric Research Laboratories

Local Arrangements Chair
Juan Bello
New York University

Registration Chair
Bob L. Sturm
Queen Mary University of London

Copyright IEEE Signal Processing Society. All rights reserved.
For questions about your IEEE Membership or IEEE Account, inquire with IEEE Contact Center.


3-3-27(2015-10-28) 18th Oriental COCOSDA/CASLRE Conference, Shanghai, China.

The Oriental Chapter of COCOSDA (International Committee for the Co-ordination and Standardization of Speech Databases and Assessment Techniques) / CASLRE (Conference on Asian Spoken Language Research and Evaluation) is pleased to announce that the 18th Oriental COCOSDA/CASLRE Conference will be held October 28-30, 2015 at Shanghai Jiao Tong University, Shanghai, China.

Oriental COCOSDA/CASLRE is an international conference held annually by the oriental chapter of COCOSDA/CASLRE. It aims at boosting research and development in the field of speech databases and speech technology and at fostering interest in spoken language research in East and Southeast Asia. The past Oriental COCOSDA/CASLRE conferences were held in Tsukuba, Taipei, Beijing, Jeju, Hua Hin, Singapore, Delhi, Jakarta, Penang, Hanoi, Beijing, Kyoto, Kathmandu, Hsinchu, Macau, Gurgaon, and Phuket.

The Oriental COCOSDA/CASLRE Conference in Shanghai will feature world-class plenary speakers, and interactive lecture and poster sessions. Conference proceedings have been indexed by IEEE Xplore in the past years, and we will continue to submit the accepted papers to the IEEE Xplore database with Engineering Index (EI) this year. Papers are invited to report substantial, original and unpublished research on all aspects of speech databases, assessments and speech input/output, including, but not limited to:

  • Assessment of speech input and output technologies
  • Multilingual speech corpora
  • Phonetic/phonological systems for oriental languages
  • Romanization of Non-Roman Characters
  • Segmentation and labeling
  • Speech databases and text corpora
  • Speech processing and applications
  • Speech prosody
  • Standards in annotation and evaluation
  • Any other relevant topics

 Prospective authors are invited to submit four-page papers at

 Important Dates:

  •  Full Paper Submission: June 15, 2015
  •  Notification of Acceptance: August 15, 2015
  •  Final Manuscript Submission: August 30, 2015
  •  Early Registration Deadline: September 15, 2015

The conference venue is the School of Foreign Languages at Shanghai Jiao Tong University, which enjoys a beautiful campus along with an active, intellectual and intimate atmosphere.

Shanghai is regarded as the Paris of the East; it seamlessly blends modern and traditional, East and West. Its famous attractions, such as the Bund, Yuyuan Garden, the Shanghai World Financial Center, and the Oriental Pearl TV Tower, have never failed to amaze visitors. We welcome you to Shanghai to experience the culture, architecture, and cuisine of this amazing metropolis.

 For more information about the conference, please visit the conference website at


3-3-28(2015-10-30) ACM Multimedia 2015 Workshop *Speech, Language and Audio in Multimedia* Brisbane, Australia (date is modified)

 ACM Multimedia 2015 Workshop

         *Speech, Language and Audio in Multimedia*

           30 October 2015, Brisbane, Australia


The third workshop on Speech, Language and Audio in Multimedia (SLAM) aims at bringing
together researchers working in speech, language and audio processing to analyze, index
and access multimedia data. Multimedia data are now available in enormous volumes in a
wide variety of formats and qualities, from professional to user-generated content:
lectures, meetings, interviews, debates, conversational broadcasts, podcasts, social
videos on the Web, etc. Such data, along with the associated use scenarios, raise
specific challenges: robustness in the face of highly variable quality; efficiency in
handling very large amounts of data; semantics shared across modalities; potentially high
error rates in transcription; etc. Worldwide, several national and international research
projects are focusing on audio and language analysis of multimedia data. Similarly,
various benchmark initiatives have devoted effort to offering tasks related to multimodal
multimedia challenges (e.g., TRECVid, CLEF, MediaEval).

Following SLAM 2013 in Marseille, France, and SLAM 2014 in Penang, Malaysia, both
collocated with the Interspeech conference, SLAM 2015 moves to the multimedia community.
To make the most of the collocation with ACM Multimedia, the workshop features a
dedicated session to highlight work on multimodality and fusion, at the intersection of
speech, audio, language and computer vision.

SLAM gathers players from the fields of speech and audio processing and of multimedia to
share recent research results, discuss ongoing and future projects, explore potential
areas for interdisciplinary collaboration or sharing of ideas, and develop new
benchmarking initiatives of mutual interest to multimedia and language researchers. We
expect contributions on ongoing research work, project descriptions, evaluation
initiatives, demonstrations and applications emphasizing the speech and/or language
and/or audio contribution to any type of multimedia technology.

As a special focus of SLAM 2015, we particularly welcome contributions on video
hyperlinking, as a case study where the speech and language modalities are complemented
by audio and vision.

*Important dates*

Paper submission deadline        July 10, 2015
Notification of acceptance        August 2, 2015
Camera ready paper                August 10, 2015


3-3-29(2015-11-09) ICMI Doctoral Consortium 17th International Conference on Multimodal Interaction Seattle, USA,

 ICMI 2015 Doctoral Consortium 17th International Conference on Multimodal Interaction

Nov 9-13, 2015, Seattle, USA.
NOTE: All accepted students will receive financial support to attend ICMI!


Submission deadline: July 14th, 2015
Notifications: August 24th, 2015
Camera-ready deadline: TBA
Consortium Date: November 9th, 2015

  •  Submission format: Five-page, ACM SIGCHI proceedings format (
  •  Submission system:
  •  Selection process: Peer-reviewed
  •  Presentation format: Talk on consortium day and participation in the conference poster session
  •  Proceedings: Included in conference proceedings and ACM Digital Library
  •  Doctoral Consortium Co-chairs: Carlos Busso (University of Texas at Dallas) and Vidhyasaharan Sethu (University of New South Wales)

The goal of the ICMI Doctoral Consortium is to provide PhD students with an opportunity to present their work to a group of mentors and peers from a diverse set of academic and industrial institutions, to receive feedback on their doctoral research plan and progress, and to build a cohort of young researchers interested in designing multimodal interfaces. We invite students from all PhD granting institutions who are in the process of forming or carrying out a plan for their PhD research in the area of designing multimodal interfaces. The Consortium will be held on November 9th, 2015. We expect to provide financial support to most attendees that will cover part of their costs (travel, registration, meals, etc.).

Who should apply?
While we encourage applications from students at any stage of doctoral training, the doctoral consortium will benefit most the students who are in the process of forming or developing their doctoral research. These students will have passed their qualifiers or have completed the majority of their coursework, will be planning or developing their dissertation research, and will not be very close to completing their dissertation research. Students from any PhD granting institution whose research falls within designing multimodal interfaces are encouraged to apply.

Submission Guidelines:
Graduate students pursuing a PhD degree in a field related to designing multimodal interfaces should submit the following materials:

  •  Extended Abstract: A five-page description of your PhD research plan and progress in the ACM SIGCHI proceedings format. Your extended abstract should follow the same outline, details, and format as the ICMI short papers. The submissions should not be anonymous and should cover:
      •  The key research questions and motivation of your research,
      •  Background and related work that informs your research,
      •  A statement of hypotheses or a description of the scope of the technical problem,
      •  Your research plan, outlining stages of system development or series of studies,
      •  The research approach and methodology,
      •  Your results to date (if any) and a description of remaining work,
      •  A statement of research contributions to date (if any) and expected contributions of your PhD work.

  •  Advisor Letter: A one-page letter of nomination from the student's PhD advisor. This letter is not a letter of support. Instead, it should focus on the student's PhD plan and how the Doctoral Consortium event might contribute to the student's PhD training and research.
  •  CV: A two-page curriculum vitae of the student.

All materials should be prepared in PDF format into a single document and submitted through the ICMI submission system ( in the 'Doctoral Consortium' track.

Review Process:
The Doctoral Consortium will follow a review process in which submissions will be evaluated by a number of factors including (1) the quality of the submission, (2) the expected benefits of the consortium for the student's PhD research, and (3) the student's contribution to the diversity of topics, backgrounds, and institutions, in that order of importance. More particularly, the quality of the submission will be evaluated based on the potential contributions of the research to the field of multimodal interfaces and its impact on the field and beyond. Students who are in the process of forming their PhD research plan or are developing the research they have planned but are not too close to completing their degrees would most benefit from participating in the consortium. Finally, we hope to achieve a diversity of research topics, disciplinary backgrounds, methodological approaches, and home institutions in this year's Doctoral Consortium cohort. We do not expect more than two students to be invited from each institution to represent a diverse sample. Women are especially encouraged to apply.

Financial Support:
We will provide financial support to all the accepted students, covering the majority of the costs of attending the Doctoral Consortium and the main conference.

All authors of accepted submissions are expected to attend the Doctoral Consortium and the main conference poster session. The attendees will present their PhD work as a short talk at the Consortium and as a poster at the conference poster session. A detailed program for the Consortium and the participation guidelines for the poster session will be available after the camera-ready deadline.

For more information and updates on the ICMI 2015 Doctoral Consortium, visit the Doctoral Consortium page of the main conference website (

For further questions, contact the Doctoral Consortium co-chairs:

?       Carlos Busso, The University of Texas at Dallas, USA (
?       Vidhyasaharan Sethu, University of New South Wales, Australia (


3-3-30(2015-11-13) ICMI 2015 - SECOND CALL For WORKSHOP PROPOSALS




Important dates:

  • Workshop proposal submission: 8th of May (new deadline)
  • Workshop acceptance notification: 11th of May
  • Workshop day: 13th of November


The International Conference on Multimodal Interfaces (ICMI) will be held in Seattle, USA, November 9-13, 2015. ICMI is the premier international conference for multidisciplinary research on multimodal human-human and human-computer interaction analysis, interface design, and system development. ICMI has developed a tradition of hosting workshops on a day around the main conference to further foster mingling and exchanges around new research, technology, social science models, applications and business opportunities. Examples of recent workshops include:

  • Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze and Multimodality
  • Workshop on Emotion Representations and Modelling for Human-Computer Interaction System
  • Workshop on Smart Material Interfaces: Another Step to a Material Future
  • Workshop on Social Behaviour in Music
  • Multimodal, Multi-Party, Real-World Human-Robot Interaction
  • Roadmapping the Future of Multimodal Interaction Research including Business Opportunities and Challenges

This tradition will continue at ICMI-2015 and workshops will be held on 13th November 2015 after the ICMI main technical program. Of interest are focused workshops on emerging research areas of the main conference topics, and in particular those favoring multi-disciplinary views around application areas, business opportunities, or societal challenges.

The format, style, and content of accepted workshops are under the control of the workshop organizers. Workshops may be half-day or full-day in duration. Workshop organizers will be expected to manage the workshop content, be present to moderate the discussions and panels, invite experts in the domain, and maintain a website for the workshop. Workshop papers will be included in the conference proceedings thumb drive and indexed by the ACM.

Prospective workshop organizers are invited to submit proposals in PDF format via email to , by 8th of May, 2015. The proposal should include the following:

  1. Workshop title,
  2. List of organizers including affiliation, email address, and short bio,
  3. Workshop motivation, expected outcomes and impact,
  4. Workshop format (by invitation only, call for papers, etc.), anticipated number of talks/posters, workshop duration (half-day or full-day) including a tentative program.
  5. Planned advertisement means, website hosting, and estimated participation (industry/academia)
  6. Paper submission procedure (submission via web site, via email, etc.) if applicable. Workshop organizers can rely on the ICMI submission system, or use their preferred one, e.g.
  7. Paper review procedure (single/double-blind, internal/external, solicited/invited-only, pool of reviewers, etc.),
  8. Paper submission and acceptance deadlines (camera-ready and early registration deadlines for a workshop must coincide with the corresponding deadlines of ICMI-2015 - see Important Dates).
  9. Special space and equipment requests, if any.

3-3-31(2015-11-13) 1st CFP International Workshop on Advancements in Social Signal Processing for Multimodal Interaction, Seattle, WA, USA

1st CFP International Workshop on Advancements in Social Signal Processing for Multimodal Interaction (ASSP4MI@ICMI2015)

17th ACM International Conference on Multimodal Interaction (ICMI 2015); November 13, 2015, Seattle, Washington, USA


The last decade has seen a significant increase in research on affective computing and social signal processing (SSP). This body of work is inherently multimodal (e.g. eye gaze, vocal and facial expressions) and multidisciplinary (e.g. psychology, linguistics, computer science) in nature, addressing foci that call for these approaches. Major foci are the understanding and automatic detection and interpretation of emotional and social behavior in spontaneous interactions, as well as the generation of socially normative behavior in specific situations. The interpretation of multimodal behaviors also makes it possible to endow systems, such as virtual agents or robots, with socially intelligent capabilities.

The developments in the field are remarkable, especially with respect to methods and applications, which are tightly intertwined. These developments have also led to the emergence of related (sub)fields such as computational social science where data-driven modeling of massive amounts of behavioral data of groups of people for the understanding of social phenomena is key. SSP seems to be continuously developing as a lively multidisciplinary research domain, bringing along new challenges, methods, application areas and emerging fields of research.

A decade after the introduction of SSP as a research field, we believe it is time to take stock and look to the future. The goal of this workshop is to bring together researchers to discuss recent as well as future developments in SSP for multimodal interaction research: where do we stand now, what are the recent developments in novel methods and application areas, what are the major challenges, and how do we further mature, broaden and increase the impact of SSP? We also believe it is necessary to ensure the quality and advancement of research in SSP by training students with the necessary expertise. Since SSP is a relatively new research domain, a textbook for teaching SSP is not (yet) available. How we may teach SSP is therefore another topic of interest in this workshop.

We invite both research and position papers and aim for a mix of presentations around recent research and around presentations/discussions about the future of SSP. Papers related to the following topics are in particular encouraged, although other topics are also welcome:

* Recent research *
- Detection and interpretation of social behaviors in human-human and human-agent interaction
- Generation of social agent (virtual and robot) behavior
- Databases and methods for data collection and annotation for SSP research
- Analysis of social group behaviors

* Methodology *
- Standardization: what are standard practices in SSP research?
- Cross-disciplinary methods: what methods can be borrowed from other disciplines?
- What are novel methods used in SSP: e.g., virtual research environments (VRE), virtual reality, novel data collection methods, crowdsourcing, methods dealing with multimodal information, human-in-the-loop machine learning?

* Application Areas *
- What are the novel application areas (e.g., human-robot interaction, health, clinical and therapeutic settings, smart environments, multimedia retrieval), and what can they offer SSP and vice versa?
- What could be the killer applications of SSP?
- How is SSP used in other disciplines, such as psychology?

* Education *
- What are the fundamentals in SSP to be taught to students, what would a course in SSP look like?
- What capabilities should a student being trained in SSP have?

Submission deadline: July 13, 2015
Paper notification:     August 10, 2015
Workshop:         November 13, 2015

* All other dates, including registration and camera-ready submission deadlines, will follow the ICMI 2015 dates and deadlines.

Interested researchers are invited to submit a paper in the same ACM publication format as the main conference, see (Word) and (LaTeX). Papers may be up to six pages long including references. Submissions must be made to

Khiet Truong, University of Twente/Radboud University, the Netherlands
Dirk Heylen, University of Twente, the Netherlands
Mohamed Chetouani, University Pierre and Marie-Curie, France
Bilge Mutlu, University of Wisconsin-Madison, USA
Albert Ali Salah, Boğaziçi University, Turkey

k dot p dot truong AT utwente dot nl




SLSP 2015

Budapest, Hungary

November 24-26, 2015

Organised by:

Laboratory of Speech Acoustics
Department of Telecommunications and Telematics
Budapest University of Technology and Economics

Research Group on Mathematical Linguistics (GRLMC)
Rovira i Virgili University



SLSP is a yearly conference series aimed at promoting and displaying excellent research on the wide spectrum of statistical methods currently in use in computational language or speech processing, and it aims at attracting contributions from both fields. Though there exist large, well-known conferences and workshops hosting contributions in each of these areas, SLSP is a more focused meeting where synergies between subdomains and among researchers can emerge. At SLSP 2015, significant room will be reserved for young scholars at the beginning of their careers, and particular focus will be put on methodology.


SLSP 2015 will take place in Budapest, on the banks of the Danube and an extensive UNESCO World Heritage site. The venue will be the Faculty of Electrical Engineering and Informatics of the Budapest University of Technology and Economics.


The conference invites submissions discussing the employment of statistical models (including machine learning) within language and speech processing. Topics of either theoretical or applied interest include, but are not limited to:

anaphora and coreference resolution
authorship identification, plagiarism and spam filtering
computer-aided translation
corpora and language resources
data mining and semantic web
information extraction
information retrieval
knowledge representation and ontologies
lexicons and dictionaries
machine translation
multimodal technologies
natural language understanding
opinion mining and sentiment analysis
part-of-speech tagging
question-answering systems
semantic role labelling
speaker identification and verification
speech and language generation
speech recognition
speech synthesis
speech transcription
spelling correction
spoken dialogue systems
term extraction
text categorisation
text summarisation
user modeling


SLSP 2015 will consist of:

invited talks
invited tutorials
peer-reviewed contributions


to be announced


Steven Abney (University of Michigan, Ann Arbor, USA)
Jean-François Bonastre (University of Avignon, France)
Nicoletta Calzolari (National Research Council, Pisa, Italy)
Kevin Bretonnel Cohen (University of Colorado, Denver, USA)
W. Bruce Croft (University of Massachusetts, Amherst, USA)
Udo Hahn (University of Jena, Germany)
Mark Hasegawa-Johnson (University of Illinois, Urbana, USA)
Jing Jiang (Singapore Management University, Singapore)
Tracy Holloway King (, Palo Alto, USA)
Claudia Leacock (McGraw-Hill Education CTB, Monterey, USA)
Mark Liberman (University of Pennsylvania, Philadelphia, USA)
Carlos Martín-Vide (Rovira i Virgili University, Tarragona, Spain, chair)
Alessandro Moschitti (University of Trento, Italy)
Jian-Yun Nie (University of Montréal, Canada)
Maria Teresa Pazienza (University of Rome Tor Vergata, Italy)
Adam Pease (IPsoft Inc., New York, USA)
Bhiksha Raj (Carnegie Mellon University, Pittsburgh, USA)
Javier Ramírez (University of Granada, Spain)
Dietrich Rebholz-Schuhmann (University of Zurich, Switzerland)
Douglas A. Reynolds (Massachusetts Institute of Technology, Lexington, USA)
Michael Riley (Google Inc., Mountain View, USA)
Stefan Schulz (Medical University of Graz, Austria)
Tomoki Toda (Nara Institute of Science and Technology, Japan)
Klára Vicsi (Budapest University of Technology and Economics, Hungary)
Enrique Vidal (Technical University of Valencia, Spain)
Junichi Yamagishi (University of Edinburgh, UK)
Pierre Zweigenbaum (LIMSI-CNRS, Orsay, France)


Adrian Horia Dediu (Tarragona)
Carlos Martín-Vide (Tarragona, co-chair)
György Szaszák (Budapest)
Klára Vicsi (Budapest, co-chair)
Florentina Lilica Voicu (Tarragona)


Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including any appendices, references, proofs, etc.) and should be prepared according to the standard format for Springer Verlag's LNCS series (see

Submissions have to be uploaded to:


A volume of proceedings published by Springer in the LNCS/LNAI series will be available by the time of the conference.

A special issue of a major journal will be published later, containing peer-reviewed, substantially extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.


The registration form can be found at:


Paper submission: June 23, 2015 (23:59 CET)
Notification of paper acceptance or rejection: July 28, 2015
Final version of the paper for the LNCS/LNAI proceedings: August 11, 2015
Early registration: August 11, 2015
Late registration: November 10, 2015
Submission to the journal special issue: February 26, 2016



SLSP 2015
Research Group on Mathematical Linguistics (GRLMC)
Rovira i Virgili University
Av. Catalunya, 35
43002 Tarragona, Spain

Phone: +34 977 559 543
Fax: +34 977 558 386


Budapesti Műszaki és Gazdaságtudományi Egyetem
Universitat Rovira i Virgili


3-3-33(2015-11-27) CfP International conference 'ATYLANG - Atypical Language : what are we really talking about ?' at Université Paris Ouest Nanterre France

Dear colleagues,

We would like to inform you that the international conference 'ATYLANG - Atypical Language: what are we really talking about?' will be held on 27-28 November 2015 at Université Paris Ouest Nanterre.

Please find all information on the conference website

Call for Papers

The term atypical, which is used in everyday language to refer to specific and unclassifiable behavior, has also recently started to emerge in research, well beyond the clinical setting and the field of language development. The notion of atypical language is increasingly encountered within the field of linguistics without however being clearly defined. Among numerous individual variations, certain language behaviors intrigue researchers by their 'atypicality' and are thus characterized as unusual. But atypical language, which can involve all levels of a linguistic system, from minimal to maximal items, may sometimes reveal a pathological dimension in language use, in which real difficulties, deficits and disorders are present. While it is not always easy to differentiate individual and unusual variation from genuine language disorders, it is important to establish this distinction in view of the fundamental and crucial role that language plays in social interaction at different ages across the lifespan.

We are thus faced with a paradoxical situation, which, despite its stimulating character, challenges both research and practice. A single notion, at the crossroads of different disciplines, fields and specializations, concerned with fundamental research, applied research and clinical reality, is used with different definitions. This raises the question as to what we are basically talking about. Is it possible to identify a concept, a common denominator, that unites the different uses of 'atypical' across clearly distinct domains? If so, what is this common concept?

Thus, the underlying question of the Atylang conference on clinical linguistics is as follows: how can we move from the intuitive use of the term 'atypical language' towards a usage based on an explicit and well-thought-out definition, which allows us to create a consensus on how to problematize the issue, while avoiding, from the outset, limiting it solely to the field of dysfunctions and handicap? More specifically:

(i) At what moment is there a change from a singular, strange and unusual language behavior to a pathological one? And how can we distinguish a short-term atypical phenomenon from a chronic and established dysfunctional one? Thus, from a developmental viewpoint, how can we characterize and distinguish atypical development from an atypical delay and an apparent specific disorder? As regards ageing, what observable evidence can be found to identify atypical constructions that not only appear as simple markers, inevitably associated with ageing, but turn into clear indicators of pathological ageing?

(ii) What references should the arguments that underpin and justify the scientific use of the term 'atypical' be based on: the community in which atypical language may occur (family or school environment), the developmental theories suggested in research, clinical practice? What precise indicators and measures can be applied?

(iii) What is the status of the observer (individual vs. collective, expert vs. non expert, researcher and/or clinician), and, as a result, what are his/her expectations and integrated norms (or observed usage)? Finally, to what extent do phenomena that are considered atypical and specific in one context appear as perfectly natural in another?

Taking these questions as a starting point, the purpose of the Atylang conference is to provide points of reference for practitioners, allowing them to approach the notion of atypical language in a reflective and problematizing manner. A second aim is to provide the opportunity for researchers to benefit from feedback based on actual fieldwork, thus enabling them to explore the continuum covered by this notion, to determine its scope, limits and interest for scientific description.

In practice, this conference aims at including simultaneously the issue of so-called atypical uses and the linguistic markers that account for them. In other words, the focus is on the formal and communicative dimension of the central issue. We welcome papers on 10 major non-exclusive domains, both from clinical experience on the field and from research:

(i) Developmental and ageing language use

(ii) Oral and/or written language

(iii) Vocal language and sign language

(iv) Gestures and multimodality

(v) Atypical Language at the structural vs. the pragmatic level

(vi) Developmental versus acquired disorders

(vii) Diagnosis and remediation

(viii) Family support (development, ageing)

(ix) Delay versus deviance / disorder

(x) Atypical language in monolinguals and bilinguals

Call for Papers

Submissions on EasyChair

Languages: French, English and French Sign Language (LSF)

Caroline Bogliotti

27-28 November 2015 - ATYLANG Conference

Caroline Bogliotti
Associate Professor (MCF) in Language Sciences
Université Paris Ouest Nanterre & Laboratoire MODYCO - CNRS UMR 7114 (bât A.)
200 av de la République
92000 Nanterre

+33 (0)1 40 97 74 89 or 76 15


3-3-34(2015-12-03) CfP 12th International Workshop on Spoken Language Translation

12th International Workshop on Spoken Language Translation
                                 (IWSLT 2015)

                         First Call for Papers

                           December 3-4, 2015
                           Da Nang, Vietnam

The International Workshop on Spoken Language Translation (IWSLT) is a
yearly scientific workshop, associated with an open evaluation campaign on
spoken language translation, where both scientific papers and system
descriptions are presented. The 12th International Workshop on Spoken
Language Translation will take place in Da Nang, Vietnam on Dec. 03-04, 2015.

The IWSLT invites submissions of scientific papers to be published in the
workshop proceedings and presented in dedicated technical sessions of the
workshop, either in oral or poster form. The workshop welcomes original,
high quality contributions covering theoretical and practical issues in the fields
of automatic speech recognition and machine translation that are applied to
spoken language translation. Possible topics include, but are not limited to:
*  Speech and text MT
*  Integration of ASR and MT
*  MT and SLT approaches
*  MT and SLT evaluation
*  Language resources for MT and SLT
*  Open source software for MT and SLT
*  Adaptation in MT
*  Simultaneous speech translation
*  Speech translation of lectures
*  Spoken language summarization
*  Efficiency in MT
*  Stream-based algorithms for MT
*  Multilingual ASR and TTS
*  Rich transcription of speech for MT
*  Translation of non-verbal events


* Sep 28, 2015: Paper Submission
* Nov 3, 2015:  Notification of acceptance
* Nov 13, 2015 : Camera-ready Submission
* Dec  3-4, 2015: Workshop


Jan Niehues, KIT (


3-3-35(2015-12-13) 3rd CHiME Speech Separation and Recognition Challenge at ASRU 2015


3rd CHiME Speech Separation and Recognition Challenge
              Supported by IEEE ASRU 2015
                Launch Date: February  2015
               Results: ASRU, Dec 13-17 2015,


Dear colleague,

Following the success of the 2011 and 2013 CHiME challenges it gives us great pleasure to
pre-announce the 3rd CHiME Speech Separation and Recognition Challenge (CHiME-3)

CHiME-3 will be an official IEEE ASRU 2015 Challenge Task. Participants will be invited
to submit CHiME-3 papers to the ASRU workshop to be held in Scottsdale, Arizona 13-17
December. Papers will be presented at a Special Session.


The CHiME-3 scenario will be ASR for a multi-microphone tablet device in everyday, noisy
environments. It will represent a significant step forward in terms of both realism and
difficulty with respect to the previous CHiME challenges.

The challenge will feature:

- 6-channel microphone array data,
- real acoustic mixing, i.e. talkers speaking in challenging noisy environments,
- varied noise settings including cafe, street junction, public transport.

To maintain compatibility with the 2nd CHiME challenge, the new challenge will re-use the
WSJ evaluation framework. Utterances will be provided embedded in continuous audio with
ground truth VAD annotations.


At time of launch in February we will provide:
- a development test set, recorded by 4 US talkers across 4 noise environments,
- a real training set, comprising 2000 utterances spoken by 4 US talkers in noisy
environments plus several hours of noise background per environment,
- tools for generating a simulated training set by remixing WSJ and background audio with
impulse responses estimated from the real data,
- a reference speech enhancement system and a state-of-the-art DNN-based Kaldi ASR system.

As with previous CHiME challenges we invite participation from both the signal processing
and the speech recognition communities. To support teams who lack access to the necessary
GPU infrastructure required to run the evaluation system, we will offer 'remote
evaluation' as a service.

If you are considering participating please email and you will
be added to the email list for receiving further updates.


Feb 20, 2015         --  Launch - Training data + dev data release
May 15, 2015         --  Test set released
July 15, 2015        --  Challenge paper submission deadline
September 11, 2015   --  Paper notification & release of CHiME-3 results
December 13-17, 2015 --  ASRU Workshop


Jon Barker, University of Sheffield,
Ricard Marxer, University of Sheffield,
Emmanuel Vincent, Inria,
Shinji Watanabe, MERL,


3-3-36(2015-12-13) ASRU 2015 : IEEE Automatic Speech Recognition and Understanding Workshop, Scottsdale, AZ, USA

ASRU 2015 : IEEE Automatic Speech Recognition and Understanding Workshop

December 13-17, 2015 Scottsdale, Arizona, USA

Twitter: @ASRU2015




The fourteenth biennial IEEE workshop on Automatic Speech Recognition and Understanding (ASRU) will be held on December 13-17, 2015 in Scottsdale, Arizona, USA. The ASRU workshop meets every two years and has a tradition of bringing together researchers from academia and industry in an intimate and collegial setting to discuss problems of common interest in automatic speech recognition, understanding, and related fields of research.




Authors are encouraged to submit contributions in all areas of spoken language processing, with emphasis placed on the following topics:

-              Automatic speech recognition

-              Spoken language understanding

-              Speech-to-text systems

-              Spoken dialog systems

-              Multilingual language processing

-              Robustness in automatic speech recognition

-              Spoken document retrieval

-              Speech-to-speech translation

-              Text-to-speech systems

-              Spontaneous speech processing

-              Speech summarization

-              New applications of automatic speech recognition




The workshop features one keynote and one or two invited talks a day. Regular papers are presented as posters. See for formatting guidelines. ASRU 2015 will also include challenge tasks, panel discussions and demo sessions.




Three challenge tasks will be reporting results at ASRU 2015:

-              3rd CHiME Speech Separation and Recognition Challenge -

-              Automatic Speech recognition In Reverberant Environments (ASpIRE) -

-              Multi-genre Broadcast Media Transcription Challenge -


Papers related to the challenges will be submitted, reviewed, and evaluated in the same way as all ASRU papers. Accepted papers will be presented as posters in special sessions for each challenge task. 


The challenges themselves are run by their respective organizers, independently of ASRU 2015.   See for participation details. 




Prospective authors are invited to submit full-length, 4-6 page papers, including figures, plus 1-2 additional pages for references only. All papers will be handled and reviewed electronically.




Paper due date: Friday July 10, 2015

Paper Notification: Friday Sept 11, 2015

Registration opens: Friday Sept 11, 2015

Demo/toolkit deadline: Friday Sept 25, 2015

Paper Camera ready version due: Friday Oct 2, 2015

Demo/toolkit notification date: Friday Oct 9, 2015

Author and early registration end: Friday Oct 23, 2015

Demo/toolkit camera ready version due: Monday Oct 26, 2015

Workshop: Dec 13-17, 2015




For updates see, or follow us on twitter: @ASRU2015


3-3-37(2015-12-13) Call for demonstrations at ASRU 2015

Demonstration & Toolkit Call for Proposals

The program committee for the 14th biennial IEEE workshop on Automatic Speech Recognition and Understanding is accepting proposals for the Demo & Toolkit session that will be held during the workshop. The demonstration session has become an exciting highlight of the ASRU workshops. The event will include demonstrations of the latest innovations by research groups in industry, academia, and government. Demonstrations can be related to any of the topics defined by ASRU:

  - ASR / LVCSR systems
  - Language modeling
  - Acoustic modeling
  - Decoder / search
  - Spoken language understanding
  - Spoken dialog systems
  - Multilingual speech & language processing
  - Robustness in speech recognition
  - Spoken document retrieval
  - Speech to speech translation
  - Text-to-speech
  - Speech summarization
  - New applications of ASR
  - Speech signal processing
  - Neural networks in ASR
  - Low / zero resources
  - Mobile applications in speech processing
  - Far field speaker and speech recognition

The deadline for submission of proposals for the Demo & Toolkit session is September 18, 2015 with notification of acceptance by October 2, 2015.

Submissions should be mailed to the Demonstration Chairs ( ). Proposals should include the demonstration title, list of authors, and an abstract of no more than two pages. The proposal should clearly explain what is novel and innovative in the proposed demonstration or toolkit. For demonstrations, the proposal should detail what will be demonstrated. For toolkits, the proposal should explain where the toolkit can be obtained.

Each demonstration will be allotted one table, space for a poster, and a power outlet. Presenters are responsible for all other equipment and shipping to and from the workshop. Wireless internet will also be available. If you have any special requirements, please contact the Demonstration Chairs.

ASRU 2015 Demonstration Chairs

Thomas Schaaf, Amazon (e-mail: )
Patrick Nguyen, Metanautix (e-mail: )
Marsal Gavaldà, Expect Labs (e-mail: )

For updates see <>, or follow us on twitter: @ASRU2015


3-3-38(2015-12-13) Calls for Challenge Task Proposals ASRU 2015, Scottsdale, Az, USA (updated)


IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU) 2015






Submission deadline: December 31, 2014




ASRU 2015 welcomes proposals for challenge tasks. In a challenge task, participants compete or collaborate to accomplish a common or shared task. The results of the challenge will be presented at the ASRU workshop event in the form of papers reporting the achievements of the participants, individually and/or as a whole. We invite organizers to concretely propose such challenge tasks in the form of a 1-2 page proposal. The proposal should include a description of




* The task and its intended goal


* The task organizers and key contact people for the various aspects of the task


* The data or shared resource that is to be used


  * Details on the availability or its collection process


  * Required labeling or other pre-processing and the expected timeline of this process


  * Privacy concerns around the data or resource as it will be released to all participants


  * Licensing terms or conditions for participants


* The evaluation process: how the test set will be defined, what figure of merit will be used to measure success, and how a common scoring process will be put in place to arrive at comparable results for all participants


* The timeline: when training/test material will be made available, and when participant (sub-)system submissions are due


* The expected (number of) participants, and whether this is a new installment of an existing challenge or an entirely new challenge series


* Any special requests or circumstances, e.g., required timing or format of the challenge execution
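
As an illustration of the figure-of-merit requirement above: ASR-oriented challenges typically score systems by word error rate (WER), the word-level edit distance between a reference transcript and a system hypothesis, normalized by the reference length. A minimal sketch (the function name and example strings are ours, not part of any challenge's scoring tools):

```python
def word_error_rate(reference, hypothesis):
    """WER = word-level Levenshtein distance / number of reference words.

    Assumes a non-empty reference; real scoring pipelines also normalize
    text (case, punctuation) before alignment.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution,      # substitute (or match)
                          d[i - 1][j] + 1,   # delete a reference word
                          d[i][j - 1] + 1)   # insert a hypothesis word
    return d[len(ref)][len(hyp)] / len(ref)

# One inserted word against a 3-word reference: WER = 1/3
print(word_error_rate("the cat sat", "the cat sat down"))
```

A shared scoring script implementing a single metric like this is what lets results from all participants be compared directly.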




Participants will report their achievements in the form of regular format paper submissions to the ASRU workshop. These submissions will undergo the normal ASRU review process, but the organizers can suggest reviewers that would be particularly insightful for the challenge subject matter. Accepted papers will be organized in a special session at the conference (in poster format; the only format used at ASRU). The accepted papers will appear in the ASRU proceedings. Given the possibly lengthy process of organizing and executing a special challenge, prospective organizers are encouraged to submit proposals as soon as possible. The ASRU technical program committee will make acceptance decisions based on a rolling schedule -- i.e., proposals are reviewed as soon as they come in. Challenge proposals should be sent to Technical Program co-chair Michiel Bacchiani at, and will be accepted until the end of 2014.


3-3-39(2015-xx-xx) Dialog State Tracking Challenge 4


              Dialog State Tracking Challenge  4
                 First Call for Participation
                 April 1, 2015 (Registration opens)

*--- MOTIVATION ---*

Dialog state tracking is one of the key sub-tasks of dialog management: it defines the representation of dialog states and updates them at each moment of an ongoing conversation. To provide a common benchmark for this task, the first Dialog State Tracking Challenge (DSTC) was organized [1]. More recently, Dialog State Tracking Challenges 2 & 3 have been successfully completed [2].

In this fourth edition of the Dialog State Tracking Challenge, we will focus on a dialog state tracking task on human-human dialogs. We expect these shared efforts on human dialogs will contribute to progress in developing much more human-like systems. In addition to the main task, we propose four pilot tracks for the core components in developing end-to-end dialog systems, and an open track based on the same dataset.

The provided dataset consists of 35 dialog sessions between 3 tour guides and 35 tourists, with a total length of 21 hours, plus manual transcriptions and speech act and semantic label annotations at the turn level.


Main task:
     • Dialog State Tracking at Sub-dialog Level: Fill out the frame of slot-value pairs for the current sub-dialog considering all dialog history prior to the turn. A baseline system will be provided.

Pilot tasks (optional):
     • Spoken language understanding: Tag a given utterance with speech acts and semantic slots.
     • Speech act prediction: Predict the speech act of the next turn imitating the policy of one speaker.
     • Spoken language generation: Generate a response utterance for one of the participants.
     • End-to-end system: Develop an end-to-end system playing the part of a guide or a tourist.

Open track (optional):
•    Proposed by teams willing to work on any task of their interest over the provided dataset
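
To make the main task concrete, a dialog state at sub-dialog level can be represented as a frame of slot-value pairs accumulated over the dialog history. The toy tracker below is a naive string-matching illustration under an invented mini-ontology (the slot names, values, and example turns are hypothetical; the actual DSTC4 ontology and baseline are far richer):

```python
def track_state(turns, ontology):
    """Fill a slot-value frame from the full dialog history.

    turns    -- list of utterance strings for the current sub-dialog, in order
    ontology -- dict mapping slot name -> set of possible values
    """
    frame = {}
    for utterance in turns:
        text = utterance.lower()
        for slot, values in ontology.items():
            for value in values:
                # Naive substring match; later mentions overwrite earlier
                # ones, so the frame reflects the most recent evidence.
                if value.lower() in text:
                    frame[slot] = value
    return frame

# Hypothetical mini-ontology and tourist turns:
ontology = {"AREA": {"Chinatown", "Orchard"}, "CUISINE": {"seafood", "laksa"}}
turns = ["I'd like to visit Chinatown.", "Any good seafood places there?"]
print(track_state(turns, ontology))
# {'AREA': 'Chinatown', 'CUISINE': 'seafood'}
```

A tracker is evaluated on how well its frame matches the annotated slot-value pairs at each turn, which is why the challenge provides turn-level semantic labels.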


01 Apr 2015: Registration opens
15 Apr 2015: Labeled training data is released
17 Aug 2015: Unlabeled test data is released
31 Aug 2015: Entry submission deadline
04 Sep 2015: Evaluation results are released
      Jan 2016: Results presented at IWSDS 2016


Seokhwan Kim (I2R, Singapore)
Luis Fernando D'Haro (I2R, Singapore)
Rafael E. Banchs (I2R, Singapore)
Jason D. Williams (Microsoft, USA)
Matthew Henderson (U. Cambridge, UK)

Seokhwan Kim: kims AT
Luis Fernando D'Haro: luisdhe AT
1 Fusionopolis Way, #21-01, Singapore 138632
Fax: (+65) 6776 1378

*--- REFERENCES ---*


3-3-40(2016-01-09) CfA Speech Processing in Realistic Environments - SPIRE, Groningen, The Netherlands


Speech Processing in Realistic Environments  - SPIRE

 9 January 2016, Groningen, the Netherlands

 In cooperation with SPIN 2016 and

 Deadline for submission of abstracts: 20 September 2015


Description of the workshop

Although listeners often experience more than one adverse condition simultaneously (e.g., noise and visual distraction), classical research methods have traditionally addressed adverse conditions only individually. This has contributed to the fragmentation of speech communication research into numerous sub-disciplines that rarely interact. While each type of adverse condition can have important consequences on its own, it is often the combination of conditions that conspires to create serious communication problems, especially for elderly and hearing-impaired individuals.

In 2012, a Marie Curie Initial Training Network called Investigating Speech Processing in Realistic Environments (INSPIRE) was initiated with the aim of creating a community of researchers who can exploit synergies between the sub-disciplines of speech communication. The purpose of this workshop is to bring together researchers with common interests in human and automatic speech recognition in challenging conditions of real environments (e.g., under increased cognitive load, divided attention, environmental noise, accented speech, non-native knowledge, hearing impairment & hearing loss).

Topics of the workshop include, but are not limited to:

  • State of the art empirical research on speech perception in challenging, realistic listening environments
  • Experimental and clinical methods for research in naturalistic speech perception
  • Computational modeling of speech intelligibility for normal-hearing and hearing-impaired listeners under realistic conditions
  • Tools and corpora for testing and comparing speech intelligibility
  • Integration of human auditory processing and machine speech recognition


Keynote speakers 


Submission process 

We call for extended abstracts (1 page) covering original, unpublished research, or functioning as a review, introduction or opinion piece on a relevant topic. Submissions can also include work in progress. Submissions must be written in English and are limited to 1 page, excluding references. Abstracts about conducted research should contain analysis results and a brief discussion. References should be put on the second page. Submissions longer than 1 page will be rejected. The conference will be conducted in English. We accommodate both oral talks and poster presentations.

Submission is managed through the website.


Important dates

20 September: abstract submission deadline to SPIN and SPIRE (authors indicate their preference for SPIN or SPIRE)

15 October: Notification of acceptance of submissions

30 October: registration opens

15 November: final submission deadline for  updates of approved contributions

9 January 2016: SPIRE workshop


Program Chairs

Bert Cranen (Radboud University Nijmegen)

Sven Mattys (University of York)


3-3-41(2016-03-14)) 10th ICVPB in Viña del Mar, Chile

Welcome to the 10th ICVPB in Viña del Mar

March 14-17, 2016


We are pleased to invite you to the 10th International Conference on Voice Physiology and Biomechanics, ICVPB 2016, celebrated for the first time in the Southern hemisphere, in Viña del Mar, Chile. ICVPB is one of the prime international forums for current scientific research on the larynx and voice.


Brief History of the Conference


The International Conference on Vocal Fold Physiology and Biomechanics (ICVPB) dates back to 1980. Initially called the Voice Physiology Conference, it began with five individuals who brought together voice scientists from Japan and the United States. The five people were Wilbur James Gould, Osamu Fujimura, Kenneth Stevens, Minoru Hirano, and Ingo Titze. The first meeting was held in Kurume, Japan, in 1980. The focus was and has always been basic science, the physical and biological underpinnings of voice production. In total, nine Vocal Fold Physiology meetings were held. After Kurume, the meeting took place in Madison (1982), Iowa City (1984), New Haven (1985), Tokyo (1987), Stockholm (1990), Denver (1992), Kurume (1994), and Sydney (1995). The name of the conference was then changed to ICVPB to include the influx of biomechanics and biology into our field. The first ICVPB meeting was held in 1997 at Northwestern University in Evanston, Illinois, followed by Berlin (1999), Denver (2002), Marseille (2004), Tokyo (2006), Tampere (2008), Madison (2010), Erlangen (2012), Salt Lake City (2014), and now Viña del Mar (2016).

Topic Areas:

  • Fluid-structure-sound interactions in normal and disordered phonation
  • Soft tissue and muscle biomechanics
  • Acoustics, aerodynamics, and kinematics of voice production
  • Laryngeal and voice physiology and neurophysiology
  • Neuromuscular control of normal and disordered phonation
  • Modeling of normal and disordered voice production
  • Modeling vocal fold molecular and cellular biology
  • Imaging and monitoring techniques for the assessment of vocal function

Important Dates:

    • Abstract submission: October 2, 2015

    • Notification of Acceptance: November 30, 2015

    • Final Submission deadline: January 31, 2016

General Chair:
Matias Zañartu

Technical Information:

Logistic Information

Monina Vásquez

Claudia Musalem

Sponsorship Information

Francisco Gutierrez


3-3-42(2016-05-02) 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico,

4th International Conference on Learning Representations (ICLR 2016)

Submission deadline for title and abstract: 5:00 pm EST, November 12th, 2015
Submission deadline for arXiv paper ID: 5:00 pm EST, November 19th, 2015
Location: Caribe Hilton, San Juan, Puerto Rico, May 2-4, 2016

It is well understood that the performance of machine learning methods
is heavily dependent on the choice of data representation (or
features) on which they are applied. The rapidly developing field of
representation learning is concerned with questions surrounding how we
can best learn meaningful and useful representations of data. We take
a broad view of the field, and include in it topics such as deep
learning and feature learning, metric learning, kernel learning,
compositional models, non-linear structured prediction, and issues
regarding non-convex optimization.

Despite the importance of representation learning to machine learning
and to application areas such as vision, speech, audio and NLP, there
was no venue for researchers who share a common interest in this
topic. The goal of ICLR has been to help fill this void.

A non-exhaustive list of relevant topics:
 - unsupervised, semisupervised, and supervised representation learning
 - metric learning and kernel learning
 - dimensionality expansion
 - sparse modeling
 - hierarchical models
 - optimization for representation learning
 - learning representations of outputs or states
 - implementation issues, parallelization, software platforms
 - applications in vision, audio, speech, natural language processing,
   robotics, neuroscience, or any other field

The program will include keynote presentations from invited speakers,
oral presentations, and posters.

ICLR's Two Tracks
As usual, ICLR will feature two tracks: a Conference Track and a
Workshop Track. However, this year, conference and workshop
submissions will be reviewed separately, in two different
periods. This call for papers is thus only for conference
contributions. Workshop submissions will be received a few months
before the conference and be subject to a lighter review. A future
call for papers will be sent with more details on the Workshop Track.

Also, the reviewing period for conference submissions will be
separated into two short rounds (normally 2 reviews in the first
round, 1 review in the second round). The first round will run as
usual. The second round reviews, however, in addition to evaluating
the submissions, will be required to include comments on the content
of the first round reviews. By asking for such comments, we hope to
ensure a minimum of discussion for every paper, and favour
interactions that might either identify factual errors early or reveal
a clearer consensus. Note that some of the submitted conference track
papers that are not accepted to the conference proceedings will be
invited to be presented under the Workshop Track.

ICLR Submission Instructions
By November 12th, authors are asked to enter the
title, abstract and author list for their paper, along with
conflict information. Then, as soon as possible, authors must
post their submission on arXiv. Finally, by
November 19th, authors must update their submission with the arXiv ID of their paper.

Note that there can be up to 3 days of delay between sending a
manuscript on arXiv and receiving your arXiv ID. It is thus important
to post your submission on arXiv early. Note also that you can always
update your submission on arXiv later on, anytime during the review
process. Submissions without an arXiv ID after November 19th will be
automatically removed from

Remember to download the style files and paper template and use within
LaTeX to format your paper. Use of the ICLR 2016 style is mandatory.

When you make your arXiv submission, please be sure to correctly
classify your submission into CoRR categories. Typically, you should
consider the following categories:

  cs.LG: machine learning
  cs.NE: neural networks
  cs.CV: computer vision
  cs.CL: computational linguistics

Virtually all of the ICLR papers should have both cs.LG and cs.NE as
categories and then additional categories depending on the nature of
the problem.

Submission deadline: 11:59 pm PST, November 12th for title and
abstract, 11:59 pm PST, November 19th for arXiv ID.

Regarding the conference submission's 6-9 page limits, these are
really meant as guidelines and will not be strictly enforced. For
example, figures should not be shrunk to illegible size to fit
within the page limit. However, in order to ensure a reasonable
workload for our reviewers, papers that go beyond the 9 pages
should be formatted to include a 9 page submission, with
supplementary material appended at the end of the manuscript and
clearly marked as an appendix, which will be optionally reviewed.

Paper revisions will be permitted, and in fact are encouraged, in
response to comments from and discussions with the reviewers (see
An Open Reviewing Paradigm below).

An Open Reviewing Paradigm
1.  Submissions to ICLR are posted on arXiv prior to being submitted
    to the conference.
2.  Authors submit their paper to either the ICLR conference track or
    workshop track via the ICLR 2016 website.
3.  After the authors have submitted their papers via,
    the ICLR program committee designates anonymous reviewers as usual.
4.  The submitted reviews are published without the name of the
    reviewer, but with an indication that they are the designated reviews.
5.  Anyone can openly (non-anonymously) write and publish comments on
    the paper. Anyone can ask the program chairs for permission to
    become an anonymous designated reviewer (open bidding). The
    program chairs have ultimate control over the publication of each
    anonymous review. Open commenters will have to use their real
    names, linked with their Google Scholar profiles.
6.  Authors can post comments in response to reviews and
    comments. They can revise the paper as many times as they want,
    possibly citing some of the reviews. Reviewers are expected to
    revise their reviews in light of paper revisions.
7.  The review calendar includes a generous amount of time for
    discussion between the authors, anonymous reviewers, and open
    commentators. The goal is to improve the quality of the final versions.
8.  The ICLR program and area chairs will consider all submitted
    papers, comments, and reviews and will decide which papers are to
    be presented in the conference track, which will be invited to be
    presented in the workshop track, and which will not appear at ICLR.
9.  Papers that are presented in the workshop track or are not
    accepted will be considered non-archival, and may be submitted
    elsewhere (modified or not), although the ICLR site will maintain
    the reviews, the comments, and the links to the arXiv versions.

General Chairs
Yoshua Bengio, Université de Montreal
Yann LeCun, New York University and Facebook

Senior Program Chair
Hugo Larochelle, Twitter and Université de Sherbrooke

Program Chairs
Brian Kingsbury, IBM
Samy Bengio, Google

The organizers can be contacted at


3-3-43(2016-05-23) LREC 2016 - 10th Conference on Language Resources and Evaluation, Portoro, Slovenia
LREC 2016 - 10th Conference on Language Resources and Evaluation
Grand Hotel Bernardin, PORTOROŽ, SLOVENIA
23-28 May 2016

MAIN CONFERENCE: 25-26-27 MAY 2016
WORKSHOPS and TUTORIALS: 23-24-28 MAY 2016
Conference web site:
Twitter: @LREC2016


ELRA is glad to announce the 10th edition of LREC, organised with the support of a wide range of international organisations.

LREC is the major event on Language Resources (LRs) and Evaluation for Human Language Technologies (HLT). LREC aims to provide an overview of the state-of-the-art, explore new R&D directions and emerging trends, exchange information regarding LRs and their applications, evaluation methodologies and tools, on-going and planned activities, industrial uses and needs, requirements coming from e-science and e-society, with respect both to policy issues and to scientific/technological and organisational ones.

LREC provides a unique forum for researchers, industrials and funding agencies from across a wide spectrum of areas to discuss problems and opportunities, find new synergies and promote initiatives for international cooperation, in support of investigations in language sciences, progress in language technologies (LT) and development of corresponding products, services and applications, and standards.

Issues in the design, construction and use of LRs: text, speech, multimodality
•    Guidelines, standards, best practices and models for LRs interoperability
•    Methodologies and tools for LRs construction and annotation
•    Methodologies and tools for extraction and acquisition of knowledge
•    Ontologies, terminology and knowledge representation
•    LRs and Semantic Web
•    LRs and Crowdsourcing
•    Metadata for LRs and semantic/content mark-up
•    Best practices in the use of LR citations

Exploitation of LRs in systems and applications
•    Multimedia information and multimodal communication, including Sign Languages
•    LRs in systems and applications such as: information extraction, information retrieval, audio-visual and multimedia search, speech dictation, audio-visual transcriptions and annotations, computer aided language learning, training and education, mobile communication, machine translation, speech translation, summarisation, semantic search, text mining and analytics, inferencing, reasoning, sentiment analysis, etc.
•    Interfaces: (speech-based) dialogue systems, natural language and multimodal/multisensorial interactions, voice-activated services, etc.
•    Use of (multilingual) LRs in various fields of application like e-commerce, e-government, e-culture, e-health, e-participation, mobile applications, digital humanities, Digital Service Infrastructures, etc.
•    Industrial LRs requirements, user needs

Issues in LT evaluation
•    LT evaluation methodologies, protocols and measures
•    Validation and quality assurance of LRs
•    Benchmarking of systems and products
•    Usability evaluation of HLT-based user interfaces and dialogue systems
•    User satisfaction evaluation

General issues regarding LRs & Evaluation
•    International and national activities, projects and collaboration
•    Priorities, perspectives, strategies in national and international policies for LRs
•    Multilingual issues, language coverage and diversity, less-resourced languages
•    Open, linked and shared data and tools, open and collaborative architectures
•    Organisational, economical, ethical and legal issues.


LRs for Actionable Knowledge
Important information to support a range of applications is hidden in Big Data. Automated content analytics is needed for the interpretation of the data and their context, so that they are accurately understood and can be integrated and used in applications. Content analytics makes use of various technologies, like semantic search, keyword suggestions, clustering, classification, etc. What is the role of LRs in such correlation of digital content and context? Can, for example, relations between LRs and Knowledge Graphs for entity linking, disambiguation, reasoning, etc. support the generation of actionable knowledge in Big Data analytics?
More generally, we would like to bring to discussion all issues related to LRs and evaluation means for semantic processing in the Big Data environment.

LRs for Interaction with Devices
There is a growing interest in adapting and improving Natural Language Processing to provide intelligent language interfaces to all kinds of devices that are connected to the Internet (of Things), and also to robots, sensors and the like. We encourage investigating how to relate LRs in this communication set-up with data that are in principle of a non-linguistic nature. How to improve multilingual and multimodal generation of information from sensors, robots and in general from structured data in the Internet of Things? How can LRs optimally be designed and used in this (bi-directional) interaction? How to combine language and sensor streams in multilingual and multimodal virtual worlds?
Are there new or past approaches to Human-Machine dialogue offering easily adaptable solutions, so that we need 'only' to upgrade them to the enormously increased quantity of data and number of interconnected devices?

Identify, Describe and Share your LRs!
Describing your LRs in the LRE Map is now a normal practice in the submission procedure of LREC (introduced in 2010 and adopted by other conferences).
To continue the efforts initiated at LREC 2014 about 'Sharing LRs' (data, tools, web-services, etc.), authors will have the possibility, when submitting a paper, to upload LRs in a special LREC repository. This effort of sharing LRs, linked to the LRE Map for their description, may become a new 'regular' feature for conferences in our field, thus contributing to creating a common repository where everyone can deposit and share data.
As scientific work requires accurate citations of referenced work so as to allow the community to understand the whole context and also replicate the experiments conducted by other researchers, LREC 2016 endorses the need to uniquely identify LRs through the use of the International Standard Language Resource Number (ISLRN), a Persistent Unique Identifier to be assigned to each Language Resource. The assignment of ISLRNs to LRs cited in LREC papers will be offered at submission time.

The Scientific Programme will include invited talks, oral presentations, poster and demo presentations, and panels, in addition to a keynote address by the winner of the Antonio Zampolli Prize.

Submission of proposals for oral and poster (or poster+demo) papers: 15 October 2015
•    Abstracts should consist of about 1500-2000 words, will be submitted through START, and will be peer-reviewed.

Submission of proposals for panels, workshops and tutorials: 15 October 2015
•    Proposals should be submitted via an online form on the LREC website and will be reviewed by the Programme Committee.

The Proceedings will include both oral and poster papers, in the same format.

There is no difference in quality between oral and poster presentations. Only the appropriateness of the type of communication (more or less interactive) to the content of the paper will be considered.

In addition a Book of Abstracts will be printed.

Nicoletta Calzolari - CNR, Istituto di Linguistica Computazionale "Antonio Zampolli",
Pisa - Italy (Conference chair)
Khalid Choukri - ELRA, Paris - France
Thierry Declerck - DFKI GmbH, Saarbrücken - Germany
Marko Grobelnik - Jozef Stefan Institute, Ljubljana - Slovenia
Bente Maegaard - CST, University of Copenhagen - Denmark
Joseph Mariani - LIMSI-CNRS & IMMI, Orsay - France
Asuncion Moreno - Universitat Politècnica de Catalunya, Barcelona - Spain
Jan Odijk - UIL-OTS, Utrecht - The Netherlands
Stelios Piperidis - Athena Research Center/ILSP, Athens - Greece


Sara Goggi, ILC-CNR, Pisa, Italy
Hélène Mazo, ELDA/ELRA, Paris, France



                            ODYSSEY 2016:
                   June 21-24, 2016, Bilbao, Spain


- Regular paper submissions:              January 24, 2016
- Industry track and demos:               February 15, 2016
- Notifications:                          March 15, 2016
- Final papers:                           April 1, 2016


The general themes of the conference include speaker and  
language recognition and characterization. The specific topics 
include, but are not limited to, the following:

o Speaker and language recognition, verification, identification
o Speaker and language characterization
o Features for speaker and language recognition
o Speaker and language clustering
o Multispeaker segmentation, detection, and diarization
o Language, dialect, and accent recognition
o Robustness in channels and environment
o System calibration and fusion
o Speaker recognition with speech recognition
o Multimodal and multimedia speaker recognition
o Confidence estimation for speaker and language recognition
o Corpora and tools for system development and evaluation
o Low-resource (lightly supervised) speaker and language recognition
o Speaker synthesis and transformation
o Human and human-assisted recognition of speaker and language
o Analysis and countermeasures against spoofing and tampering attacks
o Forensic and investigative speaker recognition
o Systems and applications


All regular submissions (max 8 pages) will be reviewed by at least
three members of the scientific review committee. Regular
submissions must include scientific or methodological novelty;
the paper must review the relevant prior work and clearly state
its novelty in the Introduction. Accepted papers will appear
in electronic proceedings.

The Odyssey Organizing Committee recognizes a large gap between
theoretical research results and real-world deployment of these methods.
To foster closer collaboration between industry and academia,
an industry track was introduced at Odyssey 2014 and will be
continued at Odyssey 2016.

Submissions to this track may include a description of your target
application, a product, a demonstrator, or any combination of these.
In addition to voice-biometrics providers, we encourage submissions
from companies in need of speaker or language recognition
technology. Industry paper submissions do NOT have to present
methodological novelty, but MUST address one or more of the following:
- description of the application and the role of speaker/language recognition
- research results and methods that worked well in your application
- negative research results that have NOT worked in practice
- unsolved problems 'out in the wild' that deserve attention

Industry submissions will NOT undergo full peer review, nor will they be
included in the proceedings. A poster session will be allocated for the
industry-track presentations and demos, with auxiliary equipment (tables,
power outlets, etc.) available on request. The organizing committee may select
the most interesting submissions for oral presentation.

Odyssey 2016 will feature two awards:

- A best paper award
- A best student paper award

All regular papers and all special session papers (if any are scheduled)
are candidates for the awards. The awards are given based on the review
reports AND the presentation at the conference. For the best student
paper award, the first author must be a student (i.e., not yet holding
a PhD degree) at the time of paper submission.


Luis J. Rodríguez-Fuentes, chair  University of the Basque Country, Spain
Eduardo Lleida, co-chair          University of Zaragoza, Spain
Jean-François Bonastre            University of Avignon, France
Niko Brümmer                      Agnitio, South Africa
Lukáš Burget                      Brno University of Technology, Czech Republic
Joseph Campbell                   MIT Lincoln Laboratory, USA
Jan 'Honza' Černocký              Brno University of Technology, Czech Republic
Tomi Kinnunen                     University of Eastern Finland, Finland
Haizhou Li                        Institute for Infocomm Research, Singapore
Alvin Martin                      NIST, USA
Douglas Reynolds                  MIT Lincoln Laboratory, USA


Odyssey 2016 will be hosted by two Spanish groups: GTTS,
from the Faculty of Science and Technology of the University of the Basque Country,
and ViVoLab, from the School of Engineering and
Architecture of the University of Zaragoza.

The workshop will be held in Bilbao, a medium-sized city of about
350,000 inhabitants in the north of Spain. The venue, Bizkaia Aretoa, is located
in the heart of the city. The building, designed by the Portuguese architect
Alvaro Siza, hosts all kinds of social, cultural, academic and scientific events,
most of them organized by the University of the Basque Country.

Bilbao is the commercial and administrative centre of a large area of about
one million people living along the Ibaizabal-Nervion estuary. After centuries
of trading and iron industry, Bilbao has in recent decades become a service city,
supported by huge investment in infrastructure and urban renewal that began
with the construction of an underground network (Metro Bilbao) in 1995 and
the opening of the Guggenheim Museum Bilbao in 1997.

The Bilbao airport can be easily reached from several European airports,
including international hubs such as Frankfurt, London, Paris, Amsterdam or Madrid,
which provide worldwide connectivity. The city is connected to the European
road network by the AP-8 toll motorway, to the north of Spain by the A-8 motorway
and to the rest of Spain by the AP-68 toll motorway.

Set in hilly countryside, Bilbao offers many outdoor activities.
Hiking is very popular, as is rock climbing in the nearby mountains.
Mount Artxanda, easily reached from the town centre by a funicular railway,
features a recreational area at the summit, with restaurants, a sports complex
and a balcony with panoramic views. To the south, the natural wonders of
Mount Pagasarri attract hundreds of hikers every weekend.

A few minutes away by public transport, the Bizkaia Bridge, declared a World
Heritage Site in 2006, connects Portugalete and Las Arenas on the left and right
banks of the estuary. On the coast, old fishing villages such as Plentzia, Mundaka
and Lekeitio have become tourist spots thanks to the nearby beaches, where
water sports, especially surfing, are practised. Just an hour away by car,
the beautiful city of San Sebastian, as well as the vineyards and wineries
of La Rioja, are worth a visit.

For more details:

Website (under construction):


3-3-45(2016-07-04) 5ème Congrès Mondial de Linguistique Française, Tours, France


5th Congrès Mondial de Linguistique Française

Organized by the Institut de Linguistique Française (CNRS – FR 2393)

4–8 July 2016

at the Université François Rabelais, Tours



Dates: 4–8 July 2016

Venue: Université François Rabelais de Tours

Website:

Contact:

Institution in charge of the organization

Institut de Linguistique Française – FR 2393 of the CNRS. Email:

Phone: 01 43 13 56 45

Address: 44, rue de l'Amiral Mouchez – 75014 Paris

Website:

Provisional programme

The congress operates through a call for papers. Responses to the call for papers are expected by 30 November 2015. The total number of papers is estimated at around 200.

Four plenary lectures and two plenary round tables will be organized.

The plenary lectures allow internationally renowned invited researchers to present the state of the art in French linguistics:

- Marie-José Béguelin, Université de Neuchâtel (Switzerland)

- Aidan Coveney, University of Exeter (United Kingdom)

- Harriet Jisa, Université Lyon 2

- Alain Polguère, Université de Lorraine

Thematic plenary round tables

- Philology and digital hermeneutics

- French, a language in contact


15 May 2015: opening of the paper submission platform

30 November 2015: deadline for submissions

29 February 2016: notification of acceptance or rejection of proposals, with guidelines for the final version

31 March 2016: receipt of the final versions of the papers



- Franck Neveu, Director of the ILF (Institut de Linguistique Française), Université Paris-Sorbonne

- Gabriel Bergounioux, Université d'Orléans

- Marie-Hélène Côté, Université Laval (Québec)

- Jean-Michel Fournier, with the assistance of Sylvester Osu and Philippe Planchon, Université François Rabelais de Tours

- Linda Hriba, Université d'Orléans

- Sophie Prévost, CNRS, laboratoire Langues, Textes, Traitements informatiques, Cognition (Lattice)


The research units that make up the Institut de Linguistique Française:

Joint research units (Unités Mixtes de Recherche)

Analyse et Traitement Informatique de la Langue Française (ATILF)

UMR 7118 CNRS – Université de Lorraine – Director: Éva Buchi

Bases, Corpus, Langage (BCL)

UMR 7320 CNRS – Université Nice Sophia Antipolis – Director: Damon Mayaffre

Cognition, Langues, Langage, Ergonomie (CLLE)

UMR 5263 CNRS – Université de Toulouse II – Director: Hélène Giraudo. Head of the CLEE-ERSS linguistics team: Cécile Fabre

Linguistic computing team of the Laboratoire d'Informatique Gaspard Monge (LIGM)

UMR 8049 CNRS – Université Paris-Est Marne-la-Vallée – Director: Marie-Pierre Béal. Heads of the linguistic computing team: Eric Laporte and Tita Kyriacopoulou

Interactions, Corpus, Apprentissages, Représentations (ICAR)

UMR 5191 CNRS – Université Lumière Lyon 2 – ENS de Lyon – INRP – Director: Sandra Teston-Bonnard

Laboratoire Parole et Langage (LPL)

UMR 7309 CNRS – Aix-Marseille Université – Director: Noël Nguyen

Langues, Textes, Traitements informatiques, Cognition (Lattice)

UMR 8094 CNRS – ENS – Université Sorbonne Nouvelle – Director: Thierry Poibeau

Lexiques, Dictionnaires, Informatique (LDI)

UMR 7187 CNRS – UP13 – UCP – Director: Gabrielle Le Tallec Lloret

Modèles, Dynamiques, Corpus (MoDyCo)

UMR 7114 CNRS – Université Paris Ouest Nanterre La Défense – Director: Jean-Luc Minel

'Linguistics' team of the Institut des Textes et Manuscrits Modernes (ITEM)

UMR 8132 CNRS – Director: Paolo d'Iorio. Head of the 'Linguistics' team: Irène Fenoglio


UMR 5267 CNRS – Université Paul-Valéry – Montpellier 3 – Director: Agnès Steuckardt. Laboratory representative at the ILF: Christine Béal

Savoirs, Textes, Langage (STL)

UMR 8163 CNRS – Université de Lille – Director: Philippe Sabot. Laboratory representative at the ILF: Georgette Dal

Laboratoire Ligérien de Linguistique (LLL)

UMR 7270 – Université d'Orléans – Université de Tours – CNRS – BnF – Director: Gabriel Bergounioux

Analyse Linguistique Profonde à Grande Echelle (ALPAGE)

UMR-I 001 – INRIA and Université Paris-Diderot – Director: Benoît Sagot

Host research teams (Équipes d'accueil)

Centre de Recherche sur les médiations (CREM)

EA 3476 – Université de Lorraine – PRAXITEXTE group – Director: Jacques Walter. Laboratory representative at the ILF: Béatrice Fracchiolla

Centre de Recherches Inter-langues sur la Signification en Contexte (CRISCO)

EA 4255 – Université de Caen Basse-Normandie – Director: Pierre Larrivée


EA 7345 – Langages, systèmes, discours – Director: Gabriella Parussa. Laboratory representative at the ILF: Florence Lefeuvre

Linguistique et Didactique des Langues Etrangères et Maternelle (LIDILEM)

EA 609 – Université Stendhal Grenoble 3 – Director: Marinette Matthey

Linguistique, Langues et Parole (LiLPa)

EA 1339 – Université de Strasbourg – Director: Rudolph Sock

Sens, Texte, Informatique, Histoire (STIH)

EA 4509 – Université Paris-Sorbonne (Paris 4) – Director: Joëlle Ducos

Remarks on the evaluation of proposals

The Congrès Mondial de Linguistique Française is a major international event on and for French linguistics, characterized by a demanding procedure for evaluating the papers presented at the congress:

- submissions are not abstracts but full papers (10 pages minimum, 15 pages maximum) including a bibliography;

- the handling of submissions, and their distribution across and within thematic committees, is carried out via a scientific-congress management platform - - and EDP - with publication of the proceedings on;

- proposals are evaluated by experts using a unified grid, after anonymization of the submissions;

- the production of a CD-ROM of proceedings with an index and search engine, and of a booklet of abstracts, is handled by dedicated software, which ensures the homogeneity and quality of the result;

- accepted papers are published in full in the proceedings;

- the proceedings are distributed at the opening of the congress.

Partners solicited for funding of the event

- Agence Universitaire de la Francophonie

- CNRS: Institut des Sciences Humaines et Sociales - Section 34 du CNRS

- Ministère de la Culture et de la Communication - Délégation Générale à la Langue Française et aux Langues de France

- Ministère de l'Éducation Nationale, de l'Enseignement Supérieur et de la Recherche

- Université Paris Ouest Nanterre La Défense

- Ville de Tours

- Communauté d'agglomération Tours Plus

- Département d'Indre-et-Loire

- Région Centre-Val de Loire


Scientific presentation

Scientific interest

The fifth Congrès Mondial de Linguistique Française is organized by the Institut de Linguistique Française (ILF), a CNRS Research Federation (FR 2393) under the joint supervision of the CNRS and the Ministère de l'Éducation Nationale, de l'Enseignement Supérieur et de la Recherche. The ILF brings together twenty research laboratories, which co-organize this congress in partnership with numerous national and international associations. Such an organization, jointly undertaken by twenty research units, is exceptional in its scale and in the commitment to scientific partnership it reflects.

The first Congrès Mondial was organized in Paris by the ILF in 2008, the second in New Orleans, the third in Lyon in 2012 and the fourth in Berlin in 2014. Each of these four congresses attracted more than 300 participants, and the results were published online immediately, accompanied by a volume of abstracts and a CD-ROM of proceedings.

The congress is organized without privileging any school or orientation and without theoretical or conceptual exclusivity. Every domain or subdomain, every type of object, every type of question and every research problem concerning French can find its place there.

The CMLF is organized into 15 sessions, which underline the fact that French linguistics is not limited to any one domain set up as a model for the other subdisciplines of the field. Fourteen themes have been selected, covering most of the scientific field: (1) Discourse, Pragmatics and Interaction; (2) Francophonie; (3) History of French: diachronic and synchronic perspectives; (4) History, Epistemology, Reflexivity; (5) Lexicon(s); (6) Linguistics of writing, Text linguistics, Semiotics, Stylistics; (7) Linguistics and Didactics (French as a first language, French as a second language); (8) Morphology; (9) Phonetics, Phonology and Interfaces; (10) Psycholinguistics and Acquisition; (11) Resources and Tools for linguistic analysis; (12) Semantics; (13) Sociolinguistics, Dialectology and Ecology of languages; (14) Syntax. To these fourteen themes a fifteenth, 'multi-theme' session has been added, leaving open the possibility of working across several domains, or even on the margins of traditional disciplinary territories.

Each theme is led by a Chair and coordinated by a Vice-chair (a member of the ILF Steering Committee, or chosen by that committee). The scientific committees include a balanced proportion of French and foreign specialists. Particular care was taken in selecting the committees to ensure that they offer the strongest scientific guarantees for the success of the congress. Each committee therefore includes linguists known worldwide for their contribution to the field. The role of these committees is to select the submitted papers.

Submissions will take the form of short papers of 10 to 15 pages.

All papers (including the plenary lectures) will be published as 10-to-15-page articles in the congress proceedings (on a CD-ROM accompanying a booklet of titles and abstracts) and kept in electronic form on the CMLF website. The electronic archive will remain accessible after the congress.

Scientific committee

The Scientific committee is composed of the committees of the 14 congress themes and of those responsible for the multi-theme session:

- Discourse, Pragmatics and Interaction

Chair: Sabine Diao-Klaeger (Universität Koblenz-Landau, Germany). Vice-chair/coordinator: Christine Béal (Université Paul-Valéry – Montpellier 3)

Other committee members: Chantal Claudel (Université Paris 8), Gaétane Dostie (Université de Sherbrooke, Canada), Laurent Fillietaz (Université de Genève, Switzerland), Marie-Noëlle Guillot (University of East Anglia, United Kingdom), Catherine Kerbrat-Orecchioni (Université Lumière - Lyon 2), Sophie Moirand (Université Sorbonne Nouvelle - Paris 3), Kerry Mullan (Royal Melbourne Institute of Technology, Australia), Juan Manuel Lopez Muñoz (Universidad de Cádiz, Spain), Christian Plantin (Université Lumière - Lyon 2), Agnès Steuckardt (Université Paul-Valéry - Montpellier 3), Britta Thörle (Universität Siegen, Germany), Frédéric Torterat (Université Nice Sophia Antipolis), Patricia Von Münchow (Université Paris Descartes), Véronique Traverso (Université Lumière - Lyon 2)


Discourse analysis, in its contemporary sense, is defined essentially by relating concrete manifestations of language to the conditions of their production, and thus involves taking into account the speaker, the referent and the communication situation. Seen from this angle, discourse, whether written or oral, is characterized by the presence of the enunciator's subjectivity (linguistics of enunciation) and also by the way the speaker stages, more or less implicitly, voices other than their own about the same object (dialogism). Pragmatics has a very broad field of application, covering all aspects relevant to the interpretation of utterances, linked not only to the linguistic system but also to the context of production. Its domain has been further enriched by the development of new practices for building corpora of oral and video data, which make it possible to integrate a wide variety of phenomena into analyses (prosody and multimodality in particular). In the case of verbal interactions, it is the co-presence (face to face, on the telephone, on Skype) of two or more people that exerts a decisive influence on the form and content an utterance will take. For some linguists, interactions are simply a subcategory of discourse, with characteristics of their own (notably the interactive context), but one that cannot be described as an entirely autonomous object (some indeed speak of discourse-in-interaction). In parallel, conversation analysis has developed a methodology and objectives distinct from those of discourse analysis (a strictly empirical and inductive approach, focusing on situated uses, sequential context and multimodal conduct).
This section, open to every form of discourse and interaction analysis, will nevertheless favour approaches that are clearly anchored in empirical data and that question the theoretical interconnections between the fields of discourse analysis, pragmatics and interaction.

- Francophonie

Chair: Chantal Lyche (University of Oslo, Norway). Vice-chair/coordinator: André Thibault (Université Paris-Sorbonne)

Other committee members: Fouzia Benzakour (Université de Rabat and Université de Sherbrooke), Peter Blumenthal (Universität zu Köln, Germany), Jürgen Erfurt (Goethe Universität Frankfurt am Main, Germany), Carole de Féral (Université Nice Sophia Antipolis), Michel Francard (Université Catholique de Louvain, Belgium), Andres Kristol (Université de Neuchâtel, Switzerland), Gudrun Ledegen (Université de Rennes 2), Salah Mejri (Université Paris-XIII)


The study of French across the francophone world occupies an ever-larger place in scientific discussion, in step with the spread of the language around the world. This polymorphous object can be approached in several ways: internal viewpoints, whether phonetic/phonological, morpho-syntactic or lexico-semantic, benefit from being crossed with external viewpoints (factors of diachronic, diastratic, pragmatic and stylistic variation; language contact, code alternation and code mixing; language attrition, accommodation and loyalty; etymology, word history and historical-differential lexicography; the development of national norms; literary semiotics). The session invites papers related to any of these approaches, within any theoretical framework.

- History of French: diachronic and synchronic perspectives

Chair: Lene Schøsler (University of Copenhagen, Denmark). Vice-chair/coordinator: Sophie Prévost (CNRS/ENS/Université Sorbonne Nouvelle)

Other committee members: Wendy Ayres-Bennett (Cambridge University, United Kingdom), Eva Buchi (CNRS/Université de Lorraine), Anne Carlier (Université Lille 3), Bernard Combettes (Université de Lorraine), Walter De Mulder (University of Antwerp, Belgium), Monique Dufresne (Queen's University, Kingston, Ontario, Canada), Céline Guillot-Barbance (ENS de Lyon), Christiane Marchello-Nizia (ENS de Lyon), Nicolas Mazziotta (Universität Stuttgart, Germany), Maria Selig (Universität Regensburg, Germany), Richard Waltereit (Newcastle University, United Kingdom)


Properly diachronic studies, covering the evolution of phenomena across the centuries or short diachronies (including the language of the 20th–21st centuries), are encouraged, whatever domain they belong to (phonetics, morphology, syntax, semantics or pragmatics), whether written or oral, and whether the analyses are descriptive or more specifically theoretical.

Work aiming to deepen or discuss theories of language change will also be welcome.

Finally, synchronic studies devoted to a specific earlier period, before the 20th century, will also find their place in this section.

- History, Epistemology, Reflexivity

Chair: Bernard Colombat (Université Paris-Diderot). Vice-chair/coordinator: Franck Neveu (Université Paris-Sorbonne)

Other committee members: Danielle Candel (CNRS/Université Paris-Diderot), Marie-Christine Lala (Université Sorbonne Nouvelle-Paris 3), Jacqueline Léon (Université Paris-Diderot), Sophie Piron (Université du Québec, Montréal), Pierre-Yves Testenoire (Université Sorbonne Nouvelle-Paris 3), Anne-Gaëlle Toutain (Université Sorbonne Nouvelle-Paris 3)


The history and epistemology of linguistics have developed considerably over recent decades, testifying to the crucial need for linguists to reflect on the objects, orientations, language, boundaries and historicity of their field of research. The 'History, Epistemology, Reflexivity' session of the congress aims to take stock of this set of issues. To this end, it invites submissions oriented, in particular, towards the following questions:

- grammatization and the history of French;

- French linguistics as the linguistics of French or as a French theorization of languages; models and research practices in French linguistics; the notion of 'tradition' in linguistics; the 'French grammatical tradition'; the notion of a 'national linguistics';

- the history of theories of languages and language as a component of linguistic reflexivity; the notion of a 'linguistic school';

- linguistic terminology and terminography;

- the history of French metalanguage; the historicity of French linguistics; the foundations and aims of historiography in French linguistics; the construction and use of textual databases in the history of linguistics; the editing of early grammatical texts; the use of corpora in linguistic terminography; the scientific exploitation of the first French linguistic tools;

- the interface between the science of language and the philosophy of language; the philosophical turn of linguistics; the philosophy of linguistics, etc.

- Lexicon(s)

Chair: Jean-François Sablayrolles (Université Paris 13). Vice-chair/coordinator: Francis Grossmann (Université Stendhal - Grenoble 3)

Other committee members: Xavier Blanco (Universitat Autònoma de Barcelona, Spain), François Gaudin (Université de Rouen and LDI), Alicja Kacprzak (University of Lodz, Poland), Marie-Claude L'Homme (Université de Montréal, Canada), Aïno Niklas-Salminen (Université Aix-Marseille), Alain Polguère (Université de Lorraine and IUF), Agnès Tutin (Université Stendhal - Grenoble 3), Camille Vorger (Université de Lausanne, Switzerland), Esme Winter-Froemel (Universität Trier, Germany)


The lexicon is related to (almost) every branch of the language (to which branch would it be completely foreign?) and, consequently, lexicology is related to (almost) every branch of the language sciences. Developments in theoretical approaches within the language sciences (constructional morphology, combinatorial studies, cognitive linguistics, computational approaches, corpus linguistics, lexicometry, textometry, discourse analysis...) therefore have repercussions on lexical studies. Alongside these varied synchronic studies, there is also a return to the history and evolution of words and their meanings. New lines of thought have developed on the nature of lexical units and the elements that form them, on their treatment as polysemous or homonymous, on processes of fixing and unfixing of set phrases, on neology and on the evolution of the lexicon, etc., all of which has practical repercussions on the making of dictionaries (traditional or NLP-oriented), language teaching, translation... This session aims to bring together perspectives from lexicology, terminology, lexicography, metalexicography, the construction of electronic lexicons for natural language processing, lexicon-based text analysis... The Lexicon(s) session invites contributors to submit proposals on all aspects of the study of the French lexicon: description and/or modelling, in either a historical-comparative or a synchronic perspective.

- Linguistics of writing, Text linguistics, Semiotics, Stylistics

Chair: Thomas Broden (Purdue University, United States). Vice-chair/coordinator: Irène Fenoglio (ITEM, CNRS-ENS)

Other members of the evaluation committee: Driss Ablali (Université de Lorraine), Céline Beaudet (Université de Sherbrooke, Canada), Christophe Leblay (University of Turku, Finland), Julie Lefebvre (Université de Lorraine), Aya Ono (Keio University, Japan), Gilles Philippe (Université de Lausanne, Switzerland)


This section invites reflection on the linguistic properties of writing. Several angles of approach are possible: writing as production (genesis, cognition, textualization), constituted writing (enunciative forms, discourse facts, the formation of genres), the text (coherence, components, argumentation), but also the semiotics of writing and stylistics in its theoretical and comparative dimensions. Given the breadth of the theme, preference will be given to proposals whose stakes are not limited to analysing the supporting corpus but show a clear and innovative epistemological and methodological concern. Since the Congrès Mondial de Linguistique Française aims in particular to survey the state of research and to open up new perspectives, priority will in all cases be given to the research question over the corpus.

- Linguistics and Didactics (French as a first language, French as a second language)

Chair: Carole Fleuret (Université d'Ottawa, Canada). Vice-chair/coordinator: Béatrice Fracchiolla (Université de Lorraine)

Other committee members: Nathalie Auger (Université Paul-Valéry – Montpellier 3), Lucile Cadet (Université Paris 8), Pierre Escudé (Université de Bordeaux), Cécile Gois (Université François Rabelais de Tours), Martine Kervran (Université de Brest), Eva Lemaire (University of Alberta, Canada), Jean-François de Pietro (Institut de recherche et de documentation pédagogique de Neuchâtel, Switzerland)


The research areas covered by the didactics of French (as a first or second language) are closely, though not exclusively, linked with various fields of the language sciences, such as psycholinguistics and acquisition; text linguistics; discourse analysis and teaching; sociolinguistics; morphology and the teaching of spelling, reading and writing; syntax and the teaching of grammar; semantics, the lexicon, phraseology and the teaching of vocabulary; etc. The numerous, diverse and complex links between these fields deserve to be explored at this new edition of the CMLF, in all their variety and with all the precision required. These demands are all the stronger given the remarkable diversity of situations in which French is taught and the breadth of research undertaken within this theme, not to mention the social stakes of academic success associated with mastery of French.

Submitted contributions should set out, within a defined linguistic and didactic research question, the notional and methodological foundations on which they are built, as well as the conditions of the observations and applications and the results they have brought to light.

- Morphology

Chair: Angela Ralli (University of Patras, Greece). Vice-chair/coordinator: Georgette Dal (Université de Lille)

Other committee members: Bernard Fradin (Université Paris-Diderot), Nabil Hathout (Université Jean Jaurès), Marianne Kilani-Schoch (Université de Lausanne, Switzerland), Judith Meinschaefer (Freie Universität Berlin, Germany), Fiammetta Namer (Université de Lorraine), Franz Rainer (Institut für romanische Sprachen, Wirtschaftsuniversität, Austria)


The 'Morphology' theme is conceived as a forum for exchange, with no theoretical exclusivity. It welcomes any original submission on the constructional or inflectional morphology of French, where appropriate from a contrastive perspective. The theme is open to theoretical or more applied proposals, as long as they are grounded in French data. Proposals may also concern interfaces, within or outside the system, or adopt a psycholinguistic or natural-language-processing perspective.

The main selection criteria for submissions are:

- the novelty of the linguistic facts studied or the originality of the proposed analysis,

- the empirical grounding of the analyses and the coverage of the data,

- the clarity of the exposition and the solidity of the argumentation,

- knowledge of the scientific literature of the field, national and international.

- Phonétique, Phonologie et Interfaces

Président : Zsuzsanna Fagyal (Université d’Illinois Urbana-Champaign, États-Unis), Vice-président/coordonnateur : Rudolph Sock (Université de Strasbourg)

Autres membres du comité : Lorraine Baqué (Universitat Autònoma de Barcelona, Espagne), Marie-Hélène Côté (Université Laval, Québec), Cécile Fougeron (CNRS/ Université Sorbonne Nouvelle-Paris 3), Randall Gess (Université Carleton, Canada), Bernard Harmegnies (Université de Mons, Belgique), Yvan Rose (Memorial University of Newfoundland, Canada) 9


Les grands phénomènes phonologiques du français, domaine longtemps privilégié des modélisations théoriques, ont reçu ces dernières années un éclairage fructueux grâce aux apports de disciplines connexes. La session phonologie a pour objectif de témoigner des bienfaits de cette synergie et de montrer comment la diversité des approches a permis de réelles avancées dans la compréhension de nombreux problèmes et dans la réflexion phonologique en général. Elle est ouverte à la pluralité des thématiques, et s’intéresse aux regards croisés que la phonologie (phonologie théorique, phonologie de laboratoire), la phonétique, et les disciplines qui les côtoient peuvent apporter aux grandes questions de la phonologie du français et de la théorie phonologique. La session phonologie/phonétique invite à des soumissions d’articles originaux sur tous les aspects de la phonologie/phonétique du français. Cela inclut notamment :

- segmental phonology

- autosegmental phonology

- phonetics and laboratory phonology

- prosody

- the phonetics/phonology interface

- the phonology/morphology interface

- the phonology/syntax interface

- the phonology/pragmatics interface

- the phonology/semantics interface

- the phonology/psycholinguistics interface

- the phonology/sociolinguistics interface

- phonologies in contact

- phonetics, phonology and clinical studies

- Psycholinguistics and Acquisition

Chair: Michèle Kail (CNRS/Université Paris 8), Vice-chair/coordinator: Christophe Parisse (INSERM, Université Paris Ouest Nanterre La Défense)

Other committee members: Sandra Benazzo (Université Paris 8), Séverine Casalis (Université de Lille), Lucile Chanquoy (Université Nice Sophia Antipolis), Michèle Guidetti (Université Toulouse II – Le Mirail), Heather Hilton (Université Lumière – Lyon 2), Sophie Kern (CNRS/Université Lumière – Lyon 2), Virginie Laval (Université de Poitiers), Christelle Maillart (Université de Liège, Belgium), Armanda Martins da Costa (University of Lisbon, Portugal), Colette Noyau (Université Paris Ouest Nanterre La Défense), Anne Salazar Orvig (Université Sorbonne Nouvelle - Paris 3), Hélène Delage (Université de Genève, Switzerland), Marie-Anne Schelstraete (Université Catholique de Louvain, Belgium), Annie Tremblay (University of Kansas, United States), Jürgen Weissenborn (Humboldt University, Germany)

Presentation: Psycholinguistics studies the mental processes and cognitive structures involved in the perception, comprehension, production and acquisition of spoken and written language. It covers a broad field of interdisciplinary research. Studies presented under the 'Psycholinguistics, Acquisition' theme will concern adult and child speakers, whether typical or presenting a language pathology. They will focus on French, in particular where French can reveal specific aspects of processing or development, whether or not in comparison with other languages. These studies may concern monolingual French speakers or speakers who count French among the languages they use.

- Resources and Tools for Linguistic Analysis

Chair: Christiane Fellbaum (Princeton University, United States), Vice-chair/coordinator: Jean-Luc Minel (MoDyCo, Université Paris Ouest Nanterre La Défense and CNRS)

Other committee members: Delphine Battistelli (Université Paris Ouest Nanterre La Défense), Olivier Baude (Université d'Orléans), Farah Benamara (Université Paul Sabatier - Toulouse), Maria Jose Bocorny-Fillato (Federal University of Rio Grande do Sul, Brazil), Anne Condamines (CNRS and Université Toulouse), Serge Heiden (ENS de Lyon), Guy Lapalme (Université de Montréal, Canada), Eric Laporte (Université Paris-Est Marne-la-Vallée), Dominique Longrée (Université de Liège and Université Saint-Louis, Belgium), Yvette Yannick Mathieu (CNRS and Université Paris-Diderot), Emmanuel Morin (Université de Nantes), Jean-Marie Pierrel (Université de Lorraine), Dina Wonsever (Universidad de la República, Uruguay)

Presentation: The availability of large electronic corpora, spoken and written, as well as of resources annotated at various levels (morphological, syntactic, semantic and discursive), opens the way to work that challenges the classical approaches of the language sciences. The development of computational tools (such as tools for collecting language data, transcription aids, automatic or manual annotation tools, analysis tools based on symbolic and/or statistical processing, machine learning systems, etc.) is transforming methods of access to sources and reshaping the practice of linguistic study. The pooling and capitalization of resources has now become a major challenge for the whole community, raising issues of interoperability and standardization, as well as legal, ethical and deontological questions. Various international initiatives are thus contributing to the development of a web of linguistic data (Linguistic Linked Open Data, LLOD), and institutions show a tendency to support this movement: various projects for building 'large' corpora and annotation working groups, and the creation of dedicated laboratories and facilities of excellence, such as the Equipex ORTOLANG, the consortia of the TGIR Huma-Num, the European Research Infrastructure Consortium DARIAH, etc. Taking a different approach from the international conferences specialized in Natural Language Processing (NLP), this session of CMLF 2016 aims to open a space for scientific exchange between different linguistic approaches, without excluding any theoretical framework, methodology, or practice oriented toward theory and/or empiricism. The session will be an opportunity to highlight emerging research as well as work that consolidates existing approaches.
The 'Resources and tools for linguistic analysis' session invites submissions of original papers whose purpose is to build, develop or exploit resources or tools in any domain of French linguistics, spoken as well as written: morphology, syntax, semantics, discourse, phonetics, phonology.

- Semantics

Chair: Maj-Britt Mosegaard Hansen (University of Manchester, United Kingdom), Vice-chair/coordinator: Catherine Schnedecker (Université de Strasbourg)

Other committee members: Hava Bat-Zeev Shyldkrot (Tel Aviv University, Israel), Claire Beyssade (Institut Jean Nicod, CNRS Paris), Jacques François (Université Caen Basse Normandie and Université Sorbonne Nouvelle-Paris 3), Catherine Fuchs (ENS/Université Paris 3), Agatha Jackiewicz (Université Paris-Sorbonne), Anne Le Draoulec (CNRS/Université Toulouse II - Le Mirail), Wiltrud Mihatsch (Ruhr-Universität Bochum, Germany), Jacques Moeschler (Université de Genève), Henning Nølke (Aarhus University, Denmark), Coco Norén (Uppsala University, Sweden), Iva Novakova (Université Stendhal - Grenoble 3), Vincent Nyckees (Université Paris-Diderot), Corinne Rossari (Université de Neuchâtel, Switzerland), Marleen Van Peteghem (Ghent University, Belgium)


The scientific committee of the CMLF Semantics theme welcomes any proposal related to the field as characterized below, with no theoretical or methodological exclusivity.

Beyond the exploration of the now well-identified subdomains that semantics covers (see topics 1 to 8), a prospective dimension will also be considered (topics 9 to 11):

1. Lexical and grammatical semantics in synchrony and diachrony;

2. Semantics and interfaces with other linguistic disciplines: prosody, lexical morphology, syntax, discourse pragmatics, text linguistics, etc.;

3. Pragmatic semantics (presupposition, implicatures, …);

4. General semantics and language typology, contrastive semantics;

5. Semantics and applications in the domains of:

a. mono- and multilingual lexicography;

b. NLP ((clusters of) semantic cues used for text mining; ontology building, …);

c. …

6. Cognitive semantics

7. Formal semantics

8. Semantics and modeling

9. The place and role of semantics in epistemological reflection in the language sciences

10. Perspectives for the semantics of tomorrow

11. New methods of investigation in semantics (contributions of large corpora, text-mining techniques, …)

- Sociolinguistics, Dialectology and Ecology of Languages

Chair: Annette Gerstenberg (Freie Universität Berlin, Germany), Vice-chair/coordinator: Gabriel Bergounioux (Université d'Orléans)

Other committee members: Hélène Blondeau (University of Florida, United States), Janice Carruthers (University of Belfast, United Kingdom), Federica Diémoz (Université de Neuchâtel, Switzerland), Martin Elsig (University of Frankfurt, Germany), Dominique Fattier (Université de Cergy-Pontoise), Narcis Iglesias (University of Girona, Spain), Marinette Matthey (Université Stendhal - Grenoble 3), Chérif Mbodj (CLAD/Université Cheikh Anta Diop, Senegal)


Sociolinguistics is to be understood as the incorporation into linguistics of the variation inherent in languages and their uses. Long grounded in a philological practice of texts and in an analysis of authors that underestimated the heterogeneity of productions, linguistics, once confronted with the description of languages with oral traditions, had to establish usable data by building corpora representative of speakers' knowledge and practices. Fieldwork has revealed the great diversity and variability of phonetic, morphosyntactic and lexical forms. It has made perceptible the differences introduced by discourse genres and the interweaving of facts of language and culture. The study of dialects and creoles, of mixed languages and pidgins, and more generally the notation of languages with oral traditions in contexts of unequal exchange, have transformed traditional representations and descriptive tools. The multilingual realities of contemporary societies raise new sociolinguistic issues. Sociolinguistics, in its broadest sense, contributes to an understanding of phenomena which, in time, fall under diachrony; in space, under dialectology; in social space, under the sociology of language; and in usage, under pragmatics, communication theory, and even ethnomethodology. However, instead of a conception that treats as mere deviations those realizations that do not coincide with an image of the language fixed by writing and normative principles, it takes internal diversity (sociology) and external diversity (ecology of languages) as the very principle of its analysis, prior to the reductions performed to select a stabilized form for transcription or study.
Once the spoken language had prevailed over the written, once living languages had supplanted dead ones, and once the pervasive effects of language contact had ruined the myth of linguistic purity, the circumstances of language use came to the fore and, at the same time, effective analytical tools were developed. Sociolinguistics became the site of a dialogue with disciplines that, in their own domains, faced the same phenomena. Within linguistics, French, through the scale of its international diffusion and the migratory flows across its area of expansion, through its horizon of retrospection, its close observation of the effects of language change and the great diversity of its variation, and through its creolization and its presence on new communication channels, represents a privileged field of observation and a testing ground for contemporary theories. The sociolinguistic tradition of French has illustrated this, and asks only to continue its development in the 'Sociolinguistics, dialectology and ecology of languages' session.

- Syntax

Chair: Michel Pierrard (Vrije Universiteit Brussel, Belgium),

Vice-chair/coordinator: Florence Lefeuvre (Université Sorbonne Nouvelle - Paris 3)

Other committee members:

Christophe Benzitoun (Université de Lorraine), Gilles Corminboeuf (Université de Neuchâtel, Switzerland), Antoine Gautier (Université Paris-Sorbonne), Eva Havu (University of Helsinki, Finland), Hans Petter Helland (University of Oslo, Norway), Dominique Legallois (Université de Caen Basse Normandie), Nathalie Rossi-Gensane (Université Lumière - Lyon 2), Elisabeth Stark (University of Zurich, Switzerland)


The syntax of French is a fundamental domain in the knowledge and description of the language. It takes part in the diversification of research methods and in the renewal of theoretical approaches that spans the various domains of linguistics. It is enriched by confrontation with the diversity of syntactic structures studied in typology and general syntax. Thanks to the current development of varied corpora, both spoken and written, it can refine its conceptual models.

The 'Syntax' section aims to report on the latest advances at the descriptive and theoretical levels. It will welcome varied topics and diverse approaches, while favoring original subjects and innovative methods that contribute to a better understanding of French syntax or that constitute advances in theoretical modeling. Interested researchers are invited to submit papers on any syntactic phenomenon (syntax of categories, clausal and inter-clausal syntax, word order, syntactic variation, etc.).


3-3-46(2016-07-04) Conference JEP 2016 | TALN 2016 | RÉCITAL 2016, Paris, France

Conference JEP 2016 | TALN 2016 | RÉCITAL 2016
4th to 8th July 2016
Paris, France

The laboratories of Paris area working on speech and on written,
spoken and signed language processing organize

from the 4th to 8th of July, 2016,
at Inalco (13th arrondissement of Paris),

the fifth joint edition of the JEP-TALN-RECITAL conference. It will include:

- the 31st Journées d'Etudes sur la Parole (JEP),

- the 23rd French Conference on Natural Language Processing (TALN),

- the 18th Meeting of Student Researchers in Computer Science for
  Natural Language Processing (RECITAL).

** Organization **
President JEP: Sophie ROSSET, LIMSI-CNRS
Co-President JEP: Nicolas AUDIBERT, LPP & Université Sorbonne Nouvelle - Paris 3
President TALN: Thierry HAMON, LIMSI-CNRS and Université Paris 13
Co-President TALN: Laurence DANLOS, Alpage & Université Paris Diderot

** Contact **


3-3-47(2016-07-13) LabPhon 15: Speech Dynamics and Phonological Representation,Cornell University, Ithaca, NY USA

LabPhon 15: Speech Dynamics and Phonological Representation

July 13-16, 2016, Cornell University, Ithaca, NY USA

Phonological representations are dynamic, shaped by forces on diverse timescales.  On the timescale of utterances, interactions between perceptual, motoric, and memory-related processes provide constraints on phonological representations. These same processes, embedded in learning systems and dynamic social networks, shape representations on developmental and life-span timescales, and in turn influence sound systems on historical timescales. Laboratory phonology, through its rich quantitative and experimental methodologies, contributes to our understanding of phonological systems by providing insight into the mechanisms from which representations emerge.

Conference themes:


Production dynamics: How are representations constructed and implemented in speech, and what does articulation reveal about the dynamics of production mechanisms? How do these mechanisms shape representations on longer timescales?

Perceptual dynamics: What forms of perceptual representation do speaker-hearers use and what are the temporal dynamics of perception? How does the interaction between perception and production constrain phonological systems on life-span and diachronic timescales?

Prosodic organization: What are the mechanisms of prosodic organization and how do they give rise to cross-linguistic differences? What are the connections between perception and production of prosodic structure?

Lexical dynamics and memory: How do experience and lexical memory influence phonological representations? What are the relations between lexical representation, production, and perception across diverse timescales?

Phonological acquisition and changes over the life-span: What is the nature of early representations and how do they change? How does learning a second-language interact with existing representations?

Social network dynamics: How does the structure of social networks influence phonological representations on diverse timescales? What are the roles of perception and production in relation to social network dynamics?

Contributions to any of these themes or to any other aspects of laboratory phonology will be welcome. A call for papers will be circulated in the fall of 2015.

Questions can be addressed to

Updates will appear on

Abby Cohn and Sam Tilsen, LabPhon 15 co-chairs

