ISCA - International Speech
Communication Association



ISCApad #191

Monday, May 12, 2014 by Chris Wellekens

3 Events
3-1 ISCA Events
3-1-1(2014-01) INTERSPEECH 2014 Newsletter January 2014

 

English in Singapore

When attending INTERSPEECH 2014 in Singapore, you will probably be glad to know that English is spoken in every corner of the island. Indeed, it is one of the four official languages and the second most common language spoken in Singapore's homes. According to the 2010 census, 89% of the population is literate in English, making Singapore a very convenient place for tourism, shopping, research, or holding a conference.

Historical context of English in Singapore

The history of Singapore begins with the first settlements, established in the 13th century AD [2]. Over the years, Singapore was part of different kingdoms and sultanates until the 19th century, when modern Singapore was founded under the impetus of the British Empire. In 1819, Sir Thomas Stamford Raffles landed in Singapore and established a treaty with the local rulers to develop a new trading station. From that date, the importance of Singapore grew continuously under the influence of Sir Raffles who, despite spending little time on the island, was the real builder of modern Singapore. Singapore remained under British administration until the Second World War and became a Crown Colony after the end of the conflict. There followed a brief period during which Singapore was part of the Federation of Malaysia, before it became independent in 1965 and joined the Commonwealth of Nations.

 

From this history, Singapore retains English as one of its four official languages, as well as many landmarks that deserve a visit besides INTERSPEECH. Amongst them, the Singapore Botanic Gardens, founded in 1859, are internationally renowned [3]. This 74-hectare urban garden was laid out in the tradition of tropical colonial gardens to cultivate and preserve local plants. Including a rain forest, several lakes, an orchid garden and a performance stage, the Singapore Botanic Gardens are a very popular place to enjoy free concerts on weekend afternoons.

 

Another green spot in the “City in a Garden”, the Padang (“field” in Malay), was also created by Sir Raffles, who planned to reserve the space for public purposes. The place is now famous for the two cricket clubs founded in 1870 and 1883 at either end of the field, and for the games that can be watched on weekends.

 

Amongst the numerous landmarks inherited from British colonization, the most famous include St Andrew's Anglican cathedral, the Victoria Theatre, the Fullerton Building, Singapore's City Hall, the Old Parliament House, the Central Fire Station and the many black-and-white bungalows built from the 19th century onwards for wealthy expatriate families. Some of these bungalows, now converted into restaurants, offer a peaceful atmosphere in which to enjoy a local dinner.

The role of English in Singapore

English has a special place in Singapore as the only official language which is not a “mother tongue”. Indeed, Alsagoff [6] framed English as “cultureless” in that it is “disassociated from Western culture” in the Singaporean context. This cultural voiding makes English an ethnically neutral language, used as a lingua franca between ethnic groups [5] after replacing the local Malay in this role [4]. Interestingly, English is the only compulsory language of education, and its status in school is that of First Language, as opposed to the Second Language status assigned to the other official languages. By promoting the use of English as the working language, the government aims to neither advantage nor disadvantage any ethnic group.

 

Nevertheless, the theoretical equality between the four official languages stated in the constitution is not always observed in practice. For instance, English dominates parliamentary business, and some governmental websites are only available in English. Additionally, all legislation is in English only [4].

 

Singapore English

Standard Singapore English is very close to British English, although the island is very cosmopolitan, with 42% of the population born outside the country. Nevertheless, a new standard of pronunciation has recently been emerging [1]. Interestingly, this pronunciation is independent of any external standard, and some aspects of it cannot be predicted by reference to British English or any other external variety of English.

 

The other form of English that you will hear in Singapore is known as Singlish. It is a colorful creole including words from the many languages spoken in Singapore, such as various Chinese dialects (Hokkien, Teochew and Cantonese), Malay and Tamil. Much might be said about Singlish, and a future newsletter will be dedicated to this local variant. Don't miss it!

[1] Deterding, David (2003). 'Emergent patterns in the vowels of Singapore English' National Institute of Education, Singapore. Retrieved 7 June 2013.

 

[2] http://www.yoursingapore.com/content/traveller/en/browse/aboutsingapore/a-brief-history.html (accessed January 7th, 2014)

 

[3] http://whc.unesco.org/en/tentativelists/5786/ (accessed January 7th, 2014)

 

[4] Leimgruber, J. R. (2013). The management of multilingualism in a city-state: Language policy in Singapore. In P. Siemund et al. (Eds.), Multilingualism and Language Contact in Urban Areas: Acquisition, development, teaching, communication (pp. 229-258). Amsterdam: John Benjamins.

 

[5] Harada, Shinichi. 'The Roles of Singapore Standard English and Singlish.' 情報研究 40 (2009): 69-81.

 

[6] Alsagoff, L. (2007). Singlish: Negotiating culture, capital and identity. In Language, Capital, Culture: Critical studies of language and education in Singapore (pp. 25-46). Rotterdam: Sense Publishers.


3-1-2(2014-02) INTERSPEECH 2014 Newsletter February 2014

At the southern tip of the Malayan Peninsula...

…INTERSPEECH 2014 will be held in Singapore, between Malaysia and Indonesia. In the constitution, Malays are acknowledged as the “indigenous people of Singapore”. Indeed, Malays are the predominant ethnic group inhabiting the Malay Peninsula, Eastern Sumatra, Brunei, coastal Borneo, part of Thailand, the southern Burmese coast and Singapore. You now have a better understanding of why Malay is one of the four official – and the only national – languages of Singapore.

A Malay history of Singapore

 

It is said that the city of Singapore was founded in 1299 CE by a prince from Palembang (South Sumatra, Indonesia), a descendant of Alexander the Great. According to the legend, the prince named the city Singapura (“Lion City”) after sighting a beast on the island. While it is highly doubtful that any lion ever lived in Singapore outside the zoo, another story tells that the last surviving tiger in Singapore was shot at the bar of the Raffles Hotel in 1902.

Despite this auspicious foundation, the population of Pulau Ujong (the “island at the end”) did not exceed a thousand inhabitants when, in 1819, Sir Thomas Raffles decided to establish a new port to reinforce British trade between China and India. At that time, the population consisted of different Malay groups (Orang Kallang, Orang Seletar, Orang Gelam, Orang Laut) and a few Chinese. Nowadays, Malays account for 13.3% of Singapore's population, with origins as diverse as Johor and the Riau islands (for the Malays proper), Java (Javanese), Bawean island (Baweanese), the Celebes islands (Bugis) and Sumatra (Bataks and Minangkabaus).

 

Malay language

 

With almost 220 million speakers, the Malay language in its various forms unites the fifth largest language community in the world [1]. The origins of Malay can be traced amongst the very first Austronesian languages, back to 2000 BCE [2]. Through the centuries, the major Indian religions brought a number of Sanskrit and Persian words into the Malay vocabulary, while the Islamization of South East Asia added Arabic influences [3]. Later on, the languages of the colonial powers (mainly Dutch and British) and of migrants (Chinese and Tamil) contributed to the diversity of Malay influences [4, 5]. In return, Malay words have been borrowed into other languages, e.g. in English: rice paddy, orangutan, babirusa, cockatoo, compound, durian, rambutan, etc.

During the golden age of the Malay empires, Malay gained a foothold in the territories of modern Malaysia and Indonesia, where it became a vector for trade and business. Today, Malay is an official language in Malaysia, Indonesia, Brunei and Singapore, and is also spoken in southern Thailand, the Philippines and the Cocos and Christmas Islands in Australia [6].

Malay has 6 vowels, 3 diphthongs and 27 consonants [1,5]. As an agglutinative language, its vocabulary can be enriched by adding affixes to root words [7]. Affixation in Malay consists of prefixation, infixation, suffixation or circumfixation1. Malay also has two proclitics, four enclitics and three particles that may be attached to an affixed word [8].
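To illustrate these affixation processes with commonly cited textbook examples (not drawn from the references above), the root ajar (“to teach”) yields a whole family of words:

ajar → mengajar, “to teach (someone)”, by prefixation
ajar → belajar, “to study”, by prefixation
ajar → ajaran, “teachings”, by suffixation
ajar → pelajaran, “lesson”, by circumfixation (pe-…-an)
tunjuk (“to point”) → telunjuk, “index finger”, by infixation (-el-)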

In Singapore, Malaysia, Brunei and Indonesia, Malay is officially written in the Latin alphabet (Rumi), but an Arabic-based script called Jawi is co-official in Brunei and Malaysia.

Bahasa Melayu in Singapore

 

Bahasa Melayu (the Malay language) is one of the four official languages of Singapore and its ceremonial national language, used in the national anthem and for military commands. However, several creoles remain spoken across the island. Amongst them, Bahasa Melayu Pasar, or Bazaar Malay, a creole of Malay and Chinese, used to be the lingua franca and the language of trade between communities [4, 5]. Baba Malay, another variety of Malay creole influenced by Hokkien and Bazaar Malay, is still spoken by around 10,000 people in Singapore.

Today, Bahasa Melayu is the lingua franca among the Malay, Javanese, Boyanese and other Indonesian groups, and some Arabs, in Singapore. It is used as a means of transmitting familial and religious values amongst the Malay community, as well as in madrasahs, mosques and religious schools. However, with 35% of Malay pupils predominantly speaking English at home and a majority of Singaporeans being bilingual in English, Malay faces competition from English, which is taught as the first language [5].

 

Selamat datang ke Singapura / Welcome to Singapore

 

Opportunities to discover Malay culture in Singapore are everywhere. Depending on time and location, you might want to taste Malay cuisine or one of the succulent Malay cookies while walking around the streets of Kampong Glam or visiting the Malay Heritage Centre.

[1] Tan, Tien-Ping, et al. 'MASS: A Malay language LVCSR corpus resource.' 2009 Oriental COCOSDA International Conference on Speech Database and Assessments. IEEE, 2009.

[2] http://en.wikipedia.org/wiki/History_of_the_Malay_language#Modern_Malay_.2820th_century_CE.29

[3] http://en.wikipedia.org/wiki/Malay_language

[4] http://en.wikipedia.org/wiki/Comparison_of_Malaysian_and_Indonesian

[5] http://en.wikipedia.org/wiki/List_of_loanwords_in_Malay

[6] http://www.kwintessential.co.uk/resources/global-etiquette/malaysia.html

[7] B. Ranaivo-Malacon, 'Computational Analysis of Affixed Words in Malay Language,' presented at the International Symposium on Malay/Indonesian Linguistics, Penang, 2004.

[8] http://www-01.sil.org/linguistics/GlossaryOfLinguisticTerms/WhatIsACliticGrammar.htm

 

1 Circumfixation refers here to the simultaneous addition of morphological units, expressing a single meaning or category, on the left and right sides of the root word.

 


3-1-3(2014-03) INTERSPEECH 2014 Newsletter March 2014

Tamil and Indian Languages in Singapore

Our fifth step toward INTERSPEECH 2014 brings us to the fourth official language of Singapore: Tamil. Today, Indians constitute 9% of Singapore's citizens and permanent residents. They are the third largest ethnic group in Singapore, although the origins of Singaporean Indians are diverse. Usually locally born, they are second-, third-, fourth- or even fifth-generation descendants of Punjabi-, Hindi-, Sindhi- and Gujarati-speaking migrants from Northern India and of Malayalam-, Telugu- and Tamil-speaking migrants from Southern India. The latter group forms the core of the Singaporean-Indian population, accounting for 58% of the Indian community [2, 5].



Before 1819 and Sir Raffles*,

Indianised kingdoms, such as Srivijaya and Majapahit, radiated over South-East Asia. Influenced by Hindu and Buddhist culture, a large area including Cambodia, Thailand, Malaysia, part of Indonesia and Singapore formed Greater India. From this period, Singapore has kept some of its most important pre-colonial artifacts, such as the Singapore Stone, and it is also reported that the hill of Fort Canning was chosen for the first settlement as a reference to the Hindu concept of Mount Meru, which was associated with kingship in Indian culture [1].



Under British rule,

Indian migrants came to Singapore from different parts of India to work as clerks, soldiers, traders or English teachers. By 1824, 7% of the population was Indian (756 residents). The share of the Indian population in Singapore increased until 1860, when it overtook the Malay community to become the second largest ethnic group, at 16%. Due to the nature of this migration, Indians in Singapore were predominantly adult men. A settled community, with a more balanced gender and age ratio, only emerged by the mid-20th century [2]. Although the Indian community grew over the following century, its share of the Singaporean population decreased until the 1980s, especially after the British withdrew their troops following Singapore's independence in 1965.

After 1980, the immigration policy aimed at attracting educated people from other Asian countries to settle in Singapore. This change made the Indian population grow from 6.4% to 9%. In addition to this resident population, many ethnic Indian migrant workers (Bangladeshis, Sri Lankans, Malaysian Indians or Indian nationals) come to work temporarily in Singapore [3].



Tamil language

is one of the longest-surviving classical languages in the world [8]. Existing for over 2,000 years, Tamil has a rich literary history and was the first Indian language to be declared a classical language by the Government of India, in 2004. The earliest records of written Tamil date from around the 2nd century BC and, despite significant grammatical and syntactical change, the language demonstrates grammatical continuity across two millennia.

Tamil is the most widely spoken language of the Dravidian family, with important groups of speakers in Malaysia, the Philippines, Mauritius, South Africa, Indonesia, Thailand, Burma, Réunion and Vietnam. Significant communities can also be found in Canada, England, Fiji, Germany, the Netherlands and the United States. It is an official language in the Indian state of Tamil Nadu and the territories of Puducherry and the Andaman and Nicobar Islands, as well as in Sri Lanka and Singapore.

Like Malay, another local language, Tamil is agglutinative. Affixes are added to words to mark noun class, number and case, or verb tense, person, number, mood and voice [7]. Like Finnish, not a local language, Tamil sets no limit to the length and extent of agglutination, which leads to long words with many affixes whose translation might require several sentences in other languages.

The phonology of Tamil is characterized by the use of retroflex consonants and multiple rhotics. Native grammarians classify phonemes into vowels, consonants and a secondary character called āytam, an allophone of /r/ or /s/ at the end of an utterance. Vowels are called uyireḻuttu (uyir – life, eḻuttu – letter) and are classified into short (kuṟil) and long (neṭil), with five of each type, plus two diphthongs. Unlike in most Indian languages, aspirated and unaspirated consonants are not distinguished in Tamil. Consonants are called meyyeḻuttu (mey – body, eḻuttu – letters) and fall into three categories: valliṉam (hard), melliṉam (soft or nasal) and iṭayiṉam (medium). Voiced and unvoiced consonants are not distinguished either; voicing is determined by the position of the consonant in the word.

Tamil writing currently includes twelve vowels, eighteen consonants and one special character for the āytam, which combine to form a total of 247 characters.
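This total follows from the combinatorics of the script: each of the eighteen consonants combines with each of the twelve vowels into a compound consonant–vowel character, in addition to the standalone letters:

18 consonants × 12 vowels = 216 compound characters
216 + 18 consonants + 12 vowels + 1 āytam = 247 characters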



In Singapore,

Among all the Indian residents in Singapore, 38.8% speak Tamil daily, 39% speak English, 11% speak Malay, and the remaining 11% speak other Indian languages [2, 4]. Tamil is one of the two Indian languages taught as a second language (mother tongue) in public schools, together with Hindi. It is also used in daily newspapers, free-to-air and cable television, radio channels, cinema and theater [5].

In the multicultural environment of Singapore, Tamil influences the other local languages and vice versa. There is especially strong interaction with Malay and with the colloquial Singaporean English known as Singlish. Singaporean usage of Tamil includes some words from English and Malay, while certain words or phrases that are considered archaic in India remain in use in Singapore [2].



During your stay in Singapore,

you can easily get to know Tamil culture through its many aspects. A walk in Little India, whose architecture has been protected since 1989, is a great opportunity to be exposed to Tamil music and lifestyle.

The two-storey shophouses of Singapore's Indian hub host some of the best ambassadors of Indian cuisine. Here you'll find the local version of Tamil cuisine, which has evolved in response to local tastes and the influence of the other cuisines present in Singapore. Other cuisines, such as Singapore-Malay and Peranakan, also include elements of Indian cooking. Must-try Singaporean-Tamil dishes include achar, curry fish head, rojak, Indian mee goreng, murtabak, roti john, roti prata and teh tarik. Note that other Indian cuisines, from Northern India, can also be found.

[1] http://en.wikipedia.org/wiki/History_of_Indians_in_Singapore

[2] http://en.wikipedia.org/wiki/Indians_in_Singapore

[3] Leow, Bee Geok (2001). Census of Population 2000: Demographic Characteristics. p.47-49.

[4] Singapore Census 2010

[5] http://en.wikipedia.org/wiki/Indian_languages_in_Singapore

[6] http://en.wikipedia.org/wiki/Dravidian_language

[7] http://en.wikipedia.org/wiki/Tamil_language

[8] http://en.wikipedia.org/wiki/Classical_language

* Remember the third step to Singapore

 


3-1-4(2014-04) INTERSPEECH 2014 Newsletter April 2014

Multilingualism in Singapore

 

In September this year, when attending INTERSPEECH, get ready for a stimulating multilingual experience. Every year at INTERSPEECH one can hear many languages from all over the world and meet a number of multilingual researchers. However, unlike other editions of the conference, INTERSPEECH 2014 will be held in a highly multilingual environment. From 2000 until 2007, Singapore was ranked the most globalized nation in the world five times1, considering flows of goods, services, people and communications. Indeed, in addition to the four official languages of Singapore, one can experience languages from all five continents in the wet markets and shopping malls. What is more interesting is that many Singaporeans code-switch from one language to another naturally and effortlessly.

To speak or not to speak a language

Multilingualism is the ability to use more than one language. When the number of languages is reduced to two, the most common form, one talks about bilingualism. There are many ways to use a language, so deciding on the minimum abilities a person should have to be considered bilingual is a difficult question. For a long time, linguists limited the definition of bilingual to individuals with native competency in two languages. This very restrictive definition, which assimilates bilingualism to ambilingualism, has now been commonly extended. In its current interpretation, a bilingual person is one who can function at some level in two languages. Whether functioning consists of reading, speaking or, in the case of receptive bilinguals, just understanding does not matter. The degree to which the bilingual subject can interact does not matter either; thus, the ability to ask your way in Bahasa Melayu toward a famous conference venue, or to read a map in Mandarin Chinese, makes you bilingual (as you read these lines).

Bilingualism and Multilingualism

Amongst the most famous multilingual speakers is Giuseppe Mezzofanti, a 19th-century Italian cardinal reputed to speak 72 languages. If claiming to know 72 languages seems a bit too gimmicky, consider the case of a Hungarian interpreter during the Cold War who was able to speak 16 languages (including Chinese and Russian) by the age of 86 [1]. Nevertheless, not all multilinguals are hyper-polyglots, as the ability to learn 12 languages or more is not so common.

The complex mechanism of learning a new language is not yet clear, and many questions remain regarding a possible age limitation or the relationship between already-mastered languages and the ease of learning a new one. Nevertheless, before considering learning an additional language, you should be aware that this is a complicated process with many effects. It might of course open your mind to other cultures and ways of thinking but, more importantly, it can deeply modify your brain. Neuroscience is a very active field when it comes to multilingualism. The powerful imaging tools available, as well as the observation of subjects affected by trauma, have led to a better understanding of the language learning process. Different language areas have been located within the brain, and increased plasticity of the overall structure has been demonstrated in multilingual speakers [2, 3]. Interestingly, the brain structure of simultaneous bilinguals, who learned two languages without formal education during childhood, is similar to that of monolingual subjects. In contrast, learning a second language after gaining proficiency in the first modifies the brain structure in an age-dependent manner [4].

Amongst the benefits of multilingualism, it has been shown to increase the ability to detect grammatical errors and to improve reading ability. In the case of bimodal subjects, who use both a spoken and a signed language, bilingualism improves the brain's executive function, which directs the attention processes used for planning, solving problems and performing various mentally demanding tasks [2].

When it comes to society, multilingualism is the presence of several languages spoken within a reduced area. Speakers do not have to interact or to be multilingual themselves. This phenomenon is observed in many countries and cities around the world and can take different forms. When a structural distribution of languages exists in society, one talks of polyglossia. Multipart-lingualism refers to the case where most speakers are monolingual but speak different languages, while omnilingualism, the least common, describes the situation where no structural distribution can be observed and where it is nearly impossible to predict which language will be spoken in a given context. It is the first of these, polyglossia, that you are going to experience in Singapore.

 

Multilingualism in Singapore

The city-state of Singapore was born from multiculturalism and multilingualism. No wonder, then, that the choice of four official languages was thought to be a central piece of communal harmony. In 1966, considering the lack of natural resources and the dominance of international trade in the local economy, Singaporean leaders decided to reinforce English as a medium of economic development [5]. In 1987, English was officially acknowledged as the first language, while the other official languages were referred to as mother tongues [6]. Singapore's bilingualism is thus described as “English-knowing” because of the central role of English [7].

The success of Singapore is said to be partly the result of this language policy, which fueled the globalization of the Lion City [8]. Indeed, the promotion of English as the common neutral language amongst ethnic groups facilitated Singapore's integration into the world economy. On the other hand, the predominance of English has raised concerns about the decreasing usage of mother tongues and the demise of traditional cultural values [8].

Over the last 30 years, language education has been undertaken by the state as one way to control globalization and to reduce the impact of Western culture, which tends to replace Asian culture [8]. The growing importance of Western culture in Singapore is reflected in the shift of home languages towards English. Therefore, to reinforce the Asian cultural identity, Singapore's government has emphasized the learning of mother tongues. This policy is considered controversial by some, as it led to the popularity of Mandarin Chinese and Bahasa Melayu at the expense of many other Chinese and Malay varieties. It is no doubt a delicate and challenging trade-off between preserving language diversity and enforcing common languages for the convenience of communication and economic development.

 

Moving away from a language policy aimed at boosting economic development will probably take time. The implicit role of languages in Singapore's multi-ethnic society can be significant yet complex. However, there is no doubt that Singaporeans consider multilingualism a major component of their national identity, one that relies on more than the four official languages. One way to realize this during your stay is to ask any Singaporean about her or his language background and to get immersed in the rich diversity of languages spoken in Singapore.

[1] http://www.linguisticsociety.org/resource/multilingualism (accessed on 7 April, 2014)

[2] http://en.wikipedia.org/wiki/Cognitive_advantages_to_bilingualism (accessed on 7 April, 2014)

[3] http://en.wikipedia.org/wiki/Neuroscience_of_multilingualism (accessed on 7 April, 2014)

[4] Klein, D., Mok, K., Chen, J. K., & Watkins, K. E. (2013). Age of language learning shapes brain structure: A cortical thickness study of bilingual and monolingual individuals. Brain and language.

[5] 'Interview: Chinese Language education in Singapore faces new opportunities'. People's Daily Online. 2005-05-13. (accessed on 7 April, 2014)

[6] Pakir, A. (2001). Bilingual education with English as an official language: Sociocultural implications. GEORGETOWN UNIVERSITY ROUND TABLE ON LANGUAGES AND LINGUISTICS 1999, 341.

[7] Tupas, R. (2011). English knowing bilingualism in Singapore: Economic pragmatics, ethnic relations and class. English language education across greater China, 46-69.

[8] http://www.unesco.org/new/en/culture/themes/cultural-diversity/languages-and-multilingualism/ (accessed on 7 April, 2014)

1 A.T. Kearney / Foreign Policy Globalization Index, www.foreignpolicy.com (accessed April 3, 2014)

 


3-1-5(2014-05) INTERSPEECH 2014 Newsletter May 2014

Language Education

“If you talk to a man in a language he understands, that goes to his head. If you talk to him in his own language, that goes to his heart.”

 

Nelson Mandela

There is no doubt that this quote will continue to inspire generations of language learners despite the recent advances in statistical machine translation. Before learning more about the latest discoveries in language learning and machine translation at INTERSPEECH 2014, this month's newsletter is dedicated to language education in Singapore.

Brief history of language education

 

The discovery of the “New World” is sometimes considered the starting point of globalization. Of course, travelers, merchants and scholars did not wait that long to study languages but, surprisingly, the theorization of language education is quite recent. In the 17th century, Latin was commonly used for education, scholarship and religion in Western countries. Its teaching was almost exclusively grammatical until Jan Amos Komenský, a Czech teacher and educator, created a complete course for learning the language [12]. Komenský's major contributions also include the invention of the primer and the textbook, which are now widely used to teach reading and languages.

During the 19th and 20th centuries, research on language education sped up and led to a large number of teaching practices intended to improve the experience of language learners. In 1963, Anthony [14] proposed a three-layer hierarchical framework to describe language teaching, comprising approaches, methods and techniques. Approaches relate to general concepts about the nature of languages, while methods refer to the overall plan of the language-teaching organization, which is implemented in class through techniques aimed at achieving short-term objectives. Anthony's framework was later extended by Richards and Rodgers [15], who in particular refined the concepts of method and technique into designs and procedures intended to be more specific and less descriptive.

Amongst the most popular, structural methods consider languages through the prism of grammar, functional methods focus on language as a vehicle to accomplish certain functions, and interactive methods emphasize social relations such as acts, negotiation and interaction. Of course, this list is not exhaustive and does not address the complexity of the whole range of existing methods.

Language education in the world

 

In Africa, where most countries used to be colonized, language policies strongly depend on the former colonial power and its tolerance of local languages [16], but also on the post-independence political evolution, on the socio-linguistic contours of each country and on the level of education. During colonization, the French, Portuguese and Spanish taught their languages at all levels and from the first day of school. The Germans promoted their language while giving prominence to local languages in the first years of schooling, and the British conducted the first year of education in the local language before switching to English in the following years. In some parts of Western Africa, the British even encouraged the teaching of certain languages such as Hausa, Igbo, Yoruba, Efik, Ga or Ewe, but kept English as a reference point. After independence, most African countries considered reforming education to promote indigenous languages but, in many cases, those policies have been questioned, as teaching in the mother tongue could weaken national unity. Of course, it has been easier for the few monolingual countries (Somalia, Burundi, Rwanda, Botswana, Lesotho, Swaziland, Madagascar, etc.) to promote education in their native languages, while some multilingual countries have chosen to develop regional languages. For instance, Zambia uses six zonal languages in education, Zaire four, and Togo two. For economic reasons, English and French are still taught across the former colonies and remain strong factors of regional cohesion.

 

Bilingual education in South America mostly refers to the teaching of a “mainstream” language such as Spanish or Portuguese to non-Spanish- and non-Portuguese-speaking people [17]. It usually follows a “transition” or “maintenance” model. In the former, the official language progressively replaces the mother tongue, while the latter uses both languages throughout the curriculum. Language distribution in South America is mainly characterized by the fact that many populations live in isolated areas where communication and outside contact are limited; thus, monolingualism is prevalent. In this context, a number of countries launched bilingual education programs in the 1970s. Those programs have produced good results, as bilingual education improved schooling amongst indigenous children.

 

North America’s modern history includes periods of colonialism too. However, language education evolved in a very different way and differs strongly between Canada and the U.S. In the 19th century, the U.S. was especially friendly towards bilingualism, as immigrant communities commonly maintained and published in their native languages [19]. Starting from the 1880s, and due to a huge influx of non-English-speaking immigrants, English was used to develop an “American” identity. Monolingualism has since become the norm, and second-language learning is still uncommon before high school. As a result, only 15 to 20 percent of Americans consider themselves bilingual, compared to 56 percent of Europeans (European Commission survey, 2006). The most common second languages taught in the U.S. include Spanish, due to the large number of recent Spanish-speaking immigrants, followed by French, German, Latin, Mandarin, Italian and Japanese, in descending order of frequency. As a multilingual country, Canada allows two languages of instruction: English by default and French in the case of “Francophone children whose parents qualify for minority language rights” [18]. Additionally, aboriginal languages can be taught as a second language, as all students are required to learn a second language from 9 to 14 years old.

 

In Europe, all children study at least one foreign language as part of their compulsory curriculum, except in Ireland, where instruction includes English and Irish, both considered native languages, and a third European language. In all European countries, English is by far the most commonly learned language, ahead of French, Spanish, German and Russian. On average, children start learning a second language between 6 and 9 years old [20]. This age has decreased markedly in the last 15 years, and it is now common for children to start learning in pre-school. However, the weekly number of hours spent learning a language has not really increased over the same period. Due to the multilingual context, and in order to encourage cross-border exchanges, the European Union strongly encourages the learning of foreign languages: on average, in 2009/10, 60.8% of lower secondary education students were learning two or more foreign languages. From a local point of view, almost all European countries have regional languages, and more than half of the countries use partial immersion to teach both the minority and the state language.

 

In South-East Asia, 550 million inhabitants speak hundreds of languages, including local (Javanese and Hmong, for example), national (Khmer and Thai, for instance) and regional languages (varieties of Chinese and Malay) [2]. Amongst the eleven countries of this region, all except Thailand have endured colonization and been exposed to European languages: Dutch in Indonesia; English in Brunei, Burma, Malaysia, the Philippines and Singapore; French in Cambodia, Laos and Vietnam; and Portuguese in East Timor. After decolonization, most governments used language to strengthen national cohesion and forge a national identity. All eleven Southeast Asian countries have included English in education, often as a foreign language. In certain countries, however, instruction is given in the national language while sciences and technologies are taught in a foreign language, for instance English in Myanmar and French in Laos.

 

 

Singapore:

 

Under British colonial rule, school systems in the four main languages, namely Mandarin Chinese, English, Malay and Tamil, coexisted in Singapore [4]. After World War II, the schools were gradually brought under the control of the government, which decided to establish one of the existing languages as a lingua franca to strengthen national unity. Amongst the possible choices, Malay was considered a good option given Singapore's integration into the Federation of Malaya, and Hokkien was already spoken by the majority of Chinese Singaporeans. However, the government decided to choose English, as it was both a tool for economic development and an ethnically neutral language in the context of Singapore's multi-ethnic population of Chinese, Malays and Indians.

The bilingual education policy was officially introduced in 1966, with the possibility of teaching English as a first or second language. However, schools teaching English as a second language declined rapidly, as English was considered a key element of professional success. By 1986, only a single class of 28 secondary school students was still following a curriculum in Malay, and Malay-medium schools came to a natural demise, as Tamil-medium schools had in 1982; Chinese-medium schools were phased out by the government [4]. The government then officially defined English as the first language and the three other official languages as mother tongues. In order to preserve Asian culture in Singapore, the government imposed the learning of the mother tongue as a second language. This mother tongue is determined for each student by her/his ethnicity: Malay Singaporeans learn Bahasa Melayu, Chinese learn Mandarin, and Indians of Dravidian-language background learn Tamil. Indians are a special case, as non-vernacular languages like Hindi, Bengali, Punjabi, Gujarati and Urdu can be chosen as a mother tongue by non-Tamils, but the state does not provide teachers in those languages [3]. Conversely, all Singaporean Chinese have to learn Mandarin despite the varied linguistic backgrounds present in the local community. Due to this policy, the importance of non-Mandarin Chinese languages has strongly decreased over the last 50 years, and Mandarin is now the first-spoken language in Singaporean homes. Since 2002, Chinese associations in Singapore have offered dialect classes in order to reconnect the population with its Chinese culture and enable the younger generation to talk to the elderly [3]. A third language can be learned starting from secondary school, for which students can choose amongst Mandarin (for non-Chinese), Malay (for non-Malays), Bahasa Indonesia (for non-Malays), Arabic, Japanese (only for Chinese), French, German and Spanish [4,6].

Although it is one of the reasons for Singapore's exceptional economic success, the bilingual policy has been, according to the government itself, a cultural failure. By promoting English as a business and inter-ethnic language, the bilingual policy made the other languages less attractive to the younger generation. Additionally, the mother tongues have been taught as academic disciplines using methods developed for native languages. As a consequence, many Singaporean students do not see the point of learning a language which is not a vector of culture but only a subject of study. Realizing this mistake, the government recently decided to make language learning more engaging and IT-based, for example through the use of smart phones and on-line computer games [5,10].

From a wider perspective, Singapore is unique in Asia in that it maintains a strong national education system at a time when other countries are massively privatizing instruction [11], and also because of the way, probably unparalleled in any other developed country, the state's intervention has changed the people's language and speech patterns [1].

 

 

 

[1] Language, Society and Education in Singapore: Issues and Trends (Second Edition); S. Gopinathan, Anne Pakir, Ho Wah Kam and Vanithamani Saravanan (Eds.); Times Academic Press, Singapore, 1998

[2] T. Clayton, Language Education Policies in Southeast Asia, Elsevier

[3] http://en.wikipedia.org/wiki/Languages_of_Singapore

[4] http://en.wikipedia.org/wiki/Language_education_in_Singapore

[5] news.asiaone.com/News/Education/Story/A1Story20100603-219929.html

[6] http://www.moe.gov.sg/media/press/2004/pr20040318.htm

[7] http://www.moelc.moe.edu.sg/index.php/courses

[8] https://web.archive.org/web/20131002211453/http://libguides.nl.sg/content.php?pid=57257&sid=551371

[9] http://books.google.com.sg/books?id=_Wsh1EbUJB0C&printsec=frontcover#v=onepage&q&f=false

[10] http://enterpriseinnovation.net/article/multimedia-aid-chinese-instruction-singapore-schools

[11] Globalization and Multilingualism in Singapore: Implications for a hybrid identity

[12] http://en.wikipedia.org/wiki/Language_education

[13] http://en.wikipedia.org/wiki/Language_education_by_region

[14] Anthony, E. M. (1963). 'Approach, Method, and Technique'. ELT Journal XVII (2): 63–67. doi:10.1093/elt/XVII.2.63

[15] Richards, Jack; Rogers, Theodore (2001). Approaches and Methods in Language Teaching. Cambridge: Cambridge University Press. ISBN 978-0-521-00843-3

[16] http://fafunwafoundation.tripod.com/fafunwafoundation/id7.html

[17] http://faculty.smu.edu/rkemper/anth_6306/anth_6306_language_and_education_in_latin_america.htm

[18] http://www2.gov.bc.ca/gov/topic.page?id=93A2746B883E4DA89C4E7E584D447E4B

[19] http://www.dailytexanonline.com/opinion/2013/10/06/americans-suffer-from-inadequate-foreign-language-education

[20] http://europa.eu/rapid/press-release_IP-12-990_en.htm

 

 


3-1-6(2014-09-14) CfP INTERSPEECH 2014 Singapore URGENT Action Required

Interspeech 2014

 Singapore 

September 14-18, 2014

 

The INTERSPEECH 2014 paper submission deadline is 24 March 2014. There will be no extension of the deadline. Get your paper submissions ready and gear up for INTERSPEECH in Singapore.

 

 

INTERSPEECH is the world's largest and most comprehensive conference on issues surrounding the science and technology of spoken language processing, both in humans and in machines.

The theme of INTERSPEECH 2014 is 'Celebrating the Diversity of Spoken Languages'. INTERSPEECH 2014 emphasizes an interdisciplinary approach covering all aspects of speech science and technology spanning basic theories to applications. In addition to regular oral and poster sessions, the conference will also feature plenary talks by internationally renowned experts, tutorials, special sessions, show & tell sessions, and exhibits. A number of satellite events will take place immediately before and after the conference. Please follow the details of these and other news at the INTERSPEECH website www.interspeech2014.org.

We invite you to submit original papers in any related area, including but not limited to:

1: Speech Perception and Production

2: Prosody, Phonetics, Phonology, and Para-/Non- Linguistic Information

3: Analysis of Speech and Audio Signals

4: Speech Coding and Enhancement

5: Speaker and Language Identification

6: Speech Synthesis and Spoken Language Generation

7: Speech Recognition - Signal Processing, Acoustic Modeling, Robustness, and Adaptation

8: Speech Recognition - Architecture, Search & Linguistic Components

9: LVCSR and Its Applications, Technologies and Systems for New Applications

10: Spoken Language Processing - Dialogue, Summarization, Understanding

11: Spoken Language Processing - Translation, Information Retrieval

12: Spoken Language Evaluation, Standardization and Resources 

A detailed description of these areas is accessible at: 

 

http://www.interspeech2014.org/public.php?page=conference_areas.html

 

Paper Submission

Papers for the INTERSPEECH 2014 proceedings should be up to 4 pages of text, plus one page (maximum) for references only. Paper submissions must conform to the format defined in the paper preparation guidelines and provided in the Authors’ kit, on the INTERSPEECH 2014 website, along with the Call for Papers. Optionally, authors may submit additional files, such as multimedia files, which will be included in the official conference proceedings USB drive. Authors must declare that their contributions are original and are not being submitted for publication elsewhere (e.g. another conference, workshop, or journal). Papers must be submitted via the online paper submission system, which will open in February 2014. The conference will be conducted in English. Information on the paper submission procedure is available at:

http://www.interspeech2014.org/public.php?page=submission_procedure.html

There will be NO extension to the full paper submission deadline.

 

Important Dates

Full paper submission deadline: 24 March 2014

Notification of acceptance/rejection: 10 June 2014

Camera-ready paper due: 20 June 2014

Early registration deadline: 10 July 2014

Conference dates: 14-18 September 2014

We look forward to welcoming you to INTERSPEECH 2014 in Singapore!

 

Helen Meng and Bin Ma

Technical Program Chairs

 

 

Contact

 

Email: tpc@interspeech2014.org

organizers.interspeech2014@isca-speech.org — For general enquiries

 

Conference website: www.interspeech2014.org


3-1-7(2014-09-14) CfP Speech Technology for the Interspeech App

Call for Proposals

Speech Technology for the Interspeech App

During the past Interspeech conference in Lyon, a mobile application (app) was provided for accessing the conference program, designing personal schedules, inspecting abstracts, full papers and the list of authors, navigating through the conference center, or recommending papers to colleagues. This app was designed by students and researchers of the Quality and Usability Lab, TU Berlin, and will be made available to ISCA and to future conference and workshop organizers free-of-charge. It will also be used for the upcoming Interspeech 2014 in Singapore, and is available under both iOS and Android.

In its current state, the app is limited to mostly touch-based input and graphical output. However, we would like to develop the app into a useful tool for the spoken language community at large, which should include speech input and output capabilities, and potentially full spoken-language and multimodal interaction. The app could also be used for collecting speech data under realistic environmental conditions, for distributing multimedia examples or surveys during the conference, or for other research purposes. In addition, the data which is being collected with the app (mostly interaction usage patterns) could be analyzed further.

The Quality and Usability Lab of TU Berlin would like to invite interested parties to contribute to this development. Contributions could be made by providing ready-built modules (e.g. ASR, TTS, or alike) for integration into the app, by proposing new functionalities which would be of interest to a significant part of the community, and preferably by offering workforce for such future developments.

If you are interested in contributing to this, please send an email with your proposals to

interspeechapp@qu.tu-berlin.de

by October 31, 2013. If a sufficient number of interested parties can be found, we plan to submit a proposal for a special session on speech technology in mobile applications for the upcoming Interspeech in Singapore.

More information on the current version of the app can be found in: Schleicher, R., Westermann, T., Li, J., Lawitschka, M., Mateev, B., Reichmuth, R., Möller, S. (2013). Design of a Mobile App for Interspeech Conferences: Towards an Open Tool for the Spoken Language Community, in: Proc. 14th Ann. Conf. of the Int. Speech Comm. Assoc. (Interspeech 2013), Aug. 25-29, Lyon.


3-1-8(2014-09-14) INTERSPEECH 2014 Singapore

 

 

It is a great pleasure to announce that the 15th edition of the Annual Conference of the International Speech Communication Association (INTERSPEECH) will be held in Singapore during September 14-18, 2014. INTERSPEECH 2014 will bring together the community to celebrate the diversity of spoken languages in the vibrant city state of Singapore.  INTERSPEECH 2014 is proudly organized by the Chinese and Oriental Languages Information Processing Society (COLIPS), the Institute for Infocomm Research (I2R), and the International Speech Communication Association (ISCA).

 

 

 

Ten steps to Singapore

 

You want to know more about Singapore?

 

During the next ten months, the organization committee will introduce you to Singaporean culture through a series of brief newsletters featuring topics related to spoken languages in Singapore. Please stay tuned!

 

 

 

Workshops

 

Submission deadline:  December 1, 2013

 

Satellite workshops related to speech and language research will be hosted in Singapore as well as in Phuket Island, Thailand (1 hr 20 min flight from Singapore) and in Penang, Malaysia (1 hr flight from Singapore).

 

Proposals must be submitted by email to workshops@interspeech2014.org before December 1, 2013. Notification of acceptance and ISCA approval/sponsorship will be announced by January 31, 2014.

 

 

 

Sponsorship and Exhibition

 

The objective of INTERSPEECH 2014 is to foster scientific exchanges in all aspects of Speech Communication sciences with a special focus on the diversity of spoken languages. We are pleased to invite you to take part in this major event as a sponsor. For more information, view the Sponsorship Brochure.

 

 

 

Conference venue

 

INTERSPEECH 2014 main conference will be held in the MAX Atria @ Singapore Expo.

 

 

 

Organizers

 

Lists of the organizing, advisory and technical program committees are available on line (here).

 

 

 

Follow us

 

Facebook: ISCA

 

Twitter: @Interspeech2014 follow hash tags: #is2014 or #interspeech2014

 

LinkedIn Interspeech

 

 

 

Contact

 

Conference website: www.interspeech2014.org

 

organizers.interspeech2014@isca-speech.org — For general enquiries

 

sponsorship@interspeech2014.org — For Exhibition & Sponsorship

workshops@interspeech2014.org — For Workshops & Satellite Events

 

 

 

 

 

 


3-1-9(2014-09-14) Interspeech 2014 special session : Speech technologies for Ambient Assisted Living.

Interspeech 2014 special session : Speech technologies for Ambient Assisted Living.

Submission deadline: 24th March 2014

Singapore, 14-18 September 2014

http://www.interspeech2014.org/public.php?page=special_sessions.html#speech-technologies-ambient

This special session focuses on the use of speech technologies for ambient assisted living: the creation of smart spaces and intelligent companions that can preserve the independence, executive function, social communication and security of people with special needs. Currently, speech input for assistive technologies remains underutilized despite its potential to deliver highly informative data and serve as the primary means of interaction with the home. Speech interfaces could replace or augment obtrusive, and sometimes outright inaccessible, conventional computer interfaces. Moreover, the smart home context can support speech communication by providing a number of concurrent information sources (e.g., wearable sensors, home automation sensors, etc.), enabling multimodal communication. In practice, its use remains limited due to challenging real-world conditions, and because conventional speech interfaces can have difficulty with the atypical speech of many users. This, in turn, can be attributed to the lack of abundant speech material and the limited adaptation of these systems to the user. Taking up the challenges of this domain requires a multidisciplinary approach to define users' needs, record corpora in realistic usage conditions, and develop speech interfaces that are robust to both the environment and users' characteristics and are able to adapt to specific users. This special session aims at bringing together researchers in speech and audio technologies with people from the ambient assisted living and assistive technologies communities to foster awareness between members of either community, discuss problems, techniques and datasets, and perhaps initiate common projects.
Topics of the session will include:

• Assistive speech technology

• Applications of speech technology (ASR, dialogue, synthesis) for ambient assisted living

• Understanding, modelling, or recognition of aged and atypical speech

• Multimodal speech recognition (context-aware ASR)

• Multimodal emotion recognition

• Audio scene and smart space context analysis

• Assessment of speech and language processing within the context of assistive technology

• Speech synthesis and speech recognition for physical or cognitive impairments

• Symbol languages, sign languages, nonverbal communication

• Speech and NLP applied to typing interface applications

• Language modelling for Augmentative and Alternative Communication text entry and speech generating devices

• Deployment of speech and NLP tools in the clinic or in the field

• Linguistic resources; corpora and annotation schemes

• Evaluation of systems and components

Submission instructions: Researchers who are interested in contributing to this special session are invited to submit a paper according to the regular submission procedure of INTERSPEECH 2014, and to select 'Speech technologies for Ambient Assisted Living' in the special session field of the paper submission form. Please feel free to contact the organisers if you have any questions regarding the special session.

Organizers:

Michel Vacher, michel.vacher [at] imag.fr, Laboratoire d'Informatique de Grenoble

François Portet, francois.portet [at] imag.fr, Laboratoire d'Informatique de Grenoble

Frank Rudzicz, frank [at] cs.toronto.edu, University of Toronto

Jort F. Gemmeke, jgemmeke [at] amadana.nl, KU Leuven

Heidi Christensen, h.christensen [at] dcs.shef.ac.uk, University of Sheffield


3-1-10(2014-09-14) Special sessions at Interspeech 2014: call for submissions

--- INTERSPEECH 2014 - SINGAPORE

--- September 14-18, 2014

--- http://www.INTERSPEECH2014.org

INTERSPEECH is the world's largest and most comprehensive conference on issues surrounding the science and technology of spoken language processing, both in humans and in machines.

The theme of INTERSPEECH 2014 is

--- Celebrating the Diversity of Spoken Languages ---

INTERSPEECH 2014 includes a number of special sessions covering interdisciplinary topics and/or important new emerging areas of interest related to the main conference topics.

Special sessions proposed for the forthcoming edition are:

• A Re-evaluation of Robustness

• Deep Neural Networks for Speech Generation and Synthesis

• Exploring the Rich Information of Speech Across Multiple Languages

• INTERSPEECH 2014 Computational Paralinguistics ChallengE (ComParE)

• Multichannel Processing for Distant Speech Recognition

• Open Domain Situated Conversational Interaction

• Phase Importance in Speech Processing Applications

• Speaker Comparison for Forensic and Investigative Applications

• Text-dependent Short-duration Speaker Verification

• Tutorial Dialogues and Spoken Dialogue Systems

• Visual Speech Decoding

A description of each special session is given below.

For paper submission, please follow the main conference procedure and choose the Special Session track when selecting your paper area.

Paper submission procedure is described at:

http://www.INTERSPEECH2014.org/public.php?page=submission_procedure.html

For more information, feel free to contact the Special Session Chair, Dr. Tomi H. Kinnunen, at email tkinnu [at]cs.uef.fi

----------------------------------------------------------------------------------------------------

Special Session Description

----------------------------------------------------------------------------------------------------

A Re-evaluation of Robustness

The goal of the session is to facilitate a re-evaluation of robust speech recognition in the light of recent developments. It is a re-evaluation at two levels:

• a re-evaluation in perspective, brought by the breakthroughs in performance obtained by Deep Neural Networks, which lead to a fresh questioning of the role and contribution of robust feature extraction;

• a literal re-evaluation on common databases, to be able to present and compare the performance of different algorithms and system approaches to robustness.

Paper submissions are invited on the theme of noise-robust speech recognition, and authors are required to submit results on the Aurora 4 database to facilitate cross-comparison of the performance of different techniques.

Recent developments raise interesting research questions that the session aims to help progress by bringing focus to and exploration of these issues. For example:

1. What role is there for signal processing to create feature representations to use as inputs to deep learning, or can deep learning do all the work?

2. What feature representations can be automatically learnt in a deep learning architecture?

3. What other techniques can give great improvements in robustness?

4. What techniques don't work, and why?

The session organizers wish to encourage submissions that bring insight and understanding to the issues highlighted above. Authors are requested not only to present the absolute performance of the whole system but also to highlight the contribution made by various components in a complex system.

Papers that are accepted for the session are encouraged to also evaluate their techniques on new test data sets (available in July) and submit their results at the end of August.

Session organization

The session will be structured as a combination of

1. Invited talks

2. Oral paper presentations

3. Poster presentations

4. Summary of contributions and results on newly released test sets

5. Discussion

Organizers:

David Pearce, Audience dpearce [at]audience.com

Hans-Guenter Hirsch, Niederrhein University of Applied Sciences, hans-guenter.hirsch [at]hs-niederrhein.de

Reinhold Haeb-Umbach, University of Paderborn, haeb [at]nt.uni-paderborn.de

Michael Seltzer, Microsoft, mseltzer [at]microsoft.com

Keikichi Hirose, The University of Tokyo, hirose [at]gavo.t.u-tokyo.ac.jp

Steve Renals, University of Edinburgh, s.renals [at]ed.ac.uk

Sim Khe Chai, National University of Singapore, simkc [at]comp.nus.edu.sg

Niko Moritz, Fraunhofer IDMT, Oldenburg, niko.moritz [at]idmt.fraunhofer.de

K K Chin, Google, kkchin [at]google.com

Deep Neural Networks for Speech Generation and Synthesis

This special session aims to bring together researchers who work actively on deep neural networks (DNNs) for speech research, particularly generation and synthesis, to promote and better understand state-of-the-art DNN research in statistical learning and to compare results with the parametric HMM-GMM model, which is well established for speech synthesis, generation, and conversion. A DNN, with its neuron-like structure, can simulate the human speech production system in a layered, hierarchical, nonlinear and self-organized network. It can transform linguistic text information into intermediate semantic, phonetic and prosodic content and finally generate speech waveforms. Many possible neural network architectures or topologies exist, e.g. feed-forward NNs with multiple hidden layers, stacked RBMs or CRBMs, and Recurrent Neural Networks (RNNs), which have been applied to speech/image recognition and other applications.

We would like to use this special session as a forum to present updated results on the research frontiers, algorithm development and application scenarios. Particular focus areas will be parametric TTS synthesis, voice conversion, speech compression, de-noising and speech enhancement.

Organizers:

Yao Qian, Microsoft Research Asia, yaoqian [at]microsoft.com

Frank K. Soong, Microsoft Research Asia, frankkps [at]microsoft.com

Exploring the Rich Information of Speech Across Multiple Languages

Spoken language is the most direct means of communication between human beings. However, speech communication often demonstrates language-specific characteristics because of, for instance, linguistic differences (e.g., tonal vs. non-tonal, monosyllabic vs. multisyllabic) across languages. Our knowledge of the diversity of speech science across languages, including speech perception, linguistic and non-linguistic (e.g., emotion) information, etc., is still limited. This knowledge is of great significance to facilitate the design of language-specific applications of speech techniques (e.g., automatic speech recognition, assistive hearing devices) in the future.

This special session will provide an opportunity for researchers from various communities (including speech science, medicine, linguistics and signal processing) to stimulate further discussion and new research in the broad cross-language area, and to present their latest research on understanding the language-specific features of speech science and their applications in the speech communication of machines and human beings. This special session encourages contributions from all fields of speech science, e.g., production and perception, but with a focus on presenting language-specific characteristics and discussing their implications for improving our knowledge of the diversity of speech science across multiple languages. Topics of interest include, but are not limited to:

1. characteristics of acoustic, linguistic and language information in speech communication across multiple languages;

2. diversity of linguistic and non-linguistic (e.g., emotion) information among multiple spoken languages;

3. language-specific speech intelligibility enhancement and automatic speech recognition techniques; and

4. comparative cross-language assessment of speech perception in challenging environments.

Organizers:

Junfeng Li, Institute of Acoustics, Chinese Academy of Sciences, junfeng.li.1979 [at]gmail.com

Fei Chen, The University of Hong Kong, feichen1 [at]hku.hk

INTERSPEECH 2014 Computational Paralinguistics ChallengE (ComParE)

The INTERSPEECH 2014 Computational Paralinguistics ChallengE (ComParE) is an open Challenge dealing with speaker characteristics as manifested in the acoustic properties of their speech signal. This year, it introduces new tasks through the Cognitive Load Sub-Challenge, the Physical Load Sub-Challenge, and a Multitask Sub-Challenge. For these Challenge tasks, the COGNITIVE-LOAD WITH SPEECH AND EGG database (CLSE), the MUNICH BIOVOICE CORPUS (MBC), and the ANXIETY-DEPRESSION-EMOTION-SLEEPINESS audio corpus (ADES), with a high diversity of speakers and different languages covered (Australian English and German), are provided by the organizers.

All corpora provide fully realistic data in challenging acoustic conditions and feature rich annotation such as speaker meta-data. They are given with distinct definitions of test, development, and training partitions, incorporating speaker independence as needed in most real-life settings. Benchmark results of the most popular approaches are provided, as in previous years. Transcriptions of the train and development sets will be known. All Sub-Challenges allow contributors to find their own features with their own machine learning algorithm; however, a standard feature set will be provided per corpus that may be used. Participants will have to stick to the definition of training, development, and test sets. They may report on results obtained on the development set, but have only five trials to upload their results on the test sets, whose labels are unknown to them. Each participation must be accompanied by a paper presenting the results that undergoes peer review and has to be accepted for the conference in order to participate in the Challenge.

The results of the Challenge will be presented in a Special Session at INTERSPEECH 2014 in Singapore. Further, contributions using the Challenge data or related to the Challenge but not competing within the Challenge are also welcome.

More information is given on the Challenge homepage:

http://emotion-research.net/sigs/speech-sig/is14-compare

Organizers:

Björn Schuller, Imperial College London / Technische Universität München, schuller [at]IEEE.org

Stefan Steidl, Friedrich-Alexander-University, stefan.steidl [at]fau.de

Anton Batliner, Technische Universität München / Friedrich-Alexander-University, batliner [at]cs.fau.de

Jarek Krajewski, Bergische Universität Wuppertal, krajewsk [at]uni-wuppertal.de

Julien Epps, The University of New South Wales / National ICT Australia, j.epps [at]unsw.edu.au

Multichannel Processing for Distant Speech Recognition

Distant speech recognition in real-world environments is still a challenging problem: reverberation

and dynamic background noise are major sources of acoustic mismatch that heavily degrade ASR

performance, which, by contrast, can be very good in close-talking microphone setups.

In this context, a particularly interesting topic is the adoption of distributed microphones for

the development of voice-enabled automated home environments based on distant-speech interaction:

microphones are installed in different rooms and the resulting multichannel audio recordings capture

multiple audio events, including voice commands or spontaneous speech, generated in various locations

and characterized by a variable amount of reverberation as well as possible background noise.
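To make the kind of multichannel processing involved more concrete, the simplest approach, delay-and-sum beamforming, aligns each microphone channel by its relative time delay before averaging. The sketch below is a generic, hypothetical illustration (integer sample delays only); it is not part of the DIRHA baseline.

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Align each microphone channel by its (integer) sample delay
    relative to a reference microphone, then average the channels.

    channels: list of 1-D numpy arrays, one per microphone
    delays:   per-channel delays in samples
    """
    out = np.zeros(len(channels[0]))
    for sig, d in zip(channels, delays):
        # shift the channel so the delayed wavefront lines up with the reference
        out += np.roll(sig, -d)
    return out / len(channels)

# Toy example: the same pulse arrives one sample later at the second microphone
mic1 = np.array([0.0, 1.0, 0.0, 0.0])
mic2 = np.array([0.0, 0.0, 1.0, 0.0])
enhanced = delay_and_sum([mic1, mic2], delays=[0, 1])
```

In a real room the delays would be estimated (e.g. by cross-correlation) and fractional, but the principle of coherently combining channels to favor the target location is the same.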

The focus of the proposed special session will be on multichannel processing for automatic speech recognition (ASR)

in such a setting. Unlike other robust ASR tasks, where static adaptation or training with noisy data appreciably

improves performance, the distributed microphone scenario requires full exploitation of multichannel

information to reduce the highly variable dynamic mismatch. To facilitate evaluation of the proposed

algorithms, the organizers will provide a set of multichannel recordings in a domestic environment.

The recordings will include spoken commands mixed with other acoustic events occurring in different

rooms of a real apartment.

The data is being created within the framework of the EC project DIRHA (Distant speech Interaction for Robust

Home Applications),

which addresses the challenges of speech interaction for home automation.

The organizers will release the evaluation package (datasets and scripts) on February 17;

the participants are asked to submit a regular paper reporting speech recognition results

on the evaluation set and comparing their performance with the provided reference baseline.

Further details are available at: http://dirha.fbk.eu/INTERSPEECH2014

Organizers:

Marco Matassoni, Fondazione Bruno Kessler, matasso [at]fbk.eu

Ramon Fernandez Astudillo, Instituto de Engenharia de Sistemas e Computadores, ramon.astudillo [at]inesc-id.pt

Athanasios Katsamanis, National Technical University of Athens, nkatsam [at]cs.ntua.gr

Open Domain Situated Conversational Interaction

Robust conversational systems have the potential to revolutionize our interactions with computers.

Building on decades of academic and industrial research, we now talk to our computers, phones,

and entertainment systems on a daily basis. However, current technology typically limits conversational

interactions to a few narrow domains/topics (e.g., weather, traffic, restaurants). Users increasingly want

the ability to converse with their devices over broad web-scale content. Finding something on your PC or

the web should be as simple as having a conversation.

A promising approach to address this problem is situated conversational interaction. The approach leverages

the situation and/or context of the conversation to improve system accuracy and effectiveness.

Sources of context include visual content being displayed to the user, Geo-location, prior interactions,

multi-modal interactions (e.g., gesture, eye gaze), and the conversation itself. For example, while a user

is reading a news article on their tablet PC, they initiate a conversation to dig deeper on a particular topic.

Or a user is reading a map and wants to learn more about the history of events at mile marker 121.

Or a gamer wants to interact with a game’s characters to find the next clue in their quest.

All of these interactions are situated – rich context is available to the system as a source of priors/constraints

on what the user is likely to say.

This special session will provide a forum to discuss research progress in open domain situated

conversational interactions.

Topics of the session will include:

• Situated context in spoken dialog systems

• Visual/dialog/personal/geo situated context

• Inferred context through interpretation and reasoning

• Open domain spoken dialog systems

• Open domain spoken/natural language understanding and generation

• Open domain semantic interpretation

• Open domain dialog management (large-scale belief state/policy)

• Conversational Interactions

• Multi-modal inputs in situated open domains (speech/text + gesture, touch, eye gaze)

• Multi-human situated interactions

Organizers:

Larry Heck, Microsoft Research, larry [at]ieee.org

Dilek Hakkani-Tür, Microsoft Research, dilek [at]ieee.org

Gokhan Tur, Microsoft Research, gokhan [at]ieee.org

Steve Young, Cambridge University, sjy [at]eng.cam.ac.uk

Phase Importance in Speech Processing Applications

In past decades, the amplitude of the speech spectrum was considered the most important feature in

speech processing applications, while the phase of the speech signal received far less

attention. Recently, several findings have demonstrated the importance of phase to the speech and audio processing communities.

Phase estimation alongside amplitude estimation in speech enhancement,

complementary phase-based features in speech and speaker recognition, and phase-aware acoustic

modeling of the environment are the most prominent

reported works, scattered across different communities of speech and audio processing. These examples suggest

that incorporating phase information can push the limits of the state-of-the-art phase-independent solutions

long employed in different aspects of audio and speech signal processing. This Special Session aims

to explore recent advances and methodologies that exploit signal phase information in different

aspects of speech processing. Without a dedicated effort to bring together researchers from different communities,

rapid progress on the usefulness of phase in speech processing applications

is difficult to achieve. Therefore, as a first step in this direction, we aim to promote 'phase-aware

speech and audio signal processing' and to form a community of researchers to organize the next steps.

Our initiative is to unify these efforts to better understand the pros and cons of using phase and the degree

of feasibility for phase estimation/enhancement in different areas of speech processing including: speech

enhancement, speech separation, speech quality estimation, speech and speaker recognition,

voice transformation, and speech analysis and synthesis. The goal is to promote phase-based

signal processing, to study its importance, and to share interesting findings from different

speech processing applications.
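As a minimal illustration of the quantities at stake, the short-time spectrum of a speech frame separates exactly into magnitude and phase, and both are needed for reconstruction. The sketch below is a generic example, not a method proposed by the session.

```python
import numpy as np

def split_and_recombine(frame):
    """Split a windowed speech frame into spectral magnitude and phase,
    then resynthesize it. Exact reconstruction requires both components;
    magnitude-only processing discards the phase term entirely."""
    spectrum = np.fft.rfft(frame)
    magnitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    # Recombine magnitude and phase into a complex spectrum and invert
    rebuilt = np.fft.irfft(magnitude * np.exp(1j * phase), n=len(frame))
    return magnitude, phase, rebuilt

# A short windowed sinusoid as a stand-in for a speech frame
frame = np.hanning(16) * np.sin(2 * np.pi * 3 * np.arange(16) / 16)
mag, ph, rebuilt = split_and_recombine(frame)
```

Here `rebuilt` matches `frame` to numerical precision; replacing `ph` with zeros (or a mismatched phase) would not, which is the intuition behind phase-aware processing.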

Organizers:

Pejman Mowlaee, Graz University of Technology, pejman.mowlaee [at]tugraz.at

Rahim Saeidi, University of Eastern Finland, rahim.saeidi [at]uef.fi

Yannis Stylianou, Toshiba Labs Cambridge UK / University of Crete, yannis [at]csd.uoc.gr

Speaker Comparison for Forensic and Investigative Applications

In speaker comparison, speech/voice samples are compared by humans and/or machines

for use in investigation or in court to address questions that are of interest to the legal system.

Speaker comparison is a high-stakes application that can change people’s lives and it demands the best

that science has to offer; however, methods, processes, and practices vary widely.

These variations are not necessarily for the better and, though recognized, are not generally appreciated

or acted upon. Methods, processes, and practices grounded in science are critical for the proper application

(and non-application) of speaker comparison to a variety of international investigative and forensic applications.

This special session will contribute to scientific progress through 1) understanding speaker comparison

for investigative and forensic application (e.g., describe what is currently being done and critically

analyze performance and lessons learned); 2) improving speaker comparison for investigative and forensic

applications (e.g., propose new approaches/techniques, understand the limitations, and identify challenges

and opportunities); 3) improving communications between communities of researchers, legal scholars,

and practitioners internationally (e.g., directly address some central legal, policy, and societal questions

such as allowing speaker comparisons in court, requirements for expert witnesses, and requirements for specific

automatic or human-based methods to be considered scientific); 4) using best practices (e.g., reduction of bias

and presentation of evidence); 5) developing a roadmap for progress in this session and future sessions; and 6)

producing a documented contribution to the field. Some of these objectives will need multiple sessions

to fully achieve and some are complicated due to differing legal systems and cultures.

This special session builds on previous successful special sessions and tutorials in forensic applications

of speaker comparison at INTERSPEECH beginning in 2003. Wide international participation is planned,

including researchers from the ISCA SIGs for the Association Francophone de la Communication Parlée (AFCP)

and the Speaker and Language Characterization (SpLC).

Organizers:

Joseph P. Campbell, PhD, MIT Lincoln Laboratory, jpc [at]ll.mit.edu

Jean-François Bonastre, l'Université d'Avignon, jean-francois.bonastre [at]univ-avignon.fr

Text-dependency for Short-duration Speaker Verification

In recent years, speaker verification engines have reached maturity and have been deployed in

commercial applications. The ergonomics of such applications are especially demanding and impose

drastic limitations on speech duration during authentication. A well-known tactic to address

the lack of data due to short duration is to use text-dependency. However, recent breakthroughs

achieved in text-independent speaker verification in terms of accuracy and robustness

do not benefit text-dependent applications: the large development datasets required by these recent

approaches are not available in the text-dependent context. The purpose of this special session is

to gather research efforts from both academia and industry toward the common goal of establishing

a new baseline and exploring new directions for text-dependent speaker verification.

The focus of the session is on robustness with respect to duration and modeling of lexical information.

To support the development and evaluation of text-dependent speaker verification technologies,

the Institute for Infocomm Research (I2R) has recently released the RSR2015 database,

including 150 hours of data recorded from 300 speakers. The papers submitted to the special

session are encouraged, but not limited, to provide results based on the RSR2015 database

in order to enable comparison of algorithms and methods. For this purpose, the organizers strongly

encourage the participants to report performance on the protocol delivered with the database

in terms of EER and minimum cost (in the sense of NIST 2008 Speaker Recognition evaluation).
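As a reference for the metric named above, the equal error rate (EER) is the error rate at the decision threshold where the false-rejection rate on target trials equals the false-acceptance rate on impostor trials. The following is a simplified sketch of that computation; the protocol's own scoring scripts should be considered authoritative.

```python
import numpy as np

def equal_error_rate(target_scores, impostor_scores):
    """Return the EER: sweep candidate thresholds and pick the one where
    false-rejection (targets scored below threshold) and false-acceptance
    (impostors scored at or above it) are closest to equal."""
    thresholds = np.sort(np.concatenate([target_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(target_scores < t)      # targets rejected
        far = np.mean(impostor_scores >= t)   # impostors accepted
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer

# Toy scores: higher means more likely a target trial
targets = np.array([0.8, 0.9, 0.7, 0.6])
impostors = np.array([0.1, 0.2, 0.3, 0.75])
eer = equal_error_rate(targets, impostors)
```

The minimum detection cost used by the NIST 2008 evaluation is computed from the same trial scores, but weights misses and false alarms by application-dependent costs and priors rather than equalizing them.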

To get the database, please contact the organizers.

Further details are available at: http://www1.i2r.a-star.edu.sg/~kalee/is2014/tdspk.html

Organizers:

Anthony LARCHER (alarcher [at]i2r.a-star.edu.sg) Institute for Infocomm Research

Hagai ARONOWITZ (hagaia [at]il.ibm.com) IBM Research – Haifa

Kong Aik LEE (kalee [at]i2r.a-star.edu.sg) Institute for Infocomm Research

Patrick KENNY (patrick.kenny [at]crim.ca) CRIM – Montréal

Tutorial Dialogues and Spoken Dialogue Systems

The growing interest in educational applications that use spoken interaction and dialogue technology has boosted

research and development of interactive tutorial systems, and in recent years advances have been achieved

in both the spoken dialogue and the education research communities, with sophisticated speech and multi-modal

technology that allows functionally suitable and reasonably robust applications to be built.

The special session combines spoken dialogue research, interaction modeling, and educational applications,

and brings together the two INTERSPEECH SIG communities: SLaTE and SIGdial. The session focuses

on methods, problems and challenges that are shared by both communities, such as sophistication

of speech processing and dialogue management for educational interaction, integration of the models

with theories of emotion, rapport, and mutual understanding, as well as application of the techniques

to novel learning environments, robot interaction, etc. The session aims to survey issues related

to the processing of spoken language in various learning situations, modeling of the teacher-student

interaction in MOOC-like environments, as well as evaluating tutorial dialogue systems from

the point of view of natural interaction, technological robustness, and learning outcome.

The session encourages interdisciplinary research and submissions related to the special focus

of the conference, 'Celebrating the Diversity of Spoken Languages'.

For further information, see http://junionsjlee.wix.com/INTERSPEECH

Organizers:

Maxine Eskenazi, max+ [at]cs.cmu.edu

Kristiina Jokinen, kristiina.jokinen [at]helsinki.fi

Diane Litman, litman [at]cs.pitt.edu

Martin Russell, M.J.RUSSELL [at]bham.ac.uk

Visual Speech Decoding

Speech perception is a bi-modal process that takes into account both the acoustic (what we hear)

and visual (what we see) speech information. It has been widely acknowledged that visual cues play

a critical role in automatic speech recognition (ASR), especially when the audio is corrupted by,

for example, background noise or voices from untargeted speakers, or is even unavailable.

Decoding visual speech is critically important if ASR technologies are to be widely deployed

to realize truly natural human-computer interaction. Despite the advances in acoustic ASR,

visual speech decoding remains a challenging problem.

The special session aims to attract more effort to tackle this important problem. In particular,

we would like to encourage researchers to focus on some critical questions in the area.

As a starting point, we propose the following four questions:

1. How to deal with the speaker dependency in visual speech data?

2. How to cope with the head-pose variation?

3. How to encode temporal information in visual features?

4. How to automatically adapt the fusion rule when the quality of the two individual (audio and visual)

modalities varies?

Researchers and participants are encouraged to raise more questions related to visual speech decoding.

We expect the session to draw a wide range of attention from both the speech recognition and machine vision

communities to the problem of visual speech decoding.

Organizers:

Ziheng Zhou, University of Oulu, ziheng.zhou [at]ee.oulu.fi

Matti Pietikäinen, University of Oulu, matti.pietikainen [at]ee.oulu.fi

Guoying Zhao, University of Oulu, gyzhao [at]ee.oulu.fi

Back  Top

3-1-11(2015-09-06) Call for Satellite Workshops of INTERSPEECH 2015, Dresden, Germany
**** Call for Satellite Workshops **** 
INTERSPEECH 2015 will be held in the beautiful city of Dresden, Germany, on September 6-10, 2015
The theme is 'Speech beyond Speech - Towards a Better Understanding of the Most Important 
Biosignal'. The Organizing Committee of INTERSPEECH 2015 is now inviting proposals for 
satellite workshops, which will be held in proximity to the main conference. 
The Organizing Committee will work to facilitate the organization of such satellite workshops, 
to stimulate discussion in research areas related to speech and language, at locations in Central 
Europe, and around the same time as INTERSPEECH. We are particularly looking forward to 
proposals from neighboring countries. If you are interested in organizing a satellite workshop, 
or would like a planned event to be listed as an official satellite event, please contact the organizers 
or the Satellite Workshops Chair at fmetze@cs.cmu.edu. The Satellite Workshops Chair, along 
with the INTERSPEECH team, will help to connect (potential) workshop organizers with local 
contacts in Germany, if needed, and will try to be helpful with logistics such as payment, publicity, 
and coordination with ISCA or other events. Proposals should include:
 * workshop name and acronym 
* organizers' name and contact info 
* website (if already known) 
* date and proposed location of the workshop 
* estimated number of participants 
* a short description of the motivation for the workshop 
* an outline of the program and invited speakers 
* a description of the submission process (e.g. deadlines, target acceptance rate) 
* a list of the scientific committee members 
 
Proposals for satellite workshops should be submitted by email to workshops@interspeech2015.org 
by August 31st, 2014. We strongly recommend that organizers also apply for 
ISCA approval/sponsorship, which will greatly facilitate acceptance as an INTERSPEECH satellite 
event. We plan to notify proposers no later than October 30, 2014. If you have any questions about 
whether a potential event would be a good candidate for an INTERSPEECH 2015 satellite workshop, 
feel free to contact the INTERSPEECH 2015 Satellite Workshops Chair. 
 
Sincerely, 
Florian Metze 
Satellite Workshops Chair fmetze@cs.cmu.edu

 

 
Back  Top

3-1-12(2015-09-06) INTERSPEECH 2015 Dresden RFA

Interspeech 2015

 

September 6-10, 2015, Dresden, Germany

www.interspeech2015.org

 

SPECIAL TOPIC

Speech Beyond Speech: Towards a Better Understanding of the Most Important Biosignal

 

MOTIVATION

Speech is the most important biosignal humans can produce and perceive. It is the most common means of human-human communication, and therefore research and development in speech and language are paramount not only for understanding humans, but also for facilitating human-machine interaction. Still, not all characteristics of speech are fully understood, and even fewer are used for developing successful speech and language processing applications. Speech can exploit its full potential only if we consider characteristics beyond the traditional (and still important) linguistic content. These characteristics relate to other biosignals that are not directly accessible to human perception, such as muscle and brain activity, as well as articulatory gestures.

 

INTERSPEECH 2015

will therefore be organized around the topic “Speech beyond Speech: Towards a Better Understanding of the Most Important Biosignal”. Our conviction is that spoken language processing can make a substantial leap if it exploits the full information available in the speech signal. By opening our prestigious conference to researchers in other biosignal communities, we expect that substantial advances can be made by discussing ideas and approaches across discipline and community boundaries.

 

ORGANIZERS

The following preliminary list of principal organizers is planning INTERSPEECH 2015:

  • Sebastian Möller, Telekom Innovation Laboratories, Technische Universität Berlin (General Chair)
  • Rüdiger Hoffmann, Chair for System Theory and Speech Technology, Technische Universität Dresden
  • Ercan Altinsoy, Chair for Communication Acoustics, Technische Universität Dresden
  • Ute Jekosch, Chair for Communication Acoustics, Technische Universität Dresden
  • Siegfried Kunzmann, European Media Laboratory GmbH, Heidelberg
  • Bernd Möbius, Dept. of Computational Linguistics and Phonetics, Saarland University
  • Hermann Ney, Chair of Computer Science 6, RWTH Aachen
  • Elmar Nöth, Friedrich-Alexander-Universität Erlangen-Nürnberg
  • Alexander Raake, Telekom Innovation Laboratories, Technische Universität Berlin
  • Gerhard Rigoll, Institute of Human-Machine Communication, Technische Universität München
  • Tanja Schultz, Cognitive Systems Lab, Universität Karlsruhe (TH)

 

LOCATION

The event will be staged in the recently built Maritim International Congress Center (ICD) in Dresden, Germany. As the capital of Saxony, an up-and-coming region located in the former eastern part of Germany, Dresden combines glorious and painful history with a strong dedication to future and technology. It is located in the heart of Europe, easily reached via two airports, and will offer a great deal of history and culture to INTERSPEECH 2015 delegates. Guests are well catered for in a variety of hotels of different standards and price ranges, making INTERSPEECH 2015 an exciting as well as an affordable event.

 

CONTACT

Prof. Dr.-Ing. Sebastian Möller, Quality and Usability Lab, Telekom Innovation Laboratories, TU Berlin

Sekr. TEL-18, Ernst-Reuter-Platz 7, D-10587 Berlin, Germany

Web: www.interspeech2015.org

 

 

Back  Top

3-1-13(2016) INTERSPEECH 2016, San Francisco, CA, USA

Interspeech 2016 will take place

from September 8-12 2016 in San Francisco, CA, USA

General Chair is Nelson Morgan.

 

Back  Top

3-2 ISCA Supported Events
3-2-1(2014-06-09) eNTERFACE 2014 - 10th SUMMER WORKSHOP ON MULTIMODAL INTERFACES, Bilbao, Spain

eNTERFACE 2014 - 10th SUMMER WORKSHOP ON MULTIMODAL INTERFACES
Bilbao, Spain, June 9th – July 5th, 2014
==============================================================

http://aholab.ehu.es/eNTERFACE14


CALL FOR PROJECTS

Aholab Signal Processing research group, Faculty of Engineering of the University of the 
Basque Country, in Bilbao (Spain), invites project proposals for eNTERFACE’14.

Following the tremendous success of the previous eNTERFACE workshops (www.enterface.net), eNTERFACE’14 aims at continuing and enhancing the tradition of collaborative, localized research and development work by gathering, in a single place, leading researchers in multimodal interfaces and students to work on specific projects for 4 complete weeks.

eNTERFACE’14 will encompass presentation sessions, including tutorial state-of-the-art surveys on several aspects of design of multimodal interfaces, given by invited senior researchers, and periodical presentations of the results achieved by each project group. The ultimate goal is to make this event a unique opportunity for students and experts all over the world to meet and effectively work together, so as to foster the development of tomorrow’s multimodal research community. The results of the projects are expected to be published in the Workshop proceedings.


THEMES (not exhaustive list):

- Multimodal signal analysis and synthesis 
- Intuitive interfaces and personalized systems in real and virtual environments 
- Assistive technologies for education and social inclusion 
- Assistive and rehabilitation technologies 
- Search in multimedia and multilingual documents 
- Affective and social signal processing 
- Multimodality for biometrics and security 
- Innovative musical interfaces 
- Augmented reality 
- Embodied agents 
- Human-robot and human-environment interactions in smart environments 
- Multimodal conversational systems 
- Self-learning and adapting systems 
- Innovative modalities and modalities conversion 
- Applications of Multimodal interfaces 
- Performing arts applications 
- Teleoperation and telerobotics


IMPORTANT DATES

November 30th, 2013
Reception of a 1-page Notification of Interest, with a summary of project goals, tentative work packages, and deliverables.

December 15th, 2013
Reception of the Full Project Proposal.

January 10th, 2014
Notification of acceptance to project leaders. Start Call for Participation.

February 28th, 2014 
End Call for Participation. Team building.

March 28th, 2014
Notification of acceptance to participants.


IMPORTANT NOTES

Proposals should be submitted in PDF format to enterface14@aholab.ehu.es.

The proposals will be evaluated by the Scientific Committee with respect to suitability to the workshop goals and format. A call for participation by PhD students and researchers will then be launched on January 10th, 2014. Authors of the accepted proposals will then be invited to build their teams.

There is no registration fee for participants. Participants are expected to pay for their own lodging and meals (see http://aholab.ehu.es/eNTERFACE14 for information about facilities and prices). Some grants for students will be offered in due course.


CONTACT

For more information, please do not hesitate to contact us: enterface14@aholab.ehu.es.

Back  Top

3-2-2(2014-06-23) JEP 2014 (www.jep2014.org) Le Mans France
Call for Papers JEP 2014 (www.jep2014.org)
The Journées d'Études de la Parole (JEP) are devoted to the study of spoken communication 
and its applications. These days aim to bring together the French-speaking 
scientific communities working in the field. The conference is also intended as a friendly 
forum for exchange between doctoral students and established researchers.
In 2014, the JEP are organized in Le Mans by the Laboratoire d'Informatique de l'Université 
du Maine (LIUM) and the Laboratoire d'Informatique de Nantes Atlantique (LINA), under the aegis 
of the Association Francophone de la Communication Parlée (AFCP). 
Important dates 
Paper submission: January 31, 2014 (submissions may be revised 
until February 7, provided a first version is submitted by January 31)
Notification of acceptance: March 28, 2014 
Camera-ready versions: April 11, 2014 
Conference: June 23-27, 2014
Topics 
Papers will address spoken communication and speech processing in 
all their aspects. Conference topics include, but are not limited to: 
Speech acoustics 
Speech and language acquisition 
Speech analysis, coding and compression 
Applications with spoken components (dialogue, indexing, etc.) 
Second language learning 
Multimodal communication 
Dialectology 
Evaluation, corpora and resources 
Endangered languages 
Language models 
Audio-visual speech 
Speech pathologies 
Speech perception 
Phonetics and phonology 
Clinical phonetics 
Position papers presenting a point of view on speech science and technology 
Speech production 
Prosody 
Psycholinguistics 
Speech recognition and understanding 
Language recognition 
Speaker recognition 
Social signals, sociophonetics 
Speech synthesis 
Selection criteria 
Authors are invited to submit original research that has not been published previously. 
Submissions will be reviewed by at least two specialists in the field. Particular attention will be paid to: 
- the importance and originality of the contribution; 
- the critical discussion of the results, in particular with respect to other work in the field; 
- the positioning of the work in the context of international research; 
- the organization and clarity of the presentation; 
- the relevance to the conference topics. 
Selected papers will be published in the conference proceedings. 
Grants 
The AFCP offers a number of grants for doctoral students and young researchers wishing to 
take part in the conference; see the AFCP website. ISCA also provides financial support to young researchers attending scientific 
events on speech and language; see the ISCA website. 
Contacts: yannick.esteve@univ-lemans.fr or emmanuel.morin@univ-nantes.fr 
Back  Top

3-2-3(2014-09-08) Seventeenth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2014)

17th International Conference on TEXT, SPEECH and DIALOGUE (TSD 2014) Brno, Czech Republic,

8-12 September 2014

http://www.tsdconference.org/

The conference is organized by the Faculty of Informatics, Masaryk University, Brno, and the Faculty of Applied Sciences, University of West Bohemia, Pilsen. The conference is supported by International Speech Communication Association. Venue: Brno, Czech Republic

TSD SERIES

TSD series evolved as a prime forum for interaction between researchers in both spoken and written language processing from all over the world. Proceedings of TSD form a book published by Springer-Verlag in their Lecture Notes in Artificial Intelligence (LNAI) series.

TOPICS

Topics of the conference will include (but are not limited to):
Corpora and Language Resources (monolingual, multilingual, text and spoken corpora, large web corpora, disambiguation, specialized lexicons, dictionaries)
Speech Recognition (multilingual, continuous, emotional speech, handicapped speaker, out-of-vocabulary words, alternative way of feature extraction, new models for acoustic and language modelling)
Tagging, Classification and Parsing of Text and Speech (morphological and syntactic analysis, synthesis and disambiguation, multilingual processing, sentiment analysis, credibility analysis, automatic text labeling, summarization, authorship attribution)
Speech and Spoken Language Generation (multilingual, high fidelity speech synthesis, computer singing)
Semantic Processing of Text and Speech (information extraction, information retrieval, data mining, semantic web, knowledge representation, inference, ontologies, sense disambiguation, plagiarism detection)
Integrating Applications of Text and Speech Processing (machine translation, natural language understanding, question-answering strategies, assistive technologies)
Automatic Dialogue Systems (self-learning, multilingual, question-answering systems, dialogue strategies, prosody in dialogues)
Multimodal Techniques and Modelling (video processing, facial animation, visual speech synthesis, user modelling, emotions and personality modelling)
Papers on processing of languages other than English are strongly encouraged.

PROGRAM COMMITTEE

Hynek Hermansky, USA (general chair) Eneko Agirre, Spain Genevieve Baudoin, France Paul Cook, Australia Jan Cernocky, Czech Republic Simon Dobrisek, Slovenia Karina Evgrafova, Russia Darja Fiser, Slovenia Radovan Garabik, Slovakia Alexander Gelbukh, Mexico Louise Guthrie, GB Jan Hajic, Czech Republic Eva Hajicova, Czech Republic Yannis Haralambous, France Ludwig Hitzenberger, Germany Jaroslava Hlavacova, Czech Republic Ales Horak, Czech Republic Eduard Hovy, USA Maria Khokhlova, Russia Daniil Kocharov, Russia Ivan Kopecek, Czech Republic Valia Kordoni, Germany Steven Krauwer, The Netherlands Siegfried Kunzmann, Germany Natalija Loukachevitch, Russia Vaclav Matousek, Czech Republic Diana McCarthy, United Kingdom France Mihelic, Slovenia Hermann Ney, Germany Elmar Noeth, Germany Karel Oliva, Czech Republic Karel Pala, Czech Republic Nikola Pavesic, Slovenia Fabio Pianesi, Italy Maciej Piasecki, Poland Adam Przepiorkowski, Poland Josef Psutka, Czech Republic James Pustejovsky, USA German Rigau, Spain Leon Rothkrantz, The Netherlands Anna Rumshisky, USA Milan Rusko, Slovakia Mykola Sazhok, Ukraine Pavel Skrelin, Russia Pavel Smrz, Czech Republic Petr Sojka, Czech Republic Stefan Steidl, Germany Georg Stemmer, Germany Marko Tadic, Croatia Tamas Varadi, Hungary Zygmunt Vetulani, Poland Pascal Wiggers, The Netherlands Yorick Wilks, GB Marcin Wolinski, Poland Victor Zakharov, Russia

KEYNOTE SPEAKERS

Ralph Grishman, New York University, USA
Bernardo Magnini, FBK - Fondazione Bruno Kessler, Italy

FORMAT OF THE CONFERENCE

The conference program will include presentation of invited papers, oral presentations, and poster/demonstration sessions. Papers will be presented in plenary or topic oriented sessions. Social events including a trip in the vicinity of Brno will allow for additional informal interactions.

CONFERENCE PROGRAM

The conference program will include oral presentations and poster/demonstration sessions with sufficient time for discussions of the issues raised.

IMPORTANT DATES

March 15 2014 ............ Submission of abstract

March 22 2014 ............ Submission of full papers

May 15 2014 .............. Notification of acceptance

May 31 2014 .............. Final papers (camera ready) and registration

August 3 2014 ............ Submission of demonstration abstracts

August 10 2014 ........... Notification of acceptance for demonstrations sent to the authors

September 8-12 2014 ...... Conference date

The contributions to the conference will be published in proceedings that will be made available to participants at the time of the conference.

OFFICIAL LANGUAGE

The official language of the conference is English.

ADDRESS

All correspondence regarding the conference should be addressed to:

Ales Horak, TSD 2014
Faculty of Informatics, Masaryk University
Botanicka 68a, 602 00 Brno, Czech Republic
phone: +420-5-49 49 18 63
fax: +420-5-49 49 18 20
email: tsd2014@tsdconference.org

The official TSD 2014 homepage is: http://www.tsdconference.org/

LOCATION

Brno is the second largest city in the Czech Republic with a population of almost 400,000 and is the country's judiciary and trade-fair center. Brno is the capital of South Moravia, which is located in the south-east part of the Czech Republic and is known for a wide range of cultural, natural, and technical sights. South Moravia is a traditional wine region. Brno has been a Royal City since 1347, and with its six universities it forms the cultural center of the region. Brno can be reached easily by direct flights from London, Moscow, Saint Petersburg, Eindhoven, Rome and Prague, and by trains or buses from Prague (200 km) or Vienna (130 km).

Back  Top

3-2-4(2014-12-07) 2014 IEEE Spoken Language Technology Workshop (SLT 2014)
2014 Spoken Language Technology Workshop
December 7-10, 2014 - South Lake Tahoe, NV, USA

IEEE - IEEE Signal Processing Society

http://www.slt2014.org - Follow @SLT_2014


The Fifth IEEE Workshop on Spoken Language Technology (SLT 2014) will be held in South Lake Tahoe, Nevada, on Dec 7-10, 2014.


Workshop Technical Theme & Main Goals & Novelties

The main theme of the workshop will be 'machine learning in spoken language technologies'. There will be keynote/guest speakers from the machine learning community.
One of the workshop goals is to increase both intra- and inter-community interactions. Towards this goal, in addition to tutorials and keynote speeches on the main workshop theme and emerging areas, this year's SLT will host special sessions and self-organizing Special Interest Group (SIG) meetings, as well as panel discussions before/during the workshop. If you want to excite the community about a topic and have an impact on the workshop content, this is your chance!
In addition to submitting papers and/or proposing/organizing SIG meetings, you can get involved in the workshop organization in other ways: by nominating keynote speakers (nominations@slt2014.org) or by volunteering to help organize the workshop (volunteers@slt2014.org). Please visit www.slt2014.org for more details.


Call for Papers: Areas/Topics

Submission of papers in all areas of spoken language technology is encouraged, with emphasis on the following topics, covering both traditional SLT areas and emerging ones:

- Traditional topic coverage: speech recognition and synthesis, spoken language understanding, spoken dialog systems, spoken document summarization, machine translation for speech, question answering from speech, speech data mining, spoken document retrieval, spoken language databases, speaker/language recognition, multimodal processing, human/computer interaction, assistive technologies, natural language processing, educational and healthcare applications.  

- Emerging areas: large scale spoken language understanding, massive data resources for SLT,
unsupervised methods in SLT, capturing and representing world knowledge in SLT, web search with SLT, SLT in social networks, multimedia applications, intelligent environments.

Prospective authors are invited to submit full-length, 4-6 page papers, including figures and references, to the SLT 2014 website (slt2014.org).


Important Dates

Paper submission: July 21, 2014
Notification of acceptance: September 5, 2014
Demo submission: September 10, 2014
Notification of Demo acceptance: October 10, 2014
Special Session (SS) proposal submission: June 6, 2014
Notification of SS proposals (1st/2nd decision):  June 15 / September 19, 2014
Special Interest Group (SIG) proposal submission: November 21, 2014
Early registration deadline: October 17, 2014
Workshop: December 7-10, 2014


Supported by:
- Association for Computational Linguistics (ACL)
- International Speech Communication Association (ISCA)

Back  Top

3-3 Other Events
3-3-1(2014-05-13) CfP 4th International Symposium on Tonal Aspects of Languages (TAL 2014) will be held in Nijmegen, The Netherlands

The Fourth International Symposium on Tonal Aspects of Languages (TAL 2014) will be held in Nijmegen, The Netherlands, on 13-16 May 2014. The symposium, which follows the successful TAL 2012 in Nanjing, is held on a vibrant campus, which boasts three highly visible academic institutions whose research is relevant to the topic of the symposium: the Centre for Language Studies, the Donders Institute for Brain, Cognition and Behaviour, both part of Radboud University Nijmegen, and the Max Planck Institute for Psycholinguistics.

 

Nijmegen (by Maarten Takens)

Nijmegen (by Maarten Takens)

TAL 2014 is held in the week before the International Conference on Speech Prosody 2014, which is held in Dublin from 20 to 23 May 2014, and like that conference enjoys the support of ISCA SProSig and ISCA SIG-CSLP.

The symposium continues the tradition of the previous three symposia of focusing on tone languages, aiming to present state-of-the-art research on the typological, phonetic, phonological, psycholinguistic, acquisitional and technological aspects of tonal contrasts, but also welcomes contributions on non-tone languages and art forms like songs and poetry.

Important dates

 

Submission of full papers: 15 February 2014 (this date will not be postponed)
Notification of acceptance: 20 March 2014
Camera-ready paper due: 15 June 2014
Registration deadline: 30 April 2014
Registration desk open: 13 May 2014, 2 p.m.
Conference meeting: 14-16 May 2014
Tutorial Speech Analysis: 13 May 2014, 4 p.m.
Final versions paper due: 15 June 2014
Back  Top

3-3-2(2014-05-14) CfP SLTU-2014 WORKSHOP , St Petersburg, Russia

SLTU-2014 – LAST CALL FOR PAPERS

*****************************************************

 

4th International Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU-2014)

14-16 May, 2014

St. Petersburg, Russia

www.mica.edu.vn/sltu2014

 

Organized by the St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS) in cooperation with LIG (France), LIA (France), and MICA (Vietnam). SLTU-2014 is sponsored or supported by ISCA, EURASIP, and the Russian Foundation for Basic Research.

 

The Workshop on Spoken Language Technologies for Under-Resourced Languages is the fourth in a series of even-year SLTU events. Three previous workshops were organized: SLTU’12 in Cape Town, SLTU’10 in Penang, and SLTU’08 in Hanoi. The SLTU’14 workshop is held in St. Petersburg (Russia) and has a special focus on Eastern European under-resourced languages (Slavic, Baltic, Uralic, Altaic, Caucasian, Turkic, etc.), but papers on the automatic processing of other under-resourced languages are also encouraged.

 

SLTU-2014 Workshop topics include all areas related to processing any under-resourced and endangered languages:

- Language resources development, acquisition, and representation: dictionary, language model, grammars, text and speech corpora, etc.

- Automatic speech recognition and synthesis of low-resourced languages and dialects, etc.

- Multi-lingual spoken language processing including analysis and synthesis.

- Machine translation and spoken dialogue systems, etc.

 

A Speech Communication special issue on processing under-resourced languages was recently prepared by the SLTU board: www.sciencedirect.com/science/journal/01676393/56

 

SCIENTIFIC COMMITTEE:

Etienne Barnard, North-West University, South Africa

Laurent Besacier, Laboratory of Informatics of Grenoble, France

Eric Castelli, MICA Institute, Vietnam

Dirk Van Compernolle, KU Leuven, Belgium

Marelie Davel, North-West University, South Africa

Alexey Karpov, SPIIRAS, Russia

Daniil Kocharov, St.Petersburg State University, Russia

Lori Lamel, LIMSI, France

Haizhou Li, A-Star, Singapore

Roger K. Moore, University of Sheffield, UK

Pedro Moreno, Google, USA

Satoshi Nakamura, NAIST, Japan

Pascal Nocera, University of Avignon, France

Francois Pellegrino, Lyon, France

Andrey Ronzhin, SPIIRAS, Russia

Yoshinori Sagisaka, Waseda University, Japan

Ruhi Sarikaya, Microsoft, USA

Tanja Schultz, Karlsruhe Institute of Technology, Germany

Pavel Skrelin, St.Petersburg State University, Russia

Tan Tien Ping, USM University, Malaysia

 

Two keynote lectures on SLTU topics will be given by Prof. Satoshi Nakamura (Nara Institute of Science and Technology, Japan) and Prof. Mark Gales (University of Cambridge, UK).

 

IMPORTANT DATES:

- Full paper submission: 10 February 2014 (extended deadline)

- Notification of paper acceptance: 03 March 2014

- Submission of final papers: 17 March 2014

- Registration due: 17 March 2014

- Workshop dates: 14-16 May 2014

 

Alongside the scientific programme, we will provide excellent opportunities to become acquainted with the cultural and historical treasures of St. Petersburg and its beautiful surroundings.

 

SLTU-2014 Workshop Chairs:

Alexey Karpov (SPIIRAS, Russia)

Laurent Besacier (LIG, France)

Pascal Nocera (LIA, France)

Eric Castelli (MICA, Vietnam)

 

For the latest information, please check the Workshop web page: www.mica.edu.vn/sltu2014

 

Back  Top

3-3-3(2014-05-20) The 7th Speech Prosody Conference, Dublin, Ireland

The 7th Speech Prosody Conference will be held in Dublin, Ireland, May 20-23, 2014, at Trinity College Dublin, directly preceding LREC, the Linguistic Resources and Evaluation Conference.

The special theme is social prosody, but we invite papers addressing any aspect of the science and technology of prosody, speaking styles, and voice quality.  Papers are due December 15th.

Topics of interest include: communicative situation and speaking style, dynamics of register and style, l2 prosody, phonology and phonetics of prosody, pitch accent, prosody and spoken language systems, prosody and the sounds of language, prosody development in first language acquisition, prosody for forensic applications, prosody in face-to-face interaction: audiovisual modeling and analysis, prosody in neurological disorders, prosody in speech synthesis, recognition and understanding; prosody models and theoretical issues, prosody of sign language, prosody of under-resourced languages and dialects; psycholinguistic, cognitive, and neural correlates of prosody; signal processing; voice quality, phonation, and vocal dynamics, and prosodic characteristics of individuals; and as special review areas, the prosody of nonverbal vocalisations, speech-gesture interaction, and joint/choral speech.

More information is available at http://www.speechprosody2014.org/ .

Back  Top

3-3-4(2014-05-21) Workshop From Sound to Gesture (S2G): Communication as speech, prosody, gestures and signs, Univ Padova Italy
Conference Announcement and Call for Papers

From Sound to Gesture (S2G): Communication as speech, prosody, gestures and signs, University of Padova, Italy - May 21-23, 2014 
 
Understanding how human communication works requires investigating the complex relations between abstract representations of both sounds and gestures and their concrete realizations. The study of both spoken and signed language can provide a key to understanding the fascinating connections between human sounds, gestures and language. 

The aim of this conference is to bring together scholars interested in the relationships between different modalities (acoustic, visual) and between different levels within the same modality (e.g., phonology-phonetics, verbal-nonverbal, prosodic-segmental, etc.). 
We invite the submission of abstracts on the following topics: 

- Prosody 
- Spoken language phonology/phonetics 
- Non-verbal communication/gestures 
- Sign language phonology/phonetics 

In particular, we encourage papers focused on the relationship between two (or more) of the proposed topics, e.g.: 

- Spoken and sign language phonology: Is there only one, modality-independent phonology, or is there one phonology for each modality? Is there evidence for the existence of phonological primitives common to both modalities? 
- Linguistic and non-linguistic signs: How are non-verbal gestures integrated in sign language? How are these distinguished from linguistic signs? 
- Prosody and sign language: What are the prosodic units of sign language? Are they organized in a similar way as prosodic units in speech? 
- Suprasegmental and segmental phonology: What are relationships between the two levels? What are the best theoretical approaches and why? 
- Prosody and language: How is prosody used to convey linguistic and non-linguistic meanings in L1 and/or L2? What are the best theoretical models to describe it? 
- Prosody and gestures: What is the relation between speakers' intonation, prosody and gestures? How are language-specific prosody and gestures used in second language communication? 
- Speech and gestures: How are gestures and language connected? What is the relation between language-specific/universal gestures and language? How can gestures be employed effectively to communicate with people who speak a different language? 

Invited speakers:

Karen Emmorey (San Diego State University), http://emmoreylab.sdsu.edu/director.php
Marianne Gullberg (University of Lund), www.sol.lu.se/en/person/MarianneGullberg 
Pilar Prieto (Universitat Pompeu Fabra, Barcelona), http://prosodia.upf.edu/membres/pilarprieto/
Wendy Sandler (University of Haifa), http://sandlersignlab.haifa.ac.il/wendy.htm


Abstracts (max 1000 words, Times New Roman, 12pt, PDF or DOC format) can be submitted through Easy Abstract at the link: http://linguistlist.org/easyabs/s2g2014


Important Dates: 

Abstract submission extended deadline: 16 February 2014 
Acceptance notification (extended): 10 March 2014 
Conference: 21-23 May 2014 

Scientific Committee: 

Maria Grazia Busà 
Antonio Baroni 
Laura Vanelli 
Motoko Ueyama
 
 

For info: s2g.conference@gmail.com      

Back  Top

3-3-5(2014-05-26) 9th edition of the Language Resources and Evaluation Conference, Reykjavik, Iceland

ELRA and the LREC Programme Committee are very pleased to announce that the LREC 2012 Proceedings have been accepted for inclusion in the Conference Proceedings Citation Index (CPCI) of Thomson Reuters. The CPCI is searchable through the Web of Science platform and will provide authors with unprecedented recognition.

LREC 2010 proceedings are currently under review, and chances that they will be accepted are high! Once published, the proceedings of LREC 2014 will be submitted for inclusion in the CPCI.

Schedule of all the LREC 2014 Workshops and Tutorials is now online at http://lrec2014.lrec-conf.org/en/conference-programme/workshops-and-tutorials.
On this web page you will find a link to each Workshop Call for Papers and each Tutorial Outline, when available.
Don't hesitate to contact Workshop and/or Tutorial organisers if you have specific questions on their event.

For general LREC 2014 matters, please contact us at http://lrec2014.lrec-conf.org/en/contact/

 

ELRA is glad to announce the 9th edition of LREC, organised with the support of a wide range of international organisations.

The online registration to the Main conference, the workshops and tutorials is now open @ http://lrec2014.lrec-conf.org/en/registration/.
Contact: registration@lrec-conf.org


www.lrec-conf.org/lrec2014
Follow us on Twitter: @LREC2014

   


CONFERENCE AIMS

LREC is the major event on Language Resources (LRs) and Evaluation for Human Language Technologies (HLT). LREC aims to provide an overview of the state-of-the-art, explore new R&D directions and emerging trends, exchange information regarding LRs and their applications, evaluation methodologies and tools, on-going and planned activities, industrial uses and needs, requirements coming from e-science and e-society, with respect both to policy issues and to scientific/technological and organisational ones.

LREC provides a unique forum for researchers, industrials and funding agencies from across a wide spectrum of areas to discuss problems and opportunities, find new synergies and promote initiatives for international cooperation, in support of investigations in language sciences, progress in language technologies (LT) and development of corresponding products, services and applications, and standards.

CONFERENCE TOPICS

Issues in the design, construction and use of LRs: text, speech, multimodality

* Guidelines, standards, best practices and models for LRs interoperability

* Methodologies and tools for LRs construction and annotation

* Methodologies and tools for extraction and acquisition of knowledge

* Ontologies, terminology and knowledge representation

* LRs and Semantic Web

* LRs and Crowdsourcing

* Metadata for LRs and semantic/content mark-up

Exploitation of LRs in systems and applications

* Sign language, multimedia information and multimodal communication

* LRs in systems and applications such as: information extraction, information retrieval, audio-visual and multimedia search, speech dictation, meeting transcription, Computer Aided Language Learning, training and education, mobile communication, machine translation, speech translation, summarisation, web services, semantic search, text mining, inferencing, reasoning, etc.

* Interfaces: (speech-based) dialogue systems, natural language and multimodal/multisensorial interactions

* Use of (multilingual) LRs in various fields of application like e-government, e-culture, e-health, e-participation, mobile applications, digital humanities, etc.

* Industrial LRs requirements, user needs

Issues in LT evaluation

* LT evaluation methodologies, protocols and measures

* Validation and quality assurance of LRs

* Benchmarking of systems and products

* Usability evaluation of HLT-based user interfaces and dialogue systems

* User satisfaction evaluation

General issues regarding LRs & Evaluation

* International and national activities, projects and collaboration

* Priorities, perspectives, strategies in national and international policies for LRs

* Multilingual issues, language coverage and diversity, less-resourced languages

* Open, linked and shared data and tools, open and collaborative architectures

* Organisational, economical, ethical and legal issues.

LREC 2014 HOT TOPICS

Big Data, Linked Open Data, LRs and HLT

The ever-increasing quantities of large and complex digital datasets, structured or unstructured, multilingual, multimodal or multimedia, pose new challenges but at the same time open up new opportunities for HLT and related fields. Ubiquitous data and information capturing devices, social media and networks, the web at large with its big data / knowledge bases and other information capturing / aggregating / publishing platforms are providing useful information and/or knowledge for a wide range of LT applications.

LREC 2014 puts a strong emphasis on the synergies of the big Linked Open Data and LRs/LT communities and their complementarity in cracking LT problems and developing useful applications and services.

LRs in the Collaborative Age

The amount of collaboratively generated and used language data is constantly increasing and it is therefore time to open a wide discussion on such LRs at LREC. There is a need to discuss the types of LRs that can be collaboratively generated and used.

Are lexicons, dictionaries, corpora, ontologies (of language data), grammars, tagsets, data categories, all possible fields in which a collaborative approach can be applied? Can collaboratively generated LRs be standardised/harmonised? And how can quality control be applied to collaboratively generated LRs? How can a collaborative approach ensure that less-resourced languages receive the same digital dignity as mainstream languages?

There is also a need to discuss legal aspects related to collaboratively generated LRs. And last but not least: are there different types of collaborative approaches, or is the Wikimedia style the best approach to collaborative generation and use of LRs?

LREC 2014 SPECIAL HIGHLIGHT

Share your LRs!

In addition to describing your LRs in the LRE Map – now a normal step in the submission procedure of many conferences – LREC 2014 recognises that the time is ripe to launch another important initiative, the LREC Repository of shared LRs!

When submitting a paper, you will be offered the possibility to share your LRs (data, tools, web-services, etc.), uploading them in a special LREC META-SHARE repository set up by ELRA. Your LRs will be made available to all LREC participants before the conference, to be reused, compared, analysed, …

This effort of sharing LRs, linked to the LRE Map for their description, may become a new 'regular' feature for conferences in our field, thus contributing to creating a common repository where everyone can deposit and share data.

PROGRAMME

The Scientific Programme will include invited talks, oral presentations, poster and demo presentations, and panels, in addition to a keynote address by the winner of the Antonio Zampolli Prize.

SUBMISSIONS AND DATES

Submission of proposals for oral and poster (or poster+demo) papers: 15 October 2013

Abstracts should consist of about 1500-2000 words, will be submitted through START @ https://www.softconf.com/lrec2014/main/ and will be peer-reviewed.

Submission of proposals for panels, workshops and tutorials: 15 October 2013

Proposals should be submitted via an online form on the LREC website (click Submission from the Home page) and will be reviewed by the Programme Committee.

PROCEEDINGS

The Proceedings will include both oral and poster papers, in the same format. There is no difference in quality between oral and poster presentations. Only the appropriateness of the type of communication (more or less interactive) to the content of the paper will be considered.

In addition, a Book of Abstracts will be printed.

CONFERENCE PROGRAMME COMMITTEE

Nicoletta Calzolari – CNR, Istituto di Linguistica Computazionale “Antonio Zampolli”, Pisa - Italy (Conference chair)

Khalid Choukri – ELRA, Paris - France

Thierry Declerck – DFKI GmbH, Saarbrücken - Germany

Hrafn Loftsson – School of Computer Science, Reykjavík University - Iceland

Bente Maegaard – CST, University of Copenhagen - Denmark

Joseph Mariani – LIMSI-CNRS & IMMI, Orsay - France

Asuncion Moreno – Universitat Politècnica de Catalunya, Barcelona - Spain

Jan Odijk – UIL-OTS, Utrecht - The Netherlands

Stelios Piperidis – Athena Research Center/ILSP, Athens - Greece

 

 


 

Back  Top

3-3-6(2014-05-27) 7th WORKSHOP ON BUILDING AND USING COMPARABLE CORPORA, Reykjavik (Iceland), MODIFIED
Final Call for Papers
 
  7th WORKSHOP ON BUILDING AND USING COMPARABLE CORPORA
 
  Building Resources for Machine Translation Research
 
 
  May 27, 2014
  Co-located with LREC 2014
  Harpa Conference Centre, Reykjavik (Iceland)
 
  EXTENDED DEADLINE FOR PAPERS: February 23, 2014
  https://www.softconf.com/lrec2014/BUCC2014/
 

  *** INVITED SPEAKER ***
 
  Chris Callison-Burch (University of Pennsylvania)
 
============================================================
 
MOTIVATION
 
In the language engineering and linguistics communities, research
on comparable corpora has been motivated by two main reasons. In
language engineering, on the one hand, it is chiefly motivated by the
need to use comparable corpora as training data for statistical
Natural Language Processing applications such as statistical machine
translation or cross-lingual retrieval. In linguistics, on the other
hand, comparable corpora are of interest in themselves by making
possible inter-linguistic discoveries and comparisons. It is generally
accepted in both communities that comparable corpora are documents in
one or several languages that are comparable in content and form in
various degrees and dimensions. We believe that the linguistic
definitions and observations related to comparable corpora can improve
methods to mine such corpora for applications of statistical NLP. As
such, it is of great interest to bring together builders and users of
such corpora.
 
The scarcity of parallel corpora has motivated research concerning
the use of comparable corpora: pairs of monolingual corpora selected
according to the same set of criteria, but in different languages
or language varieties. Non-parallel yet comparable corpora can overcome this scarcity, since sources for original, monolingual texts are much more abundant than translated texts.
However, because of their nature, mining translations in comparable
corpora is much more challenging than in parallel corpora. What
constitutes a good comparable corpus, for a given task or per se,
also requires specific attention: while the definition of a parallel
corpus is fairly straightforward, building a non-parallel corpus
requires control over the selection of source texts in both languages.
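The notion of comparability in varying "degrees and dimensions" can be made concrete with a toy measure. The sketch below is our illustration, not part of this call: it scores two monolingual documents by mapping one side's vocabulary through a hypothetical bilingual lexicon and taking the Jaccard overlap with the other side's vocabulary. The `comparability` function, the toy lexicon, and the sample texts are all assumptions for illustration; real systems rely on far richer signals (dates, named entities, topic models, embeddings).

```python
# Naive comparability score between two documents in different languages,
# assuming a small bilingual lexicon is available (illustrative only).

def comparability(doc_l1, doc_l2, lexicon):
    """Jaccard overlap between the translated L1 vocabulary and the L2 vocabulary."""
    # Translate every L1 word that the lexicon covers.
    vocab_l1 = {lexicon[w] for w in doc_l1.lower().split() if w in lexicon}
    vocab_l2 = set(doc_l2.lower().split())
    if not vocab_l1 or not vocab_l2:
        return 0.0
    return len(vocab_l1 & vocab_l2) / len(vocab_l1 | vocab_l2)

# Toy French->English lexicon (a stand-in for a real bilingual dictionary).
lexicon = {"corpus": "corpus", "langue": "language", "texte": "text"}

score = comparability("le corpus de langue", "a language corpus", lexicon)
```

Here the two translated L1 words ("corpus", "language") both occur on the L2 side, so the score is 2/3. A threshold on such a score is one crude way to control "the selection of source texts in both languages" mentioned above.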
 
Parallel corpora are a key resource as training data for statistical
machine translation, and for building or extending bilingual lexicons
and terminologies. However, beyond a few language pairs such as
English-French or English-Chinese and a few contexts such as
parliamentary debates or legal texts, they remain a scarce resource,
despite the creation of automated methods to collect parallel corpora
from the Web. To exemplify such issues in a practical setting, this
year's special focus will be on
 
    Building Resources for Machine Translation Research
 
This special topic aims to address the need for:
(1) Machine Translation training and testing data such as spoken or
written monolingual, comparable or parallel data collections, and
(2) methods and tools used for collecting, annotating, and verifying
MT data such as Web crawling, crowdsourcing, tools for language
experts and for finding MT data in comparable corpora.
 

TOPICS
 
We solicit contributions including but not limited to the following topics:
 
Topics related to the special theme:
  * Methods and tools for collecting and processing MT data,
        including crowdsourcing
  * Methods and tools for quality control
  * Tools for efficient annotation
  * Bilingual term and named entity collections
  * Multilingual treebanks, wordnets, propbanks, etc.
  * Comparable corpora with parallel units annotated
  * Comparable corpora for under-resourced languages and specific domains
  * Multilingual corpora with rich annotations:
        POS tags, NEs, dependencies, semantic roles, etc.
  * Data for special applications: patent translation, movie
        subtitles, MOOCs, meetings, chat-rooms, social media, etc.
  * Legal issues with collecting and redistributing data
        and generating derivatives
 
Building comparable corpora:
  * Human translations
  * Automatic and semi-automatic methods
  * Methods to mine parallel and non-parallel corpora from the Web
  * Tools and criteria to evaluate the comparability of corpora
  * Parallel vs non-parallel corpora, monolingual corpora
  * Rare and minority languages, across language families
  * Multi-media/multi-modal comparable corpora
 
Applications of comparable corpora:
  * Human translations
  * Language learning
  * Cross-language information retrieval & document categorization
  * Bilingual projections
  * Machine translation
  * Writing assistance
 
Mining from comparable corpora:
  * Extraction of parallel segments or paraphrases from comparable corpora
  * Extraction of bilingual and multilingual translations of single words
        and multi-word expressions; proper names, named entities, etc.
 

IMPORTANT DATES
 
  February 23, 2014    Deadline for submission of full papers (extended)
      March 10, 2014    Notification of acceptance
      March 27, 2014    Camera-ready papers due
         May 27, 2014    Workshop date
 

SUBMISSION INFORMATION
 
Papers should follow the LREC main conference formatting details (to be
announced on the conference website http://lrec2014.lrec-conf.org/en/ )
and should be submitted as a PDF-file via the START workshop manager at
  https://www.softconf.com/lrec2014/BUCC2014/
 
Contributions can be short or long papers. Short paper submission must
describe original and unpublished work without exceeding six (6)
pages. Characteristics of short papers include: a small, focused
contribution; work in progress; a negative result; an opinion piece;
an interesting application nugget. Long paper submissions must
describe substantial, original, completed and unpublished work without
exceeding ten (10) pages.
 
Reviewing will be double blind, so the papers should not reveal the
authors' identity. Accepted papers will be published in the workshop
proceedings.
 
Double submission policy: Parallel submission to other meetings or
publications is possible but must be immediately notified to the
workshop organizers.
 
When submitting a paper from the START page, authors will be asked to
provide essential information about resources (in a broad sense,
i.e. also technologies, standards, evaluation kits, etc.) that have
been used for the work described in the paper or are a new result of
your research.  Moreover, ELRA encourages all LREC authors to share
the described LRs (data, tools, services, etc.), to enable their
reuse, replicability of experiments, including evaluation ones, etc.
 

JOURNAL SPECIAL ISSUE
 
Authors of selected papers will be encouraged to submit substantially
extended versions of their manuscripts to an upcoming special issue
on ‘Machine Translation Using Comparable Corpora’ of the Journal
of Natural Language Engineering.
 

ORGANISERS
 
  Pierre Zweigenbaum, LIMSI, CNRS, Orsay (France)
  Ahmet Aker, University of Sheffield (UK)
  Serge Sharoff, University of Leeds (UK)
  Stephan Vogel, QCRI (Qatar)
  Reinhard Rapp, Universities of Mainz (Germany) and Aix-Marseille (France)
 
 
FURTHER INFORMATION
 
Pierre Zweigenbaum:  pz (at) limsi (dot) fr
 

SCIENTIFIC COMMITTEE
 
  * Ahmet Aker, University of Sheffield (UK)
  * Srinivas Bangalore (AT&T Labs, US)
  * Caroline Barrière (CRIM, Montréal, Canada)
  * Chris Biemann (TU Darmstadt, Germany)
  * Hervé Déjean (Xerox Research Centre Europe, Grenoble, France)
  * Kurt Eberle (Lingenio, Heidelberg, Germany)
  * Andreas Eisele (European Commission, Luxembourg)
  * Éric Gaussier (Université Joseph Fourier, Grenoble, France)
  * Gregory Grefenstette (INRIA, Saclay, France)
  * Silvia Hansen-Schirra (University of Mainz, Germany)
  * Hitoshi Isahara (Toyohashi University of Technology)
  * Kyo Kageura (University of Tokyo, Japan)
  * Adam Kilgarriff (Lexical Computing Ltd, UK)
  * Natalie Kübler (Université Paris Diderot, France)
  * Philippe Langlais (Université de Montréal, Canada)
  * Michael Mohler (Language Computer Corp., US)
  * Emmanuel Morin (Université de Nantes, France)
  * Dragos Stefan Munteanu (Language Weaver, Inc., US)
  * Lene Offersgaard (University of Copenhagen, Denmark)
  * Ted Pedersen (University of Minnesota, Duluth, US)
  * Reinhard Rapp (Université Aix-Marseille, France)
  * Sujith Ravi (Google, Mountain View, US)
  * Serge Sharoff (University of Leeds, UK)
  * Michel Simard (National Research Council Canada)
  * Richard Sproat (OGI School of Science & Technology, US)
  * Tim Van de Cruys (IRIT-CNRS, Toulouse, France)
  * Stephan Vogel (QCRI, Qatar)
  * Guillaume Wisniewski (Université Paris Sud & LIMSI-CNRS, Orsay, France)
  * Pierre Zweigenbaum (LIMSI-CNRS, Orsay, France)
 
 
 
Back  Top

3-3-7(2014-05-27) CfP WILDRE2- 2nd Workshop on Indian Language Data: Resources and Evaluation, Reykjavik, Iceland

WILDRE2- 2nd Workshop on Indian Language Data: Resources and Evaluation

Date: Tuesday, 27th May 2014     

Venue: Harpa Conference Centre, Reykjavik, Iceland (organized under the platform of LREC 2014, 26-31 May 2014)   

Website:

  • main website - http://sanskrit.jnu.ac.in/conf/wildre2
  • submit papers on - http://www.softconf.com/lrec2014/WILDRE/

    WILDRE – the 2nd Workshop on Indian Language Data: Resources and Evaluation – is being organized in Reykjavik, Iceland on 27th May 2014 under the LREC platform.  India has huge linguistic diversity and has seen concerted efforts from the Indian government and industry towards developing language resources. The European Language Resources Association (ELRA) and its associate organizations have been very active and successful in addressing the challenges and opportunities related to language resource creation and evaluation. It is therefore a great opportunity for resource creators of Indian languages to showcase their work on this platform, and also to interact with and learn from those involved in similar initiatives all over the world.
    The broader objectives of WILDRE are:

    • To map the status of Indian Language Resources
    • To investigate challenges related to creating and sharing various levels of language resources
    • To promote a dialogue between language resource developers and users
    • To provide opportunities for researchers from India to collaborate with researchers from other parts of the world
DATES      

February 17, 2014 Paper submissions due     
March 10, 2014 Paper notification of acceptance     
April 4, 2014 Camera-ready papers due     
May 27, 2014 Workshop

SUBMISSIONS     

Papers must describe original, unpublished work, either completed or in progress. Each submission will be reviewed by two program committee members.     

Accepted papers will be given up to 10 pages (for full papers) or 5 pages (for short papers and posters) in the workshop proceedings, and will be presented as oral presentations or posters.     

Papers should be formatted according to the style-sheet, which will be provided on the LREC 2014 website (lrec2014.lrec-conf.org/en/).   

Please submit papers in PDF or DOC format via the LREC website.

We are seeking submissions in the following categories:

  • Full papers (10 pages)
  • Short papers (work in progress – 5 pages)
  • Posters (innovative ideas/proposals, research proposal of students)
  • Demo (of working online/standalone systems)  

Though our area of interest covers all NLP/language technology activity for Indian languages, we would like to focus on resource creation in the following areas:

  • Text corpora
  • Speech corpora
  • Lexicons and Machine-readable dictionaries
  • Ontologies
  • Grammars
  • Annotation of corpora
  • Language resources for basic NLP, IR and speech technology tasks and tools
  • Infrastructure for constructing and sharing language resources
  • Standards or specifications for language resource applications
  • Licensing and copyright issues

Both the submission and review processes will be handled electronically using the START interface of the LREC website. The workshop website will provide the submission guidelines and the link for electronic submission.

When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of their research. Moreover, ELRA encourages all LREC authors to share the described LRs (data, tools, services, etc.) to enable their reuse and the replicability of experiments, including evaluation ones.

For further information on this initiative, please refer to http://lrec2014.lrec-conf.org/en/.

Conference Chairs
  • Girish Nath Jha, Jawaharlal Nehru University, New Delhi
  • Kalika Bali, Microsoft Research India Lab, Bangalore
  • Sobha L, AU-KBC Research Centre, Anna University, Chennai
Program Committee (to be updated)
  • A. Kumaran, Microsoft Research, India
  • Amba Kulkarni, University of Hyderabad, India
  • Ashwani Sharma, Google India
  • Chris Cieri, LDC, University of Pennsylvania
  • Dafydd Gibbon, Universität Bielefeld, Germany
  • Dipti Mishra Sharma, IIIT, Hyderabad, India
  • Girish Nath Jha, Jawaharlal Nehru University, New Delhi, India
  • Hans Uszkoreit, Saarland University, Germany
  • Indranil Datta, English & Foreign Language University, Hyderabad, India
  • Joseph Mariani, LIMSI-CNRS, France
  • Jyoti Pawar, Goa University, India
  • Kalika Bali, MSRI, Bangalore, India
  • Karunesh Arora, CDAC Noida, India
  • Khalid Choukri, ELRA, France
  • L Ramamoorthy, LDC-IL, CIIL, Mysore, India
  • Malhar Kulkarni, IIT Bombay, India
  • Monojit Choudhary, Microsoft Research, India
  • Nicoletta Calzolari, ILC-CNR, Pisa, Italy
  • Niladri Shekhar Dash, ISI Kolkata, India
  • Panchanan Mohanty, University of Hyderabad, India
  • Pushpak Bhattacharya, IIT Bombay, India
  • Sobha L, AU-KBC Research Centre, Anna University, Chennai, India
  • Umamaheshwar Rao, University of Hyderabad, India
  • Vikram Dendi, Microsoft Research, USA
  • Zygmunt Vetulani, Adam Mickiewicz University, Poland
Workshop contact:

Esha Banerjee, Sr. Linguist, ILCI Project, JNU: esha.jnu@gmail.com

Back  Top

3-3-8(2014-05-31) LREC 2014 Workshop Language Technology Service Platforms: Synergies, Standards, Sharing, Reykjavik, Iceland

LREC 2014 Workshop Language Technology Service Platforms: Synergies, Standards, Sharing - May 31, 2014, Reykjavik
Contact: lrec-infra@lrec-conf.org

CALL FOR PARTICIPATION

Motivation and background
Increasingly, Human Language Technology (HLT) requires sophisticated infrastructures to support research, development, innovation, collaboration and deployment as services ready for production use. To address this need, several supporting infrastructures have been established over the past few years, and others are being built or planned.
The LREC 2014 Workshop on Language Technology Service Platforms brings together major infrastructural/coordination initiatives from all over the world. The overall goal is to explore means by which these and other infrastructure projects can best collaborate and interoperate in order to avoid duplication of effort, fragmentation, and above all, to facilitate the use and reuse of technologies and resources distributed throughout the world.

Focus
Web services are an increasingly common means to provide access to language technologies and resources. These services typically work in combination with repositories of language resources and workflow managers. This development brings with it its own set of issues in relation to collaboration and interoperability, including:
interoperability of input to and output from language technologies deployed as web services;
means to provide services for evaluation/replicability of results and iterative development;
means to support multi-site collaborative work;
licensing and cataloguing of language technologies and resources;     
sharing and access mechanisms to language technologies and resources;
quality assessment and sustainability of language technologies and resources.

Aims
This workshop aims to foster discussion on these (and related) issues in order to arrive at a set of concrete plans for future collaboration and cooperation as well as immediate next steps.
General discussions will focus on the following questions: How can the various infrastructures collaborate, in both the near and long-term future? What are the steps needed in order to share both language technologies and resources? How can the projects and initiatives (including not only those involved in the workshop, but also others) join forces in order to eventually create a global infrastructure for Human Language Technologies?

The goal is to leave the workshop with a resolution that 1. lists all active infrastructure and platform initiatives, 2. describes the consensus of all initiatives involved in the workshop, 3. outlines the requirements for collaboration and 4. proposes solutions.

Researchers and technologists interested in platforms, services, sharing of language resources etc. are encouraged to participate in the workshop in order to make sure that their voice is heard. As described above, the consensus and outcome of the workshop will be put down in writing in a short resolution document meant to be used by the whole community for public relation and dissemination purposes, especially with regard to discussions with journalists, administrators, politicians and funding agencies.

Preliminary program plan
The first session will provide short introductions to the infrastructural/coordination initiatives involved in the organisation.
In order to outline some concrete next steps for the immediate future, there will be sessions devoted to surveying two to four currently implemented solutions to crucial problems, with an eye toward assessing and comparing the various solutions in order to determine immediate action items. These sessions will address topics such as:
interoperability and the use of standards, for example, syntax and semantics used to exchange information between web services and/or technologies that may not have been developed at the same site (i.e., that do not necessarily utilize the same formats, categories, etc.) 
implemented means to provide evaluation/replicability and means to enable multi-site collaboration
licensing for data and tools shared over networks and services.
  
Contribute to an overview of the Language Resources and Technologies landscape!
In order to facilitate the discussion we ask workshop participants to answer the following questions and to send their answers to the organisers (see Contact mail below) at the beginning of May. A summary of the responses will be provided at the workshop to inform and to focus the discussion.
(1) Access – How do you make information about your tools and/or resources available to the world? How and where do you find information on tools and resources you would like to use?
(2) Obstacles to Data and Technology Exchange – What do you see as the major obstacle(s) to the exchange of data between technologies?
(3) Data or Technology Gaps – Are there tools, technologies or resources that do not exist at this time that are required to answer your research or development questions?
(4) Interoperability and Standards – What syntax and semantics do you use to exchange information between web services and/or tools that may not have been developed at the same site (i.e., do not necessarily utilize the same formats, categories, etc.)?  
(5) Evaluation – What have you implemented to provide evaluation/replicability?  
(6) Licensing – How are you handling licensing for data shared over networks and services?
(7) Collaboration – How would you propose to promote collaboration among the various infrastructure projects located around the world? 

We also welcome any additional comments or views that you wish to express.

We look forward to welcoming you in Reykjavik!

Organising Initiatives
COCOSDA
ELRA – European Language Resource Association
FLaReNet
Language Applications (LAPPS) Grid
Language Grid
META-NET
MLi – Towards a MultiLingual Data & Services infrastructure
MUSE FP7 – ClowdFlows initiative
Research Data Alliance

Organisers
Nicoletta Calzolari (ILC-CNR, Italy and ELRA, France)
Khalid Choukri (ELRA, France)
Christopher Cieri (LDC, USA)
Tomaž Erjavec (Jožef Stefan Institute, Slovenia)
Nancy Ide (Vassar College, USA)
Toru Ishida (Kyoto University, Japan)
Oleksandr Kolomiyets (KU Leuven, Belgium)
Joseph Mariani (LIMSI-CNRS and IMMI, France)
Yohei Murakami (Kyoto University, Japan)
Satoshi Nakamura (Nara Institute of Science and Technology, Japan)
Senja Pollak (Jožef Stefan Institute, Slovenia)
James Pustejovsky (Brandeis University, USA)
Georg Rehm (DFKI GmbH, Germany)
Herman Stehouwer (Max Planck, Germany)
Hans Uszkoreit (DFKI GmbH, Germany)
Andrejs Vasiļjevs (Tilde, Latvia)
Peter Wittenburg (Max Planck Institute for Psycholinguistics, The Netherlands)

     

Back  Top

3-3-9(2014-06-11) 15th ICPLA Conference 2014
15th ICPLA Conference 2014
International Clinical Phonetics and Linguistics Association
 
 
 

Dear Colleagues
It is our great pleasure to announce the 15th International Clinical Phonetics and Linguistics Association Conference, to be held in Stockholm, Sweden, June 11-13, 2014.


We hereby cordially invite you to Stockholm, at its best in June, to participate in the conference held at Karolinska Institutet, Solna, at the centrally located campus near Stockholm city center. 

Please visit www.icpla2014.se for the programme at a glance and important dates for panel proposals (September 1st) and abstract submissions (November 1st).  


We look forward to welcoming you to Stockholm in 2014!

Best wishes,
The Organizing Committee of the ICPLA 2014
Back  Top

3-3-10(2014-06-16) Odyssey 2014, Joensuu, Finland UPDATED
ODYSSEY 2014: THE SPEAKER AND LANGUAGE RECOGNITION WORKSHOP 
June 16-19, 2014, Joensuu, Finland

http://cs.uef.fi/odyssey2014/
https://www.facebook.com/SpeakerOdyssey2014

Registration now opened:
http://cs.uef.fi/odyssey2014/registration/

Early-bird deadline: April 30
Late-bird deadline:  May 31

------------------------------------------------------------------------
KEYNOTE SPEAKERS:

Dr. Samy Bengio, Google Research
http://research.google.com/pubs/bengio.html
'Large scale learning of a joint embedding space'

Prof. Martin Cooke, University of the Basque Country
http://laslab.org/martin
'Speaking in adverse conditions: from behavioural observations to
intelligibility-enhancing speech modifications'

Dr. Joseph P. Campbell, MIT Lincoln Lab
http://www.ll.mit.edu/mission/cybersec/HLT/biographies/campbell-bio.html
'Speaker Recognition for Forensic Applications'

------------------------------------------------------------------------
CONFERENCE TOPICS:

The general themes of the conference include speaker and language
recognition and characterization. Specific topics include, but are
not limited to, the following:

o Speaker characterization and adaptation
o Features for speaker and language recognition
o Multi-speaker training, detection and diarization
o Robustness in channels and environment
o Robust classification and fusion
o Speaker recognition corpora and evaluation
o Speaker recognition with speech recognition
o Forensics, multimodality, and multimedia speaker recognition
o Speaker and language confidence estimation
o Language, dialect, and accent recognition
o Speaker synthesis and transformation
o Human recognition of speaker and language
o Analysis and countermeasures against spoofing attacks
o Commercial applications

------------------------------------------------------------------------
REGULAR PAPER SUBMISSIONS:

All regular submissions (max 8 pages) will be reviewed by at least
three members of the scientific review committee. Regular submissions
must include scientific or methodological novelty; the paper must
review the relevant prior work and state the novelty clearly in the
Introduction. Accepted papers will appear in electronic proceedings.

------------------------------------------------------------------------
INDUSTRY TRACK AND DEMOS:

The Odyssey committee recognizes a large gap between theoretical
research results and real-world deployment of the methods. To foster
closer collaboration between industry and academia, Odyssey 2014
features an industry submission track. Submissions can include a
description of your target application, a product, a demonstrator, or
any combination. In addition to voice biometrics providers, we
encourage submissions from companies in need of speaker or language
recognition technology. Industry paper submissions do NOT have to
present methodological novelty, but each submission MUST address one
or more of the following aspects:

- Description of the application, role of speaker/language recognition
- Research results and methods that worked well in your application
- Negative research results that have NOT worked in practice
- Unsolved problems 'out-in-the-wild' that deserve attention

Industry submissions will NOT undergo full peer review, nor will they
be included in the proceedings. All industry track submissions can be
presented as posters; the organizing committee may select the most
interesting ones for oral presentation.

------------------------------------------------------------------------
NIST SPECIAL SESSIONS: I-VECTOR CHALLENGE & NIST SRE-2012 FOLLOW-UP

In addition to regular and industry paper submissions, Odyssey 2014
features two special sessions co-organized with National Institute of
Standards and Technologies (NIST). NIST SRE-2012 special session
focuses on extended analyses on the latest, NIST 2012 speaker
recognition evaluation (SRE) benchmark, and is targeted for the
participants of NIST SRE 2012.

The i-vector challenge is a new type of challenge targeted at anyone
interested in a 'quick start-up' in speaker recognition. Building
modern speaker and language recognition systems requires extensive
preprocessing, corpus engineering and computation, making it
challenging for newcomers to enter the field. This hinders the
piloting of promising modeling ideas developed outside the speaker
recognition community (e.g. the machine learning and image processing
communities). To bridge this gap, NIST is organizing a new type of
benchmark, the i-vector challenge, synchronized with Odyssey 2014.

The preliminary due date for paper submissions to both special sessions
is February 2014. Submissions by this deadline will undergo review
both by NIST and by the scientific review committee, and will be
included in the conference proceedings if accepted. Late challenge
submissions (without a paper) are also encouraged; they can be
presented as posters, but will not undergo peer review nor be included
in the conference proceedings.

To ensure smooth organization, early (non-binding) preliminary sign-up
is required. More details and registration for the i-vector challenge
will be available in November 2013.

See https://ivectorchallenge.nist.gov/ for more info about the i-vector
challenge.

------------------------------------------------------------------------
AWARDS:

Odyssey 2014 features three awards:

- A best paper award
- A best student paper award
- Free registration for 1 to 2 top-performers in the i-vector challenge

All regular and special session papers submitted on time are
candidates for the awards. The awards are given based on the review
reports AND the presentation at the conference. For the best student
paper award, the first author must be a student (i.e., not yet holding
a PhD degree) at the time of paper submission.

The top 1 to 2 teams (max 1 person per site) in the i-vector challenge
will receive FREE REGISTRATION to the full Odyssey 2014 workshop.

------------------------------------------------------------------------
ORGANIZING COMMITTEE:

Tomi Kinnunen, chair        University of Eastern Finland, Finland
Pasi Franti, co-chair       University of Eastern Finland, Finland
Jean-Francois Bonastre      University of Avignon, France
Niko Brummer                Agnitio, South Africa
Lukas Burget                Brno Univ. Technology, Czech Republic
Joseph Campbell             MIT Lincoln Lab, USA
Jan 'Honza' Cernocky        Brno Univ. Technology, Czech Republic
Haizhou Li                  Inst. Infocomm Research, Singapore
Alvin Martin                NIST, USA
Douglas Reynolds            MIT Lincoln Lab, USA

------------------------------------------------------------------------
SCIENTIFIC REVIEW COMMITTEE:

Eliathamby Ambikairajah     Univ. New South Wales, Australia
Tim Anderson                Air Force Research Lab, USA
Walt Andrews                BBN, USA
Hagai Aronowitz             IBM Haifa Research Lab, Israel
Roland Auckenthaler         NMS Comm., USA
Claude Barras               LIMSI, France
Kay Berkling                Inline, Intern. Online Dienste GmbH, Germany
Jean-Francois Bonastre      Univ. Avignon, France
Hynek Boril                 Univ. Texas at Dallas, USA
Niko Brummer                AGNITIO, South Africa
Lukas Burget                Brno Univ. Tech, Czech Republic
Joseph Campbell             MITLL, USA
Bill Campbell               MITLL, USA
Jan 'Honza' Cernocky        Brno Univ. Tech, Czech Republic
Nancy Chen                  I2R, Singapore
Sandro Cumani               Brno Univ. Tech, Czech Republic
Najim Dehak                 MIT, USA
George Doddington           US. Gov. Consultant, USA
Nicholas Evans              EURECOM, France
Mauro Falcone               Fondazione Ugo Bordoni, Italy
Kevin Farrell               Nuance, USA
Benoit Fauve                Validsoft, UK
Daniel Garcia-Romero        Johns Hopkins Univ, USA
Ondrej Glembek              Brno Univ. Tech, Czech Republic
Joaquin Gonzalez-Rodriguez  Univ. Autónoma de Madrid, Spain
Craig Greenberg             NIST, USA
Cemal Hanilci               Uludag Univ., Turkey
John H.L. Hansen            Univ. Texas at Dallas, USA
Taufiq Hasan                Univ. Texas at Dallas, USA
Ville Hautamaki             Univ. Eastern Finland, Finland
Hynek Hermansky             Johns Hopkins Univ, USA
Chien-Lin Huang             National Inst. Inf. and Comm. Tech., Japan
Michael Jessen              Bundeskriminalamt, Germany
Patrick Kenny               CRIM, Canada
Tomi Kinnunen               Univ. Eastern Finland, Finland
Tina Kohler                 DoD, USA
Pietro Laface               Politecnico di Torino, Italy
Anthony Larcher             I2R, Singapore
Kong Aik Lee                I2R, Singapore
Yun Lei                     SRI International, USA
Cheung-Chi Leung            I2R, Singapore
Haizhou Li                  I2R, Singapore
Ming Li                     Sun Yat-sen Univ, China
Bin Ma                      I2R, Singapore
Dave Marks                  Sandia, DoE National Labs, USA
Alvin Martin                NIST, USA
David Martinez              Univ. Zaragoza, Spain
John Mason                  Swansea Univ, UK
Pavel Matejka               Brno Univ. Tech, Czech Republic
Driss Matrouf               Univ. Avignon, France
Tomoko Matsui               Inst. Stat. Math, Japan
Yuri Matveev                ITMO University, Russia
Alan McCree                 Johns Hopkins Univ, USA
Pejman Mowlaee              Graz Univ. Tech., Austria
Jiri Navratil               IBM Research, USA
Raymond W.M. Ng             Univ. Sheffield, UK
Javier Ortega-Garcia        Univ. Autónoma de Madrid, Spain
Jason Pelecanos             IBM Research, USA
Oldrich Plchot              Brno Univ. Tech, Czech Republic
Padmanabhan Rajan           Univ. Eastern Finland, Finland
Daniel Ramos                Univ. Autónoma de Madrid, Spain
Douglas Reynolds            MITLL, USA
Fred Richardson             MITLL, USA
Rahim Saeidi                Univ. Eastern Finland, Finland
Nicolas Scheffer            SRI International, USA
Jan Silovsky                TU Liberec, Czech Republic
Themos Stafylakis           CRIM, Canada
Doug Sturim                 MITLL, USA
Rong Tong                   I2R, Singapore
Pedro Torres-Carrasquillo   MITLL, USA
Michael Wagner              Univ. Canberra, Australia
David van Leeuwen           Radboud Univ. Nijmegen, The Netherlands
Jesus Villalba              Univ. Zaragoza, Spain
Changhuai You               I2R, Singapore

------------------------------------------------------------------------
VENUE AND TRAVEL:

Odyssey 2014 will be hosted by School of Computing of the University of
Eastern Finland (UEF). Joensuu is a small town of 75,000 inhabitants
in lakeside Finland -- 'the capital of green'. It is famous for its
peaceful nature, excellent outdoor opportunities as well as many saunas.

Joensuu can be easily reached from Helsinki (50 min flight), which can
be reached via direct flights from several European, North American
and Asian cities.

For more details:
http://cs.uef.fi/odyssey2014/

Email: odyssey@cs.uef.fi
Back  Top

3-3-11(2014-07-17) 10th Oxford Dysfluency Conference (ODC) at St Catherine's College, Oxford, UK

We are pleased to announce that the 10th Oxford Dysfluency Conference (ODC) will be held at St Catherine's College, Oxford, from 17-20 July 2014.

ODC has a reputation as one of the leading international scientific conferences in the field of dysfluency. The conference brings together researchers and clinicians, providing a showcase and forum for discussion and collegial debate about the most current and innovative research and clinical practices. Throughout the history of ODC, the primary aim has been to bridge the gap between research and clinical practice.

The conference seeks to promote research that informs management, with interventions that are supported by sound theory and which inform future research.

In 2014, the goal of the Oxford Dysfluency Conference is to lead a challenging international debate about the latest research in disorders of fluency and their clinical applications. The 2014 conference will enable delegates to:

                 

                       
  • Present the latest research developments and findings
  • Explore issues relating to the nature of stuttering and its treatment
  • Develop knowledge and clinical skills for working with children and adults who stutter
  • Consider ways to integrate research into clinical practice
  • Support and encourage new researchers in the field
  • Develop collaborations with researchers working in dysfluency
  • Provide informal opportunities to meet and discuss ideas with leading experts in the field in a friendly environment
  • Advance research in the field of dysfluency

Conference Co-Chairs                    

David Rowley, Faculty of Health and Life Sciences, De Montfort University, UK
Sharon Millard, The Michael Palin Centre, UK

Back  Top

3-3-12(2014-06-18) 12th International Workshop on Content-Based Multimedia Indexing (CBMI), Klagenfurt, Austria MODIFIED

   CALL FOR PAPERS 
        Extended paper submission deadline:  February 28, 2014

        C B M I    2014
        12th International Workshop on Content-Based Multimedia Indexing
        18-20 June 2014, Klagenfurt, Austria

        http://cbmi2014.itec.aau.at/
        in cooperation with IEEE CSS and ACM SIGMM
******************************************************************************

The 12th International Content Based Multimedia Indexing Workshop aims
to bring together the various communities involved in all aspects of
content-based multimedia indexing, retrieval, browsing and presentation.
Following the eleven successful previous events of CBMI (Toulouse 1999,
Brescia 2001, Rennes 2003, Riga 2005, Bordeaux 2007, London 2008, Chania
2009, Grenoble 2010, Madrid 2011, Annecy 2012, and Veszprem 2013), CBMI
2014 will take place in Klagenfurt, in the very south of Austria from
June 18th to June 20th 2014. CBMI 2014 is organized in cooperation with
IEEE Circuits and Systems Society and ACM SIG Multimedia. Topics of the
workshop include but are not limited to visual indexing, audio and
multi-modal indexing, multimedia information retrieval, and multimedia
browsing and presentation. Additional special sessions are planned in
the fields of endoscopic videos and images and multimedia metadata.


Paper submission: Authors are invited to submit full-length and special
session papers of 6 pages and short (poster) and demo papers of 4 pages
maximum. All peer-reviewed, accepted and registered papers will be
published in the CBMI 2014 workshop proceedings, to be indexed and
distributed by IEEE Xplore. Submissions are peer reviewed in a
single-blind process; the language of the workshop is English. Selected
papers of the conference will be invited to submit extended versions of
their contributions to a special issue of Multimedia Tools and
Applications journal (MTAP).


* Important dates:
   Paper submission deadline (extended):  February 28, 2014
   Notification of acceptance:   March 30, 2014
   Camera-ready papers due:      April 14, 2014
   Author registration:          April 14, 2014
   Early registration:           May 25, 2014

* Topics of interest include, but are not limited to:
      * Visual Indexing (image, video, graphics)
      * Visual content extraction
      * Identification and tracking of semantic regions
      * Identification of semantic events
      * Audio and Multi-modal Indexing
      * Audio indexing (audio, speech, music)
      * Audio content extraction
      * Multi-modal and cross-modal indexing
      * Multimedia fusion
      * Metadata generation, coding and transformation
      * Multimedia Information Retrieval (image, audio, video, …)
      * Matching and similarity search
      * Content-based search
      * Multimedia data mining
      * Multimedia recommendation
      * Large scale multimedia database management
      * Multimedia Browsing and Presentation
      * Summarization, browsing and organization of multimedia content
      * Personalization and content adaptation
      * User interaction and relevance feedback
      * Multimedia interfaces, presentation and visualization tools

* Contact
For more information please visit http://cbmi2014.itec.aau.at/ and for
additional questions, remarks, or clarifications please contact
cbmi2014@itec.aau.at 

Back  Top

3-3-13(2014-06-18) SIGDIAL 2014 CONFERENCE, Philadelphia, PA, USA

 

SIGDIAL 2014 CONFERENCE
Wednesday, June 18 to Friday, June 20, 2014   

The 15th Annual Meeting of the Special Interest Group on Discourse and Dialog will be
co-located with the 8th International Conference on Natural Language Generation (INLG 2014) in Philadelphia, PA, USA and immediately preceding ACL 2014.

     http://www.sigdial.org/workshops/conference15

 

CALL FOR PAPERS

The 2014 SIGDIAL conference continues a series of fourteen conferences, providing a regular forum for the presentation of cutting edge research across the areas of discourse and dialog and attracting a diverse set of participants from academia and industry. The conference is sponsored by the SIGdial organization, which serves as the Special Interest Group on discourse and dialog for both ACL and ISCA.

KEYNOTE SPEAKERS

We are pleased to announce our keynote speakers will be Prof. Steve Young from the University of Cambridge and Prof. Lillian Lee from Cornell University.

TOPICS OF INTEREST

We welcome formal, corpus-based, system-building or analytical work on discourse and dialog including but not restricted to the following themes and topics:

- Discourse Processing and Dialog Systems

- Corpora, Tools and Methodology

- Pragmatic and/or Semantic Modeling

- Dimensions of Interaction

- Open Domain Dialog

- Style, Voice and Personality in Spoken Dialog and Written Text

- Applications of Dialog and Discourse Processing Technology

- Novel Methods for Generation Within Dialog (for a joint special session with INLG)

SPECIAL SESSIONS

There will be one special session co-located with INLG on the morning of June 20th, and a second special session on the Dialogue State Tracking Challenge (DSTC). Proposals for additional special sessions are still welcome; special session submissions will undergo the regular SIGdial review process.

IMPORTANT DATES

Special Session Proposal Deadline:          Sunday, 9 February 2014 (23:59, GMT-11)

Special Session Notification:                     Monday, 17 February 2014

Long, Short and Demonstration

Paper Submission Deadline:                       Sunday, 9 March 2014 (23:59, GMT-11)

Paper Notification:                                       Friday, 18 April 2014

Final Paper Due

- For papers accepted subject to receiving mentoring:     Wednesday, 14 May 2014

- For accepted papers:                                    Friday, 23 May 2014

Conference                                                  Wednesday, June 18, 2014 to Friday, June 20, 2014

 

SUBMISSIONS

Special Session Proposals

The SIGdial organizers welcome the submission of special session proposals.  A SIGDIAL special session is the length of a regular session at the conference; may be organized as a poster session, a poster session with panel discussion, or an oral presentation session; and will be held on the last day of the conference.  Special sessions may, at the discretion of the SIGdial organizers, be held as parallel sessions.  Those wishing to organize a special session should prepare a two-page proposal containing:  a summary of the topic of the special session; a list of organizers and sponsors; a list of people who may submit and participate; and a requested format (poster/panel/oral session).  These proposals should be sent to conference[a]sigdial.org by the special session proposal deadline.  Special session proposals will be reviewed jointly by the general and program co-chairs. 

Papers 

The program committee welcomes the submission of long papers, short papers, and demonstration descriptions. All accepted submissions will be published in the conference proceedings.

Long papers may, at the discretion of the technical program committee, be accepted for oral or poster presentation. They must be no longer than 8 pages, including title, content, and examples. Two additional pages are allowed for references and appendices, which may include extended example discourses or dialogs, algorithms, graphical representations, etc.

Short papers will be presented as posters. They should be no longer than 4 pages, including title and content. One additional page is allowed for references and appendices.

Demonstration papers should be no longer than 3 pages, including references. A separate one-page document should be provided to the program co-chairs for demonstration descriptions, specifying furniture and equipment needed for the demo.

Authors of a submission may designate their paper to be considered for a SIGDIAL special session, which would highlight a particular area or topic.  All papers will undergo regular peer review. 

Papers that have been or will be submitted to other meetings or publications must provide this information (see submission format). A paper accepted for presentation at SIGDIAL 2014 must not have been presented at any other meeting with publicly available proceedings. Any questions regarding submissions can be sent to the program co-chairs at program-chairs[at]sigdial.org.

Authors are encouraged to submit additional supportive material such as video clips or sound clips and examples of available resources for review purposes.

Submission is electronic, using the conference paper submission software.

FORMAT

All long, short, and demonstration submissions should follow the two-column ACL 2014 format. We strongly recommend the use of ACL LaTeX style files or Microsoft Word style files tailored for the ACL 2014 conference. Submissions must conform to the official ACL 2014 style guidelines (http://www.cs.jhu.edu/ACL2014/CallforPapers.htm), and they must be electronic in PDF. As in most previous years, submissions will not be anonymous.

MENTORING SERVICE

For several years, the SIGDIAL conference has offered a mentoring service. Submissions with innovative core ideas that may need language (English) or organizational assistance will be flagged for 'mentoring' and conditionally accepted with recommendation to revise with a mentor. An experienced mentor who has previously published in the SIGDIAL venue will then help the authors of these flagged papers prepare their submissions for publication. Any questions about the mentoring service can be addressed to the mentoring chair at mentoring[at]sigdial.org.

STUDENT SUPPORT

SIGdial also offers a limited number of scholarships for students presenting a paper accepted to the conference. Application materials will be posted at the conference website.

BEST PAPER AWARDS

In order to recognize significant advancements in dialog and discourse science and technology, SIGDIAL will present two best paper awards. A selection committee of prominent researchers in the fields of interest will select the recipients of the awards.

SPONSORSHIP

SIGDIAL offers a number of opportunities for sponsors. For more information, email the conference organizers at sponsor-chair[at]sigdial.org.

DIALOG AND DISCOURSE

SIGDIAL authors are encouraged to submit their research to the journal Dialogue and Discourse, which is endorsed by SIGdial.

ORGANIZING COMMITTEE

General Co-Chairs

Kallirroi Georgila, University of Southern California, USA

Matthew Stone, Rutgers, The State University of New Jersey, USA

 

Technical Program Co-Chairs

Helen Hastie, Heriot-Watt University, Edinburgh, UK

Ani Nenkova, University of Pennsylvania, USA

 

Mentoring Chair

Svetlana Stoyanchev, AT&T Research Labs, USA

 

Local Chair

Keelan Evanini, Educational Testing Service, USA

 

Sponsorships Chair

Giuseppe Di Fabbrizio, Amazon.com, USA

 

SIGdial President

Amanda Stent, Yahoo! Labs, USA

 

SIGdial Vice President

Jason Williams, Microsoft Research, USA

 

SIGdial Secretary/Treasurer

Kristiina Jokinen, University of Helsinki, Finland

 


3-3-14(2014-06-21) The REAL Challenge

The REAL Challenge – Call for Participation

 

The Dialog Research Center at Carnegie Mellon (DialRC) is organizing the REAL Challenge. The goal of the REAL Challenge (dialrc.org/realchallenge) is to build speech systems that are used regularly by real users to accomplish real tasks. These systems will give the speech and spoken dialog communities steady streams of research data as well as platforms they can use to carry out studies. It will engage both seasoned researchers and high school and undergrad students in an effort to find the next great speech applications.

 

Why have a REAL Challenge?

Humans rely heavily on spoken language to communicate, so it seems natural that we would communicate with objects via speech as well. Some speech interfaces do exist and show promise, demonstrating that smart engineering can compensate for imperfect recognition. Yet the general public has not taken up this means of communication as readily as it has tiny keyboards. About two decades ago, many researchers were using the internet, mostly to send and receive email; they were aware of its potential and waited to see when and how the general public would adopt it. Nearly a decade later, thanks to providers such as America Online, who had found ways to create easy access, everyday people started to use the internet, and this has dramatically changed our lives. In the same way, we all know that speech will eventually replace the keyboard in many situations where we want to talk to objects. The big question is which interface or application will bring us into that era.

 

Why hasn’t speech become a more prevalent interface? Most of today’s speech applications have been devised by researchers in the speech domain. While they certainly know what types of systems are “doable”, they may not be best placed to determine which speech applications would be universally acceptable.

 

We believe that students who have not yet had their vision limited by knowledge of the speech and spoken dialog domains, and who have grown up with computers as a given, are the ones who will find new, compelling and universally appealing speech applications. Along with their good ideas, they will need some guidance to gain focus; having a mentor, attending webinars and participating in a research team can provide this guidance.

 

The REAL Challenge will combine the talents of these two very different groups. First, it will call upon the speech research community, who know what it takes to implement real applications. Second, it will advertise to and encourage participation from high school students and college undergraduates who love to hack and have novel ideas about using speech.

 

How can we combine these two types of talent?

The REAL Challenge is starting with a widely-advertised call for proposals. Students can propose an application. Researchers can propose to create systems or to provide tools. A proposal can target any type of application in any language. The proposals will be lightly filtered and the successful proposers will be invited to a workshop on June 21, 2014 to show what they are proposing and to team up. The idea is for students to meet researchers and for the latter to take one or more students on their team. Students will present their ideas and have time for discussion with researchers. A year later, a second workshop will assemble all who were at the first workshop to show the resulting systems, measure success and award prizes.

Student travel will be taken care of by DialRC through grants.

 

Preparing students

Students will have help from DialRC and from researchers as they formulate their proposals. DialRC will provide webinars on such topics as speech processing tool basics and how to present a poster. Students will also be assigned mentors. Researchers in speech and spoken dialog can volunteer to be a one-on-one mentor to a student. This consists of being in touch either in person or virtually. Mentors can tell the students about what our field consists of, what the state of the art is, and what it is like to work in research. They can answer questions about how the student can talk about their ideas. If you are a researcher in speech and/or spoken dialog and you would like to be a mentor, please let us know at realchallenge@speechinfo.org

 

What is an entry?

The groups will create entries. Here are the characteristics of a successful entry:

  • the interaction is stateful (not stateless, not a simple on-off command)

  • the interaction is sustained over multiple turns

  • language is central to the entry: it is the primary medium of exchange (not necessarily the only medium, but not peripheral to the entry’s main use)

  • the entry does meaningful processing and makes a meaningful contribution to the interaction (it does not just pass messages like an email router)

 

How can we assess success?

Success will be judged on the basis of originality, the number of regular users, the amount of data collected, and other criteria to be agreed upon by the Challenge scientific committee and the participants.

 

Possible prize areas for an entry include:

  • how much usage it gets

  • how engaging it is / how novel is the interaction

  • how good it is as a platform for future research. A platform is defined here as the output of an entry that would be of use to the research community; it is not just a computer program toolkit. It could, for example, be used the following year as the basis for a competition (such as best ASR or best belief tracking)

 

Details of the measures of success will be refined at the workshop with input from the participants.

 

 

Timeline

The REAL Challenge was announced at several major conferences during the summer of 2013: SIGDIAL, Interspeech, ACL. It is also being announced to younger participants through their schools and hacker websites.

 

March 20, 2014 : Proposals due

April 20, 2014: Feedback on proposals and invitations to attend the workshop sent out.

June 21, 2014 : Workshop in Baltimore Maryland USA.

Early summer of 2015 : Resulting systems are presented a year after the first workshop.

 

What advantage is there for a student to participate?

For students, participation in the REAL Challenge will present several unique opportunities:

  • the chance to work in a group with real researchers, on a real world problem

  • the chance to see how ideas are turned into reality

  • the chance to make something that works and that people actually use

  • the chance to learn about new technology and use it to solve new problems

  • the chance to observe what careers in technology are like and to be in contact with possible future employers

 

What does this Challenge contribute to the speech community?

For researchers, participation reaps several benefits:

  • the number and type of speech applications will be greatly expanded

  • there will be more datasets available for research

  • there will be more platforms to run studies on and to use in speech and spoken dialog classes

  • the enrichment that comes from mentoring

 

Why should industrial research groups be interested in the Challenge?

Industrial research groups should be interested to see:

  • which types of applications actually appeal to the general public (and could thus generate revenue) and which ones fail

  • how students learn to apply the latest speech technologies in novel directions and which of these students could become future collaborators

 

Organization

This Challenge is run by the Dialog Research Center at Carnegie Mellon (DialRC)

 

REAL Challenge Scientific Committee

 

Alan W Black, Carnegie Mellon University, USA

Maxine Eskenazi, Carnegie Mellon University, USA

Helen Hastie, Heriot Watt University, Scotland

Gary Geunbae Lee, Pohang University of Science and Technology, South Korea

Sungjin Lee, Carnegie Mellon University, USA

Satoshi Nakamura, Nara Institute of Science and Technology, Japan

Elmar Noeth, Friedrich-Alexander University Erlangen-Nuremberg, Germany

Antoine Raux, Lenovo, USA

David Traum, University of Southern California, USA

Jason Williams, Microsoft Research, USA

 

Contact information:

Website : http://dialrc.org/realchallenge

Email : realchallenge@speechinfo.org


3-3-15(2014-06-23) Second Call for Demonstrations — JEP’2014
Second Call for Demonstrations — JEP’2014

Le Mans, 23 to 27 June 2014

CALENDAR

Submission deadline: 18 May 2014
Notification to authors: 23 May 2014

PRESENTATION

Organized by the LST team of LIUM (Laboratoire d'Informatique de l'Université du Maine) and the TALN team of LINA (Laboratoire d'Informatique de Nantes Atlantique), JEP’2014 will be held from 23 to 27 June 2014 in Le Mans.

JEP’2014 will include oral and poster presentations, invited talks, and a demonstration session.

ORGANIZATION

The conference organizers are pleased to invite participants to present demonstrations of software and prototypes based on automatic speech processing methods. In this context, industry professionals may apply to present their software during this session.
The purpose of the session is to provide a setting for interaction between industry and academia on questions related to automatic speech processing.
Presentations of speech studies involving neither software nor prototypes are also welcome when they enable this type of interaction.

The DEMONSTRATIONS AND INDUSTRY session will host presentations in the following forms (depending on needs and availability):

- exhibitor stand;
- presentation poster;
- software product demonstration.

The first part of this session will be open only to registered conference attendees. In an effort of scientific and technical dissemination, the second part will be open to the general public.

To participate, candidates should send an abstract (2 pages maximum, in the conference format) to yannick.esteve@univ-lemans.fr and emmanuel.morin@univ-nantes.fr by 11 May 2014 at the latest. Participants will be selected by the organizing committee, independently of the usual scientific selection process. Selection will be based on the relevance of proposals to the JEP conference topics and on their potential for interaction between industry and academia.


Contact: yannick.esteve@univ-lemans.fr and emmanuel.morin@univ-nantes.fr

3-3-16(2014-06-24) CfP International Conference of young researchers in Language Didactics and Linguistics, Grenoble Fr.

CALL FOR PAPERS

International Conference of young researchers in Language Didactics and Linguistics

Multidisciplinary conference on the study of language

 

24 June – 27 June 2014

LIDILEM laboratory Stendhal University, Grenoble, France

http://cedil2014.u-grenoble3.fr

In line with the research areas of our laboratory, the objective of this multidisciplinary conference is to allow the community of PhD students and young researchers to present their research in the fields of language, its teaching and/or literacy, psychology, education sciences, ethnology, neurolinguistics, and human-machine communication.

RESEARCH THEMES

· Linguistics
· Psycholinguistics
· Linguistic development
· Sociolinguistics
· Multilingualism
· Language didactics
· Natural Language Processing (NLP)
· Digital Humanities

CALENDAR

· Submission deadline: 29 November 2013 (extended from 15 November 2013)
· Announcement of acceptances: 3 March 2014
· Preliminary program: May 2014
· Reception of final articles: 2 June 2014
· Conference dates: Tuesday, 24 June (afternoon) to Friday, 27 June 2014

SCIENTIFIC COMMITTEE

Laurent BESACIER (Université Joseph Fourier de Grenoble, France), Jacqueline BILLIEZ (Université Stendhal de Grenoble, France), Annette BOUDREAU (Université de Moncton, Canada), Gabrièle BUDACH (University of Southampton, England), Cécile CANUT (Université Paris-Descartes, France), Jean-Pierre CHEVROT (Université Stendhal de Grenoble, France), Jean-Louis CHISS (Université Paris 3, France), Jean-François de PIETRO (Université de Neuchâtel, Institut de Recherche et de Documentation Pédagogique, Switzerland), Jean-Marc DEWAELE (Birkbeck, University of London, England), Cécile FABRE (ERSS, Université de Toulouse, France), Isabel GONZALEZ REY (Université St Jacques de Compostelle, Spain), Heather HILTON (Université Paris 8, France), Alexandra JAFFE (California State University Long Beach, United States), Sophie KERN (Université de Lyon 3, France), Marinette MATTHEY (Université Stendhal de Grenoble, France), Christophe PARISSE (Université de Paris 10, France), Ludovic TANGUY (ERSS, Université de Toulouse, France).

LANGUAGES

The languages used during the conference will be French or English.

SUBMISSION REQUIREMENTS

This conference addresses only young researchers (PhD students and recent doctors). Abstracts must be in French or in English.

The abstract should not be more than 2 pages long, including references.

Deadline for submission: 29 November 2013 (extended from 15 November 2013)

 

For more information refer to the instructions indicated on the conference site:

http://cedil2014.u-grenoble3.fr

MODALITIES OF COMMUNICATION

Presentations and posters by young researchers will alternate with plenary talks given by renowned lecturers and researchers from different disciplinary fields.

Workshop talks: 20-minute presentation followed by 10 minutes of discussion.

Poster presentations.

PUBLICATION OF PROCEEDINGS

Abstracts accepted for oral or poster presentation may be published as articles (8-10 pages) to be submitted before June 2nd, 2014. Articles will be reviewed and selected by a reading committee, with the possibility of publication by the University Press of Grenoble (PUG) at the beginning of 2015.

CONTACT

For any further information about submissions or registrations, please email to:

cedil2014@gmail.com


3-3-17(2014-06-26) 5th annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT), Baltimore, USA

We are pleased to announce the first call for papers for the fifth annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT), to be co-located with ACL 2014 in Baltimore in June 2014. The deadline for submission of papers and demo proposals is 21 March. Full details on the workshop, topics of interest, timeline, and formatting of regular papers can be found here:

               

       http://www.slpat.org/slpat2014

 

This 2-day workshop will bring together researchers from all areas of speech and language technology with a common interest in making everyday life more accessible for people with physical, cognitive, sensory, emotional, or developmental disabilities. It will provide an opportunity for individuals from both research communities, and for the individuals with whom they are working, to share research findings and to discuss present and future challenges and the potential for collaboration and progress. General topics include but are not limited to:

                • Automated processing of sign language

                • Speech synthesis and speech recognition for physical or cognitive impairments

                • Speech transformation for improved intelligibility

                • Speech and Language Technologies for Assisted Living

                • Translation systems; to and from speech, text, symbols and sign language

                • Novel modeling and machine learning approaches for AAC/AT applications

                • Text processing for improved comprehension, e.g., sentence simplification or text-to-speech

                • Silent speech: speech technology based on sensors without audio

                • Symbol languages, sign languages, nonverbal communication

                • Dialogue systems and natural language generation for assistive technologies

                • Multimodal user interfaces and dialogue systems adapted to assistive technologies

                • NLP for cognitive assistance applications

                • Presentation of graphical information for people with visual impairments

                • Speech and NLP applied to typing interface applications

                • Brain-computer interfaces for language processing applications

                • Speech, natural language and multimodal interfaces to assistive technologies

                • Assessment of speech and language processing within the context of assistive technology

                • Web accessibility; text simplification, summarization, and adapted presentation modes such as speech, signs or symbols

                • Deployment of speech and NLP tools in the clinic or in the field

                • Linguistic resources; corpora and annotation schemes

                • Evaluation of systems and components, including methodology

                • Anything included in this year's special topic

                • Other topics in Augmentative and Alternative Communication

 

Please contact the conference organizers at slpat2014-workshop@googlegroups.com with any questions.

 

Important dates:

 

21 March: Paper/demo submissions due

11 April: Notification of acceptance

28 April: Camera-ready papers due

26 - 27 June: SLPAT workshop

 

We look forward to seeing you!

 

The organizing committee of SLPAT 2014,

Jan Alexandersson, DFKI, Germany

Dimitra Anastasiou, University of Bremen, Germany

Cui Jian, SFB/TR 8 Spatial Cognition, University of Bremen, Germany

Ani Nenkova, University of Pennsylvania, USA

Rupal Patel, Northeastern University, USA

Frank Rudzicz, Toronto Rehabilitation Institute and University of Toronto, Canada

Annalu Waller, University of Dundee, Scotland

Desislava Zhekova, University of Munich, Germany

 

 

 


3-3-18(2014-07-01) 21st Conference on Natural Language Processing (TALN 2014), Marseille, F(MODIFIED)

CALL FOR PAPERS

 

                               TALN-2014

             21st Conference on Natural Language Processing

 

                        http://www.taln2014.org

 

                             July 1-4 2014

 

                           Marseille, FRANCE

 

IMPORTANT DATES

 

==========

 

1. Long paper

 

    - Extended submission deadline : February 27, 2014

    - Notification : March 29, 2014

    - Camera ready paper due : May 2, 2014

 

2. Short paper

 

    - Paper submission deadline : April 12, 2014

    - Notification : May 10, 2014

    - Camera ready paper due : May 26, 2014

 

3. Demonstrations

 

    - Submission deadline : April 21, 2014

    - Notification : May 10, 2014

    - Camera ready paper due : May 26, 2014

 

 

PRESENTATION

============

Organized by the LPL (Laboratoire Parole et Langage) and the LIF (Laboratoire d’Informatique Fondamentale), the 21st Conference on Natural Language Processing (TALN) will take place from July 1st to 4th at Faculté Saint Charles, Marseille (France).

 

TALN'2014 is organised under the aegis of ATALA (Association pour le Traitement Automatique des Langues) and will be held jointly with RECITAL'2014, the conference for young researchers (separate call for papers).

 

TALN'2014 will include oral presentations of research and position papers, posters, invited speakers and demonstrations. The official language is French. English presentations and papers are accepted from non-French-speaking authors.

 

 

TYPES OF COMMUNICATIONS

=======================

Two communication formats are proposed: long papers (12 to 14 pages) and short papers (6 to 8 pages).

 

Authors are invited to submit two types of communications:

 

  - original research work

  - position paper on the current state of the research work

 

Papers should present original work, with substantial new material compared to previous publications by the same author(s). Translations of previously published papers are not acceptable.

There will be two presentation formats: oral for long papers and poster for short papers.

 

All topics of NLP are eligible for a submission.

 

 

SELECTION CRITERIA

==================

Submissions will be reviewed by at least two experts of the domain. For research papers, decisions will be based on the following criteria:

- relevance to the conference topics
- importance and originality of the paper
- scientific and technical soundness
- comparison of the results obtained with those found in relevant works
- situation of the research in comparison with international work
- clarity of the presentation

 

 

For position papers, decisions will be based on the following criteria:

- originality of the point of view presented
- breadth of view and consideration of the state of the art

 

 

The selected communications will be published in the conference proceedings.

The program committee will select one paper (TALN Best Paper) among the accepted papers, which will be recommended for publication (in extended form) in the journal 'Traitement Automatique des Langues' (T.A.L.).

 

 

SUBMISSION PROCEDURE

=======================

Papers will be written in French for French-speaking authors or English for non-French-speaking authors.

 

A LaTeX style file and a Word template will be made available on the conference website: http://www.taln2014.org

 

 

ORGANIZING COMMITTEE

=====================

Philippe Blache (Chair)

Carine André, Frédéric Béchet, Sébastien Bermond, Brigitte Bigi, Nadéra Bureau, Cyril Deniaud, Stéphanie Desous, Benoît Favre, Nuria Gala, Joëlle Lavaud, Grégoire Montcheuil, Alexis Nasr, Catherine Perrot, Klim Peshkov, Laurent Prévot, Carlos Ramisch, Stéphane Rauzy, Claudia Starke

 

CONTACTS

========

  philippe.blache[arobas]lpl-aix.fr

  nadera.bureau[arobas]lpl-aix.fr

 


3-3-19(2014-07-01) Workshop: Lexical Networks and Natural Language Processing (RLTLN), Marseille (F)

 


We do accept papers written in English by those who are not fluent in French

Les articles seront rédigés en français pour les francophones,
en anglais pour ceux qui ne maîtrisent pas le français.

---------------------

Réseaux Lexicaux et Traitement des Langues Naturelles
(RLTLN)


Atelier TALN 2014
Faculté Saint Charles (Aix Marseille Université), 1er juillet 2014.
Date limite de soumission
: 21 avril 2014  


Organisateurs :
Michael Zock (LIF, Marseille)
Gemma Bel-Enguix (LIF, Marseille)
Reinhard Rapp (LIF, Marseille et Université de Mainz)

1 PRÉSENTATION DU CHAMP

La façon dont nous regardons les unités lexicales, leur organisation et utilisation a radicalement changée ces dernières décennies. Décrites dans des dictionnaires et considérées comme des annexes de la grammaire dans les années 80, on les considère désormais comme de la matière première en TAL. Si à l’époque on utilisait encore des termes comme 'mots' ou 'dictionnaires', on parle aujourd’hui plutôt de 'ressources lexicales' dont il existe un certain nombre (WordNet, FrameNet, VerbNet, PropBank, ...). Celles-ci ont été standardisées (http://en.wikipedia.org/wiki/UBY-LMF), liées entre elles (http://verbs.colorado.edu/semlink/) ou liées à des encyclopédies comme Wikipédia (http://en.wikipedia.org/wiki/BabelNet). Il y a également des projets comme DBnary (http://kaiko.getalp.org/about-dbnary) qui, partant de Wiktionary, fournit des ressources lexicales dans de nombreuses langues.

Si dans le passé on créait des dictionnaires à la main, on le fait aujourd'hui de manière (semi-) automatique et à l'aide de corpus. Bien entendu, cette évolution ne s'est pas faite du jour au lendemain. Les premières tentatives de création automatique de ressources à partir de dictionnaires imprimés (Ide Véronis, sites.univ-provence.fr/veronis/publis.html) se sont vite heurtées à des problèmes, en raison de la pauvreté de la source : les dictionnaires papier ne contenaient pas les informations nécessaires permettant ensuite un usage par la machine. Or, c’était justement le but recherché. L’accès à de vastes corpus a alors permis de marquer un tournant et de construire des ressources plus riches, plus explicites et mieux structurées. Concernant ce dernier point, WordNet (WN) a joué un rôle capital. Bien qu’il n’a pas eu le succès escompté auprès des psycholinguistes ou auprès des utilisateurs consultant la ressource (pour chercher des mots), WN a eu un succès considérable en TAL. Ceci dit, il a également eu un impact incontestable sur le plan théorique. WN a profondément modifié notre manière de voir la structure des ressources lexicales. Dorénavant, elles ne se résument plus à des simples listes alphabétiques, mais elles sont réprésentées plutôt sous forme des graphes (réseau lexical) dont les noeuds sont des unités lexicales liées par différents types de relations.

Parallèlement à l’évolution des ressources lexicales, on a pu observer une évolution notable concernant les travaux portant sur les graphes. Ces derniers semblent se prêter à merveille à la modélisation de divers domaines (Barrat, 2008, Barabási, 2003), y compris celui de la langue. En effet, il y a eu de nombreux travaux montrant leur pertinence pour capter le sens des mots et celui des phrases (Widdows, 2004; Sowa, 1991) ou pour modéliser divers aspects du 'monde' lexical : structures associatives (http://www.eat.rl.ac.uk, ou http://w3.usf.edu/FreeAssociation/), structure du dictionnaire (Gaume et al. 2008), densité lexicale, distance moyenne entre les mots (Vitevitch, 2008), accessibilité (Ferrer i Cancho Sole, 2001), aspects dynamiques des graphes (Dion, 2012), etc.

We thus find two communities: one interested in the data themselves (concrete objects such as lexical units), the other more in their representation and organisation (graphs, topology, navigation). This workshop is organised to encourage the exchange of ideas between these two worlds.
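As a toy illustration of the graph view of the lexicon described above, a lexical network can be sketched as a set of typed edges between word nodes. This is a minimal sketch, not tied to any particular resource mentioned here; all words and relation types are invented for the example:

```python
from collections import defaultdict

class LexicalGraph:
    """A lexical network: nodes are lexical units, edges carry a
    relation type (synonym, hypernym, association, ...)."""

    def __init__(self):
        # word -> {(relation, word), ...}
        self.edges = defaultdict(set)

    def add_relation(self, source, relation, target):
        self.edges[source].add((relation, target))

    def neighbours(self, word, relation=None):
        """Words linked to `word`, optionally filtered by relation type."""
        return sorted(t for (r, t) in self.edges[word]
                      if relation is None or r == relation)

g = LexicalGraph()
g.add_relation("car", "synonym", "automobile")
g.add_relation("car", "hypernym", "vehicle")
g.add_relation("car", "association", "road")

print(g.neighbours("car"))               # ['automobile', 'road', 'vehicle']
print(g.neighbours("car", "hypernym"))   # ['vehicle']
```

Navigation (lexical access) then amounts to traversing such typed links from the words a user starts with.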


2 TOPICS

We invite submissions on the themes mentioned above, and in particular:
  • Origin of the data used to build resources: corpora (web, blogs, emails), human subjects (association lists), etc.;
  • Method of resource construction: automatic, semi-automatic, collaborative (via games, etc.);
  • Automatic construction of the network (detection and characterisation of semantic relations);
  • Organisation of the data: alphabetical, thematic, semantic links, associative links;
  • Mathematical properties of lexical networks;
  • Factors affecting the weight of nodes or links: dynamic aspects of graphs (frequency, salience, recency, topic change, etc.);
  • Topological characterisation of the lexical graph: distribution, relative density, evolution of the graph;
  • Exploitation or use of the resource, or of one of its transformations, such as converting the graph into a tree to support navigation (lexical access);
  • Accessibility of words via properties of the network ('small world' phenomenon);
  • Visualisation and manipulation of graphs (conversion into trees, clustering, computation of semantic similarity);
  • Modelling of linguistic variation and language change (evolution of the lexicon).

3 SELECTION CRITERIA

Submissions will be reviewed by at least two specialists of the domain.

For research papers, particular attention will be paid to:

  • relevance to the workshop themes,
  • significance and originality of the contribution,
  • correctness of the scientific and technical content,
  • organisation and clarity of the presentation.

4 SUBMISSION GUIDELINES

Papers should be written in French by French speakers, and in English by those who do not master French. They must follow the TALN 2014 format and may not exceed 10 pages (references included). Style sheets (LaTeX and Word) are available on the conference website (http://www.taln2014.org/site/soumission/). Submissions must be sent in PDF format via: https://www.easychair.org/conferences/?conf=RLTLN2014. Accepted papers will be presented in a 30-minute slot, discussion included.

5 PROGRAMME COMMITTEE

  • Bel Enguix, Gemma (LIF, Aix-Marseille University, France)
  • Bouillon, Pierrette (TIM, Faculty of Translation and Interpreting, Geneva, Switzerland)
  • Cristea, Dan (University A.I. Cuza, Iasi, Romania)
  • Ferrer i Cancho, Ramon (LARCA, Polytechnic University of Catalonia, Barcelona, Spain)
  • Ferret, Olivier (CEA LIST, Gif-sur-Yvette, France)
  • Francopoulo, Gil (Tagmatica, Paris, France)
  • Gala, Nuria (LIF-CNRS, Aix-Marseille University, Marseille, France)
  • Granger, Sylviane (Université Catholique de Louvain, Belgium)
  • Grefenstette, Gregory (Inria, Saclay, France)
  • Lapalme, Guy (RALI, University of Montreal, Canada)
  • Lenci, Alessandro (University of Pisa, Italy)
  • L'Homme, Marie-Claude (University of Montreal, Canada)
  • Massip i Bonet, Àngels (University of Barcelona, Department of Catalan Philology, Spain)
  • Navigli, Roberto (Sapienza University of Rome, Italy)
  • Ploux, Sabine (L2C2, Institut des Sciences Cognitives, Lyon, France)
  • Prévot, Laurent (LPL, Aix-Marseille University, Aix-en-Provence, France)
  • Rapp, Reinhard (LIF, France; University of Mainz, Germany)
  • Rosso, Paolo (NLEL, Universitat Politècnica de València, Spain)
  • Schwab, Didier (LIG-GETALP, Grenoble, France)
  • Sérasset, Gilles (LIG, Grenoble, France)
  • Zock, Michael (LIF, Marseille, France; University of Tainan, Taiwan)

6 IMPORTANT DATES

  • Submission deadline: 21 April 2014
  • Notification to authors: 10 May 2014
  • Final version due: 30 May 2014
  • Workshop date: 1 July 2014

7 CONTACT

- Michael Zock michael.zock [at] lif.univ-mrs.fr
- Gemma Bel Enguix gemma.belenguix [at] gmail.com
- Reinhard Rapp reinhardrapp [at] gmx.de


Web:

http://www.taln2014.org/site/
https://sites.google.com/site/rltlntaln2014/

3-3-20(2014-07-01) TALAf 2014: Natural Language Processing for African Languages (text and speech), Marseille, France
TALAf 2014: Natural Language Processing for African Languages (text and speech)
TALN 2014 workshop - Marseille, 1 July 2014

ORGANISERS

Mathieu Mangeot (LIG) and Fatiha Sadat (UQAM)


OVERVIEW (more details at http://jibiki.univ-savoie.fr/~mangeot/TALAf/2014/ )

Following the first TALAf workshop, held on 8 June 2012 in Grenoble during the JEP-TALN-RECITAL 2012 conference (see the proceedings: http://aclweb.org/anthology//W/W12/#1300), we are organising a new edition of the workshop at TALN 2014 on 1 July in Marseille. We welcome work on all under-resourced languages of Africa. Work on the dialectal Arabic of North Africa is also welcome.

This workshop aims to take stock of ongoing work on the construction of basic linguistic resources (dictionaries, oral and written corpora), to develop simple and economical methodologies for resource building, to exchange techniques for doing without certain non-existent resources, and to establish a number of principles for future work in the field.
The workshop will last half a day.


SUBMISSION TYPES

Papers should be between 6 and 12 pages. Authors are invited to submit articles presenting original research on the topics proposed below.

TOPICS

The workshop is open to research on the following topics:
Resources:
• construction of written corpora (monolingual, bilingual aligned or comparable)
• construction of oral corpora (including transcription)
• development of lexicons and dictionaries (monolingual, bilingual)
• evaluation of resource quality
Tools:
• morphological analysers, spelling checkers
• syntactic parsers, grammar checkers
• machine translation systems (statistical or rule-based)
• speech recognition
• speech synthesis

SELECTION CRITERIA

Submissions will be reviewed by at least two specialists of the domain.
For research papers, particular attention will be paid to:
- relevance to the workshop themes,
- significance and originality of the contribution,
- correctness of the scientific and technical content,
- organisation and clarity of the presentation.

SUBMISSION GUIDELINES

Papers should be written in French by French speakers, and in English by those who do not master French. The exact submission formats (Word and LaTeX) are available on the TALN 2014 website: http://www.taln2014.org/site/soumission/
Submissions must be sent in PDF format via:
https://www.easychair.org/conferences/?conf=talaf20140

PROGRAMME COMMITTEE

Laurent Besacier (LIG, Grenoble, France)
Philippe Bretier (Voxygen, Pleumeur-Bodou, France)
Khalid Choukri (ELDA, Paris, France)
Mame Thierno Cissé (ARCIV, Université Cheikh Anta Diop, Dakar, Senegal)
Denys Duchier (Université d'Orléans, Orléans, France)
Chantal Enguehard (LINA, Nantes, France)
Gil Francopoulo (Tagmatica, Paris, France)
Mathieu Mangeot (LIG, Grenoble, France)
Chérif Mbodj (Centre de Linguistique Appliquée de Dakar, Senegal)
Kamal Naït-Zerrad (INALCO, Paris, France)
Pascal Nocera (Université d'Avignon, France)
Francois Pellegrino (DDL, Lyon, France)
Fatiha Sadat (UQAM, Montréal, Canada)
Mamadou Lamine Sanogo (INSS, Ouagadougou, Burkina Faso)
Emmanuel Schang (Université d'Orléans, Orléans, France)
Gilles Sérasset (LIG, Grenoble, France)
Valentin Vydrin (LLACAN-INALCO, Paris, France)

IMPORTANT DATES

- Submission deadline: 26 April 2014
- Notification to authors: 24 May 2014
- Deadline for final versions: 15 June 2014
- Workshop: 1 July 2014


3-3-21(2014-07-06) Special Session on Computational Intelligence Algorithms for Digital Audio Applications, Beijing China
Special Session on Computational Intelligence Algorithms for Digital Audio Applications
WCCI 2014 Special Session - Call for Papers
2014 IEEE World Congress on Computational Intelligence (WCCI 2014), Beijing, China, July 6-11, 2014
www.ieee-wcci2014.org

Theme and Scope of the Session

Computational Intelligence (CI) is widely used to tackle complex modelling, prediction and recognition tasks across many research fields. One of these, with a mature market orientation for many years already, is Digital Audio, which finds application in contexts as diverse as entertainment, security and health. Scientists and engineers worldwide actively cooperate to develop new solutions and propose them for commercial exploitation; from this perspective, advanced CI techniques, combined with suitable Digital Signal Processing algorithms, are a clear asset. Typically, the aim is to extract and manipulate useful information from the audio stream in order to drive automated services, possibly in an interactive fashion. This often happens in conjunction with data from other media, such as text and video, for which specific, application-driven fusion techniques are needed (again requiring advanced CI algorithms).

Many Digital Audio topics are touched by this paradigm. Digital music applications include music transcription, onset detection and genre recognition, to name a few. In speech processing, speech/speaker recognition, speaker diarization and source separation are representative subjects with an already rich literature. Furthermore, auditory scene analysis, acoustic monitoring, and sound detection and identification have lately met with success in the scientific community. In addressing these problems, the adoption of data-driven learning systems is often a must. This is not, however, free of technological issues: large amounts of data frequently need to be managed and processed, and their features can change over time due to the time-varying characteristics of the audio stream and of the acoustic environment. Moreover, many application scenarios impose hard real-time processing constraints. It is therefore of great interest to the scientific community to understand how, and to what extent, novel CI techniques can be efficiently employed in Digital Audio in the light of all these aspects. The aim of this session is to offer a CI-oriented look at the wide variety of Digital Audio research topics and applications, and to discuss the most recent technological efforts from this perspective.

Topics

  • Intelligent Audio Analysis
  • Audio Information Retrieval
  • Music Content Analysis and Understanding
  • Speech and Speaker Analysis and Classification
  • Cross-domain Audio Analysis
  • Sound Detection and Identification
  • Computational Auditory Scene Analysis
  • Acoustic Monitoring
  • Context-aware Audio Source Separation
  • Intelligent Audio Interfaces

Important Dates

  • 20 December 2013: Due date for paper submission
  • 15 March 2014: Notification to authors
  • 15 April 2014: Camera-ready deadline for accepted papers
  • 6-11 July 2014: Conference days

Organisers

  • Stefano Squartini, Università Politecnica delle Marche (Italy), s.squartini@univpm.it
  • Aurelio Uncini, Università La Sapienza (Italy), aurel@ieee.org
  • Francesco Piazza, Università Politecnica delle Marche (Italy), f.piazza@univpm.it
  • Björn Schuller, Imperial College London (UK), TUM (Germany), schuller@tum.de

3-3-22(2014-07-07) 2014 International Conference on Audio, Language and Image Processing (ICALIP 2014), Shanghai, China

ICALIP2014 CALL FOR PAPERS

2014 International Conference on Audio, Language and Image Processing

July 7-9, 2014, Shanghai, China

Website: http://www.icalip2014.org/

*********************************************************************

 

As the flagship conference and the most important event in the region, the 4th International Conference on Audio, Language and Image Processing (ICALIP 2014) will build on the success of its three previous editions (ICALIP 2008, ICALIP 2010 and ICALIP 2012), aiming to provide a unique forum for researchers, engineers and educators interested in audio, language and image processing to learn about recent progress, address related challenges and develop new methods, applications and systems. The conference, to be held on July 7-9, 2014 in Shanghai, the largest city of China, is technically sponsored by the IEEE CIS Shanghai Chapter and co-organized by the IET Shanghai Local Network, Shanghai University, Tongji University, Fudan University and Shanghai JiaoTong University. The conference proceedings, including all accepted papers, will be published by IEEE (IEEE catalog number CFP1450D-PRT; ISBN 978-1-4799-3902-2) and submitted to both EI Compendex and ISTP, which have indexed all accepted papers of the three previous conferences. Best Paper Awards will be granted to the authors of outstanding papers selected by the International Program Committee, and expanded versions of selected papers will be published in four SCI-E-indexed IET research journals.

 

IMPORTANT DATES

  • Submission deadline: March 25, 2014
  • Notification of acceptance: May 10, 2014
  • Camera-ready copy: May 30, 2014

 

Topics

Conference topics include, but are not limited to:

A.  Audio and Music Processing:  

  • Compression and Coding, 3D and Surround Sounds, Digital Rights and Watermarking
  • Music Information Retrieval, Internet Audio, Audio Beam Loudspeaker, Hearing Aids
  • Echo Cancellation, Noise Reduction, Beamforming, Audio Gesture System, Binaural Hearing
  • Crosstalk Cancellation, Voice Processing in Cochlear Implants

B. Language and Speech Processing:

  • Language Acquisition and Reproduction, Human Language Understanding and Learning
  • Machine Translation, Speech Recognition and Understanding
  • Speech Enhancement, Spoken Document Retrieval

C.  Image Processing:  

  • Image Coding, Image Filtering and Enhancement, Image Segmentation and Understanding
  • Image Representation and Modeling, Feature Extraction and Analysis
  • Image Storage and Retrieval, Image Authentication and Watermarking, DSP/FPGA Implementations

D.  Computer Graphic and Virtual Reality:

  • Computational Geometry, Modeling, Rendering, Visualization
  • Animation and Simulation, Interactive Environments, Immersive Virtual Reality
  •  Virtual environments, Applications of Virtual Reality

E. Bio-informatics:

  • Biomedical Image Processing & Visualization, Bio-signal Processing and Analysis
  • Biomedical Engineering, Medical and Biomedical Applications
  • Health Monitoring Systems, Health Informatics

F.  Remote Sensing and GIS:

  • Mobile and Wireless GIS, Geospatial Information Visualization and Service
  • Multi-dimensional GIS (3D GIS), Indoor and Outdoor Location
  • Remote Sensing Image Preprocessing, Multisensor and Multisource Data Fusion
  • GPS Technology, Remote Sensing and GIS Application

G.  Multimedia SOC Design:

  • Reconfigurable Processor for Multimedia, Multimedia system-on-a-chip
  • Multi-core Technology for Multimedia Processing, GPU-based Multimedia Processing
  • Novel Architecture for Multimedia Computing

H.  Big Data and Cloud Processing:

  • Big Data Acquisition and Preprocessing, Big Data Storage and Cloud Storage and Management
  • Modeling Technology and Systems for Big Data and Cloud, Big Data Analysis
  • Big Data Intelligent Computing,  Big Data Mining, Big Data Visualization, Big Data and Cloud Applications

 

Paper Submission

Prospective authors are invited to submit full-length papers of 4-6 pages, including figures and references, via the conference website (http://www.icalip2014.org/), following the Instructions for Authors. All papers will be handled and reviewed electronically. Check the conference website for updates.

 


3-3-23(2014-07-19) Congrès Mondial de Linguistique Française at l’Université Libre de Berlin (Freie Universität Berlin)

Congrès Mondial de Linguistique Française 2014

Organised by the Institut de Linguistique Française (CNRS - FR 2393)

19-23 July 2014, at the Freie Universität Berlin

CALL FOR PAPERS

Dates: 19-23 July 2014

Venue: Freie Universität Berlin

Website: http://www.ilf.cnrs.fr/, under 'Congrès Mondial de Linguistique Française'

Contact: cmlf2014@ling.cnrs.fr

Scientific scope

The fourth Congrès Mondial de Linguistique Française is organised by the Institut de Linguistique Française (ILF), a CNRS Research Federation (FR 2393) under the joint authority of the CNRS and the Ministry of Higher Education and Research. The ILF brings together seventeen research laboratories, which co-organise the congress in partnership with numerous national and international associations. Such an organisation, jointly run by seventeen research units, is exceptional in its scale and in the spirit of scientific partnership it reflects.

The first congress was organised in Paris by the ILF in 2008, the second in New Orleans, and the third in Lyon in 2012. Each of these three congresses attracted more than 300 participants, and the results were published online immediately, accompanied by a volume of abstracts and a CD-ROM of proceedings.

The congress is organised without preference for any school or orientation and without theoretical or conceptual exclusions. Every domain or subdomain, every type of object, every type of question and every research problem concerning French can find its place there.

CMLF is organised in 15 sessions, underscoring that French linguistics is not limited to any one domain held up as a model for the other subdisciplines of the field. Fourteen themes have been selected, covering most of the scientific field: (1) History of French: diachronic and synchronic perspectives; (2) Linguistics and Didactics (French as a first language, French as a second language); (3) Discourse, Pragmatics and Interaction; (4) Francophonie; (5) History, Epistemology, Reflexivity; (6) Lexicon(s); (7) Linguistics of writing, Text linguistics, Semiotics, Stylistics; (8) Morphology; (9) Phonetics, Phonology and Interfaces; (10) Psycholinguistics and Acquisition; (11) Semantics; (12) Sociolinguistics, Dialectology and Ecology of languages; (13) Syntax; (14) Resources and Tools for linguistic analysis. To these fourteen themes a fifteenth, 'multi-theme' session has been added, leaving open the possibility of working across several domains, or even at the margins of traditional disciplinary territories.

Each theme is led by a President and coordinated by a Vice-president (a member of the ILF Steering Committee, or chosen by that committee). The scientific committees contain a balanced proportion of French and foreign specialists. Particular care was taken in selecting the committees to ensure the strongest scientific guarantees for the success of the congress; each committee therefore includes linguists known worldwide for their contribution to the domain. The role of these committees is to select the submitted papers.

Submissions will take the form of short articles of 10 to 15 pages.

All papers (including plenary lectures) will be published as 10-to-15-page articles in the congress proceedings (a CD-ROM accompanying a booklet of titles and abstracts) and kept in electronic form on the CMLF website. The electronic archive will remain accessible after the congress.

Key dates

15 May 2013: Opening of the submission platform

30 November 2013: Deadline for receipt of submissions

25 February 2014: Notification of acceptance or rejection, with guidelines for the final version

25 March 2014: Receipt of the final version of articles

Congrès Mondial de Linguistique Française: 19-23 July 2014


3-3-24(2014-07-22) 4th Lisbon Machine Learning School - 'Learning with Big Data', Lisbon, Portugal
==============================================================
Call for Participation
4th Lisbon Machine Learning School - 'Learning with Big Data'
==============================================================

We invite everyone interested in Machine Learning and Natural Language Processing to attend the 4th Lisbon Machine Learning School - LxMLS 2014.

Important Dates
---------------

* Application Deadline: April 15, 2014
* Decision: May 15, 2014
* Early Registration: June 15, 2014
* Summer School: July 22-29, 2014


Topics and Intended Audience
---------------

The school will cover a range of Machine Learning (ML) topics, from theory to practice, that are important in solving Natural Language Processing (NLP) problems that arise in the analysis and use of Web data.

Our target audience is:

* Researchers and graduate students in the fields of NLP and Computational Linguistics;
* Computer scientists who have interests in statistics and machine learning;
* Industry practitioners who desire a more in-depth understanding of these subjects.

Features of LxMLS:

* No deep previous knowledge of ML or NLP is assumed;
* Recommended reading will be provided in advance;
* Includes a strong practical component;
* A day zero is scheduled to review basic concepts and introduce the necessary tools for implementation exercises;
* Days will be divided into tutorials and practical sessions (see the schedule online);
* Both basic and advanced topics will be covered;
* Instructors are leading researchers in machine learning.


List of Confirmed Speakers
---------------

RYAN MCDONALD Google Inc. | USA
NOAH SMITH Carnegie Mellon University | USA
XAVIER CARRERAS Universitat Politècnica de Catalunya | Spain
SLAV PETROV Google Inc. | USA
CHRIS DYER Carnegie Mellon University | USA
RICHARD SOCHER Stanford University | USA
ANDREAS MUELLER Amazon | Germany
ARIADNA QUATTONI Universitat Politècnica de Catalunya | Spain
DIPANJAN DAS Google Inc. | USA
IVAN TITOV University of Amsterdam | Netherlands
PHIL BLUNSOM University of Oxford | UK
LUIS PEDRO COELHO European Molecular Biology Laboratory | Germany
MÁRIO FIGUEIREDO Instituto de Telecomunicações | Portugal


Please visit our webpage for up to date information: http://lxmls.it.pt/2014.

To apply, please fill the form in https://lxmls.wufoo.com/forms/application-form/. Any questions should be directed to: lxmls-2014@lx.it.pt.


We are looking forward to your participation!

-- The organizers of LxMLS'2014.

3-3-25(2014-07-25) 14th Conference on Laboratory Phonology (LabPhon 14), Tokyo, Japan.

The 14th Conference on Laboratory Phonology (LabPhon 14) will be held from 25 to 27 July at the National Institute for Japanese Linguistics (NINJAL) in Tokyo, Japan. For more details, see its official website, which is now open: http://www.ninjal.ac.jp/labphon14/


3-3-26(2014-08-17) Summer school on “Tools & Techniques in Geolinguistics”, Univ Kiel, Germany

Summer school on “Tools & Techniques in Geolinguistics”

 

An international summer school on methods and techniques in geolinguistics will take place at the University of Kiel (Germany), 17-27 August 2014. In this new and interdisciplinary research paradigm, regional varieties are analysed with respect to their linguistic, geographical, social, perceptual and spatial characteristics. With its many Low German dialects and the endangered Frisian language, Northern Germany is a highly dynamic language area right on Kiel's doorstep, and the summer school will take advantage of this. In the case of Low German, students will learn how to collect speech data in the laboratory and in the field, how to compile a text corpus and how to analyse the material multifactorially from a geolinguistic perspective.

 

The summer school addresses not only students and graduates of (German) dialectology and geolinguistics, but also offers new insights to anyone interested in speech documentation, field research, phonetics, corpus linguistics, perceptual dialectology, sociolinguistics and typology. International experts in dialectology and geolinguistics will offer a wide range of lectures, interactive workshops and practical exercises. Additionally, participants will be supported by student mentors.

 

The summer school is aimed at about 50 national and international students. Applicants should be postgraduates holding a bachelor's degree (or higher) in linguistics, phonetics, language documentation/typology, German studies or a similar field. Please send the following documents (preferably in PDF format) by email to contact@geoling.uni-kiel.de:

- relevant academic achievements, i.e. certificates and complementary proofs of qualification

- curriculum vitae, including experiences in statistics and speech processing software

- letter of motivation briefly summarizing the linguistic expertise and outlining personal research interests and future aims

- recommendation letter of a supervising academic teacher

 

We offer up to 30 full scholarships that cover all costs for travel and accommodation. Please note in your application whether or not you apply for a scholarship. If possible, all successful applicants from outside Kiel will receive a scholarship. The expenses will be reimbursed after the summer school, but other financial arrangements can be made as well, if necessary.

 

Applications should be sent by email to contact@geoling.uni-kiel.de by 28 February 2014. For further information, please visit our web site on http://www.geoling.uni-kiel.de/en/home

 

Funded by the Volkswagen Foundation, the summer school is organised by Prof Dr Oliver Niebuhr, Dr Christina A Anders as well as Dr Uwe Vosberg and hosted by the Institute for Scandinavian studies, Frisian and General Linguistics along with a research centre on “The areality and sociality of language” (http://www.arealitaet.uni-kiel.de) at the University of Kiel.


3-3-27(2014-08-23) 4th WORKSHOP ON COGNITIVE ASPECTS OF THE LEXICON (CogALex), Dublin, Ireland
1st Call for Papers

4th WORKSHOP ON COGNITIVE ASPECTS OF THE LEXICON (CogALex)
together with a shared task concerning the ‘lexical access-problem’

Pre-conference workshop at COLING 2014 (August 23d, Dublin, Ireland)

Submission deadline: May 25, 2014


Invited speaker: Roberto Navigli (Sapienza University of Rome)

For more information, see : http://pageperso.lif.univ-mrs.fr/~michael.zock/cogalex-webpage/index.html
(Note that this page is still under construction.)

==============================================================
GOAL

The aim of the workshop is to bring together researchers involved in the construction and application of electronic dictionaries to discuss modifications of existing resources in line with the users' needs, thereby fully exploiting the advantages of the digital form. Given the breadth of the questions, we welcome reports on work from many perspectives, including but not limited to: computational lexicography, psycholinguistics, cognitive psychology, language learning and ergonomics.



MOTIVATION

The way we look at dictionaries (their creation and use) has changed dramatically over the past 30 years. Once considered an appendix to grammar, they have now moved to centre stage: there is hardly any task in NLP that can be conducted without them. Moreover, rather than static entities (the database view), dictionaries are now seen as dynamic networks, i.e. graphs, whose nodes and links (connection strengths) may change over time. Interestingly, properties concerning topology, clustering and evolution known from other disciplines (society, the economy, the human brain) also apply to dictionaries: everything is linked, hence accessible, and everything is evolving. Given these similarities, one may wonder what we can learn from those disciplines.

In this 4th edition of the CogALex workshop we therefore also invite scientists working in these fields, with the goal of broadening the picture, i.e. of gaining a better understanding of the mental lexicon and integrating these findings into our dictionaries in order to support navigation. Given recent advances in the neurosciences, it seems timely to seek inspiration from neuroscientists studying the human brain. There is also much to be learned from other fields studying graphs and networks, even if their object of study is something other than language, for example biology, economics or society.


TOPICS OF INTEREST

This workshop is about possible enhancements of lexical resources and electronic dictionaries. To lay the groundwork for the next generation of such resources, we invite researchers involved in building such tools. The idea is to discuss modifications of existing resources that take users' needs and knowledge states into account, and to capitalize on the advantages of digital media. We solicit papers including but not limited to the following topics, each of which can be considered from various points of view: linguistics, neuro- or psycholinguistics (tip-of-the-tongue problem, associations), network-related sciences (sociology, economics, biology), mathematics (vector-based approaches, graph theory, the small-world problem), etc.


1) Analysis of the conceptual input of a dictionary user

  • What does a language producer start from (bag of words)?
  • What is in the authors' minds when they are generating a message and looking for a word?
  • What does it take to bridge the gap between this input and the desired output (target word)?


2) The meaning of words

  • Lexical representation (holistic, decomposed)
  • Meaning representation (concept based, primitives)
  • Revelation of hidden information (distributional semantics, latent semantics, vector-based approaches: LSA/HAL)
  • Neural models, neurosemantics, neurocomputational theories of content representation.


3) Structure of the lexicon

  • Discovering structures in the lexicon: formal and semantic point of view (clustering, topical structure)
  • Creative ways of getting access to and using word associations (reading between the lines, subliminal communication);
  • Evolution, i.e. dynamic aspects of the lexicon (changes of weights)
  • Neural models of the mental lexicon (distribution of information concerning words, organisation of words)


4) Methods for crafting dictionaries or indexes

  • Manual, automatic or collaborative building of dictionaries and indexes (crowd-sourcing, serious games, etc.)
  • Impact and use of social networks (Facebook, Twitter) for building dictionaries, for organizing and indexing the data (clustering of words), and for tracking navigational strategies, etc.
  • (Semi-) automatic induction of the link type (e.g. synonym, hypernym, meronym, association, collocation, ...)
  • Use of corpora and patterns (data-mining) for getting access to words, their uses, combinations and associations


5) Dictionary access (navigation and search strategies, interface issues,...)

  • Search based on sound, meaning or associations
  • Search (simple query vs multiple words)
  • Context-dependent search (modification of users’ goals during search)
  • Recovery
  • Navigation (frequent navigational patterns or search strategies used by people)
  • Interface problems, data-visualisation

6) Dictionary applications

  • Methods supporting vocabulary learning (for example, creation of data-bases showing words in various contexts)
  • Tools for supporting human translation


IMPORTANT DATES

Deadline for paper submissions: May 25, 2014
Notification of acceptance: June 15, 2014
Camera-ready papers due: July 7, 2014
Workshop date: August 23, 2014


SUBMISSION INFORMATION

Papers should follow the COLING main conference formatting details (http://www.coling-2014.org/call-for-papers.php) and should be submitted as a PDF-file via the START workshop manager at https://www.softconf.com/coling2014/WS-1/ (you must register first). 

Contributions can be short or long papers. Short paper submissions must describe original and unpublished work without exceeding six (6) pages (references included). Characteristics of short papers include: a small, focused contribution; work in progress; a negative result; an opinion piece; an interesting application nugget. Long paper submissions must describe substantial, original, completed and unpublished work without exceeding twelve (12) pages (references included).

Reviewing will be double blind, so the papers should not reveal the authors' identity. Accepted papers will be published in the workshop proceedings.

For further details see: http://pageperso.lif.univ-mrs.fr/~michael.zock/cogalex-webpage/index.html

SHARED TASK

We invite participation in a shared task devoted to the problem of lexical access in language production, with the aim of providing a quantitative comparison between different systems.

Motivation of shared task

The quality of a dictionary depends not only on coverage, but also on the accessibility of the information. That is, a crucial point is dictionary access. Access strategies vary with the task (text understanding vs. text production) and the knowledge available at the very moment of consultation (words, concepts, speech sounds). Unlike readers who look for meanings, writers start from them, searching for the corresponding words. While paper dictionaries are static, permitting only limited strategies for accessing information, their electronic counterparts promise dynamic, proactive search via multiple criteria (meaning, sound, related words) and via diverse access routes. Navigation takes place in a huge conceptual lexical space, and the results are displayable in a multitude of forms (e.g. as trees, as lists, as graphs, or sorted alphabetically, by topic, by frequency).

To bring some structure into this multitude of possibilities, the shared task will concentrate on a crucial subtask, namely multiword association, which will allow quantitative comparisons between different systems.

What we mean by this in the context of this workshop is the following. Suppose we were looking for a word expressing the following ideas: 'superior dark coffee made of beans from Arabia', but could not remember the intended word 'mocha'. Since people always remember something concerning the elusive word, it would be nice to have a system accepting this kind of input and then proposing a number of candidates for the target word. Given the above example, we might enter 'dark', 'coffee', 'beans', and 'Arabia', and the system would be supposed to come up with a list of associated words such as 'mocha', 'espresso', or 'cappuccino'.
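The kind of reverse lookup described above can be illustrated with a toy sketch. Everything here is hypothetical (the mini-corpus, the stopword list, and the plain sentence-level co-occurrence scoring are illustrative choices only); a real system would draw on a large corpus and a weighted association measure such as PMI or a vector-space model:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical mini-corpus standing in for a large text collection.
corpus = [
    "the clown at the circus had a funny red nose",
    "the fool played for fun at the circus",
    "a clown is funny and plays the fool for fun",
    "the clown wore a fake nose",
]
stopwords = {"the", "a", "at", "for", "is", "and", "had"}

# Count sentence-level co-occurrences between content words.
cooc = defaultdict(int)
vocab = set()
for sentence in corpus:
    words = set(sentence.split()) - stopwords
    vocab.update(words)
    for pair in combinations(sorted(words), 2):
        cooc[pair] += 1

def assoc(w1, w2):
    """Symmetric co-occurrence count between two words."""
    return cooc.get(tuple(sorted((w1, w2))), 0)

def candidates(cues, n=3):
    """Rank vocabulary words by their summed association with all cues."""
    pool = vocab - set(cues)
    ranked = sorted(pool,
                    key=lambda w: sum(assoc(w, c) for c in cues),
                    reverse=True)
    return ranked[:n]

print(candidates(["circus", "funny", "nose", "fool", "fun"])[0])  # -> clown
```

On this toy corpus, 'clown' co-occurs with all five cues and therefore outranks every other candidate; the same scheme scales to any corpus from which co-occurrence counts can be extracted.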


Procedure

The participants will receive lists of five given words (primes) such as 'circus', 'funny', 'nose', 'fool', and 'fun' and are supposed to compute the word which is most closely associated with all of them. In this case, the word 'clown' would be the expected answer. Here are some more examples:

given words: gin, drink, scotch, bottle, soda    
expected answer: whisky

 

given words: wheel, driver, bus, drive, lorry
expected answer: car

given words: neck, animal, zoo, long, tall
expected answer: giraffe

given words: holiday, work, sun, summer, abroad    
expected answer: vacation

given words: home, garden, door, boat, chimney
expected answer: house

given words: blue, cloud, stars, night, high
expected answer: sky


We will provide a training set of 2000 sets of five input words (multiword stimuli), together with the expected target words (associative responses). The participants will have several weeks to train their systems on this data. After the training phase, we will release a test set containing another 2000 sets of five input words, but without providing the expected target words.

Participants will have five days to run their systems on the test data, thereby predicting the target words. For each system, we will compare the results to the expected target words and compute an accuracy. The participants will be invited to submit a paper describing their approach and the results.
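The evaluation described above amounts to exact-match accuracy over the test items. A minimal sketch (the gold targets and system predictions below are invented for illustration):

```python
def accuracy(gold, predicted):
    """Fraction of test items whose predicted word equals the gold target."""
    hits = sum(1 for g, p in zip(gold, predicted) if g == p)
    return hits / len(gold)

# Hypothetical gold targets and system predictions for four test items.
gold = ["whisky", "car", "giraffe", "vacation"]
predicted = ["whisky", "lorry", "giraffe", "vacation"]

print(accuracy(gold, predicted))  # 3 of 4 correct -> 0.75
```

With 2000 test items, each system's score is simply the proportion of items for which its predicted word matches the expected target.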

For the participating systems, we will distinguish two categories: (1) Unrestricted systems. They can use any kind of data to compute their results. (2) Restricted systems: These systems are only allowed to draw on the freely available ukWaC corpus (comprising 2 billion words) in order to extract information on word associations. Participants are allowed to compete in either category or in both.


Schedule for Shared Task

  • Training Data Release:  March 25, 2014
  • Test Data Release:  May 5, 2014
  • Final Results:  May 9, 2014
  • Deadline for Paper Submission:  May 25, 2014
  • Reviewers' feedback:  June 15, 2014
  • Camera-Ready Version:  July 7, 2014
  • Workshop date:  August 23, 2014

All data releases to be found on the workshop website.


PROGRAMME COMMITTEE

  • Bel Enguix, Gemma (LIF-CNRS, France, and GRLMC, Tarragona, Spain)
  • Chang, Jason (National Tsing Hua University, Taiwan)
  • Cook, Paul (University of Melbourne, Australia)
  • Cristea, Dan (University A.I. Cuza, Iasi, Romania)
  • De Deyne, Simon (Experimental Psychology, Leuven, Belgium, and Adelaide, Australia)
  • De Melo, Gerard (IIIS, Tsinghua University, Beijing, China)
  • Ferret, Olivier (CEA LIST, Gif-sur-Yvette, France)
  • Fontenelle, Thierry (CDT, Luxembourg)
  • Gala, Nuria (LIF-CNRS, Aix-Marseille University, Marseille, France)
  • Granger, Sylviane (Université Catholique de Louvain, Belgium)
  • Grefenstette, Gregory (Inria, Saclay, France)
  • Hirst, Graeme (University of Toronto, Canada)
  • Hovy, Eduard (CMU, Pittsburgh, USA)
  • Hsieh, Shu-Kai (National Taiwan University, Taipei, Taiwan)
  • Huang, Chu-Ren (Hong Kong Polytechnic University, China)
  • Joyce, Terry (Tama University, Kanagawa-ken, Japan)
  • Lapalme, Guy (RALI, University of Montreal, Canada)
  • Lenci, Alessandro (CNR, University of Pisa, Italy)
  • L'Homme, Marie Claude (University of Montreal, Canada)
  • Mihalcea, Rada (University of Texas, USA)
  • Navigli, Roberto (Sapienza, University of Rome, Italy)
  • Pirrelli, Vito (ILC, Pisa, Italy)
  • Polguère, Alain (ATILF-CNRS, Nancy, France)
  • Rapp, Reinhard (LIF-CNRS, France, and Mainz, Germany)
  • Rosso, Paolo (NLEL, Universitat Politècnica de València, Spain)
  • Schwab, Didier (LIG-GETALP, Grenoble, France)
  • Serasset, Gilles (IMAG, Grenoble, France)
  • Sharoff, Serge (University of Leeds, UK)
  • Su, Jun-Ming (University of Tainan, Taiwan)
  • Tiberius, Carole (Institute for Dutch Lexicology, The Netherlands)
  • Tokunaga, Takenobu (TITECH, Tokyo, Japan)
  • Tufis, Dan (RACAI, Bucharest, Romania)
  • Valitutti, Alessandro (Helsinki Institute of Information Technology, Finland)
  • Wandmacher, Tonio (IRT SystemX, Saclay, France)
  • Zock, Michael (LIF-CNRS, Marseille, France; currently University of Tainan, Taiwan)



WORKSHOP ORGANIZERS and CONTACT PERSONS

  • Michael Zock (LIF-CNRS, Marseille, France), michael.zock AT lif.univ-mrs.fr
  • Reinhard Rapp (Universities of Aix-Marseille, France, and Mainz, Germany), reinhardrapp AT gmx.de
  • Chu-Ren Huang (The Hong Kong Polytechnic University, Hong Kong), churen.huang AT inet.polyu.edu.hk


For more details see:

http://pageperso.lif.univ-mrs.fr/~michael.zock/cogalex-webpage/index.html
(this page is still under construction)

Back  Top

3-3-28(2014-08-23) CfP 25th International Conference on Computational Linguistics (COLING 2014)

 

 

             1st (Preliminary) Call for Papers - Coling 2014

 

The 25th International Conference on Computational Linguistics

August 23 - 29, 2014

Dublin, Ireland

 

http://www.coling-2014.org

 

The International Committee on Computational Linguistics (ICCL) is pleased to announce the 25th International Conference on Computational Linguistics (Coling 2014), at Dublin City University (DCU, Dublin, Ireland, European Union). DCU is a young, dynamic and ambitious university with a mission to transform lives and societies through education, research and innovation. Most of the local organizers are from CNGL, Ireland’s Centre for Global Intelligent Content (formerly the Centre for Next Generation Localization), which embodies the leading position of Ireland in the global localization/internationalization business, a strong focus on language technologies including machine translation, computational linguistics and natural language processing, as well as on intelligent management, search, retrieval, transformation and adaptation of content.

Coling will cover a broad spectrum of technical areas related to natural language and computation. The conference will include full papers (presented as oral presentations or posters), demonstrations, tutorials, and workshops.

 

TOPICS OF INTEREST

 

Coling 2014 solicits papers and demonstrations on original and unpublished research on the following topics, including, but not limited to:

 

- pragmatics, semantics, syntax, grammars and the lexicon;

- cognitive, mathematical and computational models of language processing;

- models of communication by language; 

- lexical semantics and ontologies;  

- word segmentation, tagging and chunking;

- parsing, both syntactic and deep;

- generation and summarization;

- paraphrasing, textual entailment and question answering;

- speech recognition, text-to-speech and spoken language understanding;

- multimodal and natural language interfaces and dialogue systems;

- information retrieval, information extraction and knowledge base linking;

- machine learning for natural language;

- modeling of discourse and dialogue;

- sentiment analysis, opinion mining and social media;

- multilingual processing, machine translation and translation aids;              

- applications, tools and language resources;

- system evaluation methodology and metrics.

 

In all relevant areas, we encourage authors to include analysis of the influence of theories (intuitions, methodologies, insights, etc.) on technologies (computational algorithms, methods, tools, data, etc.) and/or contributions of technologies to theory development. In technologically oriented papers, we encourage in-depth analysis and discussion of errors made in the experiments described, if possible linking them to the presence or absence of linguistically-motivated features. Contributions that display and rigorously discuss future potential, even if not (yet) attested in standard evaluation, are welcome.

 

PAPER REQUIREMENTS

 

Papers should describe original work; they should emphasize completed work or well-advanced ongoing research rather than intended work, and should indicate clearly the state of completion of the reported results. Wherever appropriate, concrete evaluation results should be included.

 

Submissions will be judged on correctness, originality, technical strength, significance and relevance to the conference, and interest to the attendees.

Submissions presented at the conference should mostly contain new material that has not been presented at any other meeting with publicly available proceedings. Papers that are being submitted in parallel to other conferences or workshops must indicate this on the title page, as must papers that contain significant overlap with previously published work.

 

REVIEWING

 

Reviewing will be double blind. It will be managed by an international Conference Program Committee consisting of Program Chairs, members of the Scientific Advisory Board and Area Chairs, who will be assisted by invited reviewers.

 

INSTRUCTIONS FOR AUTHORS

 

For Coling 2014, there will be one category of research papers only. All of the papers will be included in conference proceedings, this time in electronic form only.

 

The maximum submission length is 8 pages (A4), plus two extra pages for references. Authors of accepted papers will be given additional space in the camera-ready version to reflect space needed for changes stemming from reviewers' comments. Authors can indicate their preference for presentation mode (i.e. oral or poster presentation) in the submission form, and the reviewers will recommend an appropriate mode of presentation to the program committee, which will then decide. There will be no distinction in the proceedings between research papers presented orally vs. as posters.

 

Papers shall be submitted in English, anonymized with regard to the authors and/or their institution (no author-identifying information on the title page nor anywhere in the paper); this applies to the referencing style as well. Papers must conform to official Coling 2014 style guidelines, which will be available on the Coling 2014 website. Submission and reviewing will be managed online by the START system. The only accepted format for submitted papers is Adobe PDF.

 

Submissions must be uploaded on the START system by the submission deadlines; submissions after that time will not be reviewed. To minimize network congestion, we request authors to upload their submissions as early as possible.

 

 

Important Notice

 

[1] In order to allow participants to be acquainted with the published papers ahead of time which in turn should facilitate discussions at Coling 2014, we have set the official publication date two weeks before the conference, i.e., on August 11, 2014. On that day, the papers will be available online for all participants to download, print and read. If your employer is taking steps to protect intellectual property related to your paper, please inform them about this timing.

 

[2] While submissions are anonymous, we strongly encourage authors to plan for depositing language resources and other data, as well as tools used and/or developed for the experiments described in the papers, if the paper is accepted. In this respect, we encourage authors to deposit resources and tools in available open-access repositories of language resources and/or repositories of tools (such as META-SHARE, Clarin, ELRA, LDC or AFNLP/COCOSDA for data, and github, sourceforge, CPAN and similar for software and tools) and to refer to them instead of submitting them with the paper, although direct submission (through the START system) will also remain possible. The details will be given on the submission site for camera-ready versions of accepted papers.

 

[3] There will be a separate call for demonstrations in February. Accepted papers on demonstrations will also be included in the proceedings.

 

 

IMPORTANT DATES

 

January, 2014: Opening of the submission website

March 21, 2014: Paper submission deadline

May 9-12, 2014: Author response period

May 23, 2014: Author notification

June 6, 2014: Camera-ready PDF due

August 11, 2014: Official paper publication date

August 25-29, 2014: Main conference

 

PROGRAM COMMITTEE

 

Program Committee Co-chairs

 

Junichi Tsujii (Microsoft Research, China)

Jan Hajic (Charles University in Prague, Czech Republic)

 

Scientific Advisory Board members

 

Ralph Grishman (New York University, USA)

Yuji Matsumoto (Nara Institute of Science and Technology, Japan)

Joakim Nivre (Uppsala University, Sweden)

Michael Picheny (IBM T. J. Watson Research Center, USA)

Donia Scott (University of Sussex, United Kingdom)

Chengqing Zong (Chinese Academy of Sciences, China)

 

Area Chairs

 

1. Linguistic Issues in CL and NLP

Emily M. Bender  (University of Washington, USA)

Eva Hajicova (Charles University in Prague, Czech Republic)

Igor Boguslavsky (Universidad Politecnica de Madrid, Spain)

 

2. Machine Learning for CL and NLP

Jason Eisner (Johns Hopkins University, USA)

Yoshimasa Tsuruoka (University of Tokyo, Japan)

 

3. Cognitive Issues in CL and NLP

Philippe Blache (CNRS & Aix-Marseille Université, France)

Ted Gibson (MIT, USA)

 

4.  Morphology, Word Segmentation, Tagging and Chunking

Reut Tsarfaty (Weizmann Institute of Science, Israel)

Yue Zhang (Singapore University of Technology and Design, Singapore)

 

5. Syntax, Grammar Induction, Syntactic and Semantic Parsing

Laura Kallmeyer (Heinrich-Heine-Universität, Germany)

Ryan McDonald (Google, USA)

 

6. Lexical Semantics and Ontologies

Chu-Ren Huang (Hong Kong Polytechnic University, Hong Kong)

Alessandro Oltramari (Carnegie Mellon University, USA)

 

7. Semantic Processing, Distributional Semantics and Compositional Semantics

Stephen Clark (University of Cambridge, UK)

Alessandro Lenci (University of Pisa, Italy)

 

8. Modeling of Discourse and Dialogue

Nicolas Asher (CNRS & Université Paul Sabatier, France)

Marilyn Walker (University of California Santa Cruz, USA)

 

9. Natural Language Generation and Summarization

Albert Gatt (University of Malta, Malta)

Advaith Siddharthan (University of Aberdeen, UK)

 

10. Paraphrasing and Textual Entailment

Ido Dagan (Bar Ilan University, Israel)

Kentaro Inui (Tohoku University, Japan)

 

11. Sentiment Analysis, Opinion Mining and Social Media

Rada Mihalcea (University of Michigan, USA)

Bing Liu (University of Illinois at Chicago, USA)

 

12.  Information Retrieval and Question Answering

Gareth Jones (Dublin City University, Ireland)

Siddharth Patwardhan (IBM Research, USA)

 

13. Information Extraction and Database Linking

James Curran (University of Sydney, Australia)

Seung-won Hwang (POSTECH, Korea)

 

14. Applications

Srinivas Bangalore (AT&T Labs-Research, USA)

Heyan Huang (Beijing Institute of Technology, China)

Guillaume Jacquet (Joint Research Centre, Italy)

 

15. Multimodal and Natural Language Interfaces and Dialog Systems

Kristiina Jokinen  (University of Helsinki, Finland)

David Traum (University of Southern California, USA)

 

16. Speech Recognition, Text-To-Speech, Spoken Language Understanding

Nick Campbell  (Trinity College Dublin, Ireland)

Alex Potamianos (Technical University of Crete, Greece)

 

17. Machine Translation

Philipp Koehn (University of Edinburgh, UK / Johns Hopkins University, USA)

Chris Quirk (Microsoft Research, USA)

Tiejun Zhao (Harbin Institute of Technology, China)

 

18. Resources

Pushpak Bhattacharyya (IIT Bombay, India)

Nicoletta Calzolari (ILC-CNR, Pisa, Italy)

Martha Palmer (University of Colorado, USA)

 

19. Languages with less resources

Steven Bird (University of Melbourne, Australia)

Mark Liberman (University of Pennsylvania, USA)

Rajeev Sangal (IIT Banaras Hindu University, India)

Koenraad De Smedt (University of Bergen, Norway)

 

20. Software and Tools

Jesús Cardeñosa (Universidad Politecnica de Madrid, Spain)

 

Jing-Shin Chang (National Chi Nan University, Taiwan)





Back  Top

3-3-29(2014-08-23) SHARED TASK ON THE LEXICAL ACCESS PROBLEM (with COGALEX)
SHARED TASK ON THE LEXICAL ACCESS PROBLEM
(COMPUTING ASSOCIATIONS WHEN BEING GIVEN MULTIPLE STIMULI)


In the framework of the 4th Workshop on Cognitive Aspects of the Lexicon (CogALex) to be held at COLING 2014, we invite participation in a shared task devoted to the problem of lexical access in language production, with the aim of providing a quantitative comparison between different systems.

 
MOTIVATION

The quality of a dictionary depends not only on coverage, but also on the accessibility of the information. That is, a crucial point is dictionary access. Access strategies vary with the task (text understanding vs. text production) and the knowledge available at the very moment of consultation (words, concepts, speech sounds). Unlike readers who look for meanings, writers start from them, searching for the corresponding words. While paper dictionaries are static, permitting only limited strategies for accessing information, their electronic counterparts promise dynamic, proactive search via multiple criteria (meaning, sound, related words) and via diverse access routes. Navigation takes place in a huge conceptual lexical space, and the results are displayable in a multitude of forms (e.g. as trees, as lists, as graphs, or sorted alphabetically, by topic, by frequency).

To bring some structure into this multitude of possibilities, the shared task will concentrate on a crucial subtask, namely multiword association. What we mean by this in the context of this workshop is the following. Suppose we were looking for a word expressing the following ideas: 'superior dark coffee made of beans from Arabia', but could not remember the intended word 'mocha' due to the tip-of-the-tongue problem. Since people always remember something concerning the elusive word, it would be nice to have a system accepting this kind of input and then proposing a number of candidates for the target word. Given the above example, we might enter 'dark', 'coffee', 'beans', and 'Arabia', and the system would be supposed to come up with one or several associated words such as 'mocha', 'espresso', or 'cappuccino'.

 
TASK DEFINITION

The participants will receive lists of five given words (primes) such as 'circus', 'funny', 'nose', 'fool', and 'fun' and are supposed to compute the word which is most closely associated with all of them. In this case, the word 'clown' would be the expected response. Here are some more examples:

   given words:  gin, drink, scotch, bottle, soda
   target word:  whisky

   given words:  wheel, driver, bus, drive, lorry
   target word:  car

   given words:  neck, animal, zoo, long, tall
   target word:  giraffe

   given words:  holiday, work, sun, summer, abroad
   target word:  vacation

   given words:  home, garden, door, boat, chimney
   target word:  house

   given words:  blue, cloud, stars, night, high
   target word:  sky

We will provide a training set of 2000 sets of five input words (multiword stimuli), together with the expected target words (associative responses). The participants will have about five weeks to train their systems on this data. After the training phase, we will release a test set containing another 2000 sets of five input words, but without providing the expected target words.

Participants will have five days to run their systems on the test data, thereby predicting the target words. For each system, we will compare the results to the expected target words and compute an accuracy. The participants will be invited to submit a paper describing their approach and their results.

For the participating systems, we will distinguish two categories:

(1) Unrestricted systems. They can use any kind of data to compute their results.
(2) Restricted systems: These systems are only allowed to draw on the freely available ukWaC corpus in order to extract information on word associations. The ukWaC corpus comprises about 2 billion words and can be downloaded from http://wacky.sslmit.unibo.it/doku.php?id=corpora.

Participants are allowed to compete in either category or in both.


VENUE

The shared task will take place as part of the CogALex workshop which is co-located with COLING 2014 (Dublin). The workshop date is August 23, 2014. Shared task participants who wish to have a paper published in the workshop proceedings will be required to present their work at the workshop.
 

SHARED TASK SCHEDULE

Training data release:  March 27, 2014
Test data release:  May 5, 2014
Final results due:  May 9, 2014
Deadline for paper submission: May 25, 2014 
Reviewers' feedback:  June 15, 2014
Camera-ready version:  July 7, 2014
Workshop date:  August 23, 2014


FURTHER INFORMATION

CogALex workshop website: http://pageperso.lif.univ-mrs.fr/~michael.zock/CogALex-IV/cogalex-webpage/index.html
Data releases: To be found on the above workshop website from the dates given in the schedule.
Registration for the shared task: Send e-mail to Michael Zock, with Reinhard Rapp in copy.


WORKSHOP ORGANIZERS

Michael Zock (LIF-CNRS, Marseille, France), michael.zock AT lif.univ-mrs.fr
Reinhard Rapp (Universities of Aix-Marseille, France, and Mainz, Germany), reinhardrapp AT gmx.de
Chu-Ren Huang (The Hong Kong Polytechnic University, Hong Kong), churen.huang AT inet.polyu.edu.hk

Back  Top

3-3-30(2014-09-01) 22nd European Signal Processing Conference (EUSIPCO 2014) Lisbon, Portugal
The 22nd European Signal Processing Conference
September 1-5, 2014, Lisbon, Portugal
http://www.eusipco2014.org/

Deadline for the submission of full papers: FEBRUARY 17, 2014

EUSIPCO 2014 will be held on September 1-5, 2014, in Lisbon, Portugal. This is one of the largest international conferences in the field of signal processing and will address all the latest developments in research and technology. The conference will bring together individuals from academia, industry, regulation bodies, and government, to exchange and discuss ideas in all the areas and applications of signal processing. EUSIPCO 2014 will feature world-class keynote speakers, special sessions, plenary talks, tutorials, and technical sessions.

We invite the submission of original, unpublished technical papers on signal processing topics, including but not limited to:

• Audio and acoustic signal processing
• Design and implementation of signal processing systems
• Multimedia signal processing
• Speech processing
• Image and video processing
• Machine learning
• Signal estimation and detection
• Sensor array and multichannel signal processing
• Signal processing for communications including wireless and optical communications and networking
• Signal processing for location, positioning and navigation
• Nonlinear signal processing
• Signal processing applications including health and biosciences

Submitted papers must be camera-ready, no more than five pages long, and conforming to the format that will soon be specified on the EUSIPCO website (http://www.eusipco2014.org/).

Best Paper Awards

Two “EUSIPCO best young author paper awards” will be given at the dinner banquet of EUSIPCO 2014 to the two best papers from authors under the age of 30.

Important Dates

Proposal for special sessions: December 9, 2013
Proposal for tutorials: February 17, 2014
Electronic submission of full papers: February 17, 2014
Notification of acceptance: May 26, 2014
Submission of camera-ready papers and copyright forms: June 23, 2014
Back  Top

3-3-31(2014-09-03) Laboratory Approaches to Romance Phonology 7 (LARP VII), Aix en Provence, FR


Laboratory Approaches to Romance Phonology 7 (LARP VII)
   

                
Aix-en-Provence, France
Sept. 3-5, 2014

The biennial conference on Laboratory Approaches to Romance Phonology (LARP) seeks to bring together international researchers interested in all areas of Romance phonetics and phonology, in particular within the laboratory phonology approach. In the past decades, research in the laboratory phonology paradigm has expanded significantly, so that the disciplines of phonetics and phonology are being investigated from a unique interdisciplinary angle. LARP aims to provide an interdisciplinary forum for worldwide research focusing on the experimental investigation of Romance phonetics and phonology and their related areas, such as language acquisition, language variation and change, prosody, speech pathology, speech technology, as well as the phonology-phonetics interface.

LARP VII will be hosted for the first time in Europe, by the Laboratoire Parole et Langage in Aix-en-Provence, and will be the result of a joint effort between Aix-Marseille University (Aix-en-Provence, France) and the Universitat Pompeu Fabra (Barcelona, Spain).

 Meeting Dates:
 Laboratory Approaches to Romance Phonology VII will be held from
 03-Sept-2014 to 05-Sept-2014.

 Contact Information:
 Mariapaola D'Imperio: larp7conference@gmail.com
 
Organizers:
 Mariapaola D'Imperio (Aix-Marseille University & LPL,CNRS)
 Pilar Prieto (ICREA & Universitat Pompeu Fabra)

 Conference webpage:
 http://larp7.sciencesconf.org/

 Abstract Submission Information:
Abstracts can be submitted from 15-Dec-2013 until 15-April-2014. 
 Invited speakers
 Laura Bosch, Univ. Barcelona
 Martine Grice, Univ. Koeln, Germany
 Thierry Nazzi, CNRS, Paris
 Daniel Recasens, Univ. Autonoma, Barcelona
           
         
Back  Top

3-3-32(2014-09-10) 56th International Symposium ELMAR-2014 , Zadar, Croatia
 56th International Symposium ELMAR-2014 
********************************** 
September 10-12, 2014 Zadar, Croatia
 Paper submission deadline: May 15, 2014 
http://www.elmar-zadar.org/ 
CALL FOR PAPERS

TECHNICAL CO-SPONSORS
IEEE Region 8
IEEE Croatia Section
IEEE Croatia Section SP, AP and MTT Chapters
EURASIP - European Association for Signal Processing
TOPICS
 --> Image and Video Processing 
--> Multimedia Communications
 --> Speech and Audio Processing 
--> Wireless Communications 
--> Telecommunications
 --> Antennas and Propagation
 --> e-Learning and m-Learning 
--> Navigation Systems 
--> Ship Electronic Systems 
--> Power Electronics and Automation 
--> Naval Architecture
 --> Sea Ecology 
--> Special Sessions: http://www.elmar-zadar.org/2014/special_sessions/ 
 
KEYNOTE SPEAKER 
Prof. Miloš Oravec, Slovak University of Technology in Bratislava, Slovakia:
Feature Extraction and Classification by Machine Learning Methods for Biometric Recognition of Face and Iris
 SCHEDULE OF IMPORTANT DATES 
Deadline for submission of full papers: May 15, 2014 
Notification of acceptance mailed out by: June 3, 2014 
Submission of (final) camera-ready papers: June 10, 2014 
Preliminary program available online by: June 17, 2014 
Registration forms and payment deadline: June 17, 2014 
Back  Top

3-3-33(2014-09-10) CfP 3rd SWIP - Swiss Workshop on Prosody, Université de Genève, Switzerland
Second Call for contributions
 
3rd SWIP - Swiss Workshop on Prosody
Special Theme : PhonoGenres and Speaking Styles
10-11 September 2014 - University of Geneva
 
 
The SWIP (Swiss Workshop on Prosody) is an annual meeting gathering
researchers in the field of prosody. After Zurich in 2012, and
Neuchâtel in 2013, the 3rd SWIP will take place in Geneva on
10-11 September 2014. For this edition, the special theme is
PhonoGenres and Speaking Styles. With this event we mark the end
of the three-year FNS research project 'Prosodic and linguistic
characterisation of speaking styles: semi-automatic approach and
applications'.
 
Phonostylistic prosodic variation, whether regional, social or
situational, is the object of a growing number of studies. These
studies may be systematic or isolated, based on phonetic-phonological
analyses of large-scale corpora or on the examination of narrow
samples. Approaches vary between systematic methodologies and ad hoc
procedures. Thus, one of the major goals of the conference is to
survey the different approaches and to compare their results.
 
Topics of interest include, but are not limited to:
 
*PhonoGenres: phonetic-prosodic dimensions; situational, regional,
communicative, macro- or micro-social variations; comparative analysis
*speaker-specific behavior: cliché, idiosyncrasy, distinctive features
*diachronic speaking style variation
*identification of discourse genres and styles
*methodologies and tools for corpus processing of speech in general,
and especially those developed to process the speaking style variation
 
Invited speakers:
 
Julia Hirschberg
Philippe Boula de Mareüil
 
Submission:
 
First, a one-page abstract, plus references, should be submitted in
English or in French via EasyChair by 1 February 2014.

Second, the definitive version of the paper should be submitted by
1 June 2014 so that the proceedings can be published, in both paper
and electronic format, by the beginning of the conference. Proceedings
will be published in Cahiers de la Linguistique Française in a short
version (6 pages max., about 2000 words) or a long version (12 pages
max., about 4000 words). Papers can be written in English or in French
with an abstract in the other language, and they must follow the style sheet.
 
Please note that the conference language is English.
 
Important dates:
 
Submission of abstracts : 1 February 2014
Notification of acceptance: 1 March 2014
Submission of final paper for proceedings publication: 1 June 2014
Conference: 10-11 September 2014
 
Scientific committee:
 
Antoine Auchlin
Mathieu Avanzi
Philippe Boula de Mareüil
Nick Campbell
Elisabeth Delais-Roussarie
Céline De Looze
Volker Dellwo
Jean-Philippe Goldman
Julia Hirschberg
Daniel Hirst
Ingrid Hove
Adrian Leemann
Joaquim Llisterri
Philippe Martin
Piet Mertens
Anne Lacheret
Nicolas Obin
Tea Pršir
Stephan Schmid
Sandra Schwab
Elizabeth Shriberg
Anne Catherine Simon
 
Organising committee:
 
Antoine Auchlin
Jean-Philippe Goldman
Tea Pršir
 
 
 
 
 
Back  Top

3-3-34(2014-09-11) CfP 2nd Workshop on Speech, Language and Audio in Multimedia (SLAM 2014), Penang, Malaysia

SLAM2014 Call for Paper
=========================

2nd Workshop on Speech, Language and Audio in Multimedia (SLAM 2014), Penang, Malaysia
http://language.cs.usm.my/SLAM2014/
11-12 September, 2014

Following the first successful edition of the Workshop on Speech, Language and Audio in Multimedia (SLAM) in Marseille, France last year, we will be bringing the next edition of the workshop to Penang, Malaysia! The SLAM workshop aims at bringing together researchers working in speech, language and audio processing to analyze, index and access multimedia data. Multimedia data are now available in very large amounts: lectures, meetings, interviews, debates, conversational broadcasts, podcasts, social videos on the Web, etc. Such data, along with the associated use scenarios, raise specific challenges: robustness in the face of highly variable quality; efficiency in handling very large amounts of data; semantics shared across modalities; potentially high error rates in transcription; etc. Worldwide, several national and international research projects are focusing on audio analysis of multimedia data. Similarly, various benchmark initiatives have been launched, such as TRECVID MED, MediaEval, or ETAPE and REPERE in France.

SLAM 2014 is organized in conjunction with Interspeech 2014 over one and a half days, starting Thursday 11 September 2014 and ending Friday 12 September 2014. Penang is conveniently connected by bus, train and flight to Singapore, where the Interspeech 2014 conference will take place. The format of the workshop will include an invited talk, oral presentations of scientific work and a poster session for project and benchmark presentations. The SLAM 2014 workshop is jointly organized by the ISCA SIG on Speech and Language in Multimedia and the IEEE SIG on Audio and Speech Processing in Multimedia. The proceedings will be published by ISCA and made available online in the ISCA Online Archive. Authors of selected best papers will be invited to submit extended versions to a special issue of a journal to be announced later.


SCIENTIFIC COMMITTEE:
Chng Eng Siong, Nanyang Technological University - Singapore
Eric Castelli, Hanoi University of Science and Technology - Vietnam
Fernando Fernández-Martínez, Universidad Carlos III, Madrid - Spain
Florian Metze, Carnegie Mellon University, Pittsburgh - USA
Frédéric Bechet, Aix-Marseille Université, Marseille - France
Gareth Jones, Dublin City University, Dublin - Ireland
Gerald Friedland, University of California, Berkeley - USA
Guillaume Gravier, Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Rennes - France
Juan Manuel Montero, Universidad Politécnica de Madrid, Madrid - Spain
Laurent Besacier, Université Joseph Fourier, Grenoble - France
Lin-Shan Lee, Nanyang Technological University - Singapore
Luis Fernando D’Haro, Universidad Politécnica de Madrid, Madrid - Spain
Martha Larson, Delft University of Technology, Delft - Netherlands
Ricardo de Córdoba, Universidad Politécnica de Madrid, Madrid - Spain
Roberto Barra-Chicote, Universidad Politécnica de Madrid, Madrid - Spain
Rubén San-Segundo, Universidad Politécnica de Madrid, Madrid - Spain
Sadaoki Furui, Tokyo Institute of Technology, Tokyo - Japan
Tang Enya Kong, Linton University College, Negeri Sembilan - Malaysia
Xavier Anguera, Telefónica Research, Barcelona - Spain

Two distinguished professors, Prof. Shrikanth S. Narayanan and Assoc. Prof. Min-Yen Kan, will give keynote presentations during the workshop.

IMPORTANT DATES:
Full paper submission deadline 12th June 2014
Notification of acceptance 16th July 2014
Camera ready paper 31st July 2014
SLAM workshop 11-12th September 2014
Interspeech conference 14-18th September 2014

In addition to the scientific programme, we will provide a tour around George Town, Penang, a UNESCO World Heritage historical city.

Jointly Organised by:
Universiti Sains Malaysia, SLAM Organising Committee, Interspeech 2014, ISCA SIG on Speech and Language in Multimedia, IEEE SIG on Audio and Speech Processing in Multimedia

Back  Top

3-3-35(2014-09-12) ISCSLP in Singapore

 

Welcome to ISCSLP 2014

第九届中文口语语言处理国际会议

http://www.iscslp2014.org



The 9th International Symposium on Chinese Spoken Language Processing (ISCSLP 2014) will be held on September 12-14, 2014 in Singapore.

ISCSLP 2014 is a joint conference between ISCA Special Interest Group on Chinese Spoken Language Processing and National Conference on Man-Machine Speech Communication of China.

ISCSLP is a biennial conference for scientists, researchers, and practitioners to report and discuss the latest progress in all theoretical and technological aspects of spoken language processing.

While ISCSLP focuses primarily on Chinese languages, work on other languages that may be applied to Chinese speech and language is also encouraged. The working language of ISCSLP is English.

 

Topics of interest for submission include, but are not limited to the following:

  1. Speech Production and Perception

  2. Speech Analysis

  3. Speech Coding

  4. Speech Enhancement

  5. Hearing Aids and Cochlear Implant

  6. Phonetics and Phonology

  7. Corpus-based Linguistics

  8. Speech and Language Disorders

  9. Speech Recognition

  10. Spoken Language Translation

  11. Speaker, Language, and Emotion Recognition

  12. Speech Synthesis

  13. Language Modeling

  14. Speech Prosody

  15. Spoken Dialog Systems

  16. Machine Learning Techniques in Speech and Language Processing

  17. Voice Conversion

  18. Indexing, Retrieval and Authoring of Speech Signals

  19. Multi-Modal Interfaces

  20. Speech and Language Processing in Education

  21. Spoken Language Resources and Technology Evaluation

  22. Applications of Spoken Language Processing Technology

  23. Others

 

 

Important Dates

Regular and special session paper submission deadline

April 10, 2014

Notification of paper acceptance

June 20, 2014

Revised camera-ready paper upload deadline

June 30, 2014

Author’s registration deadline

July 10, 2014

 

Back  Top

3-3-36(2014-09-25) XLVIII Congresso Internazionale - Società di Linguistica Italiana , Udine, Italy

XLVIII Congresso Internazionale - Società di Linguistica Italiana (SLI) 2014

(Udine, 25-27.9.2014)

 

WORKSHOP

 

Between linguistics and linguistic medical clinic. The role of the linguist

 

 

Workshop topics

- Medical terminology

- Medical discourse and the effectiveness of healthcare corporate communication

- Doctor-patient communicative interaction in multilingual contexts

- Oral language, written language, and specific disabilities

- Grammar diseases: the role of the linguist

- Linguistic symptoms in the context of specific diseases

 

Invited speakers

Charles Antaki

Maria Teresa Guasti

 

Scientific Committee

Grazia Basile

Anna Cardinaletti

Francesca M. Dovetto

Vincenzo Orioles

Franca Orletti

Patrizia Sorianello

 

 

Abstract submission guidelines

Scholars, researchers and PhD students interested in presenting a paper or poster should send an abstract by email to <medcli.sli2014@libero.it>.

The deadline for submission is 20 February 2014.

Notifications of acceptance will be sent by email by 31 March 2014.

Authors must submit an anonymous abstract (.doc/.pdf format) while in the email they should clearly include: name of the author(s), affiliation(s) and email address(es). Abstracts should be no longer than 1000 words including the bibliography.

Conference languages: Italian, English, French and Spanish.

 

Info: <dovetto@unina.it>

 

 

Back  Top

3-3-37(2014-10-05) 16th International Conference on Speech and Computer (SPECOM-2014), Novi Sad, Serbia

SPECOM 2014 - CALL FOR PAPERS

*********************************************************

 

16th International Conference on Speech and Computer (SPECOM-2014)

Venue: Novi Sad, Serbia, 5-9 October 2014

Web: www.specom.nw.ru

 

 

SPECOM NEWS

 

SPECOM this year is organized in parallel with DOGS (The Tenth Conference on Digital Speech and Image Processing), at the same time and place. Participants will be able to attend both conferences.

 

ORGANIZERS

 

The conference is organized by the Faculty of Technical Sciences, University of Novi Sad (UNS, Novi Sad, Serbia), in cooperation with Moscow State Linguistic University (MGLU, Moscow, Russia) and St. Petersburg Institute for Informatics and Automation of the Russian Academy of Science (SPIIRAS, St. Petersburg, Russia).

 

SPECOM conferences

 

The SPECOM conferences have long been organised by SPIIRAS (St. Petersburg) and MGLU (Moscow). In recent years the venue has varied considerably: Patras, Greece (2005); Kazan, Russia (2011); Plzen, Czech Republic (2013).

The last conference was organized in parallel with TSD 2013 (The 16th International Conference on Text, Speech and Dialogue) and was a great success, with clear benefits from bringing the various research teams together. Continuing this tradition, SPECOM 2014 and DOGS 2014 will be organized jointly. Both conferences are devoted to issues of human-machine interaction, and their topics complement each other well.

Since 2013, owing to the growing contribution of the University of West Bohemia, Czech Republic, the SPECOM proceedings have been published by Springer-Verlag in the Lecture Notes in Artificial Intelligence (LNAI) series. The LNAI series is listed in all major citation databases such as DBLP, SCOPUS, EI, INSPEC, and COMPENDEX.

 

TOPICS

 

Topics of the conference will include (but are not limited to):

Signal processing and feature extraction

Speech enhancement

Multichannel signal processing

Speech recognition and understanding

Spoken language processing

Spoken dialogue systems

Speaker identification and diarization

Speech forensics and security

Language identification

Text-to-speech systems

Speech perception and speech disorders

Speech translation

Multimodal analysis and synthesis

Audio-visual speech processing

Multimedia processing

Speech and language resources

Applications for human-computer interaction

 

OFFICIAL LANGUAGE

 

The official language of the event will be English. However, papers on processing of languages other than English are strongly encouraged.

 

FORMAT OF THE CONFERENCE

 

The conference program will include presentations of invited papers, oral presentations, and poster/demonstration sessions. Papers will be presented in plenary or topic-oriented sessions.

Social events including a trip to the Krusedol monastery and wine makers on Fruska Gora will allow for additional informal interactions. Details about the social event will be available on the web page.

 

SUBMISSION OF PAPERS

 

Authors are invited to submit a full paper not exceeding 8 pages formatted in the LNCS style (see below). Those accepted will be presented either orally or as posters. The decision on the presentation format will be based upon the recommendation of three independent reviewers. The authors are asked to submit their papers using the on-line submission form accessible from the conference web site.

Papers submitted to SPECOM 2014 must not be under review by any other conference or publication during the SPECOM review cycle, and must not be previously published or accepted for publication elsewhere.

As the reviewing is blind, the paper should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g., 'We previously showed (Smith, 1991) ...', should be avoided. Instead, use citations such as 'Smith previously showed (Smith, 1991) ...'. Papers that do not conform to the requirements above will be rejected without review.

Papers must be submitted for review as PDF files with all required fonts embedded. Upon notification of acceptance, authors will receive further information on submitting their camera-ready papers and electronic sources (for detailed instructions on the final paper format see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0).

 

 

IMPORTANT DATES

 

May 18, 2014 ............. Submission of full papers

June 1, 2014 ............. Notification of acceptance

June 15, 2014 ............ Final papers (camera ready) and registration

October 5-9, 2014 ........ Conference dates

 

The contributions to the conference will be published in proceedings that will be made available to participants at the time of the conference.

 

 

CONFERENCE FEES

 

The conference fee depends on the date of payment and on your status. It includes one copy of the conference proceedings, refreshments/coffee breaks, opening dinner, welcome party, mid-conference social event admissions, and organizing costs. In order to lower the fee as much as possible, meals during the conference, the accommodation, and the conference trip are not included.

 

Full participant:

early registration by June 15, 2014 – RSD 40000 (approx. 350 EUR)

late registration by September 1, 2014 – RSD 43000 (approx. 380 EUR)

on-site registration – RSD 50000 (approx. 440 EUR)

 

Student (reduced):

early registration by June 15, 2014 – RSD 32000 (approx. 280 EUR)

late registration by September 1, 2014 – RSD 35000 (approx. 310 EUR)

on-site registration – RSD 40000 (approx. 350 EUR)

 

The payment may be refunded up until September 15, at a cost of RSD 6500. No refund is possible after this date. All costs are in Serbian Dinars (RSD); see e.g. http://www.xe.com/ucc/ for the current exchange rate.

At least one of the authors must register and pay the registration fee by June 15, 2014 for their paper to be included in the conference proceedings. Only one paper of up to 8 pages is covered by the regular registration fee. Additional papers, and any extra pages, are charged at RSD 5000 per page. An author with more than one paper pays the additional-paper rate unless a co-author has also registered and paid the full registration fee. In case of uncertainty, feel free to contact the organising committee for clarification.

 

VENUE

 

The conference will be organized in Hotel Park, Novi Sad, Serbia (http://hotelparkns.com).

Novi Sad is the second largest city in Serbia. The city has a population of 231,798 inhabitants. It is located in the southern part of the Pannonian Plain, on the border of the Bačka and Srem regions, on the banks of the Danube river and Danube-Tisa-Danube Canal, facing the northern slopes of Fruška Gora mountain. The city was founded in 1694, when Serb merchants formed a colony across the Danube from the Petrovaradin fortress, a Habsburg strategic military post. In the 18th and 19th centuries, it became an important trading and manufacturing centre, as well as a centre of Serbian culture of that period, earning the nickname Serbian Athens. Today, Novi Sad is an industrial and financial centre of the Serbian economy, as well as a major cultural center.

The University of Novi Sad was founded on 28 June 1960. Today it comprises 14 faculties located in the four major towns of the Autonomous Province of Vojvodina: Novi Sad, Subotica, Zrenjanin, and Sombor. The University of Novi Sad is now the second largest among six state universities in Serbia. The main University Campus, covering an area of 259,807m², provides the University of Novi Sad with a unique and beautiful setting in the region and the city of Novi Sad. Having invested considerable efforts in intensifying international cooperation and participating in the process of university reforms in Europe, the University of Novi Sad has come to be recognized as a reform-oriented university in the region and on the map of universities in Europe.

The Faculty of Technical Sciences (Fakultet Tehničkih Nauka, FTN, www.ftn.uns.ac.rs) with 1,200 employees and more than 12,000 students is the largest faculty at UNS. FTN offers engineering education within 71 study programmes. As a research and scientific institution, FTN has 13 departments and 31 research centres. FTN also publishes 4 international journals and organises 16 scientific conferences on various aspects of engineering, including the conference DOGS which is dedicated to the area of speech technologies where FTN has the leading position in the Western Balkan region.

 

ACCOMMODATION

 

The organizing committee has arranged accommodation at reasonable prices in the same Hotel Park, which is situated near the city centre. Rooms at a substantial discount have been reserved for the conference days.

 

ADDRESS

 

All correspondence regarding the conference should be addressed to:

SPECOM Secretariat

E-mail: specom@iias.spb.su

Phone/Fax: +7 812 328 7081

Fax: +7 812 328 4450 — Please, designate the faxed material with capitals 'SPECOM' on top.

SPECOM-2014 conference web site: www.specom.nw.ru

 

Back  Top

3-3-38(2014-10-14) 2nd INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH PROCESSING

2nd INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH

PROCESSING

SLSP 2014

Grenoble, France

October 14-16,

2014

Organised by:

Équipe GETALP

Laboratoire d’Informatique de Grenoble

Research Group on Mathematical Linguistics (GRLMC)

Rovira i Virgili University

http://grammars.grlmc.com/slsp2014/

**********************************************************************************

AIMS:

SLSP is a yearly conference series aimed at promoting and displaying excellent research on the wide spectrum of statistical methods that are currently in use in computational language or speech processing. It aims at attracting contributions from both fields. Though there exist large, well-known conferences and workshops hosting contributions to any of these areas, SLSP is a more focused meeting where synergies between subdomains and people will hopefully emerge. In SLSP 2014, significant room will be reserved for young scholars at the beginning of their career and particular focus will be put on methodology.

VENUE:

SLSP 2014 will take place in Grenoble, at the foot of the French Alps.

SCOPE:

The conference invites submissions discussing the employment of statistical methods (including machine learning) within language and speech processing. The list below is indicative and not exhaustive:

phonology, morphology

syntax, semantics

discourse, dialogue, pragmatics

statistical models for natural language processing

supervised, unsupervised and semi-supervised machine learning methods applied to natural language, including speech

statistical methods, including biologically‐inspired methods

similarity

alignment

language resources

part‐of‐speech tagging

parsing

semantic role labelling

natural language generation

anaphora and coreference resolution

speech recognition

speaker identification/verification

speech transcription

text‐to‐speech synthesis

machine translation

translation technology

text summarisation

information retrieval

text categorisation

information extraction

term extraction

spelling correction

text and web mining

opinion mining and sentiment analysis

spoken dialogue systems

author identification, plagiarism and spam filtering

STRUCTURE:

SLSP 2014 will consist of:

invited talks

invited tutorials

peer‐reviewed contributions

INVITED SPEAKERS:

to be announced

PROGRAMME COMMITTEE:

Sophia Ananiadou (Manchester, UK)

Srinivas Bangalore (Florham Park, US)

Patrick Blackburn (Roskilde, DK)

Hervé Bourlard (Martigny, CH)

Bill Byrne (Cambridge, UK)

Nick Campbell (Dublin, IE)

David Chiang (Marina del Rey, US)

Kenneth W. Church (Yorktown Heights, US)

Walter Daelemans (Antwerpen, BE)

Thierry Dutoit (Mons, BE)

Alexander Gelbukh (Mexico City, MX)

Ralph Grishman (New York, US)

Sanda Harabagiu (Dallas, US)

Xiaodong He (Redmond, US)

Hynek Hermansky (Baltimore, US)

Hitoshi Isahara (Toyohashi, JP)

Lori Lamel (Orsay, FR)

Gary Geunbae Lee (Pohang, KR)

Haizhou Li (Singapore, SG)

Daniel Marcu (Los Angeles, US)

Carlos Martín‐Vide (Tarragona, ES, chair)

Manuel Montes‐y‐Gómez (Puebla, MX)

Satoshi Nakamura (Nara, JP)

Shrikanth S. Narayanan (Los Angeles, US)

Vincent Ng (Dallas, US)

Joakim Nivre (Uppsala, SE)

Elmar Nöth (Erlangen, DE)

Maurizio Omologo (Trento, IT)

Barbara H. Partee (Amherst, US)

Gerald Penn (Toronto, CA)

Massimo Poesio (Colchester, UK)

James Pustejovsky (Waltham, US)

Gaël Richard (Paris, FR)

German Rigau (San Sebastián, ES)

Paolo Rosso (Valencia, ES)

Yoshinori Sagisaka (Tokyo, JP)

Björn W. Schuller (London, UK)

Satoshi Sekine (New York, US)

Richard Sproat (New York, US)

Mark Steedman (Edinburgh, UK)

Jian Su (Singapore, SG)

Marc Swerts (Tilburg, NL)

Jun'ichi Tsujii (Beijing, CN)

Renata Vieira (Porto Alegre, BR)

Dekai Wu (Hong Kong, HK)

Feiyu Xu (Berlin, DE)

Roman Yangarber (Helsinki, FI)

Geoffrey Zweig (Redmond, US)

ORGANISING COMMITTEE:

Laurent Besacier (Grenoble, co‐chair)

Adrian Horia Dediu (Tarragona)

Benjamin Lecouteux (Grenoble)

Carlos Martín‐Vide (Tarragona, co‐chair)

Florentina Lilica Voicu (Tarragona)

SUBMISSIONS:

Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including any appendices) and should be prepared according to the standard format for Springer Verlag's LNAI/LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0).

Submissions have to be uploaded to:

https://www.easychair.org/conferences/?conf=slsp2014

PUBLICATIONS:

A volume of proceedings published by Springer in the LNAI/LNCS series will be available by the time of the conference.

A special issue of a major journal will be published later, containing peer-reviewed extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.

REGISTRATION:

The period for registration is open from January 16, 2014 to October 14, 2014. The registration form can be found at:

http://grammars.grlmc.com/slsp2014/Registration.php

DEADLINES:

Paper submission: May 7, 2014 (23:59h, CET)

Notification of paper acceptance or rejection: June 18, 2014

Final version of the paper for the LNAI/LNCS proceedings: June 25, 2014

Early registration: July 2, 2014

Late registration: September 30, 2014

Submission to the post‐conference journal special issue: January 16, 2015

QUESTIONS AND FURTHER INFORMATION:

florentinalilica.voicu@urv.cat

POSTAL ADDRESS:

SLSP 2014

Research Group on Mathematical Linguistics (GRLMC)

Rovira i Virgili University

Av. Catalunya, 35

43002 Tarragona, Spain

Phone: +34 977 559 543

Fax: +34 977 558 386

ACKNOWLEDGEMENTS:

Departament d’Economia i Coneixement, Generalitat de Catalunya

Laboratoire d’Informatique de Grenoble

Universitat Rovira i Virgili

Back  Top

3-3-39(2014-10-16) CfP MediaEval 2014 Multimedia Benchmark Evaluation, Barcelona (SP)

--------------------------------------------------
Call for Participation
MediaEval 2014 Multimedia Benchmark Evaluation
http://www.multimediaeval.org
Early registration deadline: 1 May 2014
--------------------------------------------------

MediaEval is a multimedia benchmark evaluation that offers tasks promoting research and innovation in areas related to human and social aspects of multimedia. MediaEval 2014 focuses on aspects of multimedia that include speech and audio. Participants carry out one or more of the tasks offered and submit runs to be evaluated. They then write up their results and present them at the MediaEval 2014 workshop.

The tasks that focus on speech are:

*QUESST: Query by Example Search on Speech Task (ex SWS)*
*Search and Hyperlinking*

The entire list of tasks and their descriptions is below.

For each task, participants receive a task definition, task data and accompanying resources (dependent on task) such as shot boundaries, keyframes, visual features, speech transcripts and social metadata. In order to encourage participants to develop techniques that push forward the state-of-the-art, a 'required reading' list of papers will be provided for each task. Participation is open to all interested research groups. To sign up, please click the “MediaEval 2014 registration site” link at:

http://www.multimediaeval.org/mediaeval2014

The following tasks are available to participants at MediaEval 2014:

*Synchronization of multi-user Event Media (New!)*
This task requires participants to automatically create a chronologically-ordered outline of multiple image galleries corresponding to the same event, where data collections are synchronized altogether and aligned along parallel lines over the same time axis, or mixed in the correct order.

*C@merata: Question Answering on Classical Music Scores (New!)*
In this task, systems take as input a noun phrase (e.g., 'harmonic perfect fifth') and a short score in MusicXML (e.g., J.S. Bach, Suite No. 3 in C Major for Cello, BWV 1009, Sarabande) and return an answer stating the location of the requested feature (e.g., 'Bar 206').

*Retrieving Diverse Social Images Task*
This task requires participants to automatically refine a ranked list of Flickr photos with landmarks using provided visual and textual information. The objective is to select only a small number of photos that are equally representative matches but also diverse representations of the query.

*Search and Hyperlinking*
This task requires participants to find video segments relevant to an information need and to provide a list of useful hyperlinks for each of these segments. The hyperlinks point to other video segments in the same collection and should allow the user of the system to explore the collection with respect to the current information need in a non-linear fashion. The task focuses on television data provided by the BBC and real information needs from home users.

*QUESST: Query by Example Search on Speech Task (ex SWS)*
The task involves searching FOR audio content WITHIN audio content USING an audio content query. This task is particularly interesting for speech researchers in the area of spoken term detection or low-resource speech processing.

*Visual Privacy*
This task requires participants to implement privacy filtering solutions that provide an optimal balance between obscuring information that personally identifies people in a video, and retaining information that allows viewers otherwise to interpret the video.

*Emotion in Music (an Affect Task)*
We aim at detecting emotional dynamics of music using its content. Given a set of songs, participants are asked to automatically generate continuous emotional representations in arousal and valence.

*Placing: Geo-coordinate Prediction for Social Multimedia*
This task requires participants to estimate the geographical coordinates (latitude and longitude) of multimedia items (photos, videos and accompanying metadata), as well as predicting how “placeable” a media item actually is. The Placing Task integrates all aspects of multimedia: textual meta-data, audio, image, video, location, time, users and context.

*Affect Task: Violent Scenes Detection*
This task requires participants to automatically detect portions of movies depicting violence. Participants are encouraged to deploy multimodal approaches (audio, visual, text) to solve the task.

*Social Event Detection in Web Multimedia*
This task requires participants to discover, retrieve and summarize social events, within a collection of Web multimedia. Social events are events that are planned by people, attended by people and for which the social multimedia are also captured by people.

*Crowdsourcing: Crowdsorting Multimedia Comments (New!)*
This task asks participants to combine human computation (i.e., input from the crowd) with automatic computation to carry out classification. The classification involves sorting timed-comments in music, i.e., comments that users have made at certain points in a song, into categories according to their type (e.g., useful vs. non-useful and informative vs. affective).

Tasks marked 'New!' are the 2014 Brave New Tasks. If you sign up for these tasks, please be aware that you will be asked to keep in close touch with the task organizers concerning the details of the task over the course of the benchmarking cycle. We ask for extra-tight communication in order to ensure that these tasks have the flexibility they need to reach their goals.

MediaEval 2014 Timeline
(dates vary slightly from task to task, see the individual task pages for the individual deadlines: http://www.multimediaeval.org/mediaeval2014)

April-May: Registration and return usage agreements.
May-June: Release of development/training data.
June-July: Release of test data.
Mid-Sept.: Participants submit their completed runs.
Mid-Sept.-End-Sept.: Evaluation of submitted runs. Participants write their 2-page working notes papers.
16-17+18 October: MediaEval 2014 Workshop, Barcelona, Spain

We ask you to register by 1 May, when the first task will release its data set. After that point, late registration will be possible, but we encourage teams to register as early as they can.

Contact
For questions or additional information please contact Martha Larson m.a.larson@tudelft.nl or visit http://www.multimediaeval.org

MediaEval 2014 Organization Committee:

Martha Larson at Delft University of Technology and Gareth Jones at Dublin City University act as the overall coordinators of MediaEval. Individual tasks are coordinated by a group of task organizers, who form the MediaEval Organizing Committee. It is the collective efforts of this group of people that makes MediaEval possible. The complete list of MediaEval organizers is at:

http://www.multimediaeval.org/who

A large number of organizations and projects make a contribution to MediaEval organization, including the projects (alphabetical): AXES (http://www.axes-project.eu), CUbRIK (http://www.cubrikproject.eu/), CNGL (http://www.cngl.ie), CrowdRec (http://crowdrec.eu), Glocal (http://www.glocal-project.eu), LinkedTV (http://www.linkedtv.eu), Media Mixer (http://mediamixer.eu), Mucke (http://www.chistera.eu/projects/mucke), Promise (http://www.promise-noe.eu), Quaero (http://www.quaero.org), Sealinc Media (http://www.commit-nl.nl), SocialSensor (http://www.socialsensor.org), and VideoSense (http://www.videosense.eu).

Back  Top

3-3-40(2014-10-16) Journées de pausologie, Montpellier, France

 

Journées de pausologie  http://itic.univ-montp3.fr/pausologie/
Taking stock of the pause


The Journées de Pausologie welcome original and innovative work on the pause in phonetics, psycholinguistics and general linguistics.

 

From a purely formal point of view, a speech sequence can be described as a succession of sounds interspersed with silent phases. While phonetics has been extensively applied to describing these sound sequences in terms of their articulatory characteristics, their acoustic dimension and their perceptual consequences, pauses have been the subject of far fewer studies, even though a number of investigations (e.g. Goldman-Eisler, 1968) have revealed the necessity of marking brief interruptions during speech production.

 

This essential character of the pause is explained in particular by the fact that it reflects both respiratory movement and substantial cognitive activity. The pause allows the speaker to catch their breath, but also to plan the content of the message, to structure the utterance and to stage it, as in political speeches for example (e.g. Duez, 1999). In addition, the pause is also one of the cues signalling the end of a speaking turn and the start of the interlocutor's turn (Sacks et al., 1974). In writing, the prosodic, as well as syntactic and semantic, functions of the pause are traditionally marked by punctuation marks, whose interpretation has varied over the course of history (Catach, 1994; 2001).

 

The cognitive dimension of the pause mentioned above also makes it possible to exploit this rhythmic parameter in clinical linguistics, insofar as the pause, considered as a disfluency, is revealing of an individual's language abilities: the location and duration of pauses can serve as cues to identify pathological difficulties in lexical access (Gayraud et al., 2011) or to distinguish an ordinary disfluency from stuttering (Starkweather, 1987; Hirsch et al., 2012), respectively.

 

Submission proposals must address one of the following themes:

 

  1. The pause as a contribution to establishing a phonostyle;
  2. Pause and rhythm in normal/pathological speech;
  3. The semantic and syntactic repercussions of the pause;
  4. The pause as a turn-taking cue in interaction.

 

The link with the conference topic must be made explicit in the abstract. Proposals that do not fall directly within one of the themes above may also be accepted, provided the connection with the theme of the Journées is made clear.

 

Bibliography:

 

Catach N., (1994) La ponctuation : histoire et système, collection Que sais-je ?, n° 2818, Paris, PUF.

Catach N. (2001) Histoire de l'orthographe française, éd. posthume réalisée par Renée Honvault, avec la collab. de Irène Rosier-Catach, collection Lexica, n° 9, Paris, Champion.

Duez D. (1999), La fonction symbolique des pauses dans la parole de l'homme politique, Faits de langues, vol. 13, p. 91-97.

Gayraud, F., Lee H.R., Barkat-Defradas, M. (2010), Syntactic and lexical context of pauses and hesitations in the discourse of Alzheimer patients and healthy elderly subjects, Clinical Linguistics & Phonetics, vol. 25(3):198-209 (DOI : 10.3109/02699206.2010.521612).

Goldman-Eisler F. (1968) Psycholinguistics. Experiments in spontaneous speech, New York, Academic Press.

Hirsch F., Monfrais-Pfauwadel M.C., Crevier-Buchman L., Sock R., Fauth C., Pialot H. (2012) Using nasovideofibroscopic data to observe abnormal laryngeal behavior in stutterers, Proceedings of the 7th World Congress on Fluency Disorders, 2-5 juillet, Tours.

Sacks H., Schegloff E A., Jefferson G. (1974), A Simplest Systematics for the Organization of Turn-Taking for Conversation, Language, n° 50, 4, p. 696-735.

Starkweather C. (1987), Fluency and Stuttering. Englewood Cliffs: Prentice Hall.

 

Scientific Committee:

 

Barkat-Defradas Mélissa, Praxiling CNRS UMR5267-Université de Montpellier

Bres Jacques, Praxiling CNRS-Université de Montpellier

Delais-Roussarie Elisabeth, CNRS-Université Paris 7 Paris Diderot

Dodane Christelle, CNRS-Université de Montpellier

Ferré Gaëlle, Laboratoire de Linguistique de Nantes

Gayraud Frédérique, Université Lyon 2 & CNRS (Dynamique du Langage UMR5596)

Ghio Alain, Laboratoire Parole et Langage UMR 7309 CNRS - Université Aix-Marseille

Goldman Jean-Phillipe, Université de Genève

Hirsch Fabrice, Praxiling CNRS UMR5267-Université de Montpellier

Kleiber Georges, Université de Strasbourg, EA 1339 Lilpa

Rochet-Capellan Amélie, GIPSA Lab CNRS UMR 5216, Grenoble

Simon Anne-Catherine, Université Catholique de Louvain

Sock Rudolph, Université de Strasbourg, EA 1339 Lilpa

Steuckardt Agnès, Praxiling CNRS UMR5267-Université de Montpellier

Vaxelaire Béatrice, Université de Strasbourg, EA 1339 Lilpa

 

Organizing Committee:

 

Barkat-Defradas Mélissa, Université Paul Valéry, UMR5267 Praxiling

Bellemouche Hacène, Université Paul Valéry, UMR5267 Praxiling

Didirkova Ivana, Université Paul Valéry, UMR5267 Praxiling

Dodane Christelle, Université Paul Valéry, UMR5267 Praxiling

Hirsch Fabrice, Université Paul Valéry, UMR5267 Praxiling

Maturafi Lavie, Université Paul Valéry, UMR5267 Praxiling

Sauvage Jérémi, Université Paul Valéry, UMR5267 Praxiling

 

Schedule:

 

Submission deadline: 15 June 2014

Notification to authors: 15 July 2014

Journées d'Etudes de Pausologie: 16-17 October 2014

 

Submission:

 

Submissions to the Journées de Pausologie take the form of abstracts written in French, with a maximum length of 500 words (excluding bibliography), in Times New Roman, 12pt, single-spaced. Abstracts must be submitted in PDF format to the following addresses: fabrice.hirsch@univ-montp3.fr AND melissa.barkat@univ-montp3.fr. To preserve anonymity, the PDF file should contain only the title of the proposal, the abstract and the bibliography. The authors' names and affiliations must appear in the email but not in the PDF file.

 

A full paper will be requested after the Journées with a view to publication.

   

For any questions, write to: fabrice.hirsch@univ-montp3.fr

Back  Top

3-3-41(2014-10-16) Journées de pausologie, Montpellier, France
The Journées de Pausologie, to be held in Montpellier on 16-17 October 2014, welcome original and innovative work on the pause in phonetics, psycholinguistics and general linguistics.

Submissions (500-word abstracts) should be sent to fabrice.hirsch@univ-montp3.fr AND melissa.barkat@univ-montp3.fr by 15 June.

All information is available online: http://itic.univ-montp3.fr/pausologie/
Back  Top

3-3-42(2014-11-03) CfP ACM Multimedia 2014 - Area on Music, Speech, and Audio Processing in Multimedia, Orlando, Florida, USA
Call for short and long paper contributions for ACM Multimedia 2014 - 
Area on Music, Speech, and Audio Processing in Multimedia 
November 3-7, 2014 Orlando, Florida, USA 
(For general information and information on other areas check http://www.acmmm.org/2014/) 
As a core part of multimedia data, the acoustic modality is of great importance as a source of information that is orthogonal to other modalities such as video or text. It allows richer information to be extracted during content analysis, and it is a rich means of communicating information. We are seeking strong technical submissions revolving around music, speech and audio processing in multimedia. One topic of interest is the analysis of acoustic signals in order to extract information from multimedia content (e.g. what notes are being played, what is being said, or what sounds appear) or its context (e.g. the language spoken, the age and gender of the speaker, localization using sound). Another topic of interest is the synthesis of acoustic content for multimedia purposes (e.g. speech synthesis, singing voices, acoustic scene synthesis). Furthermore, we are also interested in ways to represent acoustic data as multimedia, for example in symbolic form (e.g. closed captioning of speech), in the form of sensor input and visual images (e.g. recordings of gestures in musical performances), or others. A further topic of interest is applications that involve the acoustic modality. The inclusion of acoustics opens up interesting possibilities for novel multimedia interfaces and user interactions. In addition, contextual, social and affective aspects play an important role when using acoustics, as can be seen, for example, in the consumption and enjoyment of music and in the sound design of cinematic productions.

All submissions should maintain a clear relation to multimedia: there should either be an explicit relation to multimedia items, applications or systems, or an application of a multimedia perspective in which information sources from different modalities are considered.
Topics of interest include, but are not limited to:
 · Multimedia audio analysis and synthesis
 · Multimedia audio indexing, search, and retrieval
 · Music, speech, and audio annotation, similarity measures, and evaluation
 · Multimodal and multimedia approaches to music, speech, and audio
 · Multimodal and multimedia context models for music, speech, and audio
 · Computational approaches to music, speech, and audio inspired by other domains (e.g. computer vision, information retrieval, musicology, psychology)
 · Multimedia localization using acoustic information
 · Social data, user models and personalization in music, speech, and audio
 · Music, audio, and aural aspects in multimedia user interfaces
 · Algorithms and applications of music, speech, and audio
 · New and interactive musical instruments, systems and other music, speech, and audio applications
 · Novel interaction interfaces using/with music, speech, and audio
 · Music, speech, and audio coding, transmission, and storage for multimedia applications
The deadline for long papers is March 31, 2014, and for short papers April 14, 2014.
For other deadlines please check http://www.acmmm.org/2014/important_dates.html
 
 
Back  Top

3-3-43(2014-12-01) CfP IEEE Global Conference on Signal and Information Processing - Atlanta Georgia 2014
IEEE GlobalSIP’14 – Call for Symposium Proposals
IEEE Global Conference on Signal and Information Processing - Atlanta Georgia 2014

Technical Program Chairs: Douglas Williams, Timothy Davidson, and Ghassan AlRegib
General Chairs: Geoffrey Li and Fred Juang

The IEEE Global Conference on Signal and Information Processing (GlobalSIP) is a recently launched flagship conference of the IEEE Signal Processing Society. GlobalSIP’14 will be held in Atlanta, Georgia, USA, during the week of December 1, 2014. The conference will focus broadly on signal and information processing with an emphasis on up-and-coming signal processing themes. The conference will feature world-class speakers, tutorials, exhibits, and technical sessions consisting of poster or oral presentations. GlobalSIP’14 will be comprised of colocated symposia selected competitively based on responses to this call-for-symposium proposals. Symposium topics may include, but are not limited to:

  • Signal processing in communications and networks, including green communication and optical communications
  • Image and video processing
  • Selected topics in speech and language processing
  • Acoustic array signal processing
  • Signal processing in security applications
  • Signal processing in finance
  • Signal processing in energy and power systems
  • Signal processing in genomics and bioengineering (physiological, pharmacological, and behavioral)
  • Neural signal processing
  • Selected topics in statistical signal processing
  • Seismic signal processing
  • Graph-theoretic signal processing
  • Machine learning and human machine interfaces
  • Compressed sensing, sparsity analysis, and applications
  • Big data processing, heterogeneous information processing, and informatics
  • Radar and array processing including localization and ranging techniques
  • Multimedia transmission, indexing and retrieval, and playback challenges
  • Hardware and real-time implementations
  • Other novel and significant applications of selected areas of signal processing

Symposium proposals should include:

  • the title of the symposium;
  • the length of the symposium (one day or two days);
  • the projected selectivity of the symposium;
  • paper length requirements (submission: 2 to 6 pages; final: 4-6 pages; invited papers may be longer);
  • names, addresses, and short CVs (up to 250 words) of the organizers, including the general organizers and the technical chairs;
  • an up-to-two-page description of the technical issues that the symposium will address, including timeliness and relevance to the signal processing community;
  • names of (potential) technical program committee members;
  • names of (potential) invited speakers (up to 2 for one-day symposia and 4 for two-day ones);
  • a draft call-for-papers.

Please package everything in a single PDF file. More detailed information can be found at http://renyi.ece.iastate.edu/globalsip2014/cfs.html

Symposium proposals should be emailed to Doug Williams (doug.williams@ece.gatech.edu) and Geoffrey Li (liye@ece.gatech.edu) according to the following timeline:

November 8, 2013: Symposium proposals due
November 22, 2013: Symposium selection decision notification
November 29, 2013: Final version of the call-for-papers for the accepted symposia due

Tentative timeline for paper submission:
May 16, 2014: Paper submission deadline (regular and invited)
June 27, 2014: Review results announced
September 5, 2014: Camera-ready regular and invited papers due

 

Back  Top

3-3-44(2014-12-07) 3rd Dialog State Tracking Challenge (DSTC3).

We are pleased to announce the opening of the third Dialog State Tracking Challenge (DSTC3). Complete information, including the challenge handbook, training data, evaluation scripts, and baseline trackers are available on the DSTC3 website:

http://camdial.org/~mh521/dstc/

The Dialog State Tracking Challenge (DSTC) is a research challenge focused on improving the state of the art in tracking the state of spoken dialog systems. State tracking refers to accurately estimating the user's goal as a dialog progresses. Accurate state tracking is desirable because it provides robustness to errors in speech recognition, and helps reduce ambiguity inherent in language within a temporal process like dialog.

In this challenge, participants are given labelled corpora of dialogs to develop state tracking algorithms. The trackers will then be evaluated on a common set of held-out dialogs which are released, un-labelled, during a one week period. This is a corpus-based challenge: participants do not need to implement a speech recognizer, a semantic parser, or an end-to-end dialog system.
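
To make the notion of state tracking concrete, here is a minimal illustrative sketch; the accumulation rule, function names, and example data are our own simplification for exposition, not the DSTC baseline tracker. It merges noisy per-turn SLU confidence scores into a normalized belief over slot values:

```python
from collections import defaultdict

def track(turns):
    """Toy dialog state tracker.

    turns: list of dicts, one per dialog turn, mapping
    (slot, value) -> SLU confidence in [0, 1].
    Returns, per slot, a normalized belief over values observed so far.
    """
    scores = defaultdict(lambda: defaultdict(float))
    for slu_hyps in turns:
        for (slot, value), conf in slu_hyps.items():
            # Naive evidence accumulation: sum confidences across turns,
            # so values heard repeatedly dominate one-off misrecognitions.
            scores[slot][value] += conf
    # Normalize each slot's accumulated scores into a distribution.
    return {
        slot: {v: s / sum(vals.values()) for v, s in vals.items()}
        for slot, vals in scores.items()
    }

# Two noisy turns: the user's food preference is recognised twice with
# moderate confidence, a competing hypothesis only once.
turns = [
    {("food", "italian"): 0.6, ("food", "indian"): 0.3},
    {("food", "italian"): 0.7, ("area", "north"): 0.9},
]
state = track(turns)
top_food = max(state["food"], key=state["food"].get)  # "italian"
```

This illustrates why state tracking adds robustness: even though no single turn is recognised with certainty, evidence pooled across the dialog converges on the correct goal.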

The first DSTC concluded in 2013 with 9 participating teams and a total of 27 entries; 9 papers were presented at SIGDIAL 2013, advancing the state of the art in several dimensions. DSTC2 introduced a completely new dataset in a new domain (restaurant information), with more complicated and dynamic dialog states that may change throughout the dialog. DSTC2 concluded a few months ago, again with 9 participating teams (about half of them new); results have been submitted to, and will be presented at, a special session at SIGDIAL 2014.

DSTC3 will focus on the task of adapting and expanding to a new domain, when there is a lot of labelled data in a smaller domain. The 'smaller domain' is the restaurants domain from DSTC2; the 'new extended domain' is a larger tourist information domain: DSTC3 includes restaurants and adds pubs and coffee shops, and more detail (slots) for restaurants relative to the DSTC2 data.

Participants are encouraged to submit papers describing their work to SLT 2014, whose deadline will be approx. 20 July. The organisers are awaiting confirmation of a proposed special session at the conference.

DSTC3 schedule:

- 4 April 2014 : Labelled tourist information seed set released

- 9 June 2014 : Unlabelled tourist information test set released

- 16 June 2014 : Tracker output on tourist information test set due

- 23 June 2014 : Results on tourist information test set given to participants

- 20 July 2014 : SLT paper deadline (approximate)

- 7-10 Dec 2014 : SLT workshop (Lake Tahoe, Nevada, USA)

The training data, scoring scripts, baselines, domain ontology and database are all available for public download. Prospective participants are strongly encouraged to join the mailing list to ensure they receive notifications of updates to data or scripts and are included in discussions about the challenge. To join, email listserv@lists.research.microsoft.com with 'subscribe DSTC' in the body of the message (without quotes).

Feel free to direct questions to the organizers. We hope you will consider participating!

DSTC3 organizers

Matt Henderson (lead) - Cambridge University [matthen@gmail.com]

Blaise Thomson - Cambridge University [brmt2@cam.ac.uk]

Jason D. Williams - Microsoft Research [jason.williams@microsoft.com]

DSTC3 advisory board

Bill Byrne - University of Cambridge

Paul Crook - Microsoft Research

Maxine Eskenazi - Carnegie Mellon University

Milica Gasic - University of Cambridge

Helen Hastie - Heriot-Watt University

Kee-Eung Kim - KAIST

Sungjin Lee - Carnegie Mellon University

Oliver Lemon - Heriot-Watt University

Olivier Pietquin - SUPELEC

Joelle Pineau - McGill University

Deepak Ramachandran - Nuance Communications

Brian Strope - Google

Steve Young - University of Cambridge

 
Back  Top

3-3-45(2014-12-07)The 2014 IEEE Spoken Language Technology Workshop (SLT 2014) , South Lake Tahoe, California/Nevada, USA

The 2014 IEEE Spoken Language Technology Workshop (SLT 2014) will be held in South Lake Tahoe, on the California/Nevada border, on December 7-10, 2014. The main theme of the workshop will be 'machine learning in spoken language technologies'. One of our goals is to increase both intra- and inter-community interaction, by means of (inter alia)

  • keynote/guest speakers from machine learning community (e.g. Neural Information Processing Systems (NIPS) and others);

  • online panel discussions before/during the conference;

  • miniSIGs - small discussion groups to get organized before/during /after the panel discussions or as independent SIG meetings;

  • highlight sessions, where 3-5 best papers will be presented orally.

 

Following tradition from the last two SLT workshops in 2010 (Berkeley, CA) and 2012 (Miami, FL), we are looking forward to hosting challenges and special or themed sessions.

Submission of papers in all areas of spoken language technology is encouraged, with emphasis on the following topics:

·      Speech recognition and synthesis

·      Spoken language understanding

·      Spoken dialog systems

·      Spoken document summarization

·      Machine translation for speech

·      Question answering from speech

·      Speech data mining

·      Spoken document retrieval

·      Spoken language databases

·      Multimodal processing

·      Human/computer interaction

·      Educational and healthcare applications

·      Assistive technologies

·      Natural Language Processing



Important Deadlines

Paper Submission Monday, July 21, 2014

Notification of Acceptance Friday, September 5, 2014

Demo Submission September 2014

Demo Acceptance October 2014

Early registration deadline October 17, 2014

Workshop December 7-10, 2014



Submission Procedure

 

Prospective authors are invited to submit full-length, 4-6 page papers, including figures and references, to the SLT 2014 website. All papers will be handled and reviewed electronically. Please note that the submission dates for papers are strict deadlines.

 

 

Back  Top

3-3-46(2014-12-23) CfP International Conference on Human Machine Interaction, New Delhi India

Call for papers

International Conference on Human Machine Interaction 2014 23 – 25, December 2014 http://intconfhmi.com

In association with SETIT, Sfax University, Tunisia, and the ASDF (Association of Scientists, Developers and Faculties) Chennai Chapter, we will organize the International Conference HMI 2014, which will be held in New Delhi, India.

Human Machine Interaction (HMI) is an annual research conference aimed at presenting current research. The idea of the conference is for scientists, scholars, engineers and students from universities around the world, as well as from industry, to present ongoing research activities and thereby foster research relations between universities and industry. HMI 2014 is co-sponsored by the Association of Scientists, Developers and Faculties and SETIT, Sfax University, Tunisia, with technical co-sponsorship from many other universities and institutes.
The HMI 2014 conference proceedings will be published in the ASDF Proceedings as one volume, included in the Engineering & Technology Digital Library, indexed by EBSCO, WorldCat and Google Scholar, and sent for review by Ei Compendex and ISI Proceedings. Selected papers will be recommended for publication in journals.

Area of Submission

 

  • Active Vision
  • Agents and Multi-Agent Systems
  • Applications of Perception
  • Artificial Intelligence
  • Brain Machine Interfaces
  • Cognitive Engineering
  • Collaborative Design and Manufacturing
  • Collaboration Technologies and Systems
  • Computer Graphics
  • Computer Vision
  • Cooperative Design
  • Dimensionality Reduction
  • Distributed Intelligent Systems
  • Ergonomics
  • Fuzzy Systems
  • Health Care
  • Human Centered Transportation System
  • Human Factors
  • Human Perception
  • Hybrid Intelligent System Design
  • Image Analysis
  • Intelligent Transportation
  • Knowledge Representation
  • Machine Learning
  • Material Appearance Modeling Medical Imaging
  • Mental Workload
  • Multimedia
  • Multiview Learning
  • Next Generation Network
  • Network Security and Management
  • Ontologies
  • Patient Safety
  • Pattern Recognition
  • Perceptual Factors
  • Physiological Indicators
  • Production Planning and Scheduling
  • Protocol Engineering
  • Semi-Supervised Learning
  • Service-Oriented Computing
  • Simulator Training
  • Systems Integration and Collaboration
  • Systems Safety and Security
  • Team Performance
  • Video Processing
  • Virtual Reality
  • Visualization

Topics of interest for HMI include, but are not limited to, the above.

Conference Registration Fees Rebate (Discount)

We are pleased to inform you that the organizing committee of HMI 2014 has allocated financial support for all participants from developing or emerging countries. This financial support, in the amount of 150 US dollars, is available to help participants attend HMI 2014.

You can find more details in: http://intconfhmi.com/register.html

  • Registration without discount:

    Author Registration (Full Paper): 250 USD
    Author Registration (Short Paper / Poster): 200 USD
    Listener Registration: 200 USD
    Extra Pages: 15 USD

  • Registration with discount:

    Author Registration (Full Paper): 100 USD
    Author Registration (Short Paper / Poster): 50 USD
    Listener Registration: 50 USD
    Extra Pages: 15 USD

 

We look forward to seeing you in India.

NB: A selection of post-conference excursions will take place over 5 days.

Examples: a 1-day tour to the Taj Mahal, Agra Fort and Mathura by AC bus (25 USD per person); a 1-day tour to Qutub Minar, Parliament, Lotus Temple, India Gate, Gandhi Smriti, Red Fort, Humayun's Tomb and Rajghat (25 USD per person).

 

Best Regards

 Mohamed Salim BOUHLEL General Co-Chair, HMI2014 Head of Research Unit: Sciences & Technologies of Image and Telecommunications ( Sfax University ) GSM +216 20 200005

 

Back  Top

3-3-47(2016-00-00) Bids invitation fore the conference Speech Prosody 2016 (SProSIG)
The Advisory Board of SProSIG, the Speech Prosody Special Interest Group, invites bids from sites interested in hosting
 its flagship conference Speech Prosody in 2016.  

The bid process will proceed as follows:

(1) Optional: Groups interested in submitting a bid are invited to express interest in an e-mail to the current SProSIG Secretary and/or President (e.g., by replying to this e-mail).

(2) Optional: Each group interested in hosting the conference is invited to give a presentation on May 23, 2014 at the Speech 
Prosody 2014 conference in Dublin.

(3) Required: Each bid should then be formalized in a written document, mailed to the secretary of SProSIG by June 15, 2014.  
These documents will be posted at sprosig.isle.illinois.edu for all SProSIG members to read.

(4) Selection of the site for Speech Prosody 2016 will then be conducted using an on-line ballot at http://sprosig.isle.illinois.edu.  
 
Each current member of SProSIG will be allowed to vote.

Bids to host Speech Prosody 2016 should include the following information:

(a) Names and affiliations of members of the organizing committee.

(b) Information about institutional support for the conference if any.

(c) Tentative location of the conference (city and, if possible, venue).  Oral and written presentation of the bid should highlight 
attributes that make both city and site suitable for hosting an international conference, including transportation to/from and
 within the city, lodging and dining options near the proposed venue, facilities in the proposed venue for a 300-person oral
 session and a 40-poster poster session, and any other attributes likely to be of interest to the members of SProSIG.

(d) Tentative dates of the proposed conference (typically four days in late May 2016)

(e) Proposed theme of the conference and/or proposed new session topics that will be included, along with existing session
 topics of the Speech Prosody conference, in the Conference Call for Papers.

Back  Top

3-3-48Announcing the Master of Science in Intelligent Information Systems

Carnegie Mellon University

 

The Master of Science in Intelligent Information Systems (MIIS) is a degree designed for students who want to rapidly master advanced content-analysis, mining, and intelligent information technologies prior to beginning or resuming leadership careers in industry and government. Just over half of the curriculum consists of graduate courses. The remainder provides direct, hands-on, project-oriented experience working closely with CMU faculty to build systems and solve problems using state-of-the-art algorithms, techniques, tools, and datasets. A typical MIIS student completes the program in one year (12 months) of full-time study at the Pittsburgh campus. Part-time and distance education options are available to students employed at affiliated companies. The application deadline for the Fall 2013 term is December 14, 2012. For more information about the program, please visit http://www.lti.cs.cmu.edu/education/msiis/overview.shtml

Back  Top

3-3-49CALL FOR PROPOSALS ICASSP 2019
CALL FOR PROPOSALS
ICASSP 2019

 

The IEEE Signal Processing Society is accepting proposals for the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP®). SPS Members are invited to submit a proposal to host ICASSP. If you are interested in submitting a proposal please contact Nicole Allen to get the forms and guidelines.

ICASSP is the world’s largest and most comprehensive technical conference focused on signal processing applications. The series is sponsored by the IEEE Signal Processing Society and has been held annually since 1976. The conference features world-class speakers, tutorials, exhibits, and over 120 lecture and poster sessions. ICASSP is a cooperative effort of the IEEE Signal Processing Technical Committees:

  • Audio and Acoustic Signal Processing
  • Bio Imaging and Signal Processing
  • Design and Implementation of Signal Processing Systems
  • Image, Video, and Multidimensional Signal Processing
  • Information Forensics and Security
  • Machine Learning for Signal Processing
  • Multimedia Signal Processing
  • Sensor Array and Multichannel
  • Signal Processing for Communications and Networking
  • Signal Processing Theory and Methods
  • Speech and Language Processing
  • Standing Committee on Industry DSP Technology

To submit a proposal, send a notice of intent to bid to the Vice President – Conferences and the Conference Services Coordinator using the email addresses shown below. Include in the notice your contact information and the proposed location. The Signal Processing Society Conference Services staff will issue the proposal submission form and guidelines upon receipt of the letter of intent. The form must be completed and the proposal submitted to the Conference Services staff by 21 March 2014. Proposals will be assessed by the Conference Board Executive Subcommittee. Accepted bidding teams (finalists) will be invited to present at the Conference Board meeting held at ICASSP 2014, May 4-9, 2014, in Florence, Italy.

Professor Wan-Chi Siu
IEEE Signal Processing Society
VP-Conferences (2012 – 2014)
enwcsiu@polyu.edu.hk
Nicole Allen
IEEE Signal Processing Society
Conference Services Coordinator
n.allen@ieee.org
Back  Top

3-3-50Call for Participation: URGENT: NTCIR-11 Spoken Query and Spoken Document Retrieval Task (SpokenQuery&Doc)

Call for Participation

NTCIR-11 Spoken Query and Spoken Document Retrieval Task (SpokenQuery&Doc)

http://www.nlp.cs.tut.ac.jp/ntcir11

 

(Although the official deadline of the NTCIR-11 task registration is 20th January, the organizers will accept registration for SpokenQuery&Doc until the end of March, 2014.)

INTRODUCTION

The NTCIR-11 SpokenQuery&Doc task will evaluate information retrieval systems that make use of speech technologies for query input and document retrieval, i.e. speech-driven information retrieval and spoken document retrieval.

Spoken document retrieval (SDR) in the SpokenQuery&Doc task builds on the previous NTCIR-9 SpokenDoc and NTCIR-10 SpokenDoc-2 tasks, and will evaluate two SDR tasks: spoken term detection (STD) and spoken content retrieval (SCR). Common search topics will be used for STD and SCR, which will enable component and whole-system evaluations of STD and SCR.

The emergence of mobile computing devices means that it is increasingly desirable to interact with computing applications via speech input. SpokenQuery&Doc provides the first benchmark evaluation using spontaneously spoken queries instead of typed text queries. Here, a spontaneously spoken query means that the query is not carefully arranged before speaking and is spoken in a natural spontaneous style, which tends to be longer than a typed text query. Note that this spontaneousness contrasts with spoken queries in the form of isolated keywords, which are carefully selected in advance; the two represent very different situations in terms of speech processing and composition. One advantage of spontaneously spoken queries as input to retrieval systems is that they enable users to easily submit long queries which give systems rich clues for retrieval, although their spontaneous nature means that they are harder to recognise reliably.

TASK OVERVIEW

The target data for the SpokenQuery&Doc task is recordings of the first to seventh annual Spoken Document Processing Workshop. For this speech data, manual and automatic transcriptions (with several ASR conditions) will be provided to task participants. These enable researchers interested in SDR, but without access to their own ASR system, to participate in the tasks.

The main task of SpokenQuery&Doc is searching spoken documents for content

relevant to spontaneously spoken queries (spoken-query-driven

spoken content retrieval: SQ-SCR). Partial sub-tasks of the main task will also

be conducted. The sub-tasks include a spoken term detection task for the spoken

queries (SQ-STD), and a SCR task from the search results of SQ-STD (STD-SCR).

For these tasks, manual and automatic transcriptions of the spoken queries are

also to be provided. These enable participants from the previous SpokenDoc tasks

to participate using text queries. For the SQ-SCR and STD-SCR

tasks, a target search unit is either a speech segment that is spoken within a

presentation slide (slide retrieval task) or a boundary-free speech segment

(passage retrieval task).
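To make the STD sub-task concrete, the following is a minimal, hypothetical sketch (not the official NTCIR-11 baseline or evaluation code): it searches ASR transcripts, represented here as word/confidence hypothesis pairs, for a query term and ranks detections by recognizer confidence. All names and the data format are illustrative assumptions.

```python
# Illustrative spoken term detection (STD) sketch over ASR transcripts.
# Each "spoken document" is modeled as a list of (word, confidence) pairs
# produced by a recognizer; a hit is reported when the query term appears
# with confidence above a threshold.

def spoken_term_detection(query_term, transcripts, min_confidence=0.5):
    """Return (doc_id, position, confidence) hits for query_term."""
    hits = []
    for doc_id, words in transcripts.items():
        for pos, (word, conf) in enumerate(words):
            if word == query_term and conf >= min_confidence:
                hits.append((doc_id, pos, conf))
    # Rank detections by ASR confidence, best first.
    return sorted(hits, key=lambda h: h[2], reverse=True)

# Toy example: two "spoken documents" as word/confidence pairs.
transcripts = {
    "talk1": [("speech", 0.9), ("retrieval", 0.8), ("speech", 0.4)],
    "talk2": [("spoken", 0.7), ("speech", 0.6)],
}
print(spoken_term_detection("speech", transcripts))
# [('talk1', 0, 0.9), ('talk2', 1, 0.6)]
```

A real STD system would additionally handle recognition errors, for example by matching against lattices or subword units rather than 1-best word strings; the sketch only illustrates the task's input/output shape.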

FOR MORE INFORMATION

Please visit: http://www.nlp.cs.tut.ac.jp/ntcir11

TASK REGISTRATION

To register for the SpokenQuery&Doc please visit the main NTCIR-11 website at:

http://research.nii.ac.jp/ntcir/ntcir-11/

Registration deadline: 20th January 2014

(Although the official deadline of the NTCIR-11 task registration is 20th

January, the organizers will accept the registration for SpokenQuery&Doc until

the end of March, 2014.)

ORGANIZERS

Tomoyosi Akiba (Toyohashi University of Technology, Japan)

Hiromitsu Nishizaki (University of Yamanashi, Japan)

Hiroaki Nanjo (Ryukoku University, Japan)

Gareth Jones (Dublin City University, Ireland)

If you have any questions, please send e-mails to the task organizers mailing

list: ntcadm-spokenqueryanddoc@nlp.cs.tut.ac.jp

======================================================================


3-3-51Master in linguistics (Aix-Marseille) France

Master's in Linguistics (Aix-Marseille Université): Linguistic Theories, Field Linguistics and Experimentation (TheLiTEx)

TheLiTEx offers advanced training in linguistics. This specialty presents, in an original way, the links between corpus linguistics and scientific experimentation on the one hand, and between laboratory and field methodologies on the other. On the basis of a common set of courses (offered in the first year), TheLiTEx offers two paths: Experimental Linguistics (LEx) and Language Contact & Typology (LCT).

The goal of LEx is the study of language, speech and discourse through scientific experimentation and quantitative modeling of linguistic phenomena and behavior. It takes a multidisciplinary approach which borrows its methodologies from the physical and biological human sciences and its tools from computer science, clinical approaches, engineering, etc. Courses offered include semantics, phonetics/phonology, morphology, syntax, pragmatics, prosody and intonation, and the interfaces between these linguistic levels, in their interactions with the real world and the individual, from a biological, cognitive and social perspective. In the second year, more specialized courses are offered, such as Language and the Brain and Laboratory Phonology.

LCT aims at understanding the world's linguistic diversity, focusing on language contact, language change and variation (European, Asian and African languages, Creoles, sign languages, etc.). This specialty addresses, from a linguistic and sociolinguistic perspective, issues of field linguistics, taking into account both the human and socio-cultural dimensions of language (speakers, communities). It also focuses on documenting rare and endangered languages and on fostering reflection on linguistic minorities.

This path also provides expertise and intervention models (language policy and planning) in order to train students in the management of contact phenomena and their impact on speakers, languages and societies.

More info at: http://thelitex.hypotheses.org/678


3-3-52NEW MASTER IN BRAIN AND COGNITION AT UNIVERSITAT POMPEU FABRA, BARCELONA

NEW MASTER IN BRAIN AND COGNITION AT UNIVERSITAT POMPEU FABRA, BARCELONA

A new, one-year Master in Brain and Cognition will begin its activities in the Academic Year 2014-15 in Barcelona, Spain, organized by the Universitat Pompeu Fabra (http://www.upf.edu/mbc/).

The core of the master's programme is composed of the research groups at UPF's Center for Brain and Cognition  (http://cbc.upf.edu). These groups are directed by renowned scientists in areas such as computational neuroscience, cognitive neuroscience, psycholinguistics, vision, multisensory perception, human development and comparative cognition. Students will  be exposed to the ongoing research projects at the Center for Brain and Cognition and will be integrated in one of its main research lines, where they will conduct original research for their final project.

Application period is now open. Please visit the Master web page or contact luca.bonatti@upf.edu for further information.


3-3-53Research in Interactive Virtual Experiences at USC CA USA

REU Site: Research in Interactive Virtual Experiences

--------------------------------------------------------------------

 

The Institute for Creative Technologies (ICT) offers a 10-week summer research program for undergraduates in interactive virtual experiences. A multidisciplinary research institute affiliated with the University of Southern California, the ICT was established in 1999 to combine leading academic researchers in computing with the creative talents of Hollywood and the video game industry. Having grown to encompass a total of 170 faculty, staff, and students in a diverse array of fields, the ICT represents a unique interdisciplinary community brought together with a core unifying mission: advancing the state-of-the-art for creating virtual reality experiences so compelling that people will react as if they were real.

 

Reflecting the interdisciplinary nature of ICT research, we welcome applications from students in computer science, as well as many other fields, such as psychology, art/animation, interactive media, linguistics, and communications. Undergraduates will join a team of students, research staff, and faculty in one of several labs focusing on different aspects of interactive virtual experiences. In addition to participating in seminars and social events, students will also prepare a final written report and present their projects to the rest of the institute at the end-of-summer research fair.

 

Students will receive $5000 over ten weeks, plus an additional $2800 stipend for housing and living expenses.  Non-local students can also be reimbursed for travel up to $600.  The ICT is located in West Los Angeles, just north of LAX and only 10 minutes from the beach.

 

This Research Experiences for Undergraduates (REU) site is supported by a grant from the National Science Foundation. The site is expected to begin summer 2013, pending final award issuance.

 

Students can apply online at: http://ict.usc.edu/reu/

Application deadline: March 31, 2013

 

For more information, please contact Evan Suma at reu@ict.usc.edu.



