ISCApad #194

Monday, August 04, 2014 by Chris Wellekens

3 Events
3-1 ISCA Events
3-1-1(2014-01) INTERSPEECH 2014 Newsletter January 2014

English in Singapore

When attending INTERSPEECH 2014 in Singapore, you will be glad to know that English is spoken in every corner of the island. Indeed, it is one of the four official languages and the second most spoken language in Singapore's homes. According to the most recent census, in 2010, 89% of the population was literate in English, making Singapore a very convenient place for tourism, shopping, research, and conferences.

Historical context of English in Singapore

The history of Singapore begins with the first settlements, established in the 13th century AD [2]. Over the years, Singapore belonged to different kingdoms and sultanates until the 19th century, when modern Singapore was founded under the impetus of the British Empire. In 1819, Sir Thomas Stamford Raffles landed in Singapore and established a treaty with the local rulers to develop a new trading station. From that date, Singapore's importance grew continuously under the influence of Sir Raffles who, despite spending little time on the island, was the real builder of modern Singapore. Singapore remained under British administration until the Second World War and became a Crown Colony after the end of the conflict. There followed a brief period during which Singapore was part of the Federation of Malaysia, before it became independent in 1965 and joined the Commonwealth of Nations.

From this history, Singapore retained English as one of its four official languages, as well as many landmarks that deserve a visit besides INTERSPEECH. Amongst them, the Singapore Botanic Gardens, founded in 1859, are internationally renowned [3]. This 74-hectare urban garden continues the tradition of tropical colonial gardens, begun on the island by Sir Raffles, of cultivating and preserving local plants. Including a rain forest, several lakes, an orchid garden and a performance stage, the Botanic Gardens are a very popular place to enjoy free concerts on weekend afternoons.

Another green spot in the “City in a Garden”, the Padang (“field” in Malay), was also created by Sir Raffles, once again, who planned to reserve the space for public purposes. The place is now famous for the two cricket clubs founded in 1870 and 1883 at either end of the field, and for the games that can be watched there on weekends.

Amongst the numerous landmarks inherited from British colonial rule, the most famous include St Andrew's Anglican Cathedral, the Victoria Theatre, the Fullerton Building, Singapore's City Hall, the Old Parliament House, the Central Fire Station, and the many black-and-white bungalows built from the 19th century onwards for wealthy expatriate families. Some of those bungalows, now converted into restaurants, offer a peaceful atmosphere in which to enjoy a local dinner.

The role of English in Singapore

English has a special place in Singapore, as it is the only official language which is not a “mother tongue”. Indeed, Alsagoff [6] framed English as “cultureless” in the Singaporean context, in that it is “disassociated from Western culture”. This cultural voiding makes English an ethnically neutral language, used as a lingua franca between ethnic groups [5] after replacing local Malay in this role [4]. Interestingly, English is the only compulsory language of education, and its status in school is that of First Language, as opposed to the Second Language status assigned to the other official languages. By promoting English as the working language, the government intends to neither advantage nor disadvantage any ethnic group.

Nevertheless, the theoretical equality between the four official languages stated in the constitution is not always observed in practice. For instance, English overwhelmingly dominates parliamentary business, some government websites are available only in English, and all legislation is in English only [4].

Singapore English

Standard Singapore English is very close to British English, even though Singapore is highly cosmopolitan, with 42% of the population born outside the country. Nevertheless, a new standard of pronunciation has been emerging recently [1]. Interestingly, this pronunciation is independent of any external standard, and some aspects of it cannot be predicted by reference to British English or any other external variety of English.

The other form of English that you will hear in Singapore is known as Singlish. It is a colorful creole that includes words from the many languages spoken in Singapore, such as various Chinese dialects (Hokkien, Teochew and Cantonese), Malay and Tamil. Much might be said about Singlish, and a later newsletter will be dedicated to this local variety. Don't miss it!

[1] Deterding, David (2003). 'Emergent patterns in the vowels of Singapore English'. National Institute of Education, Singapore. Retrieved 7 June 2013.

[2] http://www.yoursingapore.com/content/traveller/en/browse/aboutsingapore/a-brief-history.html (online January 7th, 2014)

[3] http://whc.unesco.org/en/tentativelists/5786/ (online January 7th, 2014)

[4] Leimgruber, J. R. (2013). The management of multilingualism in a city-state: Language policy in Singapore. In I. G. Peter Siemund, Multilingualism and Language Contact in Urban Areas: Acquisition development, teaching, communication (pp. 229-258). Amsterdam: John Benjamins.

[5] Harada, Shinichi. 'The Roles of Singapore Standard English and Singlish.' 情報研究 40 (2009): 69-81.

[6] Alsagoff, L. (2007). Singlish: Negotiating culture, capital and identity. In Language, Capital, Culture: Critical studies of language and education in Singapore (pp. 25-46). Rotterdam: Sense Publishers.


3-1-2(2014-02) INTERSPEECH 2014 Newsletter February 2014

At the southern tip of the Malay Peninsula...

…INTERSPEECH 2014 will be held in Singapore, between Malaysia and Indonesia. In the constitution, Malays are acknowledged as the “indigenous people of Singapore”. Indeed, Malays are the predominant ethnic group inhabiting the Malay Peninsula, Eastern Sumatra, Brunei, coastal Borneo, part of Thailand, the southern Burmese coast and Singapore. You now have a better understanding of why Malay is one of the four official languages – and the only national language – of Singapore.

A Malay history of Singapore

It is said that the city of Singapore was founded in 1299 CE by a prince from Palembang (South Sumatra, Indonesia), a descendant of Alexander the Great. According to the legend, the prince named the city Singapura (“Lion City”) after sighting a beast on the island. While it is highly doubtful that any lion ever lived in Singapore outside the zoo, another story tells that the last surviving tiger in Singapore was shot at the bar of the Raffles Hotel in 1902.

Despite this auspicious foundation, the population of Pulau Ujong (the “island at the end”) did not exceed a thousand inhabitants when, in 1819, Sir Thomas Raffles decided to establish a new port to reinforce British trade between China and India. At that time, the population consisted of different Malay groups (Orang Kallang, Orang Seletar, Orang Gelam, Orang Laut) and a few Chinese. Nowadays, Malays account for 13.3% of Singapore's population, with origins as diverse as Johor and the Riau Islands (for the Malays proper), Java (Javanese), Bawean Island (Baweanese), the Celebes (Bugis) and Sumatra (Batak and Minangkabau).

Malay language

With almost 220 million speakers, the Malay language in its various forms unites the fifth largest language community in the world [1]. The origins of Malay can be traced back to the very first Austronesian languages, around 2000 BCE [2]. Through the centuries, the major Indian religions brought a number of Sanskrit and Persian words into the Malay vocabulary, while the Islamization of South-East Asia added Arabic influences [3]. Later on, the languages of the colonial powers (mainly Dutch and English) and of migrants (Chinese and Tamil) contributed to the diversity of Malay influences [4, 5]. In return, Malay words have been borrowed into other languages; English examples include paddy, orangutan, babirusa, cockatoo, compound, durian and rambutan.

During the golden age of the Malay empires, Malay gained a foothold in the territories of modern Malaysia and Indonesia, where it became a vehicle for trade and business. Today, Malay is an official language in Malaysia, Indonesia, Brunei and Singapore, and is also spoken in southern Thailand, the Philippines, and the Cocos and Christmas Islands in Australia [6].

Malay has 6 vowels, 3 diphthongs and 27 consonants [1,5]. As an agglutinative language, its vocabulary can be enriched by adding affixes to root words [7]. Affixation in Malay consists of prefixation, infixation, suffixation or circumfixation1 (see the sketch below). Malay also has two proclitics, four enclitics and three particles that may be attached to an affixed word [8].

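To make these affixation patterns concrete, here is a minimal Python sketch (not from the newsletter) that builds surface forms by plain string concatenation. The roots and affixes used (ber-, -an, ke-…-an) are standard textbook examples, but real Malay affixation also involves morphophonemic changes, such as the nasal assimilation of the meN- prefix, which this naive sketch ignores.

    # Minimal sketch of three Malay affixation types.
    # Naive concatenation only; real morphophonemic rules are ignored.

    def prefixation(root, pre):
        # e.g. ber- + main -> bermain ('to play')
        return pre + root

    def suffixation(root, suf):
        # e.g. makan + -an -> makanan ('food')
        return root + suf

    def circumfixation(root, pre, suf):
        # Circumfixation adds material on both sides of the root at once,
        # e.g. ke- + adil + -an -> keadilan ('justice').
        return pre + root + suf

    print(prefixation("main", "ber"))           # bermain
    print(suffixation("makan", "an"))           # makanan
    print(circumfixation("adil", "ke", "an"))   # keadilan
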
In Singapore, Malaysia, Brunei and Indonesia, Malay is officially written in the Latin alphabet (Rumi), but an Arabic-derived script called Jawi is co-official in Brunei and Malaysia.

Bahasa Melayu in Singapore

Bahasa Melayu (the Malay language) is one of the four official languages of Singapore and its ceremonial national language, used in the national anthem and for military commands. In addition, several creoles are spoken across the island. Amongst them, Bahasa Melayu Pasar, or Bazaar Malay, a creole of Malay and Chinese, used to be the lingua franca and the language of trade between communities [4, 5]. Baba Malay, another variety of Malay creole, influenced by Hokkien and Bazaar Malay, is still spoken by around 10,000 people in Singapore.

Today, Bahasa Melayu is the lingua franca among the Malay, Javanese, Boyanese and other Indonesian groups and some Arabs in Singapore. It is used as a means of transmitting familial and religious values amongst the Malay community, as well as in madrasahs (religious schools) and mosques. However, with 35% of Malay pupils predominantly speaking English at home and a majority of Singaporeans being bilingual in English, Malay is facing competition from English, which is taught as the first language [5].

Selamat datang ke Singapura / Welcome to Singapore

Opportunities to discover Malay culture are everywhere in Singapore. Depending on the time and place, you might want to taste Malay cuisine or one of the succulent Malay cookies while walking the streets of Kampong Glam or visiting the Malay Heritage Centre.

[1] Tan, Tien-Ping, et al. 'MASS: A Malay language LVCSR corpus resource.' Speech Database and Assessments, 2009 Oriental COCOSDA International Conference on. IEEE, 2009.

[2] http://en.wikipedia.org/wiki/History_of_the_Malay_language#Modern_Malay_.2820th_century_CE.29

[3] http://en.wikipedia.org/wiki/Malay_language

[4] http://en.wikipedia.org/wiki/Comparison_of_Malaysian_and_Indonesian

[5] http://en.wikipedia.org/wiki/List_of_loanwords_in_Malay

[6] http://www.kwintessential.co.uk/resources/global-etiquette/malaysia.html

[7] B. Ranaivo-Malacon, 'Computational Analysis of Affixed Words in Malay Language,' presented at International Symposium on Malay /Indonesian Linguistics, Penang, 2004 .

[8] http://www-01.sil.org/linguistics/GlossaryOfLinguisticTerms/WhatIsACliticGrammar.htm

1 Circumfixation here refers to the simultaneous addition of morphological units, expressing a single meaning or category, on both the left and right sides of the root word.


3-1-3(2014-03) INTERSPEECH 2014 Newsletter March 2014

Tamil and Indian Languages in Singapore

Our fifth step towards INTERSPEECH 2014 brings us to the fourth official language of Singapore: Tamil. Today, Indians constitute 9% of the population of Singaporean citizens and permanent residents and form the third largest ethnic group in Singapore, although the origins of Singaporean Indians are diverse. Usually locally born, they are second-, third-, fourth- or even fifth-generation descendants of Punjabi-, Hindi-, Sindhi- and Gujarati-speaking migrants from Northern India and of Malayalam-, Telugu- and Tamil-speaking migrants from Southern India. This latter group is the core of the Singaporean-Indian population, accounting for 58% of the Indian community [2, 5].

Before 1819 and Sir Raffles*,

Indianised kingdoms, such as Srivijaya and Majapahit, radiated over South-East Asia. Influenced by Hindu and Buddhist culture, a large area including Cambodia, Thailand, Malaysia, part of Indonesia and Singapore formed Greater India. From this period Singapore kept some of its most important pre-colonial artifacts, such as the Singapore Stone. It is also reported that the hill of Fort Canning was chosen for the first settlement as a reference to the Hindu concept of Mount Meru, which was associated with kingship in Indian culture [1].

Under British rule,

Indian migrants arrived in Singapore from different parts of India to work as clerks, soldiers, traders or English teachers. By 1824, 7% of the population was Indian (756 residents). The Indian share of Singapore's population increased until 1860 when, at 16%, it overtook the Malay community to become the second largest ethnic group. Due to the nature of this migration, Indians in Singapore were predominantly adult men; a settled community, with a more balanced gender and age ratio, only emerged by the mid-20th century [2]. Although the Indian community kept growing over the following century, its share of the Singaporean population decreased until the 1980s, especially after the British withdrew their troops following Singapore's independence in 1965.

After 1980, the immigration policy aimed at attracting educated people from other Asian countries to settle in Singapore. This change made the Indian share of the population grow from 6.4% to 9%. In addition to this residential population, many ethnic Indian migrant workers (Bangladeshis, Sri Lankans, Malaysian Indians or Indians from India) come to work in Singapore temporarily [3].

Tamil language

is one of the longest-surviving classical languages in the world [8]. Existing for over 2,000 years, Tamil has a rich literary history and was the first Indian language to be declared a classical language by the Government of India, in 2004. The earliest records of written Tamil date from around the 2nd century BC and, despite a significant amount of grammatical and syntactic change, the language demonstrates continuity across two millennia.

Tamil is one of the most widely spoken languages of the Dravidian family, with important groups of speakers in Malaysia, the Philippines, Mauritius, South Africa, Indonesia, Thailand, Burma, Réunion and Vietnam. Significant communities can also be found in Canada, England, Fiji, Germany, the Netherlands and the United States. It is an official language of the Indian state of Tamil Nadu and of the territories of Puducherry and the Andaman and Nicobar Islands, as well as of Sri Lanka and Singapore.

Like Malay, another local language, Tamil is agglutinative: affixes are added to words to mark noun class, number and case, or verb tense, person, number, mood and voice [7]. Like Finnish, not a local language, Tamil sets no limit to the length and extent of agglutination, which leads to long words with a large number of affixes whose translation might require several sentences in other languages.

The phonology of Tamil is characterized by the use of retroflex consonants and multiple rhotics. Native grammarians classify phonemes into vowels, consonants and a secondary character called āytam, an allophone of /r/ or /s/ at the end of an utterance. Vowels are called uyireḻuttu (uyir – life, eḻuttu – letter) and are classified into short (kuṟil) and long (neṭil) vowels (with five of each type) plus two diphthongs. Unlike most Indian languages, Tamil does not distinguish aspirated and unaspirated consonants. Consonants are called meyyeḻuttu (mey – body, eḻuttu – letters) and fall into three categories: valliṉam (hard), melliṉam (soft or nasal) and iṭayiṉam (medium). Voiced and unvoiced consonants are not distinguished either; voicing is determined by the position of the consonant in the word.

Tamil writing currently includes twelve vowels, eighteen consonants and one special character for the āytam, which combine to form a total of 247 characters (the short computation below checks this arithmetic).

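The total follows from the structure of the script: each consonant combines with each vowel to form a compound consonant–vowel character, and these combinations are counted together with the plain vowels, the plain consonants and the āytam. A quick sketch of the arithmetic, using only the figures quoted above:

    # Tamil script character count from the figures quoted above.
    vowels = 12      # 5 short + 5 long vowels + 2 diphthongs
    consonants = 18
    aytam = 1        # the special secondary character
    compounds = consonants * vowels   # 216 consonant-vowel combinations
    total = vowels + consonants + compounds + aytam
    print(total)     # 247
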
In Singapore,

Among the Indian residents of Singapore, 38.8% speak Tamil daily, 39% speak English, 11% speak Malay, and the remaining 11% speak other Indian languages [2, 4]. Tamil is one of the two Indian languages taught as a second language (mother tongue) in public schools, together with Hindi. It is also used in daily newspapers, free-to-air and cable television, radio channels, cinema and theater [5].

In the multicultural environment of Singapore, Tamil influences the other local languages and vice versa. There is especially strong interaction with Malay and with the colloquial Singaporean English known as Singlish. Singaporean usage of Tamil includes some words from English and Malay, while certain words and phrases that are considered archaic in India remain in use in Singapore [2].

During your stay in Singapore,

you can easily get to know Tamil culture through its many facets. A walk in Little India, whose architecture has been protected since 1989, is a great opportunity to be exposed to Tamil music and lifestyle.

The two-storey shop-houses of Singapore's Indian hub host some of the best ambassadors of Indian cuisine. Here you will find the local version of Tamil cuisine, which has evolved in response to local tastes and the influence of the other cuisines present in Singapore. Conversely, other local traditions, such as Singapore-Malay and Peranakan cuisine, include elements of Indian cooking. Must-try Singaporean Tamil dishes include achar, curry fish head, rojak, Indian mee goreng, murtabak, roti john, roti prata and teh tarik. Note that other Indian cuisines, from Northern India, can also be found.

[1] http://en.wikipedia.org/wiki/History_of_Indians_in_Singapore

[2] http://en.wikipedia.org/wiki/Indians_in_Singapore

[3] Leow, Bee Geok (2001). Census of Population 2000: Demographic Characteristics. p.47-49.

[4] Singapore Census 2010

[5] http://en.wikipedia.org/wiki/Indian_languages_in_Singapore

[6] http://en.wikipedia.org/wiki/Dravidian_language

[7] http://en.wikipedia.org/wiki/Tamil_language

[8] http://en.wikipedia.org/wiki/Classical_language

* Remember the third step to Singapore


3-1-4(2014-04) INTERSPEECH 2014 Newsletter April 2014

Multilingualism in Singapore

In September this year, when attending INTERSPEECH, get ready for a stimulating multilingual experience. Every year at INTERSPEECH one can hear many languages from all over the world and meet a number of multilingual researchers. Unlike other editions of the conference, however, INTERSPEECH 2014 will be held in a highly multilingual environment. From 2000 until 2007, Singapore was ranked the most globalized nation in the world five times1, considering flows of goods, services, people and communications. Indeed, in addition to the four official languages, one can hear languages from all five continents in the wet markets and shopping malls of Singapore. What is more interesting is that many Singaporeans code-switch from one language to another naturally and effortlessly.

To speak or not to speak a language

Multilingualism is the ability to use more than one language. When the number of languages is reduced to two, which is the most common form, one talks of bilingualism. There are many ways to use a language, so deciding on the minimum abilities a person should have to be considered bilingual is a difficult question. For a long time, linguists limited the definition of bilingual to individuals with native competency in two languages. This very restrictive definition, which assimilates bilingualism to ambilingualism, has now been commonly extended. In its current interpretation, a bilingual person is one who can function at some level in two languages. Whether functioning consists of reading, speaking or, in the case of receptive bilinguals, just understanding does not matter. The degree to which the bilingual subject can interact does not matter either, and thus the ability to ask your way in Bahasa Melayu towards a famous conference venue, or to read a map in Mandarin Chinese, makes you bilingual (as you read these lines).

Bilingualism and Multilingualism

Amongst the most famous multilingual speakers is Giuseppe Mezzofanti, a 19th-century Italian cardinal reputed to have spoken 72 languages. If you consider a claim of 72 languages a bit too gimmicky, consider the case of a Hungarian interpreter during the Cold War who was able to speak 16 languages (including Chinese and Russian) by the age of 86 [1]. Nevertheless, not all multilinguals are hyperpolyglots, as being able to learn 12 languages or more is not so common.

The complex mechanism of learning a new language is not yet clear, and many questions remain regarding a possible age limitation and the relationship between already mastered languages and the ease of learning a new one. Nevertheless, before considering learning an additional language, you should be aware that this is a complicated process with many effects. It might of course open your mind to other cultures and ways of thinking but, more importantly, it can deeply modify your brain. Neuroscience is a very active field when it comes to multilingualism: the powerful imaging tools available, as well as the observation of subjects affected by trauma, have led to a better understanding of the language learning process. Different language areas have been located within the brain, and an increased plasticity of the overall structure has been demonstrated in multilingual speakers [2, 3]. Interestingly, the brain structure of simultaneous bilinguals, who learned two languages without formal education during childhood, is similar to that of monolingual subjects. On the contrary, learning a second language after gaining proficiency in the first modifies the brain structure in an age-dependent manner [4].

Amongst the benefits of multilingualism, it has been shown to increase the ability to detect grammatical errors and to improve reading ability. In the case of bimodal subjects, who use both a spoken language and a signed language, bilingualism improves the brain's executive function, which directs the attention processes used for planning, solving problems and performing various mentally demanding tasks [2].

When it comes to society, multilingualism means having several languages spoken in a small area; the speakers do not have to interact or to be multilingual themselves. This phenomenon is observed in many countries and cities around the world and can take different forms. When a structural distribution of languages exists in the society, one talks of polyglossia. Multipart-lingualism refers to the case where most speakers are monolingual but speak different languages, while omnilingualism, the least common, describes the situation where no structural distribution can be observed and it is nearly impossible to predict which language is going to be spoken in a given context. It is the first of these, polyglossia, that you are going to experience in Singapore.

Multilingualism in Singapore

The city-state of Singapore was born from multiculturalism and multilingualism. No wonder, then, that the choice of four official languages was thought to be a central piece of communal harmony. In 1966, considering the lack of natural resources and the dominance of international trade in the local economy, Singaporean leaders decided to reinforce English as a medium of economic development [5]. In 1987, English was officially acknowledged as the first language, while the other official languages were referred to as mother tongues [6]. Singapore's bilingualism is thus described as “English-knowing” because of the central role of English [7].

The success of Singapore is said to be partly the result of this language policy, which fueled the globalization of the Lion City [8]. Indeed, the promotion of English as the common neutral language amongst ethnic groups facilitated Singapore's integration into the world economy. On the other hand, the predominance of English has raised concerns about the decreasing usage of the mother tongues and the demise of traditional cultural values [8].

In the last 30 years, language education has been undertaken by the state as one way to control globalization and to reduce the impact of Western culture, which tends to displace Asian culture [8]. The growing importance of Western culture in Singapore is reflected by the shift in home languages towards English. To reinforce the Asian cultural identity, Singapore's government has therefore emphasized the learning of mother tongues. This policy is considered controversial by some, as it led to the popularity of Mandarin Chinese and Bahasa Melayu at the expense of many other Chinese and Malay language varieties. It is no doubt a delicate and challenging trade-off between preserving language diversity and enforcing common languages for the convenience of communication and economic development.

Moving away from a language policy driven by economic development will probably take time. The implicit role of languages in Singapore's multi-ethnic society is significant yet complex. However, there is no doubt that Singaporeans consider multilingualism a major component of their national identity, one that relies not only on the four official languages. One way to realize this during your stay with us is to ask any Singaporean about her or his language background and to get immersed in the rich diversity of languages spoken in Singapore.

[1] http://www.linguisticsociety.org/resource/multilingualism (accessed on 7 April, 2014)

[2] http://en.wikipedia.org/wiki/Cognitive_advantages_to_bilingualism (accessed on 7 April, 2014)

[3] http://en.wikipedia.org/wiki/Neuroscience_of_multilingualism (accessed on 7 April, 2014)

[4] Klein, D., Mok, K., Chen, J. K., & Watkins, K. E. (2013). Age of language learning shapes brain structure: A cortical thickness study of bilingual and monolingual individuals. Brain and language.

[5] 'Interview: Chinese Language education in Singapore faces new opportunities'. People's Daily Online. 2005-05-13. (accessed on 7 April, 2014)

[6] Pakir, A. (2001). Bilingual education with English as an official language: Sociocultural implications. Georgetown University Round Table on Languages and Linguistics 1999, 341.

[7] Tupas, R. (2011). English knowing bilingualism in Singapore: Economic pragmatics, ethnic relations and class. English language education across greater China, 46-69.

[8] http://www.unesco.org/new/en/culture/themes/cultural-diversity/languages-and-multilingualism/ (accessed on 7 April, 2014)

1 A.T. Kearney / Foreign Policy Globalization Index, www.foreignpolicy.com (accessed April 3, 2014)


3-1-5(2014-05) INTERSPEECH 2014 Newsletter May 2014

Language Education

“If you talk to a man in a language he understands, that goes to his head. If you talk to him in his own language, that goes to his heart.”

Nelson Mandela

There is no doubt that this quote will continue to inspire generations of language learners despite the recent advances in statistical machine translation. Before you learn more about the latest discoveries in language learning and machine translation at INTERSPEECH 2014, this month's newsletter is dedicated to language education in Singapore.

Brief history of language education

The discovery of the “New World” is sometimes considered the starting point of globalization. Of course, travelers, merchants and scholars did not wait that long to study languages but, surprisingly, the theorization of language education is quite recent. In the 17th century, Latin was commonly used in education and religion in Western countries. Its teaching was almost exclusively focused on grammar until Jan Amos Komenský, a Czech teacher and educator, created a complete course for learning the language [12]. Komenský's major contributions also include the invention of the primer and the textbook, which are now widely used to teach reading and languages.

During the 19th and 20th centuries, research on language education sped up and led to a large number of teaching practices intended to improve the experience of language learners. In 1963, Anthony [14] proposed a three-layer hierarchical framework to describe language teaching, comprising approaches, methods and techniques. Approaches relate to general concepts about the nature of languages, while methods refer to the overall plan of the language teaching organization, which is implemented in class through techniques aimed at achieving short-term objectives. Anthony's framework was later extended by Richards and Rodgers [15], who in particular recast methods and techniques as designs and procedures, intended to be more specific and less descriptive.

Amongst the most popular, structural methods consider languages through the prism of grammar, functional methods focus more on language as a vehicle to accomplish certain functions, and interactive methods emphasize social relations such as speech acts, negotiation and interaction. Of course, this list is not exhaustive and does not address the complexity of the whole range of existing methods.

Language education in the world

In Africa, where most countries used to be colonized, language policies strongly depend on the former colonial power and its tolerance of local languages [16], but also on the post-independence political evolution, the socio-linguistic contours of each country and the level of education. During colonization, the French, Portuguese and Spanish taught their languages at all levels from the first day of school. The Germans promoted their language while giving prominence to local languages in the first years of schooling, and the British conducted the first years of education in the local language before switching to English. In some parts of Western Africa, the British even encouraged the teaching of certain languages such as Hausa, Igbo, Yoruba, Efik, Ga or Ewe, but kept English as the reference point. After independence, most African countries considered reforming education to promote indigenous languages, but in many cases those policies have been questioned, as teaching in the mother tongue could weaken national unity. It has of course been easier for the few monolingual countries (Somalia, Burundi, Rwanda, Botswana, Lesotho, Swaziland, Madagascar, etc.) to promote education in their native languages, while some multilingual countries have chosen to develop regional languages: for instance, Zambia uses six zonal languages in education, Zaire four, and Togo two. For economic reasons, English and French are still taught across the former colonies and remain strong factors of regional cohesion.

Bilingual education in South America mostly refers to the teaching of a “mainstream” language such as Spanish or Portuguese to non-Spanish- and non-Portuguese-speaking people [17]. It usually follows a “transition” or a “maintenance” model: in the first, the official language progressively replaces the mother tongue, while the second makes use of the two languages throughout the curriculum. The language distribution in South America is mainly characterized by the fact that many populations are located in isolated areas where communication and outside contact are poor, so monolingualism is prevalent. In this context, a number of countries launched bilingual education programs in the 1970s. Those programs have provided good results, since bilingual education improved schooling amongst indigenous children.

North America's modern history includes periods of colonialism too. However, language education evolved in a very different way and differs strongly between Canada and the U.S. In the 19th century, the U.S. was rather friendly towards bilingualism, as immigrant communities commonly maintained their native languages and published in them [19]. Starting in the 1880s, and due to a huge influx of non-English-speaking immigrants, English was used to develop an “American” identity. Monolingualism has since become the norm, and second-language learning is still uncommon before high school. As a result, only 15 to 20 percent of Americans consider themselves bilingual, compared to 56 percent of Europeans (European Commission survey, 2006). The most common second languages taught in the U.S. are Spanish, due to the large number of recent Spanish-speaking immigrants, followed by French, German, Latin, Mandarin, Italian and Japanese, in descending order of frequency. As a multilingual country, Canada allows two languages of instruction: English by default and French in the case of “Francophone children whose parents qualify for minority language rights” [18]. Additionally, aboriginal languages can be taught as second languages, and all students are required to learn a second language between 9 and 14 years of age.

In Europe, all children study at least one foreign language as part of their compulsory curriculum, except in Ireland, where instruction includes English and Irish, both considered native languages, and a third European language. In all European countries, English is by far the most commonly learned language, ahead of French, Spanish, German and Russian. On average, children start learning a second language between 6 and 9 years of age [20]. This age has decreased strongly in the last 15 years, and it is now common for children to start learning in pre-school; however, the weekly number of hours spent learning a language has not really increased over the same period. Given the multilingual context, and in order to encourage cross-border exchanges, the European Union strongly encourages the learning of foreign languages: on average, in 2009/10, 60.8% of lower secondary education students were learning two or more foreign languages. From a local point of view, almost all European countries have regional languages, and more than half of the countries use partial immersion to teach both the minority and the state language.

In South-East Asia, 550 million inhabitants speak hundreds of languages, including local (Javanese and Hmong, for example), national (Khmer and Thai, for instance) and regional languages (varieties of Chinese and Malay) [2]. Amongst the eleven countries of this region, all except Thailand have endured colonization and been exposed to European languages: Dutch in Indonesia; English in Brunei, Burma, Malaysia, the Philippines and Singapore; French in Cambodia, Laos and Vietnam; and Portuguese in East Timor. After decolonization, most governments used languages to strengthen national cohesion and forge a national identity. All eleven Southeast Asian countries have included English in education, often as a foreign language. In certain countries, however, instruction is given in the national language while science and technology are taught in a foreign language, for instance English in Myanmar and French in Laos.

Singapore:

Under British colonial rule, school systems in the four main languages, namely Mandarin Chinese, English, Malay and Tamil, cohabited in Singapore [4]. After World War II, the schools were gradually brought under the control of the government, which decided to establish one of the existing languages as a lingua franca to strengthen national unity. Amongst the possible languages, Malay was considered a good choice given the integration of Singapore into the Federation of Malaysia, and Hokkien was already spoken by the majority of Chinese Singaporeans. However, the government chose English, as it was both a tool for economic development and an ethnically neutral language in the context of Singapore's multi-ethnic population of Chinese, Malays and Indians.

The bilingual education policy was officially introduced in 1966, with the possibility of teaching English as a first or second language. However, schools teaching English as a second language declined rapidly, as English was considered a key element of professional success. By 1986, there remained a single class of 28 secondary school students following a curriculum in Malay, and Malay-medium schools came to a natural demise, as the Tamil-medium schools had in 1982; Chinese-medium schools were phased out by the government [4]. The government then officially defined English as the first language and the three other official languages as mother tongues. In a will to preserve Asian culture in Singapore, the government imposed the learning of the mother tongue as a second language. This mother tongue is determined for each student by her or his ethnicity: Malay Singaporeans learn Bahasa Melayu, Chinese learn Mandarin, and Indians with a Dravidian-language background learn Tamil. Indians are a special case, as non-vernacular languages like Hindi, Bengali, Punjabi, Gujarati and Urdu can be chosen as a mother tongue by non-Tamils, but the state does not provide teachers for those languages [3]. Conversely, all Singaporean Chinese have to learn Mandarin despite the various linguistic backgrounds present in the local community. Due to this policy, the importance of non-Mandarin Chinese languages has strongly decreased in the last 50 years, and Mandarin is now the most spoken language in Singaporean homes. Since 2002, Chinese associations in Singapore have proposed dialect classes in order to reconnect the population with its Chinese culture and enable the younger generation to talk to the elderly [3]. A third language can be learned from secondary school onwards, chosen amongst Mandarin (for non-Chinese), Malay (for non-Malays), Bahasa Indonesia (for non-Malays), Arabic, Japanese (only for Chinese), French, German and Spanish [4, 6].

Although it is one of the reasons for Singapore's exceptional economic success, the bilingual policy has been, according to the government itself, a cultural failure. By promoting English as the business and inter-ethnic language, the bilingual policy made the other languages less attractive to the younger generation. Additionally, the mother tongues have been taught as school subjects using methods developed for native languages. As a consequence, many Singaporean students do not see the point of learning a language which is not a vector of culture but only a subject of study. Realizing this mistake, the government recently decided to make language learning more engaging and IT-based, for example through learning on smartphones and with online computer games [5, 10].

From a wider perspective, Singapore is unique in Asia in that it maintains a strong national education system at a moment when other countries are massively privatizing instruction [11], and also because of the way, probably unparalleled in any other developed country, in which the state's intervention has changed the people's language and speech patterns [1].

[1] Language, Society and Education in Singapore: Issues and Trends (Second Edition); S. Gopinathan, Anne Pakir, Ho Wah Kam and Vanithamani Saravanan (Eds.); Times Academic Press, Singapore, 1998

[2] Language Education Policies in Southeast Asia, T. Clayton, Elsevier

[3] http://en.wikipedia.org/wiki/Languages_of_Singapore

[4] http://en.wikipedia.org/wiki/Language_education_in_Singapore

[5] news.asiaone.com/News/Education/Story/A1Story20100603-219929.html

[6] http://www.moe.gov.sg/media/press/2004/pr20040318.htm

[7] http://www.moelc.moe.edu.sg/index.php/courses

[8] https://web.archive.org/web/20131002211453/http://libguides.nl.sg/content.php?pid=57257&sid=551371

[9] http://books.google.com.sg/books?id=_Wsh1EbUJB0C&printsec=frontcover#v=onepage&q&f=false

[10] http://enterpriseinnovation.net/article/multimedia-aid-chinese-instruction-singapore-schools

[11] Globalization and Multilingualism in Singapore: Implications for a hybrid identity

[12] http://en.wikipedia.org/wiki/Language_education

[13] http://en.wikipedia.org/wiki/Language_education_by_region

[14] Anthony, E. M. (1963). 'Approach, Method, and Technique'. ELT Journal XVII (2): 63–67. doi:10.1093/elt/XVII.2.63

[15] Richards, Jack; Rodgers, Theodore (2001). Approaches and Methods in Language Teaching. Cambridge: Cambridge University Press. ISBN 978-0-521-00843-3

[16] http://fafunwafoundation.tripod.com/fafunwafoundation/id7.html

[17] http://faculty.smu.edu/rkemper/anth_6306/anth_6306_language_and_education_in_latin_america.htm

[18] http://www2.gov.bc.ca/gov/topic.page?id=93A2746B883E4DA89C4E7E584D447E4B

[19] http://www.dailytexanonline.com/opinion/2013/10/06/americans-suffer-from-inadequate-foreign-language-education

[20] http://europa.eu/rapid/press-release_IP-12-990_en.htm


3-1-6(2014-06) INTERSPEECH 2014 Newsletter June 2014

Kristang, an endangered Portuguese creole

Amongst the many languages spoken in Singapore, Kristang is probably one of those you are least likely to hear when attending INTERSPEECH 2014. Indeed, this Portuguese creole, originating from Malacca, Malaysia, is spoken by fewer than five hundred people in Singapore and is categorized as endangered in the UNESCO Atlas of the World's Languages in Danger.

Some background about creoles

Creole languages can easily be mixed up with pidgins. The fundamental difference between these two categories of languages lies in the existence of native speakers [9]. A pidgin is a language without native speakers, developed in areas where several languages co-exist, while a creole is passed from one generation to the next and is thus the native language of a community of speakers. Pidgins emerge to facilitate mutual understanding in societies where several mother tongues are used in the same place, and can be referred to as “contact languages”. When spoken by many people, a pidgin might become a creole if transmitted over generations. In some cases, creoles can even replace the existing mix of languages to become the official language, such as Krio in Sierra Leone and Tok Pisin in Papua New Guinea.

From the linguistic perspective, both pidgins and creoles are derived from several languages and generally involve a simplification of vocabulary and syntax (grammar). They also often include considerable phonological variation and fulfill fewer functions than the original languages.

Before the 1930s, pidgins and creoles were mostly ignored by linguists; only more recently has attention been paid to these languages. In 1997, Hancock [9] listed 127 pidgin and creole languages: thirty-five described as English-based, fifteen French-based, fourteen Portuguese-based, seven Spanish-based, six German-based, five Dutch-based, three Italian-based, and the rest based on a variety of languages such as Russian, Chinese or Malay. Most creoles and pidgins are distributed in the equatorial belt, where contact between languages is facilitated by oceans and trade.

Kristang

Kristang is a Portuguese-based creole language influenced by Malay, English and other languages spoken on the Malay peninsula [1]. Called Papia Kristang or Christao, this creole originated in Malacca in 1511, when the Portuguese explorer Afonso de Albuquerque conquered the city [7]. Strategically located on the spice trade routes of South-East Asia, Malacca was a way for the Portuguese to challenge the dominance of Venice in the trade of rare spices [8]. In order to ensure the loyalty of the local population and to provide manpower, Afonso de Albuquerque encouraged marriages between Portuguese men and Malay women [2]. In 1641, the Portuguese lost Malacca to the Dutch; Dutch men then married local “Portuguese” women and embraced their Catholic faith. This mix of Malay, Portuguese and Dutch became known as the “Malacca-Portuguese” or Jenti Kristang (Kristang people) speech community [8]. After the Dutch captured the city, the Kristang community not only preserved the language but also, through migrations, influenced other languages such as Macanese, the creole language spoken in Macao, another Portuguese colony.

Although Kristang long had no written form and has never been taught in school, it has been passed down from generation to generation through daily usage and church services [2]. A first proposal for a standard orthography was made in the late 1980s by Alan Baxter, who suggested using Malay orthography. In the 1990s, Joan Marbeck's book 'Ungua Andanza' was published, with a “Luso-Malay” orthography. The grammatical structure of Kristang is very close to Malay, but a large part of the vocabulary (~95%) is Portuguese, so Kristang is generally quite recognizable to speakers of European Portuguese, although many words are considered archaic. Perhaps because of cultural exchanges along trade routes, Kristang has many similarities with other Portuguese-based creoles spoken in Indonesia and East Timor. According to Baxter [8], Kristang's pronunciation is very close to colloquial Malacca Malay; for instance, the vowel /e/ is usually pronounced as /i/ when followed by a syllable containing /i/, so that penitensia ('penitence') is pronounced [piniˈteɲsia].

Nowadays, Kristang counts 5,000 speakers in Malacca and 400 in Singapore; due to migrations, it is also spoken in some parts of Indonesia and in Australia (in the region of Perth). Kristang is considered the “last vital variety of a group of East and Southeast Asian Creole Portuguese languages” [6] and is categorized as one of the endangered languages of Malaysia.

In order to revitalize the language, the publication of dictionaries and phrase-books and language documentation efforts are encouraged. Social media are also used to promote the use of Kristang, with Facebook pages such as “Keep Kristang alive” and “Yo Falah Linggu Kristang” (I speak the Kristang language).

Although Kristang is spoken by only a handful of people in Singapore these days, and most people (including local Singaporeans, the Portuguese and the Dutch) are virtually unaware of its existence, Kristang symbolizes the rich multilingualism historically rooted in Singapore.

[1] http://en.wikipedia.org/wiki/Kristang_language

[2] http://web.archive.org/web/20041122225051/http://www.geocities.com/jingkli_nona/

[3] http://books.google.com.sg/books?hl=en&lr=&id=4A_RzBG4DjIC&oi=fnd&pg=PA115&dq=survey+creole+language&ots=rp1ka-V0Mg&sig=SzpHUBW3zry34h2Vlac6tlVcd1M#v=onepage&q=survey%20creole%20language&f=false

[4] http://en.wikipedia.org/wiki/Creole_language

[5] http://www.ethnologue.com/language/mcm

[6] Stefanie Pillai, Wen-Yi Soh, Angela S. Kajita , “Family language policy and heritage language maintenance of Malacca Portuguese Creole,” in Language and Communication, 2014, in press

[7] Bryan W. Husted, “Globalization and cultural change in international business research,” in Journal of International Management, 2003, pp. 427-433

[8] Ei Leen Lee, “Language maintenance and competing priorities at the Portuguese Settlement Malacca,” in Ritsumeikan Journal of Asia Pacific Studies, 2011, vol. 30, pp. 77-99

[9] Syarfuni Syarfuni, “Pidgins and creoles languages,” in Visipena, vol. 2, issue 1, 2011


3-1-7(2014-07) INTERSPEECH 2014 Newsletter July 2014

Singlish: a living example of multilingualism blending the East and West

Singlish (colloquial Singapore English) is a vivid and colorful creole, and an example of how languages and speakers interact and mingle: while Singlish is a variety of English, it is interleaved with slang from languages such as Hokkien, Teochew, Cantonese, Malay and Tamil, and heavily influenced by Chinese grammar, phonology and prosody. The complexity of Singlish exemplifies the challenges speech researchers face in developing spoken language technologies to automatically identify, transcribe and parse colloquial and conversational speech.

Singlish is semi-tonal: words of Chinese origin retain their original tones, while original English words, as well as Malay and Tamil words, are non-tonal. In addition, although most varieties of English are stress-timed, Singlish is syllable-timed, giving it a rather staccato feel.

Singlish phonology is primarily British-based, with influences from Chinese phonology. For example, the dental fricatives /θ/ and /ð/ are sometimes merged with /t/ and /d/ in certain contexts, so that three sounds like tree and then sounds like den [16]. The voiceless stops /p/, /t/ and /k/ are also sometimes unaspirated, as in Chinese languages [18]. There is generally no distinction between the non-close front monophthongs, so pet and pat are pronounced the same, as /pɛt/ [17].

At the vocabulary level, there is often inter-mixing of multiple languages. For example, damn shiok is a slang expression blending English and Punjabi, used to express extreme pleasure or satisfaction, often in the context of food. The mixing of languages is also reflected in location names: for example, Toa Payoh (literal translation: big swamp), a central district of Singapore, mixes Hokkien and Malay (toa is big in Hokkien and payoh is swamp in Malay). Reduplication, influenced by Chinese and Malay, is also used in Singlish: adjectives of one or two syllables can be repeated for intensification, as in “You go take the small-small one ah.” (Retrieve the smaller item, please.) The frequent use of already (pronounced more like oreddy) in Singapore English is probably a direct influence of the Hokkien liao particle [18]; for example, “Aiyah, cannot wait any more, must go oreddy.” (Oh dear, I cannot wait any longer. I must leave immediately.)

Singlish is topic-prominent, like Chinese and Japanese, meaning that Singlish sentences often begin with a topic followed by a comment carrying new information. For example, “Dis country weather very hot one.” (In this country, the weather is very warm.) The topic can be omitted when the context is clear, resulting in constructions that appear to be missing a subject; for example, “No good lah.” (This isn't good.)

Singlish is also known for its colorful usage of interjections of Chinese and Malay origin (examples in the previous sentences include ah, aiyah and lah). “Lah” is probably the most famous, a stereotypical interjection that appears ubiquitous to non-native speakers of Singlish. It may originate from a Hokkien particle, though its usage in Singapore is also influenced by its occurrence in Malay [19]. “Lah” has many different usages. It is often used to soften one's tone, as in “Cannot lah” or “Just drink lah”. With a low tone, it can indicate impatience, e.g., “Eh, hurry up lah!” It can also be used for reassurance: “Okay lah.” (It's all right. Don't worry about it.) Yet it can equally be used to curse people, for example, “Go and die lah”.

Although Singlish is typically not used in official settings (school lectures and mainstream media generally use Standard Singapore English, for example), it is quite prevalent in day-to-day interactions with peers, siblings, parents and elders. It is an effective means of establishing rapport (for example, during military service) and of achieving humorous effects in TV and radio shows. From a linguistic perspective, Singlish is a living example of multilingualism in Singapore, blending the East and the West.


3-1-8(2014-09-14) CfP INTERSPEECH 2014 Singapore URGENT Action Required

Interspeech 2014

 Singapore 

September 14-18, 2014

The INTERSPEECH 2014 paper submission deadline is 24 March 2014. There will be no extension of the deadline. Get your paper submissions ready and gear up for INTERSPEECH in Singapore.

INTERSPEECH is the world's largest and most comprehensive conference on issues surrounding the science and technology of spoken language processing, both in humans and in machines.

The theme of INTERSPEECH 2014 is 'Celebrating the Diversity of Spoken Languages'. INTERSPEECH 2014 emphasizes an interdisciplinary approach covering all aspects of speech science and technology, spanning basic theories to applications. In addition to regular oral and poster sessions, the conference will also feature plenary talks by internationally renowned experts, tutorials, special sessions, show & tell sessions, and exhibits. A number of satellite events will take place immediately before and after the conference. Please follow the details of these and other news items on the INTERSPEECH website www.interspeech2014.org.

We invite you to submit original papers in any related area, including but not limited to:

1: Speech Perception and Production

2: Prosody, Phonetics, Phonology, and Para-/Non- Linguistic Information

3: Analysis of Speech and Audio Signals

4: Speech Coding and Enhancement

5: Speaker and Language Identification

6: Speech Synthesis and Spoken Language Generation

7: Speech Recognition - Signal Processing, Acoustic Modeling, Robustness, and Adaptation

8: Speech Recognition - Architecture, Search & Linguistic Components

9: LVCSR and Its Applications, Technologies and Systems for New Applications

10: Spoken Language Processing - Dialogue, Summarization, Understanding

11: Spoken Language Processing -Translation, Info Retrieval

12: Spoken Language Evaluation, Standardization and Resources 

A detailed description of these areas is accessible at: 

http://www.interspeech2014.org/public.php?page=conference_areas.html

Paper Submission

Papers for the INTERSPEECH 2014 proceedings should be up to 4 pages of text, plus one page (maximum) for references only. Paper submissions must conform to the format defined in the paper preparation guidelines provided in the Authors' kit on the INTERSPEECH 2014 website, along with the Call for Papers. Optionally, authors may submit additional files, such as multimedia files, which will be included in the official conference proceedings USB drive. Authors must declare that their contributions are original and are not being submitted for publication elsewhere (e.g. another conference, workshop, or journal). Papers must be submitted via the online paper submission system, which will open in February 2014. The conference will be conducted in English. Information on the paper submission procedure is available at:

http://www.interspeech2014.org/public.php?page=submission_procedure.html

There will be NO extension to the full paper submission deadline.

Important Dates

Full paper submission deadline : 24 March 2014
Notification of acceptance/rejection : 10 June 2014
Camera-ready paper due : 20 June 2014
Early registration deadline : 10 July 2014
Conference dates : 14-18 September 2014

We look forward to welcoming you to INTERSPEECH 2014 in Singapore!

Helen Meng and Bin Ma

Technical Program Chairs

Contact

Email: tpc@interspeech2014.org

organizers.interspeech2014@isca-speech.org — for general enquiries

Conference website: www.interspeech2014.org


3-1-9(2014-09-14) CfP Speech Technology for the Interspeech App

Call for Proposals

Speech Technology for the Interspeech App

During the past Interspeech conference in Lyon, a mobile application (app) was provided for accessing the conference program, designing personal schedules, inspecting abstracts, full papers and the list of authors, navigating through the conference center, and recommending papers to colleagues. This app was designed by students and researchers of the Quality and Usability Lab, TU Berlin, and will be made available to ISCA and to future conference and workshop organizers free of charge. It will also be used for the upcoming Interspeech 2014 in Singapore, and is available for both iOS and Android.

In its current state, the app is limited to mostly touch-based input and graphical output. However, we would like to develop the app into a useful tool for the spoken language community at large, which should include speech input and output capabilities, and potentially full spoken-language and multimodal interaction. The app could also be used for collecting speech data under realistic environmental conditions, for distributing multimedia examples or surveys during the conference, or for other research purposes. In addition, the data being collected with the app (mostly interaction usage patterns) could be analyzed further.

The Quality and Usability Lab of TU Berlin would like to invite interested parties to contribute to this development. Contributions could be made by providing ready-built modules (e.g. ASR, TTS, or similar) for integration into the app, by proposing new functionalities that would be of interest to a significant part of the community, and preferably by contributing development effort to such future work.

If you are interested in contributing to this, please send an email with your proposals to

interspeechapp@qu.tu-berlin.de

by October 31, 2013. If a sufficient number of interested parties can be found, we plan to submit a proposal for a special session on speech technology in mobile applications for the upcoming Interspeech in Singapore.

More information on the current version of the app can be found in: Schleicher, R., Westermann, T., Li, J., Lawitschka, M., Mateev, B., Reichmuth, R., Möller, S. (2013). Design of a Mobile App for Interspeech Conferences: Towards an Open Tool for the Spoken Language Community, in: Proc. 14th Ann. Conf. of the Int. Speech Comm. Assoc. (Interspeech 2013), Aug. 25-29, Lyon.


3-1-10(2014-09-14) INTERSPEECH 2014 Tutorials
--- INTERSPEECH 2014 - SINGAPORE
--- September 14-18, 2014
--- http://www.INTERSPEECH2014.org



The INTERSPEECH 2014 Organising Committee is pleased to announce 
the following 8 tutorials, presented by distinguished speakers, 
which will be offered on Sunday, 14 September 2014.
All tutorials will be three (3) hours in duration and require 
an additional registration fee (separate from the conference registration fee). 

    • Non-speech acoustic event detection and classification
    • Contribution of MRI to Exploring and Modeling Speech Production
    • Computational Models for Audiovisual Emotion Perception
    • The Art and Science of Speech Feature Engineering
    • Recent Advances in Speaker Diarization
    • Multimodal Speech Recognition with the AusTalk 3D Audio-Visual Corpus
    • Semantic Web and Linked Big Data Resources for Spoken Language Processing
    • Speech and Audio for Multimedia Semantics


----------------------------------------------------------------------------------------------------
ISCSLP Tutorials @ INTERSPEECH 2014
----------------------------------------------------------------------------------------------------

Additionally, the ISCSLP 2014 Organising Committee welcomes 
the INTERSPEECH 2014 delegates to join the 4 ISCSLP tutorials 
which will be offered on Saturday, 13 September 2014.

    • Adaptation Techniques for Statistical Speech Recognition
    • Emotion and Mental State Recognition: Features, Models, System Applications and Beyond
    • Unsupervised Speech and Language Processing via Topic Models
    • Deep Learning for Speech Generation and Synthesis


More information available at: http://www.interspeech2014.org/public.php?page=tutorial.html


----------------------------------------------------------------------------------------------------
Tutorials Description
----------------------------------------------------------------------------------------------------

T1: Non-speech acoustic event detection and classification

    The research in audio signal processing has been dominated by speech research, 
    but most of the sounds in our real-life environments are actually non-speech 
    events such as cars passing by, wind, warning beeps, and animal sounds. 
    These acoustic events contain much information about the environment and physical
    events that take place in it, enabling novel application areas such as safety,
    health monitoring and investigation of biodiversity. But while recent years 
    have seen wide-spread adoption of applications such as speech recognition and 
    song recognition, generic computer audition is still in its infancy.

    Non-speech acoustic events have several fundamental differences to speech, 
    but many of the core algorithms used by speech researchers can be leveraged 
    for generic audio analysis. The tutorial is a comprehensive review of the field 
    of acoustic event detection as it currently stands. The goal of the tutorial is 
    to foster interest in the community, highlight the challenges and opportunities, 
    and provide a starting point for new researchers. We will discuss what acoustic 
    event detection entails and the commonalities and differences with speech processing, 
    such as the large variation in sounds and the possible overlap with other sounds. 
    We will then discuss basic experimental and algorithm design, including descriptions 
    of available databases and machine learning methods. We will then discuss more 
    advanced topics such as methods to deal with temporally overlapping sounds and 
    modelling the relations between sounds. We will finish with a discussion of 
    avenues for future research.

    Organizers: Tuomas Virtanen and Jort F. Gemmeke
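
    As an aside for readers new to the area, the pipeline sketched above
    (frame-level features followed by a classifier) can be illustrated in a
    few lines of Python. The sketch below is ours, not part of the tutorial
    materials; it assumes librosa and scikit-learn are installed and uses a
    synthetic beep-versus-noise signal in place of real labelled recordings.

        # Minimal frame-based acoustic event detection sketch (illustrative only).
        import numpy as np
        import librosa
        from sklearn.ensemble import RandomForestClassifier

        def frame_features(y, sr=16000, n_mels=40):
            """Log mel-band energies per frame, a common front-end for event detection."""
            mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
            return np.log(mel + 1e-10).T                 # (n_frames, n_mels)

        sr = 16000
        rng = np.random.default_rng(0)
        noise = rng.normal(scale=0.1, size=sr)           # stand-in for background
        beep = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(sr) / sr)  # stand-in event

        X = np.vstack([frame_features(noise, sr), frame_features(beep, sr)])
        y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))
        clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

        # Frame-wise detection on a new clip; real systems smooth these decisions.
        test = np.concatenate([noise[: sr // 2], beep[: sr // 2]])
        print(clf.predict(frame_features(test, sr)))     # 0 = background, 1 = event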


T2: Contribution of MRI to Exploring and Modeling Speech Production

    Magnetic resonance imaging (MRI) offers a unique view into the human body, 
    not only through static imaging but also through motion imaging. MRI has 
    been a powerful technique for speech research, used to study the finer 
    anatomy of the speech organs and to visualize true vocal tracts in three 
    dimensions. Inherent problems of slow image acquisition for speech tasks 
    and insufficient signal-to-noise ratio for microscopic observation have 
    driven researchers to search for task-specific imaging techniques. 
    The recent advances of the 3-Tesla technology suggest more practical solutions 
    to broader applications of MRI by overcoming previous technical limitations. 
    In this joint tutorial in two parts, we summarize our previous effort to accumulate 
    scientific knowledge with MRI and to advance speech modeling studies for future 
    development. Part I, given by Kiyoshi Honda, introduces how to visualize the 
    speech organs and vocal tracts by presenting techniques and data for finer static 
    imaging, synchronized motion imaging, surface marker tracking, real-time imaging, 
    and vocal-tract mechanical modeling. Part II, presented by Jianwu Dang, focuses on 
    applications of MRI for phonetics of Mandarin vowels, acoustics of the vocal tracts 
    with side branches, analysis and simulation in search of talker characteristics, 
    physiological modeling of the articulatory system, and motor control paradigm 
    for speech articulation.

    Organizers: Kiyoshi HONDA and Jianwu DANG


T3: Computational Models for Audiovisual Emotion Perception

    In this tutorial we will explore engineering approaches to understanding human 
    emotion perception, focusing both on modeling and application. We will highlight 
    both current and historical trends in emotion perception modeling, focusing on 
    both psychological and engineering-driven theories of perception 
    (statistical analyses, data-driven computational modeling, and implicit sensing). 
    The importance of this topic can be appreciated both from an engineering 
    viewpoint (any system that either models human behavior or interacts with human 
    partners must understand emotion perception, as it fundamentally underlies and 
    modulates our communication) and from a psychological perspective (emotion 
    perception is used in the diagnosis of many mental health conditions and is 
    tracked in therapeutic interventions). Research in emotion perception seeks to 
    identify models that describe 
    the felt sense of ‘typical’ emotion expression – i.e., an observer/evaluator’s attribution 
    of the emotional state of the speaker. This felt sense is a function of the methods through 
    which individuals integrate the presented multimodal emotional information. 
    We will cover psychological theories of emotion, engineering models of emotion, 
    and experimental approaches to measure emotion. We will demonstrate how these modeling 
    strategies can be used as a component of emotion classification frameworks and how 
    they can be used to inform the design of emotional behaviors.

    Organizers: Emily Mower Provost and Carlos Busso
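
    To make the notion of multimodal integration concrete, a very simple
    computational stand-in is late fusion: each modality produces emotion
    scores, and an observer model combines them with reliability weights.
    The toy sketch below is our illustration (all numbers are made up),
    not a model presented in the tutorial.

        # Late fusion of modality-wise emotion scores (illustrative sketch).
        import numpy as np

        # Hypothetical per-modality posteriors over (angry, happy, neutral).
        audio_scores = np.array([0.6, 0.1, 0.3])
        visual_scores = np.array([0.3, 0.2, 0.5])

        # Modality weights, e.g. reflecting how reliable each channel is.
        w_audio, w_visual = 0.7, 0.3
        fused = w_audio * audio_scores + w_visual * visual_scores

        labels = ["angry", "happy", "neutral"]
        print(labels[int(np.argmax(fused))])   # the fused emotion judgement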


T4: The Art and Science of Speech Feature Engineering

    With significant advances in mobile technology and audio sensing devices, 
    there is a fundamental need to describe vast amounts of audio data in terms 
    of well representative lower dimensional descriptors for efficient automatic 
    processing. The extraction of these signal representations, also called features, 
    constitutes the first step in processing a speech signal. The art and science of 
    feature engineering relates to addressing the two inherent challenges - extracting 
    sufficient information from the speech signal for the task at hand and suppressing 
    the unwanted redundancies for computational efficiency and robustness. 
    The area of speech feature extraction combines a wide variety of disciplines like 
    signal processing, machine learning, psychophysics, information theory, linguistics and physiology. 
    It has a rich history spanning more than five decades and has seen tremendous advances 
    in the last few years. This has propelled the transition of speech technology from 
    controlled environments to millions of end-user applications.

    In this tutorial, we review the evolution of speech feature processing methods, 
    summarize the recent advances of the last two decades and provide insights into the 
    future of feature engineering. This will include the discussions on the spectral 
    representation methods developed in the past, human auditory motivated techniques 
    for robust speech processing, data driven unsupervised features like ivectors and 
    recent advances in deep neural network based techniques. With experimental results, 
    we will also illustrate the impact of these features for various state-of-the-art 
    speech processing systems. The future of speech signal processing will need to address 
    various robustness issues in complex acoustic environments while being able 
    to derive useful information from big data.

    Organizers: Sriram Ganapathy and Samuel Thomas
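
    As a concrete taste of the engineered spectral representations discussed
    above, the sketch below computes MFCCs, the classic speech feature, and
    appends their temporal derivatives. It is our illustration, assuming the
    librosa package; the tutorial itself covers far more than this.

        # MFCC extraction sketch: a classic engineered speech feature (illustrative).
        import numpy as np
        import librosa

        sr = 16000
        y = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # 1 s synthetic tone

        # 13 cepstral coefficients per frame, as in many classic ASR front-ends.
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

        # First and second derivatives capture the temporal dynamics.
        delta = librosa.feature.delta(mfcc)
        delta2 = librosa.feature.delta(mfcc, order=2)

        features = np.vstack([mfcc, delta, delta2])       # (39, n_frames)
        print(features.shape)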


T5: Recent Advances in Speaker Diarization

    The tutorial will start with an introduction to speaker diarization giving a general 
    overview of the subject. Afterwards, we will cover the basic background including 
    feature extraction, and common modeling techniques such as GMMs and HMMs. 
    Then, we will discuss the first processing step usually done in speaker diarization 
    which is voice activity detection. We will subsequently describe the classic approaches 
    for speaker diarization which are widely used today. We will then introduce state-of-the-art 
    techniques in speaker recognition required to understand modern speaker diarization techniques. 
    Next, we will describe approaches for speaker diarization using advanced representation 
    methods (supervectors, speaker factors, i-vectors) and we will describe supervised and 
    unsupervised learning techniques used for speaker diarization. We will also discuss issues 
    such as coping with an unknown number of speakers, detecting and dealing with overlapping speech, 
    diarization confidence estimation, and online speaker diarization. Finally, we will discuss 
    two recent works: the first exploits a priori acoustic information (such as processing a meeting 
    when some of the participants are known in advance to the system and training data is available for them); 
    the second models speaker-turn dynamics. If time permits, we will also discuss concepts 
    such as multi-modal diarization and using TDOA (time difference of arrival) for diarization of meetings.

    Organizer: Hagai Aronowitz
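
    The classic recipe outlined above (segment the audio, represent each
    segment, cluster the representations) can be caricatured in a few lines.
    In the sketch below, which is ours rather than the presenter's, random
    vectors stand in for segment embeddings such as i-vectors, and
    agglomerative clustering with a distance threshold handles the unknown
    number of speakers.

        # Speaker diarization skeleton: cluster per-segment embeddings (illustrative).
        import numpy as np
        from sklearn.cluster import AgglomerativeClustering

        rng = np.random.default_rng(0)

        # Stand-ins for segment embeddings (e.g., i-vectors): two simulated speakers.
        spk_a = rng.normal(loc=0.0, size=(10, 20))
        spk_b = rng.normal(loc=5.0, size=(10, 20))
        segments = np.vstack([spk_a, spk_b])

        # With an unknown speaker count, a distance threshold replaces n_clusters;
        # the threshold value is data-dependent and would be tuned in practice.
        clusterer = AgglomerativeClustering(n_clusters=None, distance_threshold=50.0)
        speaker_ids = clusterer.fit_predict(segments)
        print(speaker_ids)   # segment-to-speaker assignment, i.e. 'who spoke when'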


T6: Multimodal Speech Recognition with the AusTalk 3D Audio-Visual Corpus

    This tutorial will provide attendees with a brief overview of 3D-based AVSR research. 
    Attendees will learn how to use the newly developed 3D audio-visual 
    data corpus we derived from the AusTalk corpus (https://austalk.edu.au/) 
    for audio-visual speech/speaker recognition. In addition, we plan to present 
    some results on this newly developed corpus, which show a significant 
    increase in speech recognition accuracy when both depth-level and grey-level 
    visual features are integrated. In the first part of the tutorial, we will review recent works published 
    in the last decade, so that attendees can obtain an overview of the fundamental concepts 
    and challenges in this field. In the second part of the tutorial, we will briefly describe 
    the recording protocol and contents of the 3D data corpus, and show attendees how to use 
    this corpus for their own research. In the third part of this tutorial, we will present our 
    results using the 3D data corpus. The experimental results show that, compared with the 
    conventional AVSR based on the audio and grey-level visual features, the integration of grey 
    and depth visual information can boost the AVSR accuracy significantly. Moreover, 
    we will also experimentally explain why adding depth information benefits standard AVSR systems. 
    Finally, we hope this tutorial will inspire more researchers in the community 
    to contribute to this exciting research area.

    Organizers: Roberto Togneri, Mohammed Bennamoun and Chao (Luke) Sui
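
    The integration at the heart of this tutorial can be sketched most simply
    at the feature level: synchronized audio, grey-level and depth features
    are concatenated into one vector before classification. The toy code
    below is our illustration with arbitrary dimensions and random data, not
    the corpus or system described above.

        # Feature-level audio-visual fusion for AVSR (illustrative sketch).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_frames = 200

        audio = rng.normal(size=(n_frames, 39))     # e.g., MFCCs plus deltas
        grey = rng.normal(size=(n_frames, 20))      # grey-level lip features
        depth = rng.normal(size=(n_frames, 20))     # depth-level lip features
        labels = rng.integers(0, 2, size=n_frames)  # toy frame labels

        # Early fusion: concatenate the synchronized streams into one vector.
        fused = np.hstack([audio, grey, depth])
        clf = LogisticRegression(max_iter=1000).fit(fused, labels)
        print(clf.score(fused, labels))             # training accuracy on toy data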


T7: Semantic Web and Linked Big Data Resources for Spoken Language Processing

    State-of-the-art statistical spoken language processing typically requires 
    significant manual effort to construct domain-specific schemas (ontologies) 
    as well as manual effort to annotate training data against these schemas. 
    At the same time, a recent surge of activity and progress on semantic web-related 
    concepts from the large search-engine companies represents a potential alternative 
    to the manually intensive design of spoken language processing systems. 
    Standards such as schema.org have been established for schemas (ontologies) that 
    webmasters can use to semantically and uniformly markup their web pages. 
    Search engines like Bing, Google, and Yandex have adopted these standards and are 
    leveraging them to create semantic search engines at the scale of the web. 
    As a result, the open linked data resources and semantic graphs covering various 
    domains (such as Freebase [3]) have grown massively every year and contain far more 
    information than any single resource anywhere on the Web. Furthermore, these resources 
    contain links to text data (such as Wikipedia pages) related to the knowledge in the graph.

    Recently, several studies in spoken language processing have started exploiting these massive 
    linked data resources for language modeling and spoken language understanding. 
    This tutorial will include a brief introduction to the semantic web and the linked 
    data structure, available resources, and querying languages. 
    An overview of related work on information extraction and language processing will 
    be presented, where the main focus will be on methods for learning spoken language 
    understanding models from these resources.

    Organizers: Dilek Hakkani-Tür and Larry Heck
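
    For readers who have never touched the querying languages mentioned
    above, the sketch below issues a SPARQL query against DBpedia's public
    endpoint using the SPARQLWrapper package. This is our illustration of
    linked-data access in general, not material from the tutorial, and it
    assumes network access to the endpoint.

        # Querying a linked-data resource with SPARQL (illustrative sketch).
        from SPARQLWrapper import SPARQLWrapper, JSON

        sparql = SPARQLWrapper("https://dbpedia.org/sparql")
        sparql.setQuery("""
            SELECT ?label WHERE {
                dbr:Singapore rdfs:label ?label .
                FILTER (lang(?label) = "en")
            }
        """)
        sparql.setReturnFormat(JSON)

        results = sparql.query().convert()
        for row in results["results"]["bindings"]:
            print(row["label"]["value"])   # the English label of the entity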


T8: Speech and Audio for Multimedia Semantics

    Internet media sharing sites and the one-click upload capability of smartphones 
    are producing a deluge of multimedia content. While visual features are often dominant 
    in such material, acoustic and speech information in particular often complements it. 
    By facilitating access to large amounts of data, the text-based Internet gave a huge 
    boost to the field of natural language processing. The vast amount of consumer-produced 
    video becoming available now will do the same for video processing, eventually enabling 
    semantic understanding of multimedia material, with implications for human computer interaction, robotics, etc.

    Large-scale multi-modal analysis of audio-visual material is now central to a number of 
    multi-site research projects around the world. While each of these has slightly different 
    targets, they face largely the same challenges: how to robustly and efficiently process 
    large amounts of data, how to represent and then fuse information across modalities, 
    how to train classifiers and segmenters on unlabeled data, how to include human feedback, etc.

    In this tutorial, we will present the state of the art in large-scale video, speech, 
    and non-speech audio processing, and show how these approaches are being applied to tasks 
    such as content based video retrieval (CBVR) and multimedia event detection (MED). 
    We will introduce the most important tools and techniques, and show how the combination of 
    information across modalities can be used to induce semantics on multimedia material 
    through ranking of information and fusion. Finally, we will discuss opportunities 
    for research that the INTERSPEECH community specifically will find interesting and fertile. 

    Organizers: Florian Metze and Koichi Shinoda


----------------------------------------------------------------------------------------------------
ISCSLP Tutorials @ INTERSPEECH 2014 Description
----------------------------------------------------------------------------------------------------

ISCSLP-T1: Adaptation Techniques for Statistical Speech Recognition

    Adaptation is a technique to make better use of existing models for test data 
    from new acoustic or linguistic conditions. It is an important and challenging 
    research area of statistical speech recognition. This tutorial gives a systematic 
    review of fundamental theories as well as an introduction to state-of-the-art adaptation 
    techniques. It includes both acoustic and language model adaptation. Following a simple example 
    of acoustic model adaptation, basic concepts, procedures and categories of adaptation will 
    be introduced. Then, a number of advanced adaptation techniques will be discussed, 
    such as discriminative adaptation, Deep Neural Network adaptation, adaptive training, 
    relationship to noise robustness, etc. After the detailed review of acoustic model adaptation, 
    an introduction to language model adaptation, such as topic adaptation, will also be given. 
    The whole tutorial is then summarised and future research directions will be discussed.

    Organizer: Kai Yu
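
    To make the idea of adaptation concrete, here is a minimal sketch of one
    classic technique, MAP adaptation of GMM means: the means of a prior
    model are pulled toward statistics of the adaptation data, with
    components that see more data moving further. The code is ours (toy
    data, scikit-learn for the prior model), not the tutorial's.

        # MAP mean adaptation of a GMM (illustrative sketch).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)

        # Prior ('speaker-independent') model trained on generic data.
        prior_data = rng.normal(size=(500, 2))
        gmm = GaussianMixture(n_components=4, random_state=0).fit(prior_data)

        # A little adaptation data from the new condition (shifted distribution).
        adapt_data = rng.normal(loc=1.0, size=(50, 2))

        # Soft-assign adaptation frames to components and accumulate statistics.
        post = gmm.predict_proba(adapt_data)          # (n_frames, n_components)
        n_k = post.sum(axis=0)                        # soft counts per component
        mean_k = (post.T @ adapt_data) / np.maximum(n_k[:, None], 1e-10)

        # MAP interpolation: components with more data move further from the prior.
        tau = 10.0                                    # relevance factor (tunable)
        alpha = (n_k / (n_k + tau))[:, None]
        gmm.means_ = alpha * mean_k + (1.0 - alpha) * gmm.means_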


ISCSLP-T2: Emotion and Mental State Recognition: Features, Models, System Applications and Beyond

    Emotion recognition is the ability to identify what you are feeling from moment 
    to moment and to understand the connection between your feelings and your expressions. 
    In today’s world, human-computer interaction (HCI) interfaces undoubtedly play an 
    important role in our daily life. Toward harmonious HCI interfaces, automated analysis 
    and recognition of human emotion has attracted increasing attention from researchers 
    in multidisciplinary research fields. A specific area of current interest that also has key 
    implications for HCI is the estimation of cognitive load (mental workload), research into 
    which is still at an early stage. Technologies for processing daily activities including speech, 
    text and music have expanded the interaction modalities between humans and computer-supported 
    communicational artifacts.

    In this tutorial, we will present theoretical and practical work offering new and broad views 
    of the latest research in emotional awareness from audio and speech. We discuss several parts 
    spanning a variety of theoretical background and applications ranging from salient emotional features, 
    emotional-cognitive models, compensation methods for variability due to speaker and linguistic content, 
    to machine learning approaches applicable to emotion recognition. In each topic, we will review 
    the state of the art by introducing current methods and presenting several applications. 
    In particular, the application to cognitive load estimation will be discussed, 
    from its psychophysiological origins to system design considerations. Eventually, 
    technologies developed in different areas will be combined for future applications, 
    so in addition to a survey of future research challenges, 
    we will envision a few scenarios in which affective computing can make a difference.

    Organizers: Chung-Hsien Wu, Hsin-Min Wang, Julien Epps and Vidhyasaharan Sethu


ISCSLP-T3: Unsupervised Speech and Language Processing via Topic Models

    In this tutorial, we will present state-of-the-art machine learning approaches 
    for speech and language processing, with a highlight on unsupervised methods 
    for structural learning from unlabeled sequential patterns. In general, 
    speech and language processing involves extensive knowledge of statistical models. 
    We need to design flexible, scalable and robust systems to cope with heterogeneous 
    and nonstationary environments in the era of big data. This tutorial starts from an 
    introduction of unsupervised speech and language processing based on factor analysis 
    and independent component analysis. The unsupervised learning is generalized to a latent 
    variable model which is known as the topic model. The evolution of topic models from 
    latent semantic analysis to hierarchical Dirichlet process, from non-Bayesian parametric 
    models to Bayesian nonparametric models, and from single-layer model to hierarchical 
    tree model shall be surveyed in an organized fashion. The inference approaches based on 
    variational Bayesian and Gibbs sampling are introduced. We will also present several 
    case studies on topic modeling for speech and language applications including language model, 
    document model, retrieval model, segmentation model and summarization model. 
    At last, we will point out new trends of topic models for speech and language processing.

    Organizer: Jen-Tzung Chien
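
    For a hands-on preview of the model family surveyed above, the sketch
    below fits latent Dirichlet allocation on a four-document toy corpus
    using scikit-learn, whose implementation uses variational Bayes
    inference. It is our illustration, not tutorial material.

        # Latent Dirichlet allocation on a toy corpus (illustrative sketch).
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.feature_extraction.text import CountVectorizer

        docs = [
            "speech recognition acoustic model training",
            "acoustic features speech signal processing",
            "document retrieval text summarization model",
            "text segmentation summarization retrieval",
        ]

        # Bag-of-words counts, the standard input representation for LDA.
        vectorizer = CountVectorizer()
        X = vectorizer.fit_transform(docs)

        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

        vocab = vectorizer.get_feature_names_out()
        for k, weights in enumerate(lda.components_):
            top = weights.argsort()[-4:][::-1]        # four strongest words
            print(f"topic {k}:", [vocab[i] for i in top])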


ISCSLP-T4: Deep Learning for Speech Generation and Synthesis

    Deep learning, which can represent high-level abstractions in data with an architecture of 
    multiple non-linear transformation, has made a huge impact on automatic speech recognition (ASR) 
    research, products and services. However, deep learning for speech generation and synthesis 
    (i.e., text-to-speech), which is an inverse process of speech recognition (i.e., speech-to-text), 
    has not yet generated momentum similar to that in ASR. Recently, motivated by the success 
    of Deep Neural Networks in speech recognition, neural network based approaches have 
    been applied successfully to improving the performance of statistical parametric 
    speech generation/synthesis. In this tutorial, we focus on deep learning approaches to the 
    problems in speech generation and synthesis, especially on Text-to-Speech (TTS) synthesis and voice conversion.

    First, we review the current mainstream of statistical parametric speech generation 
    and synthesis, i.e., GMM-HMM based speech synthesis and GMM-based voice conversion, with emphasis 
    on analyzing the major factors responsible for the quality problems in GMM-based voice 
    synthesis/conversion and the intrinsic limitations of a decision-tree based, contextual state 
    clustering and state-based statistical distribution modeling. We then present the latest deep 
    learning algorithms for feature parameter trajectory generation, in contrast to deep learning for 
    recognition or classification. We cover common technologies in Deep Neural Network (DNN) and improved 
    DNN: Mixture Density Networks (MDN), Recurrent Neural Networks (RNN) with Bidirectional Long Short 
    Term Memory (BLSTM) and Conditional RBM (CRBM). Finally, we share our research insights and hands-on 
    experience in building speech generation and synthesis systems based upon deep learning algorithms.

    Organizers: Yao Qian and Frank K. Soong
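
    In its simplest neural form, the trajectory-generation problem above is a
    sequence regression from linguistic features to acoustic features. The
    PyTorch sketch below, with made-up dimensions and random data, shows a
    bidirectional LSTM in that role; the MDN and CRBM variants covered in
    the tutorial would replace the plain linear output head.

        # BLSTM for acoustic feature trajectory prediction (illustrative TTS sketch).
        import torch
        import torch.nn as nn

        class BLSTMAcousticModel(nn.Module):
            """Maps per-frame linguistic features to acoustic feature vectors."""
            def __init__(self, ling_dim=30, hidden=64, acoustic_dim=25):
                super().__init__()
                self.blstm = nn.LSTM(ling_dim, hidden, batch_first=True,
                                     bidirectional=True)
                self.head = nn.Linear(2 * hidden, acoustic_dim)

            def forward(self, x):          # x: (batch, frames, ling_dim)
                h, _ = self.blstm(x)
                return self.head(h)        # (batch, frames, acoustic_dim)

        model = BLSTMAcousticModel()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        # Random stand-ins for aligned (linguistic, acoustic) training sequences.
        x = torch.randn(8, 100, 30)
        y = torch.randn(8, 100, 25)

        for step in range(3):              # a few illustrative updates
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
            print(step, loss.item())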

3-1-11(2014-09-14) INTERSPEECH 2014 Grants and awards

--- INTERSPEECH 2014 - SINGAPORE
--- September 14-18, 2014
--- http://www.interspeech2014.org



The INTERSPEECH 2014 Organizing Committee is glad to announce that
5 travel grants will be offered by INTERSPEECH 2014 sponsors and
that ISCA will offer 55 grants for students and young scientists
and give 3 ISCA Best Student Paper Awards.
Modalities and details about the grants and Best Paper Awards
are given below.

-----------------------------------------------------------------------------------
-- ISCA Grants
-----------------------------------------------------------------------------------

General Information

    ISCA will offer grants to students and young scientists (age<35)
    in order to help them attend the conference. Each grant, equivalent
    to 650 Euros, will be disbursed in cash at the registration desk
    during the conference.

    All grant recipients are strongly encouraged to participate
    in the Student Volunteer Program with a workload of one day
    or 3 sessions for the entire conference. For details - please
    refer to the Student Volunteer Program page.

Conditions

    The grants for participation in INTERSPEECH 2014 will be
    administered by the ISCA Board and awarded, according to the
    available budget, to applicants who provide the information
    listed below:

        * Letter of acceptance of a paper to be presented.
        * Mention of any other funding requested from, awarded by,
          or rejected by other sources
        * Sworn statement that the applicant never received any
          previous grants from ISCA.
       
    Note that preference is given to applicants from areas
    needing greater support.

    The purpose of these grants is to guarantee a representation
    of the diverse scientific topics of the conference and as wide
    a cross-section of the international speech community as possible.
    Exceptionally, researchers in special situations, such as unemployment
    or coming from low-income countries, may also apply.
    Applicants from all countries are eligible.

Application Process

    Applicants should complete the application form with
    additional documents:

        1) Curriculum Vitae with academic records
        2) Complete list of previous publications
           (title, year, co-authors, journal/book/proceedings, pages).
        3) List of the conferences and workshops previously attended.
        4) Letter of acceptance of the paper(s) from the conference organizers
        5) Copy of the original manuscript(s) accepted to the conference
        6) Recommendation letter (email is acceptable)
           from the Director of your Laboratory

    Application form can be downloaded from

http://www.isca-speech.org/iscaweb/images/files/grants/grant_form.txt

    Please submit the application package by sending an email with
    subject heading “ISCA Grants for INTERSPEECH 2014” to the grants
    coordinator Prof. Alan Black:

        grants@isca-speech.org

    before 17 June 2014. Prof. Alan Black will send a notification letter
    to all applicants. For more information on grant applications,
    please refer to

        http://www.isca-speech.org/iscaweb/index.php/grants.

    As there is a limited period for application,
    interested applicants who would like to consider this source
    of support to attend INTERSPEECH 2014 are encouraged to start
    preparing the required documents early.

************************************************************************
*                         Important Dates                              *
************************************************************************
Grant application start after acceptance notification:     June 10, 2014
Grant application deadline:                                June 17, 2014
Notification of grant acceptance:                          June 24, 2014
Early Registration:                                        July 10, 2014
************************************************************************

Registration Procedure

    All grant recipients must be ISCA members to receive the grants.
    All grant recipients will be given free registration to the
    INTERSPEECH main conference that includes the technical program,
    reception, and banquet, but excludes other options (such as
    tutorials). Grant recipients should inform the organizers of their
    Registration ID, which can be obtained from the online registration
    system, before making any payment, so that the organizers can
    waive the registration fee by changing the account settings.

    All grants will be disbursed onsite, during the conference.
    Please see Dr Tong Rong at the registration desk, with the
    notification letter and paper ID, to collect the grant.

    All grant recipients are required to register online
    to confirm their attendance and to make purchases beyond the free
    package. As all accounts have to be settled immediately, the
    organizers will not entertain any request to transfer, ship or send
    money after the conference ends.

    ISCA Board will make the selection and provide the list of grant
    recipients to INTERSPEECH 2014 organizing committee. The committee
    will inform you of the registration procedure. Please contact

        registration@interspeech2014.org,

    if you have any inquiry. Please note that grant recipients need to
    complete the registration by 10 Jul 2014. Late registration fees
    apply if this early registration deadline is not met.


-----------------------------------------------------------------------------------
-- INTERSPEECH 2014 Student Travel Grants
-----------------------------------------------------------------------------------

    INTERSPEECH 2014 would like to express sincere gratitude to Google,
    Samsung and IFLYTEK CO., LTD. for their combined support of
    5 student travel grants: 3 will be selected as the Google grantees,
    1 will be selected as the Samsung grantee, and 1 will be selected
    as the IFLYTEK CO., LTD. grantee. The grantees are selected based
    on the technical quality of their paper(s). The grantees will be
    notified and announced on the INTERSPEECH 2014 website by July 3,
    2014. Grantees will be announced by the INTERSPEECH General Chair
    during the closing ceremony.

    For any inquiry, please contact

        awards@interspeech2014.org.


-----------------------------------------------------------------------------------
ISCA Best Student Paper Awards
-----------------------------------------------------------------------------------

    Each year, ISCA awards 3 best student paper awards at INTERSPEECH
    based on the technical merit of the paper and the presentation at
    the conference. Note that the first author student needs to present
    the paper in person to be considered for the award. The ISCA Best
    Student Paper Award is sponsored by IBM Research. In addition,
    Dilek Hakkani-Tur and Larry Heck from Microsoft Research are
    donating their honorarium from their tutorial “Semantic Web and
    Linked Big Data Resources for Spoken Language Processing” to
    financially sponsor the ISCA Best Student Paper Award. The
    shortlist of Best Student Paper Awards will be announced on the
    INTERSPEECH 2014 website by July 3, 2014 and the final awardees
    will be announced by the president of ISCA during the closing
    ceremony.


    For any inquiry, please contact

        awards@interspeech2014.org.


The INTERSPEECH 2014 Organizing Committee



3-1-12(2014-09-14) INTERSPEECH 2014 News (June 2014)

INTERSPEECH 2014 satellite workshops will take place in Singapore, Phuket (Thailand) and Penang (Malaysia), offering more speech events before or after #is2014: http://www.interspeech2014.org/public.php?page=workshop.html

 

 

Join the 2nd Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction (MA3HMI 2014), an INTERSPEECH 2014 satellite event: https://www.scss.tcd.ie/conferences/MA3HMI/index.html

 

Enter the Blizzard Challenge Workshop 2014 in Singapore, right after INTERSPEECH 2014. More info at http://synsig.org/index.php/Blizzard_Challenge_2014_Workshop

 

After INTERSPEECH 2014, you are welcome to join the 4th Workshop on Child, Computer and Interaction that will be held in Singapore on September 19th, 2014. More info at http://www.wocci.org/

 

Did you know that the 17th Oriental COCOSDA will take place in Phuket, Thailand, before INTERSPEECH 2014? More at http://www.ococosda2014.org/

 

The 2nd Workshop on Speech, Language and Audio in Multimedia (SLAM) is organized in Penang, Malaysia, as an INTERSPEECH 2014 satellite event. http://language.cs.usm.my/SLAM2014/

 

Register for INTERSPEECH 2014 tutorial on “Non-speech acoustic event detection and classification” http://www.interspeech2014.org/public.php?page=tutorial.html#T1

Learn about the “Contribution of MRI to Exploring and Modeling Speech Production” during INTERSPEECH 2014 tutorials http://www.interspeech2014.org/public.php?page=tutorial.html#T2

 

Find out about what's new in “Computational Models for Audiovisual Emotion Perception” in INTERSPEECH 2014 tutorials. http://www.interspeech2014.org/public.php?page=tutorial.html#T3

 

Join the tutorial on “The Art and Science of Speech Feature Engineering” before INTERSPEECH 2014.

http://www.interspeech2014.org/public.php?page=tutorial.html#T4

 

What are the “Recent Advances in Speaker Diarization”? Come and learn about them in INTERSPEECH 2014 tutorials. http://www.interspeech2014.org/public.php?page=tutorial.html#T5

 

“Multimodal Speech Recognition with the AusTalk 3D Audio-Visual Corpus” will be the subject of a tutorial at INTERSPEECH 2014. http://www.interspeech2014.org/public.php?page=tutorial.html#T6

 

Please check out INTERSPEECH 2014 tutorial on “Semantic Web and Linked Big Data Resources for Spoken Language Processing”. http://www.interspeech2014.org/public.php?page=tutorial.html#T7

 

Participate in the tutorial on “Speech and Audio for Multimedia Semantics” during INTERSPEECH 2014. http://www.interspeech2014.org/public.php?page=tutorial.html#T8

 

Join the ISCSLP@INTERSPEECH tutorial on “Adaptation Techniques for Statistical Speech Recognition” at this INTERSPEECH 2014 satellite event: http://www.interspeech2014.org/public.php?page=tutorial.html#ISCSLP-T1

 

Register for the ISCSLP@INTERSPEECH tutorial on “Emotion and Mental State Recognition: Features, Models, System Applications and Beyond”: http://www.interspeech2014.org/public.php?page=tutorial.html#ISCSLP-T2

 

Learn more about “Unsupervised Speech and Language Processing via Topic Models” in an ISCSLP@INTERSPEECH tutorial: http://www.interspeech2014.org/public.php?page=tutorial.html#ISCSLP-T3

 

ISCSLP@INTERSPEECH will host a tutorial on “Deep Learning for Speech Generation and Synthesis”: http://www.interspeech2014.org/public.php?page=tutorial.html#ISCSLP-T4

 

 


3-1-13(2014-09-14) INTERSPEECH 2014 Plenary talks

-----------------------------------------------------
-            INTERSPEECH 2014 - SINGAPORE           -
-               September 14-18, 2014               -
-           http://www.interspeech2014.org          -
-----------------------------------------------------

ISCA, COLIPS and the Organizing Committee of INTERSPEECH 2014
are proud to announce that INTERSPEECH 2014 will feature
five plenary talks by internationally renowned experts.


- keynote speech
  by the ISCA Medallist 2014

- 'Decision Learning in Data Science:
  Where John Nash Meets Social Media'
  by Professor K. J. Ray Liu

- 'Language Diversity: Speech Processing In A Multi-Lingual Context'
  by Dr. Lori Lamel

- 'Sound Patterns In Language'
  by Professor William Shi-Yuan WANG 王士元

- 'Achievements and Challenges of Deep Learning
  From Speech Analysis And Recognition To Language
  And Multimodal Processing'
  by Dr. Li DENG

Details of the keynote speeches and biographies of the presenters are given below.

Looking forward to welcoming you to Singapore,
the organizing committee



************************************************************************
* On Monday, 15th of September                                         *
************************************************************************

The ISCA Medallist 2014 will give a keynote speech.
The name of the Medallist and subject of the talk will be
disclosed on the first day of INTERSPEECH 2014.


************************************************************************
* On Tuesday morning, 16th of September                                *
************************************************************************

Professor K. J. Ray Liu
Department of Electrical and Computer Engineering
University of Maryland, College Park

will give a presentation on:

'Decision Learning in Data Science: Where John Nash Meets Social Media'

Abstract

    With the increasing ubiquity and power of mobile devices,
    as well as the prevalence of social media, more and more
    activities in our daily life are being recorded, tracked,
    and shared, creating the notion of “social media”.
    Such abundant and still growing real life data, known as
    “big data”, provide a tremendous research opportunity in many fields.
    To analyze, learn and understand such user-generated big data,
    machine learning has been an important tool and various
    machine learning algorithms have been developed.
    However, user-generated big data is the outcome of users’ decisions,
    actions and socio-economic interactions, which are highly dynamic.
    Without considering users’ local behaviours and interests, existing
    learning approaches tend to focus on optimizing a global objective
    function at the macroeconomic level, while totally ignoring users’
    local decisions at the microeconomic level. As such, there is a
    growing need to bridge machine/social learning with strategic
    decision making, two traditionally distinct research disciplines,
    to be able to jointly consider both global phenomena and local
    effects and to better understand, model and analyze the newly
    arising issues in emerging social media. In this talk, we present
    the notion of “decision learning”, which can involve users’
    behaviours and interactions by combining learning with strategic
    decision making.
    We will discuss some examples from social media with real data to
    show how decision learning can be used to better analyze users’
    optimal decisions from a user’s perspective, as well as to design
    mechanisms from the system designer’s perspective
    to achieve a desirable outcome.

Biography of the speaker

    Dr. K. J. Ray Liu was named a Distinguished Scholar-Teacher
    of University of Maryland in 2007, where he is Christine Kim
    Eminent Professor of Information Technology.
    He leads the Maryland Signals and Information Group conducting
    research encompassing broad areas of signal processing and
    communications with recent focus on cooperative communications,
    cognitive networking, social learning and decision making,
    and information forensics and security. Dr. Liu has received
    numerous honours and awards including IEEE Signal Processing
    Society 2009 Technical Achievement Award and various best paper
    awards from IEEE Signal Processing, Communications, and Vehicular
    Technology Societies, and EURASIP. A Fellow of the IEEE and AAAS,
    he is recognized by Thomson Reuters as an ISI Highly Cited
    Researcher.
    Dr. Liu was the President of IEEE Signal Processing Society,
    the Editor-in-Chief of IEEE Signal Processing Magazine and
    the founding Editor-in-Chief of EURASIP Journal on Advances
    in Signal Processing. Dr. Liu also received various research
    and teaching recognitions from the University of Maryland,
    including Poole and Kent Senior Faculty Teaching Award,
    Outstanding Faculty Research Award, and Outstanding Faculty
    Service Award, all from A. James Clark School of Engineering;
    and Invention of the Year Award (three times)
    from Office of Technology Commercialization.


************************************************************************
* On Tuesday afternoon, 16th of September                              *
************************************************************************

Dr. Lori Lamel
Senior Research scientist (DR1), LIMSI-CNRS

will give a presentation on

'Language Diversity: Speech Processing In A Multi-Lingual Context'

Abstract

    Speech processing encompasses a variety of technologies
    that automatically process speech for some downstream processing.
    These technologies include identifying the language or dialect
    spoken, the person speaking, what is said and how it is said.
    The downstream processing may be limited to a transcription or
    to a transcription enhanced with additional meta-data, or may
    be used to carry out an action or interpreted within a spoken
    dialogue system or more generally for analytics.  With the
    availability of large spoken multimedia or multimodal data there is
    growing interest in using such technologies to provide structure
    and random access to particular segments. Automatic tools can also
    serve to annotate large corpora for exploitation in linguistic
    studies of spoken language, such as acoustic-phonetics,
    pronunciation variation and diachronic evolution,
    permitting the validation of hypotheses and models.
    In this talk I will present some of my experience with speech
    processing in multiple languages, drawing upon progress in the
    context of several research projects, most recently the Quaero
    program and the IARPA Babel program, both of which address the
    development of technologies in a variety of languages, with the aim
    of highlighting some recent research directions and challenges.

Biography of the speaker

    I am a senior research scientist (DR1) at the CNRS, which I joined as
    a permanent researcher at LIMSI in October 1991.
    I received my Ph.D. degree in Electrical Engineering and Computer Science
    in May 1988 from the Massachusetts Institute of Technology.
    My research activities focus on large vocabulary speaker-
    independent, continuous speech recognition in multiple languages
    with a recent focus on low-resourced languages; lightly and
    unsupervised acoustic model training methods; studies in acoustic-
    phonetics; lexical and pronunciation modelling. I contributed to
    the design, and realization of large speech corpora (TIMIT, BREF,
    TED). I have been actively involved in many research projects, most
    recently leading the activities on speech processing in the OSEO
    Quaero program, and I am currently co-principal investigator for
    LIMSI as part of the IARPA Babel Babelon team led by BBN.
    I served on the Steering committee for Interspeech 2013 as
    co-technical program chair along with Pascal Perrier, and I am now
    serving on the Technical Program Committee of Interspeech 2014.


************************************************************************
* On Wednesday, 17th of September                                      *
************************************************************************

Professor William Shi-Yuan WANG 王士元
Centre for Language and Human Complexity,
Chinese University of Hong Kong
Professor Emeritus, University of California at Berkeley
Honorary Professor, Peking University
Academician, Academia Sinica

will give a presentation about

'Sound Patterns In Language'

Abstract

    In contrast to other species, humans are unique in having developed
    thousands of diverse languages which are not mutually
    intelligible. However, any infant can learn any language with ease,
    because all languages are based upon common biological
    infrastructures of sensori-motor, memorial, and cognitive
    faculties.  While languages may differ significantly in the sounds
    they use, the overall organization is largely the same.
    It is divided into a discrete segmental system for building words
    and a continuous prosodic system for expressing, phrasing,
    attitudes, and emotions. Within this organization, I will discuss a
    class of languages called 'tone languages', which makes special use
    of F0 to build words.  Although the best known of these is Chinese,
    tone languages are found in many parts of the world, and operate on
    different principles. I will also comment on relations between
    sound patterns in language and sound patterns in music, the two
    worlds of sound universal to our species.

Biography of the speaker

    William S-Y. Wang received his early schooling in China, and his
    PhD from the University of Michigan.  He was appointed
    Professor of Linguistics at the University of California at
    Berkeley in 1965, and taught there for 30 years.
    Currently he is in the Department of Electronic Engineering and in
    the Department of Linguistics and Modern Languages of the Chinese
    University of Hong Kong, and Director of the newly established
    Joint Research Centre for Language and Human Complexity. His
    primary interest is the evolution of language from a multi-
    disciplinary perspective.


************************************************************************
* On Thursday, 18th of September                                      *
************************************************************************

Dr. Li DENG
Principal Researcher and Research Manager
Deep Learning Technology Centre,
Microsoft Research, Redmond, USA

will give a presentation on the

'Achievements and Challenges of Deep Learning
From Speech Analysis And Recognition To Language And Multimodal Processing'

Abstract

    Artificial neural networks have been around for over half a century
    and their applications to speech processing have been almost as
    long, yet it was not until 2010 that their real impact was
    made by a deep form of such networks, built upon part of the
    earlier work on (shallow) neural nets and (deep) graphical models
    developed by both speech and machine learning communities. This
    keynote will first reflect on the path to this transformative
    success, sparked by speech analysis using deep learning methods
    on spectrogram-like raw features and then progressing rapidly to
    speech recognition with increasingly larger vocabularies and scale.
    The role of well-timed academic-industrial collaboration will be
    highlighted, as will the advances in big data, big compute, and
    the seamless integration between the application-domain knowledge
    of speech and general principles of deep learning. Then, an
    overview will be given on sweeping achievements of deep learning in
    speech recognition since its initial success in 2010 (as well as in
    image recognition and computer vision since 2012). Such
    achievements have resulted in across-the-board, industry-wide
    deployment of deep learning. The final part of the talk will look
    ahead towards stimulating new challenges of deep learning ---
    making intelligent machines capable of not only hearing (speech)
    and seeing (vision), but also of thinking with a “mind”; i.e.
    reasoning and inference over complex, hierarchical relationships
    and knowledge sources that comprise a vast number of entities
    and semantic concepts in the real world based in part on multi-
    sensory data from the user.  To this end, language and multimodal
    processing --- joint exploitation and learning from text,
    speech/audio, and image/video --- is evolving into a new frontier
    of deep learning, beginning to be embraced by a mixture of research
    communities including speech and spoken language processing,
    natural language processing, computer vision, machine learning,
    information retrieval, cognitive science, artificial intelligence,
    and data/knowledge management. A review of recent published studies
    will be provided on deep learning applied to selected language and
    multimodal processing tasks, with a trace back to the relevant
    early connectionist modelling and neural network literature and
    with future directions in this new exciting deep learning frontier
    discussed and analyzed.


Biography of the speaker

    Li Deng received Ph.D. from the University of Wisconsin-Madison.
    He was a tenured professor (1989-1999) at the University of
    Waterloo, Ontario, Canada, and then joined Microsoft Research,
    Redmond, where he is currently a Principal Research Manager of its
    Deep Learning Technology Centre.
    Since 2000, he has also been an affiliate full professor at the
    University of Washington, Seattle, teaching computer speech
    processing. He has been granted over 60 US or international
    patents, and has received numerous awards and honours
    bestowed by IEEE, ISCA, ASA, and Microsoft including the latest
    IEEE SPS Best Paper Award (2013) on deep neural nets for speech
    recognition. He authored or co-authored 4 books including the
    latest one on Deep Learning: Methods and Applications. He is a
    Fellow of the Acoustical Society of America, a Fellow of the IEEE,
    and a Fellow of the ISCA. He served as the Editor-in-Chief
    for IEEE Signal Processing Magazine (2009-2011), and currently as
    Editor-in-Chief for IEEE Transactions on Audio, Speech and Language
    Processing. His recent research interests and activities have been
    focused on deep learning and machine intelligence applied to
    large-scale text analysis and to speech/language/image
    multimodal processing, advancing his earlier work with
    collaborators on speech analysis and recognition using deep neural
    networks since 2009.


3-1-14(2014-09-14) INTERSPEECH 2014 Singapore

 

 

It is a great pleasure to announce that the 15th edition of the Annual Conference of the International Speech Communication Association (INTERSPEECH) will be held in Singapore during September 14-18, 2014. INTERSPEECH 2014 will bring together the community to celebrate the diversity of spoken languages in the vibrant city state of Singapore.  INTERSPEECH 2014 is proudly organized by the Chinese and Oriental Languages Information Processing Society (COLIPS), the Institute for Infocomm Research (I2R), and the International Speech Communication Association (ISCA).

 

 

 

Ten steps to Singapore

 

Do you want to know more about Singapore?

 

During the next ten months, the organizing committee will introduce you to Singaporean culture through a series of brief newsletters featuring topics related to spoken languages in Singapore. Please stay tuned!

 

 

 

Workshops

 

Submission deadline:  December 1, 2013

 

Satellite workshops related to speech and language research will be hosted in Singapore as well as in Phuket Island, Thailand (1 hr 20 min flight from Singapore) and in Penang, Malaysia (1 hr flight from Singapore).

 

Proposals must be submitted by email to workshops@interspeech2014.org before December 1, 2013. Notification of acceptance and ISCA approval/sponsorship will be announced by January 31, 2014.

 

 

 

Sponsorship and Exhibition

 

The objective of INTERSPEECH 2014 is to foster scientific exchanges in all aspects of Speech Communication sciences with a special focus on the diversity of spoken languages. We are pleased to invite you to take part in this major event as a sponsor. For more information, view the Sponsorship Brochure.

 

 

 

Conference venue

 

INTERSPEECH 2014 main conference will be held in the MAX Atria @ Singapore Expo.

 

 

 

Organizers

 

Lists of the organizing, advisory and technical program committees are available online on the conference website.

 

 

 

Follow us

 

Facebook: ISCA

 

Twitter: @Interspeech2014 follow hash tags: #is2014 or #interspeech2014

 

LinkedIn Interspeech

 

 

 

Contact

 

Conference website: www.interspeech2014.org

 

organizers.interspeech2014@isca-speech.org — For general enquiries

 

sponsorship@interspeech2014.org — For Exhibition & Sponsorship

workshops@interspeech2014.org — For Workshops & Satellite Events

 

 

 

 

 

 


3-1-15(2014-09-14) Interspeech 2014 special session : Speech technologies for Ambient Assisted Living.

Interspeech 2014 special session : Speech technologies for Ambient Assisted Living.

Submission deadline: 24th March 2014

Singapore, 14-18 September 2014

http://www.interspeech2014.org/public.php?page=special_sessions.html#speech-technologies-ambient

This special session focuses on the use of speech technologies for ambient assisted living: the creation of smart spaces and intelligent companions that can preserve the independence, executive function, social communication and security of people with special needs. Currently, speech input for assistive technologies remains underutilized despite its potential to deliver highly informative data and serve as the primary means of interaction with the home. Speech interfaces could replace or augment obtrusive, and sometimes outright inaccessible, conventional computer interfaces. Moreover, the smart home context can support speech communication by providing a number of concurrent information sources (e.g., wearable sensors, home automation sensors, etc.), enabling multimodal communication. In practice, its use remains limited due to challenging real-world conditions, and because conventional speech interfaces can have difficulty with the atypical speech of many users. This, in turn, can be attributed to the lack of abundant speech material and the limited adaptation of these systems to the user. Taking up the challenges of this domain requires a multidisciplinary approach to define users' needs, record corpora in realistic usage conditions, and develop speech interfaces that are robust to both the environment and users' characteristics and able to adapt to specific users.

This special session aims at bringing together researchers in speech and audio technologies with people from the ambient assisted living and assistive technologies communities, to foster awareness between members of either community, discuss problems, techniques and datasets, and perhaps initiate common projects.

Topics of the session will include:

• Assistive speech technology
• Applications of speech technology (ASR, dialogue, synthesis) for ambient assisted living
• Understanding, modelling, or recognition of aged and atypical speech
• Multimodal speech recognition (context-aware ASR)
• Multimodal emotion recognition
• Audio scene and smart space context analysis
• Assessment of speech and language processing within the context of assistive technology
• Speech synthesis and speech recognition for physical or cognitive impairments
• Symbol languages, sign languages, nonverbal communication
• Speech and NLP applied to typing interface applications
• Language modelling for Augmentative and Alternative Communication text entry and speech generating devices
• Deployment of speech and NLP tools in the clinic or in the field
• Linguistic resources; corpora and annotation schemes
• Evaluation of systems and components

Submission instructions: Researchers interested in contributing to this special session are invited to submit a paper according to the regular submission procedure of INTERSPEECH 2014, and to select 'Speech technologies for Ambient Assisted Living' in the special session field of the paper submission form. Please feel free to contact the organisers if you have any questions regarding the special session.

Organizers:

• Michel Vacher, michel.vacher [at] imag.fr, Laboratoire d'Informatique de Grenoble
• François Portet, francois.portet [at] imag.fr, Laboratoire d'Informatique de Grenoble
• Frank Rudzicz, frank [at] cs.toronto.edu, University of Toronto
• Jort F. Gemmeke, jgemmeke [at] amadana.nl, KU Leuven
• Heidi Christensen, h.christensen [at] dcs.shef.ac.uk, University of Sheffield


3-1-16(2014-09-14) Invitation to submit to the INTERSPEECH 2014 workshops (still open for only one of them)

 

To  INTERSPEECH 2014 Authors:

You should have received the INTERSPEECH 2014 notification of paper acceptance by now.

We are glad to let you know that 6 INTERSPEECH Workshops/Satellite Workshops are co-located with INTERSPEECH 2014. Their paper submissions are still open. You are encouraged to submit papers to the workshops and to join the workshops as part of your INTERSPEECH trip.

The details of the workshops are as follows.

ISCSLP 2014: 9th International Symposium on Chinese Spoken Language Processing
Date: 12-14 September 2014
Location: Singapore
Website: http://www.iscslp2014.org/
Submission Deadline: 17 June 2014 (special session - Advances in Human Language Technologies)

WOCCI 2014: 4th Workshop on Child, Computer and Interaction
Date: 19 September 2014
Location: Singapore
Website: http://www.wocci.org/
Submission Deadline: 17 June 2014

O-COCOSDA 2014: 17th Oriental COCOSDA
Date: 10-12 September 2014
Location: Phuket, Thailand
Website: http://www.ococosda2014.org/
Submission Deadline: 20 June 2014

SLAM 2014: 2nd Workshop on Speech, Language and Audio in Multimedia
Date: 11-12 September 2014
Location: Penang, Malaysia
Website: http://language.cs.usm.my/SLAM2014/
Submission Deadline: 20 June 2014

MA3HMI 2014: 2nd Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction
Date: 14 September 2014
Location: Singapore
Website: https://www.scss.tcd.ie/conferences/MA3HMI/index.html
Submission Deadline: 1 July 2014

Blizzard Challenge Workshop 2014
Date: 19 September 2014
Location: Singapore
Website: http://synsig.org/index.php/Blizzard_Challenge_2014_Workshop
Submission Deadline: 9 August 2014


Yours sincerely,
Helen Meng and Bin Ma
Technical Program Chairs, INTERSPEECH 2014

3-1-17(2014-09-14) Special sessions at Interspeech 2014: call for submissions

--- INTERSPEECH 2014 - SINGAPORE

--- September 14-18, 2014

--- http://www.INTERSPEECH2014.org

INTERSPEECH is the world's largest and most comprehensive conference on issues surrounding

the science and technology of spoken language processing, both in humans and in machines.

The theme of INTERSPEECH 2014 is

--- Celebrating the Diversity of Spoken Languages ---

INTERSPEECH 2014 includes a number of special sessions covering interdisciplinary topics

and/or important new emerging areas of interest related to the main conference topics.

Special sessions proposed for the forthcoming edition are:

• A Re-evaluation of Robustness

• Deep Neural Networks for Speech Generation and Synthesis

• Exploring the Rich Information of Speech Across Multiple Languages

• INTERSPEECH 2014 Computational Paralinguistics ChallengE (ComParE)

• Multichannel Processing for Distant Speech Recognition

• Open Domain Situated Conversational Interaction

• Phase Importance in Speech Processing Applications

• Speaker Comparison for Forensic and Investigative Applications

• Text-dependent Short-duration Speaker Verification

• Tutorial Dialogues and Spoken Dialogue Systems

• Visual Speech Decoding

A description of each special session is given below.

For paper submission, please follow the main conference procedure and choose the Special Session track when selecting

your paper area.

Paper submission procedure is described at:

http://www.INTERSPEECH2014.org/public.php?page=submission_procedure.html

For more information, feel free to contact the Special Session Chair,

Dr. Tomi H. Kinnunen, by email at tkinnu [at]cs.uef.fi

----------------------------------------------------------------------------------------------------

Special Session Description

----------------------------------------------------------------------------------------------------

A Re-evaluation of Robustness

The goal of the session is to facilitate a re-evaluation of robust speech

recognition in the light of recent developments. It’s a re-evaluation at two levels:

• a re-evaluation in perspective, brought about by the breakthroughs in performance obtained

with deep neural networks, which prompt a fresh questioning of the role and

contribution of robust feature extraction;

• a literal re-evaluation on common databases, to present and compare the

performance of different algorithms and system approaches to robustness.

Paper submissions are invited on the theme of noise-robust speech recognition;

authors are required to report results on the Aurora 4 database to facilitate cross-comparison

of performance between different techniques.

Recent developments raise interesting research questions that the session aims to help

progress by bringing focus to and exploration of these issues. For example:

1. What role is there for signal processing in creating feature representations to use as

inputs to deep learning, or can deep learning do all the work? (See the sketch after this list.)

2. What feature representations can be automatically learnt in a deep learning architecture?

3. What other techniques can give large improvements in robustness?

4. What techniques don’t work and why?
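
Question 1 often comes down to comparing learned front ends with classic hand-crafted ones. Purely as a point of reference (this is not part of the session's evaluation protocol, and all parameter values are illustrative defaults), the following minimal NumPy sketch computes log-mel filterbank features, a common hand-crafted input representation for deep networks:

    import numpy as np

    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    def log_mel_features(signal, sr=16000, n_fft=512, hop=160, win=400, n_mels=40):
        # Frame the signal, apply a Hamming window, compute the power spectrum.
        n_frames = 1 + (len(signal) - win) // hop
        frames = np.stack([signal[i * hop:i * hop + win] for i in range(n_frames)])
        power = np.abs(np.fft.rfft(frames * np.hamming(win), n_fft)) ** 2
        # Build a triangular mel filterbank between 0 Hz and the Nyquist frequency.
        mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
        bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
        fbank = np.zeros((n_mels, n_fft // 2 + 1))
        for m in range(1, n_mels + 1):
            l, c, r = bins[m - 1], bins[m], bins[m + 1]
            fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
            fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
        # Log compression; one such vector per frame is a typical DNN input.
        return np.log(power @ fbank.T + 1e-10)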

The session organizers wish to encourage submissions that bring insight and understanding to

the issues highlighted above. Authors are requested not only to present absolute performance

of the whole system but also to highlight the contribution made by various components in a

complex system.

Papers that are accepted for the session are encouraged to also evaluate their techniques on new test

data sets (available in July) and submit their results at the end of August.

Session organization

The session will be structured as a combination of

1. Invited talks

2. Oral paper presentations

3. Poster presentations

4. Summary of contributions and results on newly released test sets

5. Discussion

Organizers:

David Pearce, Audience dpearce [at]audience.com

Hans-Guenter Hirsch, Niederrhein University of Applied Sciences, hans-guenter.hirsch [at]hs-niederrhein.de

Reinhold Haeb-Umbach, University of Paderborn, haeb [at]nt.uni-paderborn.de

Michael Seltzer, Microsoft, mseltzer [at]microsoft.com

Keikichi Hirose, The University of Tokyo, hirose [at]gavo.t.u-tokyo.ac.jp

Steve Renals, University of Edinburgh, s.renals [at]ed.ac.uk

Sim Khe Chai, National University of Singapore, simkc [at]comp.nus.edu.sg

Niko Moritz, Fraunhofer IDMT, Oldenburg, niko.moritz [at]idmt.fraunhofer.de

K K Chin, Google, kkchin [at]google.com

Deep Neural Networks for Speech Generation and Synthesis

This special session aims to bring together researchers who work actively on deep neural

networks for speech research, particularly in generation and synthesis, to promote and

to better understand the state-of-the-art DNN research in statistical learning and compare

results with the parametric HMM-GMM model which has been well-established for speech synthesis,

generation, and conversion. DNNs, with their neuron-like structure, can simulate the human speech

production system as a layered, hierarchical, nonlinear and self-organized network.

It can transform linguistic text information into intermediate semantic, phonetic and prosodic

content and finally generate speech waveforms. Many possible neural network architectures or

topologies exist, e.g. feed-forward NNs with multiple hidden layers, stacked RBMs or CRBMs,

and recurrent neural nets (RNNs), which have been applied to speech/image recognition and other applications.

We would like to use this special session as a forum to present up-to-date results on research frontiers,

algorithm development and application scenarios. Particular focus areas will be

parametric TTS synthesis, voice conversion, speech compression, de-noising and speech enhancement.
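
To make the mapping concrete, here is a minimal NumPy sketch of the kind of feed-forward network the session is concerned with: a few fully connected layers transforming a linguistic feature vector into a frame of acoustic parameters. The layer sizes, the random input and the untrained weights are invented purely for illustration; real systems are trained on aligned linguistic/acoustic data and predict vocoder parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    def init_layer(n_in, n_out):
        # Small random weights; a real system would learn these by backpropagation.
        return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

    # Invented sizes: 300 binary linguistic features in, three hidden layers,
    # 60 acoustic parameters (e.g. spectral and excitation features) out.
    sizes = [300, 512, 512, 512, 60]
    layers = [init_layer(a, b) for a, b in zip(sizes[:-1], sizes[1:])]

    def forward(x):
        # Hidden layers use a tanh nonlinearity; the output layer stays linear,
        # since predicting acoustic parameters is a regression problem.
        for i, (W, b) in enumerate(layers):
            x = x @ W + b
            if i < len(layers) - 1:
                x = np.tanh(x)
        return x

    linguistic = rng.integers(0, 2, 300).astype(float)  # one frame's context features
    acoustic_frame = forward(linguistic)                # 60 predicted parameters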

Organizers:

Yao Qian, Microsoft Research Asia, yaoqian [at]microsoft.com

Frank K. Soong, Microsoft Research Asia, frankkps [at]microsoft.com

Exploring the Rich Information of Speech Across Multiple Languages

Spoken language is the most direct means of communication between human beings. However,

speech communication often demonstrates its language-specific characteristics because of,

for instance, the linguistic difference (e.g., tonal vs. non-tonal, monosyllabic vs. multisyllabic)

across languages. Our knowledge of the diversity of speech science across languages is still limited,

including speech perception, linguistic and non-linguistic (e.g., emotion) information, etc.

This knowledge is of great significance to facilitate our design of language-specific applications of

speech techniques (e.g., automatic speech recognition, assistive hearing devices) in the future.

This special session will provide an opportunity for researchers from various communities

(including speech science, medicine, linguistics and signal processing) to stimulate further discussion

and new research in the broad cross-language area, and present their latest research on understanding

the language-specific features of speech science and their applications in the speech communication of

machines and human beings. This special session encourages contributions from all fields of speech science,

e.g., production and perception, but with a focus on presenting the language-specific characteristics

and discussing their implications to improve our knowledge on the diversities of speech science across

multiple languages. Topics of interest include, but are not limited to:

1. characteristics of acoustic, linguistic and language information in speech communication across

multiple languages;

2. diversity of linguistic and non-linguistic (e.g., emotion) information among multiple spoken languages;

3. language-specific speech intelligibility enhancement and automatic speech recognition techniques; and

4. comparative cross-language assessment of speech perception in challenging environments.

Organizers:

Junfeng Li, Institute of Acoustics, Chinese Academy of Sciences, junfeng.li.1979 [at]gmail.com

Fei Chen, The University of Hong Kong, feichen1 [at]hku.hk

INTERSPEECH 2014 Computational Paralinguistics ChallengE (ComParE)

The INTERSPEECH 2014 Computational Paralinguistics ChallengE (ComParE) is an open Challenge

dealing with speaker characteristics as manifested in their speech signal's acoustic properties.

This year, it introduces new tasks with the Cognitive Load Sub-Challenge, the Physical Load

Sub-Challenge, and a Multitask Sub-Challenge. For these Challenge tasks,

the COGNITIVE-LOAD WITH SPEECH AND EGG database (CLSE), the MUNICH BIOVOICE CORPUS (MBC),

and the ANXIETY-DEPRESSION-EMOTION-SLEEPINESS audio corpus (ADES) with high diversity of

speakers and different languages covered (Australian English and German) are provided by the organizers.

All corpora provide fully realistic data in challenging acoustic conditions and feature rich

annotation such as speaker meta-data. They are given with distinct definitions of test,

development, and training partitions, incorporating speaker independence as needed in most

real-life settings. Benchmark results of the most popular approaches are provided, as in previous years.

Transcriptions of the training and development sets will be available to participants. All Sub-Challenges allow contributors

to find their own features with their own machine learning algorithm. However, a standard feature set

will be provided per corpus that may be used. Participants will have to stick to the definition of

training, development, and test sets. They may report on results obtained on the development set,

but have only five trials to upload their results on the test sets, whose labels are unknown to them.

Each participation will be accompanied by a paper presenting the results that undergoes peer-review

and has to be accepted for the conference in order to participate in the Challenge.

The results of the Challenge will be presented in a Special Session at INTERSPEECH 2014 in Singapore.

Further, contributions using the Challenge data or related to the Challenge but not competing within

the Challenge are also welcome.

More information is given also on the Challenge homepage:

http://emotion-research.net/sigs/speech-sig/is14-compare
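
To make the partition discipline described above concrete, the sketch below shows the intended workflow in scikit-learn: fit on the training set, tune on the development set, and commit to a single, carefully chosen test prediction. This is not official Challenge code; the file names and the unweighted-average-recall metric are assumptions for illustration only.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.metrics import recall_score

    # Hypothetical pre-extracted features and labels per official partition.
    X_tr, y_tr = np.load('train_X.npy'), np.load('train_y.npy')
    X_dev, y_dev = np.load('dev_X.npy'), np.load('dev_y.npy')
    X_te = np.load('test_X.npy')   # test labels are withheld by the organizers

    best_C, best_uar = None, -1.0
    for C in [1e-4, 1e-3, 1e-2, 1e-1, 1.0]:
        clf = LinearSVC(C=C).fit(X_tr, y_tr)
        # Unweighted average recall: the mean of the per-class recalls.
        uar = recall_score(y_dev, clf.predict(X_dev), average='macro')
        if uar > best_uar:
            best_C, best_uar = C, uar

    # Retrain on train+dev with the selected C and predict the test set once;
    # only five uploads of test results are allowed, so choose carefully.
    final = LinearSVC(C=best_C).fit(np.vstack([X_tr, X_dev]),
                                    np.concatenate([y_tr, y_dev]))
    np.save('test_predictions.npy', final.predict(X_te))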

Organizers:

Björn Schuller, Imperial College London / Technische Universität München, schuller [at]IEEE.org

Stefan Steidl, Friedrich-Alexander-University, stefan.steidl [at]fau.de

Anton Batliner, Technische Universität München / Friedrich-Alexander-University,

batliner [at]cs.fau.de

Jarek Krajewski, Bergische Universität Wuppertal, krajewsk [at]uni-wuppertal.de

Julien Epps, The University of New South Wales / National ICT Australia, j.epps [at]unsw.edu.au

Multichannel Processing for Distant Speech Recognition

Distant speech recognition in real-world environments is still a challenging problem: reverberation

and dynamic background noise represent major sources of acoustic mismatch that heavily decrease ASR

performance, which, on the contrary, can be very good in close-talking microphone setups.

In this context, a particularly interesting topic is the adoption of distributed microphones for

the development of voice-enabled automated home environments based on distant-speech interaction:

microphones are installed in different rooms and the resulting multichannel audio recordings capture

multiple audio events, including voice commands or spontaneous speech, generated in various locations

and characterized by a variable amount of reverberation as well as possible background noise.

The focus of the proposed special session will be on multichannel processing for automatic speech recognition (ASR)

in such a setting. Unlike other robust ASR tasks, where static adaptation or training with noisy data appreciably

improves performance, the distributed microphone scenario requires full exploitation of multichannel

information to reduce the highly variable dynamic mismatch. To facilitate better evaluation of the proposed

algorithms the organizers will provide a set of multichannel recordings in a domestic environment.

The recordings will include spoken commands mixed with other acoustic events occurring in different

rooms of a real apartment.

The data is being created within the framework of the EC project DIRHA (Distant speech Interaction for Robust

Home Applications)

which addresses the challenges of speech interaction for home automation.

The organizers will release the evaluation package (datasets and scripts) on February 17;

the participants are asked to submit a regular paper reporting speech recognition results

on the evaluation set and comparing their performance with the provided reference baseline.

Further details are available at: http://dirha.fbk.eu/INTERSPEECH2014
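
As one elementary example of the multichannel processing the session targets (this is a toy sketch, not the DIRHA baseline), the code below performs frequency-domain delay-and-sum beamforming: each channel's propagation delay toward an assumed source position is compensated with a linear phase shift and the channels are averaged, so that signals from the steered direction add coherently while diffuse noise is attenuated. The microphone count and delays are invented for illustration.

    import numpy as np

    def delay_and_sum(channels, delays_s, sr=16000):
        """channels: (n_mics, n_samples); delays_s: per-mic delay toward the source."""
        n_mics, n = channels.shape
        spec = np.fft.rfft(channels, axis=1)
        freqs = np.fft.rfftfreq(n, d=1.0 / sr)
        # Compensate each channel's propagation delay with a linear phase shift,
        # then average so signals from the steered direction add coherently.
        steer = np.exp(2j * np.pi * freqs[None, :] * np.asarray(delays_s)[:, None])
        return np.fft.irfft((spec * steer).mean(axis=0), n)

    # Toy example: 4 microphones, hypothetical fractional-sample delays.
    sr = 16000
    mics = np.random.randn(4, sr)                 # one second of noise per channel
    delays = np.array([0.0, 1.2e-4, 2.5e-4, 3.1e-4])
    enhanced = delay_and_sum(mics, delays, sr)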

Organizers:

Marco Matassoni, Fondazione Bruno Kessler, matasso [at]fbk.eu

Ramon Fernandez Astudillo, Instituto de Engenharia de Sistemas e Computadores, ramon.astudillo [at]inesc-id.pt

Athanasios Katsamanis, National Technical University of Athens, nkatsam [at]cs.ntua.gr

Open Domain Situated Conversational Interaction

Robust conversational systems have the potential to revolutionize our interactions with computers.

Building on decades of academic and industrial research, we now talk to our computers, phones,

and entertainment systems on a daily basis. However, current technology typically limits conversational

interactions to a few narrow domains/topics (e.g., weather, traffic, restaurants). Users increasingly want

the ability to converse with their devices over broad web-scale content. Finding something on your PC or

the web should be as simple as having a conversation.

A promising approach to address this problem is situated conversational interaction. The approach leverages

the situation and/or context of the conversation to improve system accuracy and effectiveness.

Sources of context include visual content being displayed to the user, Geo-location, prior interactions,

multi-modal interactions (e.g., gesture, eye gaze), and the conversation itself. For example, while a user

is reading a news article on their tablet PC, they initiate a conversation to dig deeper on a particular topic.

Or a user is reading a map and wants to learn more about the history of events at mile marker 121.

Or a gamer wants to interact with a game’s characters to find the next clue in their quest.

All of these interactions are situated – rich context is available to the system as a source of priors/constraints

on what the user is likely to say.
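
One simple way to turn such situated context into priors on what the user is likely to say, sketched here purely for illustration, is to interpolate word statistics harvested from the visible content with a background language model; the toy data and the interpolation weight are invented.

    from collections import Counter

    def context_biased_unigram(background, screen_text, lam=0.3):
        """Mix a background unigram LM with counts from on-screen text."""
        counts = Counter(screen_text.lower().split())
        total = sum(counts.values())
        vocab = set(background) | set(counts)
        return {w: (1 - lam) * background.get(w, 1e-6)
                   + lam * counts.get(w, 0) / total
                for w in vocab}

    # Invented toy data: a tiny background LM and the article the user is reading.
    background = {'the': 0.05, 'weather': 0.001, 'volcano': 0.00001, 'is': 0.02}
    screen = 'The volcano erupted on Tuesday , the volcano remains active'
    lm = context_biased_unigram(background, screen)
    print(lm['volcano'] > background['volcano'])  # True: context boosts 'volcano'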

This special session will provide a forum to discuss research progress in open domain situated

conversational interactions.

Topics of the session will include:

• Situated context in spoken dialog systems

• Visual/dialog/personal/geo situated context

• Inferred context through interpretation and reasoning

• Open domain spoken dialog systems

• Open domain spoken/natural language understanding and generation

• Open domain semantic interpretation

• Open domain dialog management (large-scale belief state/policy)

• Conversational Interactions

• Multi-modal inputs in situated open domains (speech/text + gesture, touch, eye gaze)

• Multi-human situated interactions

Organizers:

Larry Heck, Microsoft Research, larry [at]ieee.org

Dilek Hakkani-Tür, Microsoft Research, dilek [at]ieee.org

Gokhan Tur, Microsoft Research, gokhan [at]ieee.org

Steve Young, Cambridge University, sjy [at]eng.cam.ac.uk

Phase Importance in Speech Processing Applications

In past decades, the amplitude of the speech spectrum was considered the most important feature in

speech processing applications, while the phase of the speech signal received less

attention. Recently, several findings have demonstrated the importance of phase to the speech and audio processing communities.

Phase estimation alongside amplitude estimation in speech enhancement,

complementary phase-based features in speech and speaker recognition, and phase-aware acoustic

modeling of the environment are the most prominent

works reported, scattered across the different communities of speech and audio processing. These examples suggest

that incorporating the phase information can push the limits of state-of-the-art phase-independent solutions

long employed in different aspects of audio and speech signal processing. This Special Session aims

to explore the recent advances and methodologies to exploit the knowledge of signal phase information in different

aspects of speech processing. Without a dedicated effort to bring together researchers from different communities,

rapid progress in investigating the usefulness of phase in speech processing applications

is difficult to achieve. Therefore, as a first step in this direction, we aim to promote 'phase-aware

speech and audio signal processing' and to form a community of researchers to organize the next steps.

Our initiative is to unify these efforts to better understand the pros and cons of using phase and the degree

of feasibility for phase estimation/enhancement in different areas of speech processing including: speech

enhancement, speech separation, speech quality estimation, speech and speaker recognition,

voice transformation, and speech analysis and synthesis. The goal is to promote phase-based signal

processing, to study its importance, and to share interesting findings from different

speech processing applications.
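
As a minimal illustration of what phase awareness means in practice (a toy sketch, unrelated to any specific method above), the following NumPy code splits a short-time spectrum into magnitude and phase and shows that discarding the phase, as phase-independent methods implicitly do, already changes the re-synthesized frame:

    import numpy as np

    sr = 16000
    t = np.arange(400) / sr
    frame = np.sin(2 * np.pi * 220 * t) * np.hamming(400)  # one windowed frame

    spectrum = np.fft.rfft(frame)
    magnitude = np.abs(spectrum)      # what most systems keep
    phase = np.angle(spectrum)        # what this session is about

    # Exact reconstruction needs both components ...
    exact = np.fft.irfft(magnitude * np.exp(1j * phase), 400)
    assert np.allclose(exact, frame)

    # ... whereas zeroing the phase (keeping only the amplitude spectrum)
    # yields a clearly different time-domain signal.
    zero_phase = np.fft.irfft(magnitude, 400)
    print(np.max(np.abs(zero_phase - frame)))  # large reconstruction error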

Organizers:

Pejman Mowlaee, Graz University of Technology, pejman.mowlaee [at]tugraz.at

Rahim Saeidi, University of Eastern Finland, rahim.saeidi [at]uef.fi

Yannis Stylianou, Toshiba Labs Cambridge UK / University of Crete, yannis [at]csd.uoc.gr

Speaker Comparison for Forensic and Investigative Applications

In speaker comparison, speech/voice samples are compared by humans and/or machines

for use in investigation or in court to address questions that are of interest to the legal system.

Speaker comparison is a high-stakes application that can change people’s lives and it demands the best

that science has to offer; however, methods, processes, and practices vary widely.

These variations are not necessarily for the better and though recognized, are not generally appreciated

and acted upon. Methods, processes, and practices grounded in science are critical for the proper application

(and non-application) of speaker comparison to a variety of international investigative and forensic applications.

This special session will contribute to scientific progress through 1) understanding speaker comparison

for investigative and forensic application (e.g., describe what is currently being done and critically

analyze performance and lessons learned); 2) improving speaker comparison for investigative and forensic

applications (e.g., propose new approaches/techniques, understand the limitations, and identify challenges

and opportunities); 3) improving communications between communities of researchers, legal scholars,

and practitioners internationally (e.g., directly address some central legal, policy, and societal questions

such as allowing speaker comparisons in court, requirements for expert witnesses, and requirements for specific

automatic or human-based methods to be considered scientific); 4) using best practices (e.g., reduction of bias

and presentation of evidence); 5) developing a roadmap for progress in this session and future sessions; and 6)

producing a documented contribution to the field. Some of these objectives will need multiple sessions

to fully achieve and some are complicated due to differing legal systems and cultures.

This special session builds on previous successful special sessions and tutorials in forensic applications

of speaker comparison at INTERSPEECH beginning in 2003. Wide international participation is planned,

including researchers from the ISCA SIGs for the Association Francophone de la Communication Parlée (AFCP)

and the Speaker and Language Characterization (SpLC).

Organizers:

Joseph P. Campbell, PhD, MIT Lincoln Laboratory, jpc [at]ll.mit.edu

Jean-François Bonastre, l'Université d'Avignon, jean-francois.bonastre [at]univ-avignon.fr

Text-dependent Short-duration Speaker Verification

In recent years, speaker verification engines have reached maturity and have been deployed in

commercial applications. The ergonomics of such applications are especially demanding and impose

drastic limitations on speech duration during authentication. A well-known tactic to address

the problem of lack of data due to short duration is to use text-dependency. However, recent breakthroughs

achieved in the context of text-independent speaker verification in terms of accuracy and robustness

do not benefit text-dependent applications. Indeed, the large development datasets required by recent

approaches are not available in the text-dependent context. The purpose of this special session is

to gather the research efforts from both academia and industry toward a common goal of establishing

a new baseline and explore new directions for text-dependent speaker verification.

The focus of the session is on robustness with respect to duration and modeling of lexical information.

To support the development and evaluation of text-dependent speaker verification technologies,

the Institute for Infocomm Research (I2R) has recently released the RSR2015 database,

including 150 hours of data recorded from 300 speakers. The papers submitted to the special

session are encouraged, but not limited, to provide results based on the RSR2015 database

in order to enable comparison of algorithms and methods. For this purpose, the organizers strongly

encourage the participants to report performance on the protocol delivered with the database

in terms of EER and minimum cost (in the sense of the NIST 2008 Speaker Recognition Evaluation).

To get the database, please contact the organizers.

Further details are available at: http://www1.i2r.a-star.edu.sg/~kalee/is2014/tdspk.html
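
Since the organizers ask for results in terms of EER and minimum cost, the following sketch (a convenience illustration, not the official scoring tool) shows one common way to compute the equal error rate from arrays of genuine and impostor trial scores; the toy score distributions are invented.

    import numpy as np

    def equal_error_rate(target_scores, impostor_scores):
        """EER: the point where false-rejection and false-acceptance rates meet."""
        thresholds = np.sort(np.concatenate([target_scores, impostor_scores]))
        frr = np.array([(target_scores < t).mean() for t in thresholds])
        far = np.array([(impostor_scores >= t).mean() for t in thresholds])
        i = np.argmin(np.abs(frr - far))
        return (frr[i] + far[i]) / 2.0

    # Toy scores; in practice these come from trials defined by the RSR2015 protocol.
    rng = np.random.default_rng(1)
    tgt = rng.normal(2.0, 1.0, 1000)    # genuine-speaker trial scores
    imp = rng.normal(0.0, 1.0, 10000)   # impostor trial scores
    print('EER = %.2f%%' % (100 * equal_error_rate(tgt, imp)))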

Organizers:

Anthony LARCHER (alarcher [at]i2r.a-star.edu.sg) Institute for Infocomm Research

Hagai ARONOWITZ (hagaia [at]il.ibm.com) IBM Research – Haifa

Kong Aik LEE (kalee [at]i2r.a-star.edu.sg) Institute for Infocomm Research

Patrick KENNY (patrick.kenny [at]crim.ca) CRIM – Montréal

Tutorial Dialogues and Spoken Dialogue Systems

The growing interest in educational applications that use spoken interaction and dialogue technology has boosted

research and development of interactive tutorial systems, and over recent years advances have been achieved

in both the spoken dialogue and education research communities, with sophisticated speech and multi-modal

technology that allows functionally suitable and reasonably robust applications to be built.

The special session combines spoken dialogue research, interaction modeling, and educational applications,

and brings together the two INTERSPEECH SIG communities: SLaTE and SIGdial. The session focuses

on methods, problems and challenges that are shared by both communities, such as sophistication

of speech processing and dialogue management for educational interaction, integration of the models

with theories of emotion, rapport, and mutual understanding, as well as application of the techniques

to novel learning environments, robot interaction, etc. The session aims to survey issues related

to the processing of spoken language in various learning situations, modeling of the teacher-student

interaction in MOOC-like environments, as well as evaluating tutorial dialogue systems from

the point of view of natural interaction, technological robustness, and learning outcome.

The session encourages interdisciplinary research and submissions related to the special focus

of the conference, 'Celebrating the Diversity of Spoken Languages'.

For further information, see http://junionsjlee.wix.com/INTERSPEECH

Organizers:

Maxine Eskenazi, max+ [at]cs.cmu.edu

Kristiina Jokinen, kristiina.jokinen [at]helsinki.fi

Diane Litman, litman [at]cs.pitt.edu

Martin Russell, M.J.RUSSELL [at]bham.ac.uk

Visual Speech Decoding

Speech perception is a bi-modal process that takes into account both the acoustic (what we hear)

and visual (what we see) speech information. It has been widely acknowledged that visual cues play

a critical role in automatic speech recognition (ASR) especially when audio is corrupted by,

for example, background noise or voices from untargeted speakers, or even inaccessible.

Decoding visual speech is critically important if ASR technologies are to be widely deployed

to realize truly natural human-computer interaction. Despite the advances in acoustic ASR,

visual speech decoding remains a challenging problem.

The special session aims to attract more effort to tackle this important problem. In particular,

we would like to encourage researchers to focus on some critical questions in the area.

We propose four questions as the initiative as follows:

1. How to deal with the speaker dependency in visual speech data?

2. How to cope with the head-pose variation?

3. How to encode temporal information in visual features?

4. How to automatically adapt the fusion rule when the quality of the two individual (audio and visual)

modalities varies? (A toy fusion sketch follows this list.)
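
As a minimal illustration of the fusion problem raised in question 4 (a sketch with invented numbers, not a proposed solution), late fusion commonly combines per-modality class log-likelihoods with a reliability weight that should ideally track the quality of each stream, e.g. the acoustic SNR:

    import numpy as np

    def fuse(audio_loglik, visual_loglik, alpha):
        """Weighted late fusion over class log-likelihoods; alpha in [0, 1]
        is the trust placed in the audio stream (ideally quality-driven)."""
        return alpha * audio_loglik + (1 - alpha) * visual_loglik

    # Invented log-likelihoods for 3 candidate words from each modality.
    audio = np.array([-1.0, -2.5, -3.0])    # audio prefers word 0
    visual = np.array([-4.0, -0.5, -3.5])   # visual prefers word 1

    print(np.argmax(fuse(audio, visual, alpha=0.9)))  # clean audio: word 0 wins
    print(np.argmax(fuse(audio, visual, alpha=0.2)))  # noisy audio: word 1 wins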

Researchers and participants are encouraged to raise more questions related to visual speech decoding.

We expect the session to draw a wide range of attention from both the speech recognition and machine vision

communities to the problem of visual speech decoding.

Organizers:

Ziheng Zhou, University of Oulu, ziheng.zhou [at]ee.oulu.fi

Matti Pietikäinen, University of Oulu, matti.pietikainen [at]ee.oulu.fi

Guoying Zhao, University of Oulu, gyzhao [at]ee.oulu.fi


3-1-18(2015-09-06) Call for Satellite Workshops of INTERSPEECH 2015, Dresden, Germany
**** Call for Satellite Workshops **** 
INTERSPEECH 2015 will be held in the beautiful city of Dresden, Germany, on September 6-10, 2015
The theme is 'Speech beyond Speech - Towards a Better Understanding of the Most Important 
Biosignal'. The Organizing Committee of INTERSPEECH 2015 is now inviting proposals for 
satellite workshops, which will be held in proximity to the main conference. 
The Organizing Committee will work to facilitate the organization of such satellite workshops, 
to stimulate discussion in research areas related to speech and language, at locations in Central 
Europe, and around the same time as INTERSPEECH. We are particularly looking forward to 
proposals from neighboring countries. If you are interested in organizing a satellite workshop, 
or would like a planned event to be listed as an official satellite event, please contact the organizers
or the Satellite Workshop Chair at fmetze@cs.cmu.edu. The Satellite Workshop Coordinator, along 
with the INTERSPEECH team, will help to connect (potential) workshop organizers with local 
contacts in Germany, if needed, and will try to be helpful with logistics such as payment, publicity,
 and coordination with ISCA or other events. Proposals should include:
 * workshop name and acronym 
* organizers' name and contact info 
* website (if already known) 
* date and proposed location of the workshop 
* estimated number of participants 
* a short description of the motivation for the workshop 
* an outline of the program and invited speakers 
* a description of the submission process (e.g. deadlines, target acceptance rate) 
* a list of the scientific committee members 
 
Proposals for satellite workshops should be submitted by email to workshops@interspeech2015.org
by August 31st, 2014. We strongly recommend that organizers also apply for 
ISCA approval/sponsorship, which will greatly facilitate acceptance as an INTERSPEECH satellite 
event. We plan to notify proposers no later than October 30, 2014. If you have any questions about 
whether a potential event would be a good candidate for an INTERSPEECH 2015 satellite workshop, 
feel free to contact the INTERSPEECH 2015 Satellite Workshops Chair. 
 
Sincerely, 
Florian Metze 
Satellite Workshops Chair fmetze@cs.cmu.edu

 

 

3-1-19(2015-09-06) INTERSPEECH 2015 Dresden RFA

Interspeech 2015

 

September 6-10, 2015, Dresden, Germany

www.interspeech2015.org

 

SPECIAL TOPIC

Speech Beyond Speech: Towards a Better Understanding of the Most Important Biosignal

 

MOTIVATION

Speech is the most important biosignal humans can produce and perceive. It is the most common means of human-human communication, and therefore research and development in speech and language are paramount not only for understanding humans, but also for facilitating human-machine interaction. Still, not all characteristics of speech are fully understood, and even fewer are used for developing successful speech and language processing applications. Speech can be exploited to its full potential only if we consider characteristics beyond the traditional (and still important) linguistic content. These characteristics include other biosignals that are directly accessible to human perception, such as muscle and brain activity, as well as articulatory gestures.

 

INTERSPEECH 2015

will therefore be organized around the topic “Speech beyond Speech: Towards a Better Understanding of the Most Important Biosignal”. Our conviction is that spoken language processing can make a substantial leap if it exploits the full information available in the speech signal. By opening our prestigious conference to researchers in other biosignal communities, we expect that substantial advances can be made by discussing ideas and approaches across discipline and community boundaries.

 

 

ORGANIZERS

 

 

 

The following people organize INTERSPEECH 2015:

 

  • General Chair: Sebastian Möller, Telekom Innovation Laboratories, Technische Universität Berlin
  • General Co-Chair & International Outreach: Hermann Ney, Chair of Computer Science 6, RWTH Aachen University
  • Technical Program: Bernd Möbius, Dept. of Computational Linguistics and Phonetics, Universität des Saarlandes; Elmar Nöth, Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg
  • Local Organization: Rüdiger Hoffmann, Senior Professor, Technische Universität Dresden; Ercan Altinsoy and Ute Jekosch, Chair for Communication Acoustics, Technische Universität Dresden
  • Plenaries: Gerhard Rigoll, Institute for Human-Machine Communication, Technische Universität München
  • Special Sessions & Challenges: Anton Batliner, Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg; Björn Schuller, Imperial College London & Technische Universität München
  • Tutorials: Alexander Raake, Telekom Innovation Laboratories, Technische Universität Berlin
  • Satellite Workshops: Florian Metze, Carnegie Mellon University, Pittsburgh
  • Industry Liaison: Jimmy Kunzmann, EML European Media Laboratory GmbH, Heidelberg
  • Sponsoring: Tim Fingscheidt, Institute for Communications Technology, Technische Universität Braunschweig; Claudia Pohlink, Telekom Innovation Laboratories, Deutsche Telekom AG, Berlin
  • Special Events: David Sündermann, Duale Hochschule Baden-Württemberg, Stuttgart
  • Show and Tell: Georg Stemmer, Intel, München
  • Social Events: Petra Wagner, Phonetics and Phonology Workgroup, Universität Bielefeld
  • Exhibits: Reinhold Häb-Umbach, Universität Paderborn
  • Finance: Volker Steinbiss, RWTH Aachen University and Accipio Projects GmbH, Aachen
  • Community Outreach: Norbert Reithinger, DFKI Projektbüro Berlin
  • Publicity: Oliver Jokisch, Hochschule für Telekommunikation Leipzig
  • Publications: Stefan Steidl, Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg
  • Students Affairs: Benjamin Weiss, Telekom Innovation Laboratories, Technische Universität Berlin
  • Web & Tools: Tim Polzehl, Telekom Innovation Laboratories, Technische Universität Berlin
  • Grants: Michael Wagner, University of Canberra
  • PCO: Lisa Hertel, TUBS GmbH TU Berlin ScienceMarketing

LOCATION

The event will be staged in the recently built Maritim International Congress Center (ICD) in Dresden, Germany. As the capital of Saxony, an up-and-coming region located in the former eastern part of Germany, Dresden combines glorious and painful history with a strong dedication to future and technology. It is located in the heart of Europe, easily reached via two airports, and will offer a great deal of history and culture to INTERSPEECH 2015 delegates. Guests are well catered for in a variety of hotels of different standards and price ranges, making INTERSPEECH 2015 an exciting as well as an affordable event.

 

CONTACT

Prof. Dr.-Ing. Sebastian Möller, Quality and Usability Lab, Telekom Innovation Laboratories, TU Berlin

Sekr. TEL-18, Ernst-Reuter-Platz 7, D-10587 Berlin, Germany

Web: www.interspeech2015.org

 

 


3-1-20(2016) INTERSPEECH 2016, San Francisco, CA, USA

Interspeech 2016 will take place

from September 8-12 2016 in San Francisco, CA, USA

General Chair is Nelson Morgan.

 


3-2 ISCA Supported Events
3-2-1(2014-09-08) Seventeenth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2014)

17th International Conference on TEXT, SPEECH and DIALOGUE (TSD 2014) Brno, Czech Republic,

8-12 September 2014

http://www.tsdconference.org/

The conference is organized by the Faculty of Informatics, Masaryk University, Brno, and the Faculty of Applied Sciences, University of West Bohemia, Pilsen. The conference is supported by the International Speech Communication Association.

Venue: Brno, Czech Republic

TSD SERIES

The TSD series has evolved into a prime forum for interaction between researchers in both spoken and written language processing from all over the world. Proceedings of TSD form a book published by Springer-Verlag in their Lecture Notes in Artificial Intelligence (LNAI) series.

TOPICS

Topics of the conference will include (but are not limited to):

    Corpora and Language Resources (monolingual, multilingual, text and spoken corpora, large web corpora, disambiguation, specialized lexicons, dictionaries)

    Speech Recognition (multilingual, continuous, emotional speech, handicapped speaker, out-of-vocabulary words, alternative way of feature extraction, new models for acoustic and language modelling)

    Tagging, Classification and Parsing of Text and Speech (morphological and syntactic analysis, synthesis and disambiguation, multilingual processing, sentiment analysis, credibility analysis, automatic text labeling, summarization, authorship attribution)

    Speech and Spoken Language Generation (multilingual, high fidelity speech synthesis, computer singing)

    Semantic Processing of Text and Speech (information extraction, information retrieval, data mining, semantic web, knowledge representation, inference, ontologies, sense disambiguation, plagiarism detection)

    Integrating Applications of Text and Speech Processing (machine translation, natural language understanding, question-answering strategies, assistive technologies)

    Automatic Dialogue Systems (self-learning, multilingual, question-answering systems, dialogue strategies, prosody in dialogues)

    Multimodal Techniques and Modelling (video processing, facial animation, visual speech synthesis, user modelling, emotions and personality modelling)

Papers on processing of languages other than English are strongly encouraged.

PROGRAM COMMITTEE

    Hynek Hermansky, USA (general chair)
    Eneko Agirre, Spain
    Genevieve Baudoin, France
    Paul Cook, Australia
    Jan Cernocky, Czech Republic
    Simon Dobrisek, Slovenia
    Karina Evgrafova, Russia
    Darja Fiser, Slovenia
    Radovan Garabik, Slovakia
    Alexander Gelbukh, Mexico
    Louise Guthrie, GB
    Jan Hajic, Czech Republic
    Eva Hajicova, Czech Republic
    Yannis Haralambous, France
    Ludwig Hitzenberger, Germany
    Jaroslava Hlavacova, Czech Republic
    Ales Horak, Czech Republic
    Eduard Hovy, USA
    Maria Khokhlova, Russia
    Daniil Kocharov, Russia
    Ivan Kopecek, Czech Republic
    Valia Kordoni, Germany
    Steven Krauwer, The Netherlands
    Siegfried Kunzmann, Germany
    Natalija Loukachevitch, Russia
    Vaclav Matousek, Czech Republic
    Diana McCarthy, United Kingdom
    France Mihelic, Slovenia
    Hermann Ney, Germany
    Elmar Noeth, Germany
    Karel Oliva, Czech Republic
    Karel Pala, Czech Republic
    Nikola Pavesic, Slovenia
    Fabio Pianesi, Italy
    Maciej Piasecki, Poland
    Adam Przepiorkowski, Poland
    Josef Psutka, Czech Republic
    James Pustejovsky, USA
    German Rigau, Spain
    Leon Rothkrantz, The Netherlands
    Anna Rumshisky, USA
    Milan Rusko, Slovakia
    Mykola Sazhok, Ukraine
    Pavel Skrelin, Russia
    Pavel Smrz, Czech Republic
    Petr Sojka, Czech Republic
    Stefan Steidl, Germany
    Georg Stemmer, Germany
    Marko Tadic, Croatia
    Tamas Varadi, Hungary
    Zygmunt Vetulani, Poland
    Pascal Wiggers, The Netherlands
    Yorick Wilks, GB
    Marcin Wolinski, Poland
    Victor Zakharov, Russia

KEYNOTE SPEAKERS

    Ralph Grishman, New York University, USA
    Bernardo Magnini, FBK - Fondazione Bruno Kessler, Italy

FORMAT OF THE CONFERENCE

The conference program will include presentation of invited papers, oral presentations, and poster/demonstration sessions. Papers will be presented in plenary or topic oriented sessions. Social events including a trip in the vicinity of Brno will allow for additional informal interactions.

CONFERENCE PROGRAM

The conference program will include oral presentations and poster/demonstration sessions with sufficient time for discussions of the issues raised.

IMPORTANT DATES

March 15 2014 ............ Submission of abstract

March 22 2014 ............ Submission of full papers

May 15 2014 .............. Notification of acceptance

May 31 2014 .............. Final papers (camera ready) and registration

August 3 2014 ............ Submission of demonstration abstracts

August 10 2014 ........... Notification of acceptance for demonstrations sent to the authors

September 8-12 2014 ...... Conference date

The contributions to the conference will be published in proceedings that will be made available to participants at the time of the conference.

OFFICIAL LANGUAGE

The official language of the conference is English.

ADDRESS

All correspondence regarding the conference should be addressed to

    Ales Horak, TSD 2014
    Faculty of Informatics, Masaryk University
    Botanicka 68a, 602 00 Brno, Czech Republic
    phone: +420-5-49 49 18 63
    fax: +420-5-49 49 18 20
    email: tsd2014@tsdconference.org

The official TSD 2014 homepage is: http://www.tsdconference.org/

LOCATION

Brno is the second largest city in the Czech Republic with a population of almost 400,000 and is the country's judiciary and trade-fair center. Brno is the capital of South Moravia, which is located in the south-east part of the Czech Republic and is known for a wide range of cultural, natural, and technical sights. South Moravia is a traditional wine region. Brno has been a Royal City since 1347 and with its six universities it forms a cultural center of the region. Brno can be reached easily by direct flights from London, Moscow, Saint Petersburg, Eindhoven, Rome and Prague and by trains or buses from Prague (200 km) or Vienna (130 km).


3-2-2(2014-09-08) TSD 2014 CALL FOR DEMONSTRATIONS AND PARTICIPATION
  *********************************************************
	 TSD 2014 - CALL FOR DEMONSTRATIONS AND PARTICIPATION
      *********************************************************

Seventeenth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2014)
              Brno, Czech Republic, 8-12 September 2014
		    http://www.tsdconference.org/


SUBMISSION OF DEMONSTRATION ABSTRACTS

Authors are invited to present actual projects, developed software and
hardware or interesting material relevant to the topics of the
conference. The authors of the demonstrations should provide the
abstract not exceeding one page as plain text. The submission must be
made using the online form available at the conference www pages.

The accepted demonstrations will be presented during a special
Demonstration Session (see the Demo Instructions at
www.tsdconference.org).  Demonstrators can present their contribution
with their own notebook with an Internet connection provided by the
organisers or the organisers can prepare a PC computer with multimedia
support for demonstrators.


IMPORTANT DATES

August  3 2014 ............ Submission of demonstration abstracts
August 10 2014 ............ Notification of acceptance for
                            demonstrations sent to the authors
September 8-12 2014 ....... Conference dates

The demonstration abstracts will not appear in the Proceedings of TSD
2014 but they will be published electronically at the conference website.

KEYNOTE SPEAKERS

    Ralph Grishman, New York University, USA
    Active Learning for Information Extraction

    Bernardo Magnini, FBK - Fondazione Bruno Kessler, Italy
    Entailment graphs for text analytics

    Salim Roukos, IBM, USA
    Recent Progress in Statistical Machine Translation: Algorithms and Applications


The conference is organized by the Faculty of Informatics, Masaryk
University, Brno, and the Faculty of Applied Sciences, University of
West Bohemia, Pilsen.  The conference is supported by International
Speech Communication Association.

Venue: Brno, Czech Republic


TSD SERIES

The TSD series has evolved into a prime forum for interaction between researchers in
both spoken and written language processing from all over the world.
Proceedings of TSD form a book published by Springer-Verlag in their
Lecture Notes in Artificial Intelligence (LNAI) series.  TSD Proceedings
are regularly indexed by Thomson Reuters Conference Proceedings Citation
Index.  Moreover, LNAI series are listed in all major citation databases
such as DBLP, SCOPUS, EI, INSPEC or COMPENDEX.


TOPICS

Topics of the conference will include (but are not limited to):

    Corpora and Language Resources (monolingual, multilingual,
    text and spoken corpora, large web corpora, disambiguation,
    specialized lexicons, dictionaries)

    Speech Recognition (multilingual, continuous, emotional
    speech, handicapped speaker, out-of-vocabulary words,
    alternative way of feature extraction, new models for
    acoustic and language modelling)

    Tagging, Classification and Parsing of Text and Speech
    (morphological and syntactic analysis, synthesis and
    disambiguation, multilingual processing, sentiment analysis,
    credibility analysis, automatic text labeling, summarization,
    authorship attribution)

    Speech and Spoken Language Generation (multilingual, high
    fidelity speech synthesis, computer singing)

    Semantic Processing of Text and Speech (information
    extraction, information retrieval, data mining, semantic web,
    knowledge representation, inference, ontologies, sense
    disambiguation, plagiarism detection)

    Integrating Applications of Text and Speech Processing
    (machine translation, natural language understanding,
    question-answering strategies, assistive technologies)

    Automatic Dialogue Systems (self-learning, multilingual,
    question-answering systems, dialogue strategies, prosody in
    dialogues)

    Multimodal Techniques and Modelling (video processing, facial
    animation, visual speech synthesis, user modelling, emotions
    and personality modelling)

Papers on processing of languages other than English are strongly
encouraged.


PROGRAM COMMITTEE

    Hynek Hermansky, USA (general chair)
    Eneko Agirre, Spain
    Genevieve Baudoin, France
    Paul Cook, Australia
    Jan Cernocky, Czech Republic
    Simon Dobrisek, Slovenia
    Karina Evgrafova, Russia
    Darja Fiser, Slovenia
    Radovan Garabik, Slovakia
    Alexander Gelbukh, Mexico
    Louise Guthrie, GB
    Jan Hajic, Czech Republic
    Eva Hajicova, Czech Republic
    Yannis Haralambous, France
    Ludwig Hitzenberger, Germany
    Jaroslava Hlavacova, Czech Republic
    Ales Horak, Czech Republic
    Eduard Hovy, USA
    Maria Khokhlova, Russia
    Daniil Kocharov, Russia
    Ivan Kopecek, Czech Republic
    Valia Kordoni, Germany
    Steven Krauwer, The Netherlands
    Siegfried Kunzmann, Germany
    Natalija Loukachevitch, Russia
    Vaclav Matousek, Czech Republic
    Diana McCarthy, United Kingdom
    France Mihelic, Slovenia
    Hermann Ney, Germany
    Elmar Noeth, Germany
    Karel Oliva, Czech Republic
    Karel Pala, Czech Republic
    Nikola Pavesic, Slovenia
    Fabio Pianesi, Italy
    Maciej Piasecki, Poland
    Adam Przepiorkowski, Poland
    Josef Psutka, Czech Republic
    James Pustejovsky, USA
    German Rigau, Spain
    Leon Rothkrantz, The Netherlands
    Anna Rumshisky, USA
    Milan Rusko, Slovakia
    Mykola Sazhok, Ukraine
    Pavel Skrelin, Russia
    Pavel Smrz, Czech Republic
    Petr Sojka, Czech Republic
    Stefan Steidl, Germany
    Georg Stemmer, Germany
    Marko Tadic, Croatia
    Tamas Varadi, Hungary
    Zygmunt Vetulani, Poland
    Pascal Wiggers, The Netherlands
    Yorick Wilks, GB
    Marcin Wolinski, Poland
    Victor Zakharov, Russia


FORMAT OF THE CONFERENCE

The conference program will include presentation of invited papers,
oral presentations, and poster/demonstration sessions. Papers will
be presented in plenary or topic oriented sessions.

Social events including a trip in the vicinity of Brno will allow
for additional informal interactions.


OFFICIAL LANGUAGE

The official language of the conference is English.


ACCOMMODATION

The organizing committee will arrange discounts on accommodation in
the 4-star hotel at the conference venue. The current prices of the
accommodation are available at the conference website.


ADDRESS

All correspondence regarding the conference should be
addressed to
    
    Ales Horak, TSD 2014
    Faculty of Informatics, Masaryk University
    Botanicka 68a, 602 00 Brno, Czech Republic
    phone: +420-5-49 49 18 63
    fax: +420-5-49 49 18 20
    email: tsd2014@tsdconference.org

The official TSD 2014 homepage is: http://www.tsdconference.org/


LOCATION

Brno is the second largest city in the Czech Republic with a
population of almost 400,000 and is the country's judiciary and
trade-fair center. Brno is the capital of South Moravia, which is
located in the south-east part of the Czech Republic and is known
for a wide range of cultural, natural, and technical sights.
South Moravia is a traditional wine region. Brno has been a Royal
City since 1347 and with its six universities it forms a cultural
center of the region.

Brno can be reached easily by direct flights from London, Moscow,
and Eindhoven, and by trains or buses from Prague (200 km) or Vienna
(130 km).

For the participants with some extra time, nearby places may
also be of interest.  Local ones include: Brno Castle now called
Spilberk, Veveri Castle, the Old and New City Halls, the
Augustine Monastery with St. Thomas Church and crypt of Moravian
Margraves, Church of St.  James, Cathedral of St. Peter & Paul,
Cartesian Monastery in Kralovo Pole, the famous Villa Tugendhat
designed by Mies van der Rohe along with other important
buildings of between-war Czech architecture.

For those willing to venture out of Brno, Moravian Karst with
Macocha Chasm and Punkva caves, battlefield of the Battle of
three emperors (Napoleon, Russian Alexander and Austrian Franz
- Battle by Austerlitz), Chateau of Slavkov (Austerlitz),
Pernstejn Castle, Buchlov Castle, Lednice Chateau, Buchlovice
Chateau, Letovice Chateau, Mikulov with one of the largest Jewish
cemeteries in Central Europe, Telc - a town on the UNESCO
heritage list, and many others are all within easy reach.

3-2-3(2014-12-07) 2014 Spoken Language Technology Workshop, South Lake Tahoe, NV, USA
2014 Spoken Language Technology Workshop
December 7-10, 2014 - South Lake Tahoe, NV, USA

IEEE - IEEE Signal Processing Society

http://www.slt2014.org - Follow @SLT_2014


The Fifth IEEE Workshop on Spoken Language Technology (SLT 2014) will be held in South Lake Tahoe, Nevada, on Dec 7-10, 2014.


Workshop Technical Theme & Main Goals & Novelties

The main theme of the workshop will be 'machine learning in spoken language technologies'. There will be keynote/guest speakers from the machine learning community.
One of the workshop goals is to increase both intra- and inter-community interactions. Towards this goal, in addition to tutorials and keynote speeches on the main workshop theme and emerging areas, this year's SLT will host special sessions and self-organizing Special Interest Group (SIG) meetings, as well as panel discussions before/during the workshop. If you want to excite the community about a topic and have an impact on the workshop content, this is your chance!

In addition to submitting papers and/or proposing/organizing SIG meetings, you can get involved in workshop organization in different ways: by nominating keynote speakers (nominations@slt2014.org), or by volunteering to be part of workshop organization (volunteers@slt2014.org). Please visit www.slt2014.org for more details.


Call for Papers: Areas/Topics

Submission of papers in all areas of spoken language technology is encouraged, with emphasis on the following topics, including both traditional SLT areas as well as emerging ones:

- Traditional topic coverage: speech recognition and synthesis, spoken language understanding, spoken dialog systems, spoken document summarization, machine translation for speech, question answering from speech, speech data mining, spoken document retrieval, spoken language databases, speaker/language recognition, multimodal processing, human/computer interaction, assistive technologies, natural language processing, educational and healthcare applications.  

- Emerging areas: large scale spoken language understanding, massive data resources for SLT, unsupervised methods in SLT, capturing and representing world knowledge in SLT, web search with SLT, SLT in social networks, multimedia applications, intelligent environments.

Prospective authors are invited to submit full-length, 4-6 page papers, including figures and references, to the SLT 2014 website (slt2014.org).


Important Dates

Paper submission: July 21, 2014
Notification of acceptance: September 5, 2014
Demo submission: September 10, 2014
Notification of Demo acceptance: October 10, 2014
Special Session (SS) proposal submission: June 6, 2014
Notification of SS proposals (1st/2nd decision):  June 15 / September 19, 2014
Special Interest Group (SIG) proposal submission: November 21, 2014
Early registration deadline: October 17, 2014
Workshop: December 7-10, 2014


Supported by:
- Association for Computational Linguistics (ACL)
- International Speech Communication Association (ISCA)

3-3 Other Events
3-3-1(2014-08-17) Summer school on “Tools & Techniques in Geolinguistics”, Univ Kiel, Germany

Summer school on “Tools & Techniques in Geolinguistics”

 

An international summer school dealing with methods and techniques in geolinguistics will take place at the University of Kiel (Germany) on 17-27 August 2014. In this new and interdisciplinary research paradigm, regional varieties are analysed with respect to their linguistic, geographical, social, perceptual and spatial characteristics. With its many Low German dialects and the endangered Frisian language, Northern Germany is a very dynamic language area right on Kiel’s doorstep, and the summer school will take advantage of this. In the case of Low German, students will learn how to collect speech data in the laboratory and in the field, how to compile a text corpus and how to analyse the material multifactorially from a geolinguistic perspective.

 

The summer school not only addresses students and graduates of (German) dialectology and geolinguistics but also provides new insights for everyone interested in speech documentation, field research, phonetics, corpus linguistics, perceptual dialectology, sociolinguistics and typology. International experts in dialectology and geolinguistics will offer a wide range of lectures, interactive workshops and practical exercises. Additionally, participants will be supported by student mentors.

 

The summer school is addressed to about 50 national and international students. Applicants should be postgraduates holding a bachelor's degree (or higher) in linguistics, phonetics, language documentation/typology, German studies or a similar field. Please send the following documents (preferably in PDF format) by email to contact@geoling.uni-kiel.de:

- relevant academic achievements, i.e. certificates and complementary proofs of qualification

- curriculum vitae, including experiences in statistics and speech processing software

- letter of motivation briefly summarizing the linguistic expertise and outlining personal research interests and future aims

- recommendation letter of a supervising academic teacher

 

We offer up to 30 full scholarships that cover all costs for travel and accommodation. Please note in your application whether or not you apply for a scholarship. If possible, all successful applicants from outside Kiel will receive a scholarship. The expenses will be reimbursed after the summer school, but other financial arrangements can be made as well, if necessary.

 

Applications should be sent by email to contact@geoling.uni-kiel.de by 28 February 2014. For further information, please visit our web site on http://www.geoling.uni-kiel.de/en/home

 

Funded by the Volkswagen Foundation, the summer school is organised by Prof Dr Oliver Niebuhr, Dr Christina A Anders as well as Dr Uwe Vosberg and hosted by the Institute for Scandinavian studies, Frisian and General Linguistics along with a research centre on “The areality and sociality of language” (http://www.arealitaet.uni-kiel.de) at the University of Kiel.


3-3-2(2014-08-23) 4th WORKSHOP ON COGNITIVE ASPECTS OF THE LEXICON (CogALex), Dublin, Ireland
1st Call for Papers

4th WORKSHOP ON COGNITIVE ASPECTS OF THE LEXICON (CogALex)
together with a shared task concerning the ‘lexical access-problem’

Pre-conference workshop at COLING 2014 (August 23d, Dublin, Ireland)

Submission deadline: May 25, 2014


Invited speaker: Roberto Navigli (Sapienza University of Rome)

For more information, see: http://pageperso.lif.univ-mrs.fr/~michael.zock/cogalex-webpage/index.html
(note that this page is still under construction)

==============================================================
GOAL

The aim of the workshop is to bring together researchers involved in the construction and application of electronic dictionaries to discuss modifications of existing resources in line with the users' needs, thereby fully exploiting the advantages of the digital form. Given the breadth of the questions, we welcome reports on work from many perspectives, including but not limited to: computational lexicography, psycholinguistics, cognitive psychology, language learning and ergonomics.



MOTIVATION

The way we look at dictionaries (their creation and use) has changed dramatically over the past 30 years. While considered a mere appendix to grammar in the past, they have now moved to centre stage. Indeed, there is hardly any task in NLP which can be conducted without them. Also, rather than being static entities (data-base view), dictionaries are now viewed as dynamic networks, i.e. graphs, whose nodes and links (connection strengths) may change over time. Interestingly, properties concerning topology, clustering and evolution known from other disciplines (society, economy, human brain) also apply to dictionaries: everything is linked, hence accessible, and everything is evolving. Given these similarities, one may wonder what we can learn from these disciplines.

In this 4th edition of the CogALex workshop we therefore also invite scientists working in these fields, with the goal of broadening the picture, i.e. gaining a better understanding of the mental lexicon and integrating these findings into our dictionaries in order to support navigation. Given recent advances in the neurosciences, it appears timely to seek inspiration from neuroscientists studying the human brain. There is also a lot to be learned from other fields studying graphs and networks, even if their object of study is something other than language, for example biology, economy or society.


TOPICS OF INTEREST

This workshop is about possible enhancements of lexical resources and electronic dictionaries. To lay the groundwork for the next generation of such resources we invite researchers involved in the building of such tools. The idea is to discuss modifications of existing resources by taking the users’ needs and knowledge states into account, and to capitalize on the advantages of the digital media. For this workshop we solicit papers including but not limited to the following topics, each of which can be considered from various points of view: linguistics, neuro- or psycholinguistics (tip-of-the-tongue problem, associations), network-related sciences (sociology, economy, biology), mathematics (vector-based approaches, graph theory, the small-world problem), etc.


1) Analysis of the conceptual input of a dictionary user

  • What does a language producer start from (bag of words)?
  • What is in the authors' minds when they are generating a message and looking for a word?
  • What does it take to bridge the gap between this input and the desired output (target word)?


2) The meaning of words

  • Lexical representation (holistic, decomposed)
  • Meaning representation (concept based, primitives)
  • Revelation of hidden information (distributional semantics, latent semantics, vector-based approaches: LSA/HAL)
  • Neural models, neurosemantics, neurocomputational theories of content representation.


3) Structure of the lexicon

  • Discovering structures in the lexicon: formal and semantic point of view (clustering, topical structure)
  • Creative ways of getting access to and using word associations (reading between the lines, subliminal communication);
  • Evolution, i.e. dynamic aspects of the lexicon (changes of weights)
  • Neural models of the mental lexicon (distribution of information concerning words, organisation of words)


4) Methods for crafting dictionaries or indexes

  • Manual, automatic or collaborative building of dictionaries and indexes (crowd-sourcing, serious games, etc.)
  • Impact and use of social networks (Facebook, Twitter) for building dictionaries, for organizing and indexing the data (clustering of words), and for tracking navigational strategies, etc.
  • (Semi-) automatic induction of the link type (e.g. synonym, hypernym, meronym, association, collocation, ...)
  • Use of corpora and patterns (data-mining) for getting access to words, their uses, combinations and associations


5) Dictionary access (navigation and search strategies, interface issues,...)

  • Search based on sound, meaning or associations
  • Search (simple query vs multiple words)
  • Context-dependent search (modification of users’ goals during search)
  • Recovery
  • Navigation (frequent navigational patterns or search strategies used by people)
  • Interface problems, data-visualisation

6) Dictionary applications

  • Methods supporting vocabulary learning (for example, creation of data-bases showing words in various contexts)
  • Tools for supporting human translation


IMPORTANT DATES

Deadline for paper submissions: May 25, 2014
Notification of acceptance: June 15, 2014
Camera-ready papers due: July 7, 2014
Workshop date: August 23, 2014


SUBMISSION INFORMATION

Papers should follow the COLING main conference formatting details (http://www.coling-2014.org/call-for-papers.php) and should be submitted as a PDF-file via the START workshop manager at https://www.softconf.com/coling2014/WS-1/ (you must register first). 

Contributions can be short or long papers. Short paper submission must describe original and unpublished work without exceeding six (6) pages (references included). Characteristics of short papers include: a small, focused contribution; work in progress; a negative result; a piece of opinion; an interesting application nugget. Long paper submissions must describe substantial, original, completed and unpublished work without exceeding twelve (12) pages (references included).

Reviewing will be double blind, so the papers should not reveal the authors' identity. Accepted papers will be published in the workshop proceedings.

For further details see: http://pageperso.lif.univ-mrs.fr/~michael.zock/cogalex-webpage/index.html

SHARED TASK

We invite participation in a shared task devoted to the problem of lexical access in language production, with the aim of providing a quantitative comparison between different systems.

Motivation of shared task

The quality of a dictionary depends not only on coverage, but also on the accessibility of the information. That is, a crucial point is dictionary access. Access strategies vary with the task (text understanding vs. text production) and the knowledge available at the very moment of consultation (words, concepts, speech sounds). Unlike readers who look for meanings, writers start from them, searching for the corresponding words. While paper dictionaries are static, permitting only limited strategies for accessing information, their electronic counterparts promise dynamic, proactive search via multiple criteria (meaning, sound, related words) and via diverse access routes. Navigation takes place in a huge conceptual lexical space, and the results are displayable in a multitude of forms (e.g. as trees, as lists, as graphs, or sorted alphabetically, by topic, by frequency).

To bring some structure into this multitude of possibilities, the shared task will concentrate on a crucial subtask, namely multiword association. This novel type of shared task will allow quantitative comparisons between different systems.

What we mean by this in the context of this workshop is the following. Suppose we were looking for a word expressing the following ideas: 'superior dark coffee made of beans from Arabia', but could not remember the intended word 'mocha'. Since people always remember something concerning the elusive word, it would be nice to have a system accepting this kind of input and then proposing a number of candidates for the target word. Given the above example, we might enter 'dark', 'coffee', 'beans', and 'Arabia', and the system would be supposed to come up with a list of associated words such as 'mocha', 'espresso', or 'cappuccino'.


Procedure

The participants will receive lists of five given words (primes) such as 'circus', 'funny', 'nose', 'fool', and 'fun' and are supposed to compute the word most closely associated with all of them. In this case, the word 'clown' would be the expected answer. Here are some more examples:

given words: gin, drink, scotch, bottle, soda    
expected answer: whisky

 

given words: wheel, driver, bus, drive, lorry
expected answer: car

given words: neck, animal, zoo, long, tall
expected answer: giraffe

given words: holiday, work, sun, summer, abroad    
expected answer: vacation

given words: home, garden, door, boat, chimney
expected answer: house

given words: blue, cloud, stars, night, high
expected answer: sky


We will provide a training set of 2000 sets of five input words (multiword stimuli), together with the expected target words (associative responses). The participants will have several weeks to train their systems on this data. After the training phase, we will release a test set containing another 2000 sets of five input words, but without providing the expected target words.

Participants will have five days to run their systems on the test data, thereby predicting the target words. For each system, we will compare the results to the expected target words and compute an accuracy. The participants will be invited to submit a paper describing their approach and the results.

For the participating systems, we will distinguish two categories: (1) Unrestricted systems. They can use any kind of data to compute their results. (2) Restricted systems: These systems are only allowed to draw on the freely available ukWaC corpus (comprising 2 billion words) in order to extract information on word associations. Participants are allowed to compete in either category or in both.
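
To give a concrete idea of how such a system could work, here is a minimal baseline sketch in Python. It is purely our illustration, not part of the official task: it counts sentence-level co-occurrences in a corpus and ranks candidate words by their summed co-occurrence with all five primes. The toy corpus, function names and scoring scheme are assumptions; participants are free to use any technique, and restricted systems would derive their statistics from ukWaC instead.

# Minimal baseline sketch for the multiword association task (an assumption,
# not the official method): rank candidates by their summed sentence-level
# co-occurrence with all five given primes.
from collections import Counter, defaultdict

def train_cooccurrence(sentences):
    # Count how often each pair of distinct words shares a sentence.
    cooc = defaultdict(Counter)
    for sentence in sentences:
        words = set(sentence.lower().split())
        for w in words:
            for v in words:
                if w != v:
                    cooc[w][v] += 1
    return cooc

def predict_targets(primes, cooc, n_best=3):
    # Score every candidate by its total co-occurrence with the primes,
    # excluding the primes themselves, and return the n best candidates.
    primes = {p.lower() for p in primes}
    scores = Counter()
    for p in primes:
        for candidate, count in cooc[p].items():
            if candidate not in primes:
                scores[candidate] += count
    return [word for word, _ in scores.most_common(n_best)]

# Toy usage (hypothetical data); with a realistic corpus such as ukWaC,
# the primes from the example above should rank 'clown' near the top.
toy_corpus = [
    'the clown at the circus had a funny red nose',
    'a clown is a fool whose job is fun',
    'the circus clown made fun of the fool',
]
print(predict_targets(['circus', 'funny', 'nose', 'fool', 'fun'],
                      train_cooccurrence(toy_corpus)))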


Schedule for Shared Task

  • Training Data Release:  March 25, 2014
  • Test Data Release:  May 5, 2014
  • Final Results:  May 9, 2014
  • Deadline for Paper Submission:  May 25, 2014
  • Reviewers' feedback:  June 15, 2014
  • Camera-Ready Version:  July 7, 2014
  • Workshop date:  August 23, 2014

All data releases to be found on the workshop website.


PROGRAMME COMMITTEE

  • Bel Enguix, Gemma (LIF-CNRS, France, and GRLMC, Tarragona, Spain)
  • Chang, Jason (National Tsing Hua University, Taiwan)
  • Cook, Paul (University of Melbourne, Australia)
  • Cristea, Dan (University A.I. Cuza, Iasi, Romania)
  • De Deyne, Simon (Experimental Psychology, Leuven, Belgium, and Adelaide, Australia)
  • De Melo, Gerard (IIIS, Tsinghua University, Beijing, China)
  • Ferret, Olivier (CEA LIST, Gif-sur-Yvette, France)
  • Fontenelle, Thierry (CDT, Luxembourg)
  • Gala, Nuria (LIF-CNRS, Aix-Marseille University, Marseille, France)
  • Granger, Sylviane (Université Catholique de Louvain, Belgium)
  • Grefenstette, Gregory (Inria, Saclay, France)
  • Hirst, Graeme (University of Toronto, Canada)
  • Hovy, Eduard (CMU, Pittsburgh, USA)
  • Hsieh, Shu-Kai (National Taiwan University, Taipei, Taiwan)
  • Huang, Chu-Ren (Hong Kong Polytechnic University, China)
  • Joyce, Terry (Tama University, Kanagawa-ken, Japan)
  • Lapalme, Guy (RALI, University of Montreal, Canada)
  • Lenci, Alessandro (CNR, University of Pisa, Italy)
  • L'Homme, Marie Claude (University of Montreal, Canada)
  • Mihalcea, Rada (University of Texas, USA)
  • Navigli, Roberto (Sapienza, University of Rome, Italy)
  • Pirrelli, Vito (ILC, Pisa, Italy)
  • Polguère, Alain (ATILF-CNRS, Nancy, France)
  • Rapp, Reinhard (LIF-CNRS, France, and Mainz, Germany)
  • Rosso, Paolo (NLEL, Universitat Politècnica de València, Spain)
  • Schwab, Didier (LIG-GETALP, Grenoble, France)
  • Serasset, Gilles (IMAG, Grenoble, France)
  • Sharoff, Serge (University of Leeds, UK)
  • Su, Jun-Ming (University of Tainan, Taiwan)
  • Tiberius, Carole (Institute for Dutch Lexicology, The Netherlands)
  • Tokunaga, Takenobu (TITECH, Tokyo, Japan)
  • Tufis, Dan (RACAI, Bucharest, Romania)
  • Valitutti, Alessandro (Helsinki Institute of Information Technology, Finland)
  • Wandmacher, Tonio (IRT SystemX, Saclay, France)
  • Zock, Michael (LIF-CNRS, Marseille, France; currently University of Tainan, Taiwan)



WORKSHOP ORGANIZERS and CONTACT PERSONS

  • Michael Zock (LIF-CNRS, Marseille, France), michael.zock AT lif.univ-mrs.fr
  • Reinhard Rapp (University of Aix-Marseille, France, and University of Mainz, Germany), reinhardrapp AT gmx.de
  • Chu-Ren Huang (The Hong Kong Polytechnic University, Hong Kong), churen.huang AT inet.polyu.edu.hk


For more details see:

http://pageperso.lif.univ-mrs.fr/~michael.zock/cogalex-webpage/index.html
(again, this page is still under construction)

Back  Top

3-3-3(2014-08-23) CfP 25th International Conference on Computational Linguistics (COLING 2014)

 

 

             1st (Preliminary) Call for Papers - Coling 2014

 

The 25th International Conference on Computational Linguistics

August 23 - 29, 2014

Dublin, Ireland

 

http://www.coling-2014.org

 

The International Committee on Computational Linguistics (ICCL) is pleased to announce the 25th International Conference on Computational Linguistics (Coling 2014), at Dublin City University (DCU, Dublin, Ireland, European Union). DCU is a young, dynamic and ambitious university with a mission to transform lives and societies through education, research and innovation. Most of the local organizers are from CNGL, Ireland’s Centre for Global Intelligent Content (formerly the Centre for Next Generation Localization), which embodies the leading position of Ireland in the global localization/internationalization business, a strong focus on language technologies including machine translation, computational linguistics and natural language processing, as well as on intelligent management, search, retrieval, transformation and adaptation of content.

Coling will cover a broad spectrum of technical areas related to natural language and computation. The conference will include full papers (presented as oral presentations or posters), demonstrations, tutorials, and workshops.

 

TOPICS OF INTEREST

 

Coling 2014 solicits papers and demonstrations on original and unpublished research on the following topics, including, but not limited to:

 

- pragmatics, semantics, syntax, grammars and the lexicon;

- cognitive, mathematical and computational models of language processing;

- models of communication by language; 

- lexical semantics and ontologies;  

- word segmentation, tagging and chunking;

- parsing, both syntactic and deep;

- generation and summarization;

- paraphrasing, textual entailment and question answering;

- speech recognition, text-to-speech and spoken language understanding;

- multimodal and natural language interfaces and dialogue systems;

- information retrieval, information extraction and knowledge base linking;

- machine learning for natural language;

- modeling of discourse and dialogue;

- sentiment analysis, opinion mining and social media;

- multilingual processing, machine translation and translation aids;              

- applications, tools and language resources;

- system evaluation methodology and metrics.

 

In all relevant areas, we encourage authors to include analysis of the influence of theories (intuitions, methodologies, insights, ...) on technologies (computational algorithms, methods, tools, data, ...) and/or contributions of technologies to theory development. In technologically oriented papers, we encourage in-depth analysis and discussion of errors made in the experiments described, if possible linking them to the presence or absence of linguistically-motivated features. Contributions that display and rigorously discuss future potential, even if not (yet) attested in standard evaluation, are welcome.

 

PAPER REQUIREMENTS

 

Papers should describe original work; they should emphasize completed work or well-advanced ongoing research rather than intended work, and should indicate clearly the state of completion of the reported results. Wherever appropriate, concrete evaluation results should be included.

 

Submissions will be judged on correctness, originality, technical strength, significance and relevance to the conference, and interest to the attendees.

Submissions presented at the conference should mostly contain new material that has not been presented at any other meeting with publicly available proceedings. Papers that are being submitted in parallel to other conferences or workshops must indicate this on the title page, as must papers that contain significant overlap with previously published work.

 

REVIEWING

 

Reviewing will be double blind. It will be managed by an international Conference Program Committee consisting of Program Chairs, members of the Scientific Advisory Board and Area Chairs, who will be assisted by invited reviewers.

 

INSTRUCTIONS FOR AUTHORS

 

For Coling 2014, there will be one category of research papers only. All of the papers will be included in conference proceedings, this time in electronic form only.

 

The maximum submission length is 8 pages (A4), plus two extra pages for references. Authors of accepted papers will be given additional space in the camera-ready version to accommodate changes stemming from reviewers' comments. Authors can indicate their preference for presentation mode (i.e. oral or poster presentation) in the submission form; the reviewers will recommend an appropriate mode of presentation to the program committee, which will then decide. There will be no distinction in the proceedings between research papers presented orally and those presented as posters.

 

Papers shall be submitted in English, anonymized with regard to the authors and/or their institution (no author-identifying information on the title page nor anywhere in the paper); this includes the referencing style. Papers must conform to the official Coling 2014 style guidelines, which will be available on the Coling 2014 website. Submission and reviewing will be managed online by the START system. The only accepted format for submitted papers is Adobe PDF.

 

Submissions must be uploaded on the START system by the submission deadlines; submissions after that time will not be reviewed. To minimize network congestion, we request authors to upload their submissions as early as possible.

 

 

Important Notice

 

[1] In order to allow participants to become acquainted with the published papers ahead of time, which in turn should facilitate discussions at Coling 2014, we have set the official publication date two weeks before the conference, i.e., on August 11, 2014. On that day, the papers will be available online for all participants to download, print and read. If your employer is taking steps to protect intellectual property related to your paper, please inform them about this timing.

 

[2] While submissions are anonymous, we strongly encourage authors to plan for depositing language resources and other data, as well as tools used and/or developed for the experiments described in the papers, if the paper is accepted. In this respect, we encourage authors to deposit resources and tools in available open-access repositories of language resources and/or repositories of tools (such as META-SHARE, Clarin, ELRA, LDC or AFNLP/COCOSDA for data, and github, sourceforge, CPAN and similar for software and tools) and to refer to them instead of submitting them with the paper, even though submission with the paper will also remain possible (through the START system). The details will be given on the submission site for camera-ready versions of accepted papers.

 

[3] There will be a separate call for demonstrations in February. Accepted papers on demonstrations will also be included in the proceedings.

 

 

IMPORTANT DATES

 

January, 2014: Opening of the submission website

March 21, 2014: Paper submission deadline

May 9-12, 2014: Author response period

May 23, 2014: Author notification

June 6, 2014: Camera-ready PDF due

August 11, 2014: Official paper publication date

August 25-29, 2014: Main conference

 

PROGRAM COMMITTEE

 

Program Committee Co-chairs

 

Junichi Tsujii (Microsoft Research, China)

Jan Hajic (Charles University in Prague, Czech Republic)

 

Scientific Advisory Board members

 

Ralph Grishman (New York University, USA)

Yuji Matsumoto (Nara Institute of Science and Technology, Japan)

Joakim Nivre (Uppsala University, Sweden)

Michael Picheny (IBM T. J. Watson Research Center, USA)

Donia Scott (University of Sussex, United Kingdom)

Chengqing Zong (Chinese Academy of Sciences, China)

 

Area Chairs

 

1. Linguistic Issues in CL and NLP

Emily M. Bender  (University of Washington, USA)

Eva Hajicova (Charles University in Prague, Czech Republic)

Igor Boguslavsky (Universidad Politecnica de Madrid, Spain)

 

2. Machine Learning for CL and NLP

Jason Eisner (Johns Hopkins University, USA)

Yoshimasa Tsuruoka (University of Tokyo, Japan)

 

3. Cognitive Issues in CL and NLP

Philippe Blache (CNRS & Aix-Marseille Université, France)

Ted Gibson (MIT, USA)

 

4.  Morphology, Word Segmentation, Tagging and Chunking

Reut Tsarfaty (Weizmann Institute of Science, Israel)

Yue Zhang (Singapore University of Technology and Design, Singapore)

 

5. Syntax, Grammar Induction, Syntactic and Semantic Parsing

Laura Kallmeyer (Heinrich-Heine-Universität, Germany)

Ryan McDonald (Google, USA)

 

6. Lexical Semantics and Ontologies

Chu-Ren Huang (Hong Kong Polytechnic University, Hong Kong)

Alessandro Oltramari (Carnegie Mellon University, USA)

 

7. Semantic Processing, Distributional Semantics and Compositional Semantics

Stephen Clark (University of Cambridge, UK)

Alessandro Lenci (University of Pisa, Italy)

 

8. Modeling of Discourse and Dialogue

Nicholas Asher (CNRS & Université Paul Sabatier, France)

Marilyn Walker (University of California Santa Cruz, USA)

 

9. Natural Language Generation and Summarization

Albert Gatt (University of Malta, Malta)

Advaith Siddharthan (University of Aberdeen, UK)

 

10. Paraphrasing and Textual Entailment

Ido Dagan (Bar Ilan University, Israel)

Kentaro Inui (Tohoku University, Japan)

 

11. Sentiment Analysis, Opinion Mining and Social Media

Rada Mihalcea (University of Michigan, USA)

Bing Liu (University of Illinois at Chicago, USA)

 

12.  Information Retrieval and Question Answering

Gareth Jones (Dublin City University, Ireland)

Siddharth Patwardhan (IBM Research, USA)

 

13. Information Extraction and Database Linking

James Curran (University of Sydney, Australia)

Seung-won Hwang (POSTECH, Korea)

 

14. Applications

Srinivas Bangalore (AT&T Labs-Research, USA)

Heyan Huang (Beijing Institute of Technology, China)

Guillaume Jacquet (Joint Research Centre, Italy)

 

15. Multimodal and Natural Language Interfaces and Dialog Systems

Kristiina Jokinen  (University of Helsinki, Finland)

David Traum (University of Southern California, USA)

 

16. Speech Recognition, Text-To-Speech, Spoken Language Understanding

Nick Campbell  (Trinity College Dublin, Ireland)

Alex Potamianos (Technical University of Crete, Greece)

 

17. Machine Translation

Philipp Koehn (University of Edinburgh, UK / Johns Hopkins University, USA)

Chris Quirk (Microsoft Research, USA)

Tiejun Zhao (Harbin Institute of Technology, China)

 

18. Resources

Pushpak Bhattacharyya (IIT Bombay, India)

Nicoletta Calzolari (ILC-CNR, Pisa, Italy)

Martha Palmer (University of Colorado, USA)

 

19. Languages with less resources

Steven Bird (University of Melbourne, Australia)

Mark Liberman (University of Pennsylvania, USA)

Rajeev Sangal (IIT Banaras Hindu University, India)

Koenraad De Smedt (University of Bergen, Norway)

 

20. Software and Tools

Jesús Cardeñosa (Universidad Politecnica de Madrid, Spain)

 

Jing-Shin Chang (National Chi Nan University, Taiwan)





Back  Top

3-3-4(2014-08-23) SHARED TASK ON THE LEXICAL ACCESS PROBLEM (with COGALEX)
SHARED TASK ON THE LEXICAL ACCESS PROBLEM
(COMPUTING ASSOCIATIONS WHEN BEING GIVEN MULTIPLE STIMULI)


In the framework of the 4th Workshop on Cognitive Aspects of the Lexicon (CogALex) to be held at COLING 2014, we invite participation in a shared task devoted to the problem of lexical access in language production, with the aim of providing a quantitative comparison between different systems.

 
MOTIVATION

The quality of a dictionary depends not only on coverage, but also on the accessibility of the information. That is, a crucial point is dictionary access. Access strategies vary with the task (text understanding vs. text production) and the knowledge available at the very moment of consultation (words, concepts, speech sounds). Unlike readers who look for meanings, writers start from them, searching for the corresponding words. While paper dictionaries are static, permitting only limited strategies for accessing information, their electronic counterparts promise dynamic, proactive search via multiple criteria (meaning, sound, related words) and via diverse access routes. Navigation takes place in a huge conceptual lexical space, and the results are displayable in a multitude of forms (e.g. as trees, as lists, as graphs, or sorted alphabetically, by topic, by frequency).

To bring some structure into this multitude of possibilities, the shared task will concentrate on a crucial subtask, namely multiword association. What we mean by this in the context of this workshop is the following. Suppose we were looking for a word expressing the following ideas: 'superior dark coffee made of beans from Arabia', but could not remember the intended word 'mocha' due to the tip-of-the-tongue problem. Since people always remember something concerning the elusive word, it would be nice to have a system accepting this kind of input and then proposing a number of candidates for the target word. Given the above example, we might enter 'dark', 'coffee', 'beans', and 'Arabia', and the system would be supposed to come up with one or several associated words such as 'mocha', 'espresso', or 'cappuccino'.

 
TASK DEFINITION

The participants will receive lists of five given words (primes) such as 'circus', 'funny', 'nose', 'fool', and 'fun' and are supposed to compute the word most closely associated with all of them. In this case, the word 'clown' would be the expected response. Here are some more examples:

   given words:  gin, drink, scotch, bottle, soda
   target word:  whisky

   given words:  wheel, driver, bus, drive, lorry
   target word:  car

   given words:  neck, animal, zoo, long, tall
   target word:  giraffe

   given words:  holiday, work, sun, summer, abroad
   target word:  vacation

   given words:  home, garden, door, boat, chimney
   target word:  house

   given words:  blue, cloud, stars, night, high
   target word:  sky

We will provide a training set of 2000 sets of five input words (multiword stimuli), together with the expected target words (associative responses). The participants will have about five weeks to train their systems on this data. After the training phase, we will release a test set containing another 2000 sets of five input words, but without providing the expected target words.

Participants will have five days to run their systems on the test data, thereby predicting the target words. For each system, we will compare the results to the expected target words and compute an accuracy. The participants will be invited to submit a paper describing their approach and their results.
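
As a minimal sketch of this evaluation (our illustration, not an official scoring script), accuracy is simply the proportion of test items for which the predicted word matches the expected target. The one-word-per-line, item-aligned file format assumed below is hypothetical, since the official data format will be specified with the data releases.

# Sketch of the accuracy computation described above (assumed file format:
# one predicted/expected word per line, aligned by test item).
def accuracy(predicted_path, gold_path):
    with open(predicted_path, encoding='utf-8') as f:
        predicted = [line.strip().lower() for line in f]
    with open(gold_path, encoding='utf-8') as f:
        gold = [line.strip().lower() for line in f]
    assert len(predicted) == len(gold), 'one word per test item expected'
    # An item counts as a hit only on an exact match with the target word.
    hits = sum(p == g for p, g in zip(predicted, gold))
    return hits / len(gold)

# e.g. accuracy('system_output.txt', 'gold_targets.txt') -> value in [0, 1]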

For the participating systems, we will distinguish two categories:

(1) Unrestricted systems. They can use any kind of data to compute their results.
(2) Restricted systems: These systems are only allowed to draw on the freely available ukWaC corpus in order to extract information on word associations. The ukWaC corpus comprises about 2 billion words and can be downloaded from http://wacky.sslmit.unibo.it/doku.php?id=corpora.

Participants are allowed to compete in either category or in both.


VENUE

The shared task will take place as part of the CogALex workshop which is co-located with COLING 2014 (Dublin). The workshop date is August 23, 2014. Shared task participants who wish to have a paper published in the workshop proceedings will be required to present their work at the workshop.
 

SHARED TASK SCHEDULE

Training data release:  March 27, 2014
Test data release:  May 5, 2014
Final results due:  May 9, 2014
Deadline for paper submission: May 25, 2014 
Reviewers' feedback:  June 15, 2014
Camera-ready version:  July 7, 2014
Workshop date:  August 23, 2014


FURTHER INFORMATION

CogALex workshop website: http://pageperso.lif.univ-mrs.fr/~michael.zock/CogALex-IV/cogalex-webpage/index.html
Data releases: To be found on the above workshop website from the dates given in the schedule.
Registration for the shared task: Send e-mail to Michael Zock, with Reinhard Rapp in copy.


WORKSHOP ORGANIZERS

Michael Zock (LIF-CNRS, Marseille, France), michael.zock AT lif.univ-mrs.fr
Reinhard Rapp (University of Aix-Marseille, France, and University of Mainz, Germany), reinhardrapp AT gmx.de
Chu-Ren Huang (The Hong Kong Polytechnic University, Hong Kong), churen.huang AT inet.polyu.edu.hk

Back  Top

3-3-5(2014-09-01) 22nd European Signal Processing Conference (EUSIPCO 2014) Lisbon, Portugal
The 22nd European Signal Processing Conference
September 1 – 5, 2014, Lisbon, Portugal
http://www.eusipco2014.org/

==============================================================
Deadline for the submission of Full Papers: FEBRUARY 17, 2014
==============================================================

EUSIPCO 2014 will be held on September 1-5, 2014, in Lisbon, Portugal. This is one of the largest international conferences in the field of signal processing and will address all the latest developments in research and technology. The conference will bring together individuals from academia, industry, regulation bodies, and government, to exchange and discuss ideas in all the areas and applications of signal processing. EUSIPCO 2014 will feature world-class keynote speakers, special sessions, plenary talks, tutorials, and technical sessions.

We invite the submission of original, unpublished technical papers on signal processing topics, including but not limited to:

• Audio and acoustic signal processing
• Design and implementation of signal processing systems
• Multimedia signal processing
• Speech processing
• Image and video processing
• Machine learning
• Signal estimation and detection
• Sensor array and multichannel signal processing
• Signal processing for communications including wireless and optical communications and networking
• Signal processing for location, positioning and navigation
• Nonlinear signal processing
• Signal processing applications including health and biosciences

Submitted papers must be camera-ready, no more than five pages long, and conforming to the format that will soon be specified on the EUSIPCO website (http://www.eusipco2014.org/).

==============================================================
Best Paper Awards
==============================================================
Two “EUSIPCO best young author paper awards” will be given at the dinner banquet of EUSIPCO 2014 to the two best papers from authors under the age of 30.

==============================================================
Important Dates
==============================================================
Proposal for special sessions: December 9, 2013
Proposal for tutorials: February 17, 2014
Electronic submission of full papers: February 17, 2014
Notification of acceptance: May 26, 2014
Submission of camera-ready papers and copyright forms: June 23, 2014
Back  Top

3-3-6(2014-09-03) Laboratory Approaches to Romance Phonology 7 (LARP VII), Aix en Provence, FR


Laboratory Approaches to Romance Phonology 7 (LARP VII)
   

                
Aix-en-Provence, France
Sept. 3-5, 2014

The biannual conference on Laboratory Approaches to Romance Phonology
(LARP) seeks to bring together international researchers interested in all
areas of Romance phonetics and phonology, in particular within the
 laboratory phonology approach. In the past decades, research in the
 laboratory phonology paradigm has expanded significantly so that the
 disciplines of phonetics and phonology are being investigated from a unique
 interdisciplinary angle. LARP aims at providing an interdisciplinary forum
 for world-wide research focusing on the experimental investigation
 of
 Romance phonetics and phonology and their related areas, such as language
 acquisition, language variation and change, prosody, speech pathology,
 speech technology, as well as the phonology-phonetics interface.
 LARP VII will be hosted for the first time in Europe, by the Laboratoire
 Parole et Langage in Aix-en-Provence, and will be the result of a joint
 effort between Aix-Marseille University (Aix-en-Provence, France) and the
 Universitat Pompeu Fabra (Barcelona, Spain).

 Meeting Dates:
 Laboratory Approaches to Romance Phonology VII will be held from
 03-Sept-2014 to 05-Sept-2014.

 Contact Information:
 Mariapaola D'Imperio: larp7conference@gmail.com
 
Organizers:
 Mariapaola D'Imperio (Aix-Marseille University & LPL,CNRS)
 Pilar Prieto (ICREA & Universitat Pompeu Fabra)

 Conference webpage:
 http://larp7.sciencesconf.org/

 Abstract Submission Information:
Abstracts can be submitted from 15-Dec-2013 until 15-April-2014. 
 Invited speakers
 Laura Bosch, Univ. Barcelona
 Martine Grice, Univ. Koeln, Germany
 Thierry Nazzi, CNRS, Paris
 Daniel Recasens, Univ. Autonoma, Barcelona
           
         
Back  Top

3-3-7(2014-09-10) 56th International Symposium ELMAR-2014 , Zadar, Croatia
 56th International Symposium ELMAR-2014 
********************************** 
September 10-12, 2014 Zadar, Croatia
 Paper submission deadline: May 15, 2014 
http://www.elmar-zadar.org/ 
CALL FOR PAPERS

TECHNICAL CO-SPONSORS
IEEE Region 8
IEEE Croatia Section
IEEE Croatia Section SP, AP and MTT Chapters
EURASIP - European Association for Signal Processing

TOPICS
--> Image and Video Processing
--> Multimedia Communications
--> Speech and Audio Processing
--> Wireless Communications
--> Telecommunications
--> Antennas and Propagation
--> e-Learning and m-Learning
--> Navigation Systems
--> Ship Electronic Systems
--> Power Electronics and Automation
--> Naval Architecture
--> Sea Ecology
--> Special Sessions: http://www.elmar-zadar.org/2014/special_sessions/

KEYNOTE SPEAKER
Prof. Miloš Oravec, Slovak University of Technology in Bratislava, SLOVAKIA:
Feature Extraction and Classification by Machine Learning Methods for Biometric Recognition of Face and Iris
 SCHEDULE OF IMPORTANT DATES 
Deadline for submission of full papers: May 15, 2014 
Notification of acceptance mailed out by: June 3, 2014 
Submission of (final) camera-ready papers: June 10, 2014 
Preliminary program available online by: June 17, 2014 
Registration forms and payment deadline: June 17, 2014 
Back  Top

3-3-8(2014-09-10) CfP 3rd SWIP - Swiss Workshop on Prosody, Université de Genève, Switzerland
Second Call for contributions
 
3rd SWIP - Swiss Workshop on Prosody
Special Theme : PhonoGenres and Speaking Styles
10-11 September 2014 - University of Geneva
 
 
The SWIP (Swiss Workshop on Prosody) is an annual meeting gathering
researchers in the field of prosody. After Zurich in 2012 and
Neuchâtel in 2013, the 3rd SWIP will take place in Geneva on
10-11 September 2014. For this edition, the special theme is
PhonoGenres and Speaking Styles. With this event we mark the end
of the three-year FNS research project 'Prosodic and linguistic
characterisation of speaking styles: semi-automatic approach and
applications'.
 
Phonostylistic prosodic variation, whether regional, social or
situational, is the object of a growing number of studies. They are
systematic or isolated, based on phonetic-phonological studies of
large-scale corpora or on the examination of narrow samples. Approaches
vary between systematic methodologies and ad hoc procedures. Thus, one
of the major goals of the conference is to index different approaches
and to confront their results.
 
Topics of interest include, but are not limited to:
 
*PhonoGenres: phonetic-prosodic dimensions; situational, regional,
communicative, macro- or micro-social variations; comparative analysis
*speaker-specific behavior: cliché, idiosyncrasy, distinctive features
*diachronic speaking style variation
*identification of discourse genres and styles
*methodologies and tools for corpus processing of speech in general,
and especially those developed to process the speaking style variation
 
Invited speakers:
 
Julia Hirschberg
Philippe Boula de Mareüil
 
Submission:
 
First, a one-page abstract, plus references, shall be submitted in
English or in French via EasyChair by the 1st of February 2014.
 
Second, the definitive version of the paper shall be submitted by the
1st of June 2014 so that the proceedings can be published - both in
paper and electronic format - at the beginning of the conference.
Proceedings will be published in Cahiers de la Linguistique Française
in a short (6 pages max., about 2000 words) or in a long version
(12 pages max., about 4000 words). Papers can be written in English
or in French with an abstract in the other language, and they must
follow the style sheet.
 
Please note that the conference language is English.
 
Important dates:
 
Submission of abstracts : 1 February 2014
Notification of acceptance: 1 March 2014
Submission of final paper for proceedings publication: 1 June 2014
Conference: 10-11 September 2014
 
Scientific committee:
 
Antoine Auchlin
Mathieu Avanzi
Philippe Boula de Mareüil
Nick Campbell
Elisabeth Delais-Roussarie
Céline De Looze
Volker Dellwo
Jean-Philippe Goldman
Julia Hirschberg
Daniel Hirst
Ingrid Hove
Adrian Leemann
Joaquim Llisterri
Philippe Martin
Piet Mertens
Anne Lacheret
Nicolas Obin
Tea Pršir
Stephan Schmid
Sandra Schwab
Elizabeth Shriberg
Anne Catherine Simon
 
Organising committee:
 
Antoine Auchlin
Jean-Philippe Goldman
Tea Pršir
 
 
 
 
 
Back  Top

3-3-9(2014-09-11) CfP 2nd Workshop on Speech, Language and Audio in Multimedia (SLAM 2014), Penang, Malaysia UPDATED

SLAM2014 Call for Paper (important update at the end of this announcement)
================

2nd Workshop on Speech, Language and Audio in Multimedia (SLAM 2014)Penang, Malaysia
http://language.cs.usm.my/SLAM2014/
11-12 September, 2014

Following the first successful edition of the Workshop on Speech, Language and Audio in Multimedia (SLAM) in Marseille, France last year, we will be bringing the next edition of the workshop to Penang, Malaysia! The SLAM workshop aims at bringing together researchers working in speech, language and audio processing to analyze, index and access multimedia data. Multimedia data are now available in very large amounts: lectures, meetings, interviews, debates, conversational broadcast, podcasts, social videos on the Web, etc. Such data, along with the associated use scenarios, raise specific challenges: robustness in the face of high variability in quality; efficiency to handle very large amounts of data; semantics shared across modalities; potentially high error rates in transcription; etc. Worldwide, several national and international research projects are focusing on audio analysis of multimedia data. Similarly, various benchmark initiatives have been launched, such as TRECVID MED, MediaEval, or ETAPE and REPERE in France.

SLAM 2014 is organized in conjunction with Interspeech 2014 over one and a half days, starting Thursday 11 September 2014 and ending Friday 12 September 2014. Penang is conveniently connected by bus, train and flight to Singapore, where the Interspeech 2014 conference will take place. The format of the workshop will include an invited talk, oral presentations of scientific work and a poster session for project and benchmark presentations. The SLAM 2014 workshop is jointly organized by the ISCA SIG on Speech and Language in Multimedia and the IEEE SIG on Audio and Speech Processing in Multimedia. The proceedings will be published by ISCA and made available online in the ISCA Online Archive. Authors of selected best papers will be invited to submit extended versions to a special issue of a dedicated journal, to be announced later.


SCIENTIFIC COMMITTEE:
Chng Eng Siong, Nanyang Technological University - Singapore
Eric Castelli, Hanoi University of Science and Technology - Vietnam
Fernando Fernández-Martínez, Universidad Carlos III, Madrid - Spain
Florian Metze, Carnegie Mellon University, Pittsburgh - USA
Frédéric Bechet, Aix-Marseille Université, Marseille - France
Gareth Jones, Dublin City University, Dublin - Ireland
Gerald Friedland, University of California, Berkeley - USA
Guillaume Gravier, Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Rennes - France
Juan Manuel Montero, Universidad Politécnica de Madrid, Madrid - Spain
Laurent Besacier, Université Joseph Fourier, Grenoble - France
Lin-Shan Lee, National Taiwan University - Taiwan
Luis Fernando D’Haro, Universidad Politécnica de Madrid, Madrid - Spain
Martha Larson, Delft University of Technology, Delft - Netherlands
Ricardo de Córdoba, Universidad Politécnica de Madrid, Madrid - Spain
Roberto Barra-Chicote, Universidad Politécnica de Madrid, Madrid - Spain
Rubén San-Segundo, Universidad Politécnica de Madrid, Madrid - Spain
Sadaoki Furui, Tokyo Institute of Technology, Tokyo - Japan
Tang Enya Kong, Linton University College, Negeri Sembilan - Malaysia
Xavier Anguera, Telefónica Research, Barcelona - Spain

Two distinguished professors will be giving keynote presentations during the workshop: Prof. Shrikanth S. Narayanan and Assoc. Prof. Min-Yen Kan.

IMPORTANT DATES:
Full paper submission deadline 20th June 2014 (extended deadline)
Notification of acceptance 16th July 2014
Camera ready paper 31st July 2014
SLAM workshop 11-12th September 2014
Interspeech conference 14-18th September 2014

Independently of the scientific program, we will provide a tour around George Town, Penang, which is a UNESCO World Heritage historical city.

Jointly Organised by:
Universiti Sains Malaysia, SLAM Organising Committee, Interspeech 2014, ISCA SIG on Speech and Language in Multimedia, IEEE SIG on Audio and Speech Processing in Multimedia

  ***** ISCA/IEEE SLAM UPDATE *****


The deadline for the 2nd ISCA/IEEE Workshop on Speech, Language and Multimedia is now very close! Below are two important pieces of information if you are considering submitting a paper (which I hope you are). Please circulate this information to any relevant mailing lists you might have access to and to your colleagues.

*Student grants*

We're happy to inform you that ISCA will reserve two grants for students attending SLAM 2014. Grant applications will be handled by ISCA, following the standard procedure as described at http://isca-speech.org/iscaweb/index.php/grants. The ISCA grant coordinator is expecting applications for SLAM before August 1, i.e., shortly after the author notification date.

*Paper submission procedure*

We were informed that some people are experiencing difficulties to access the SLAM 2014 website at http://language.cs.usm.my/SLAM2014. Our apologies if this is the case: We're currently trying to resolve the problem. The unavailability of the web site does not mean that SLAM has been cancelled. We recall here the main information you need to prepare and submit your paper even if you can't access the SLAM 2014 website:

Paper template: same as Interspeech 2014 (see http://www.interspeech2014.org/doc/IS2014_AuthorsKit.zip), following the 4+1 page standard

Submission website: https://www.easychair.org/conferences/?conf=slam2014

Submission deadline: June 20, 2014
Author notification: July 21, 2014
Camera ready paper: August 1, 2014
Early bird registration: before August 15, 2014
SLAM workshop: September 11-12, 2014


Back  Top

3-3-10(2014-09-12) ISCSLP in Singapore

 

Welcome to ISCSLP 2014

第九届中文口语语言处理国际会议

http://www.iscslp2014.org



The 9th International Symposium on Chinese Spoken Language Processing (ISCSLP 2014) will be held on September 12-14, 2014 in Singapore.

ISCSLP 2014 is a joint conference between ISCA Special Interest Group on Chinese Spoken Language Processing and National Conference on Man-Machine Speech Communication of China.

ISCSLP is a biennial conference for scientists, researchers, and practitioners to report and discuss the latest progress in all theoretical and technological aspects of spoken language processing.

While the ISCSLP is focused primarily on Chinese languages, works on other languages that may be applied to Chinese speech and language are also encouraged. The working language of ISCSLP is English.

 

Topics of interest for submission include, but are not limited to the following:

  1. Speech Production and Perception

  2. Speech Analysis

  3. Speech Coding

  4. Speech Enhancement

  5. Hearing Aids and Cochlear Implant

  6. Phonetics and Phonology

  7. Corpus-based Linguistics

  8. Speech and Language Disorders

  9. Speech Recognition

  10. Spoken Language Translation

  11. Speaker, Language, and Emotion Recognition

  12. Speech Synthesis

  13. Language Modeling

  14. Speech Prosody

  15. Spoken Dialog Systems

  16. Machine Learning Techniques in Speech and Language Processing

  17. Voice Conversion

  18. Indexing, Retrieval and Authoring of Speech Signals

  19. Multi-Modal Interfaces

  20. Speech and Language Processing in Education

  21. Spoken Language Resources and Technology Evaluation

  22. Applications of Spoken Language Processing Technology

  23. Others

 

 

Important Dates

Regular and special session paper submission deadline: April 10, 2014

Notification of paper acceptance: June 20, 2014

Revised camera-ready paper upload deadline: June 30, 2014

Author’s registration deadline: July 10, 2014

 

Back  Top

3-3-11(2014-09-22) 8th Workshop: 'emotion and computing - current research and future impact', Stuttgart, Germany

                    Call for Papers
--------------------------------------------------------------
8th Workshop:
     'emotion and computing  - 
            current research and future impact'

WORKSHOP at the KI 2014
Stuttgart, September 22nd, 2014


--------------------------------------------------------------


The workshop series “emotion and computing – current research and future impact” has been
providing a platform for the discussion of emotion-related topics in computer science and AI since
2006. In recent years computer science research has shown increasing efforts in the field of
software agents which incorporate emotion. Several approaches have been made concerning
emotion recognition, emotion modelling, generation of emotional user interfaces and dialogue
systems as well as anthropomorphic communication agents. Motivations for emotional
computing are manifold. From a scientific point of view, emotions play an essential role in
decision making, as well as in perception and learning. Furthermore, emotions influence rational
thinking and therefore should be part of rational agents as proposed by artificial intelligence
research. Another focus is on human computer interfaces which include believable animations of
interface agents. From a user perspective, emotional interfaces can significantly increase
motivation and engagement, which is of high relevance to the games and e-learning industry.
Moreover, motivational and emotional aspects may play a key role in persuasive technologies,
which intend to influence user behaviour.


Contributions are solicited from the following fields:
 
-Artificial Intelligence Research
-Cognitive Sciences and Cognitive Robotics
-Multi-agent System Technology
-Speech Synthesis and Speech Recognition
-Dialogue Systems and Communication
-Modeling Uncertainty and Vagueness
-Computer Game Development
-User Modeling and Personalization
-Applications using models of emotion
-Persuasive Computing/Technologies
-Affective Computing
        
Contributions are expected in the following form:

- Presentations should have a duration of 15-20 minutes. Each 
  presenter is required to submit a short paper on the presented
  topic. Papers are subject to regular peer review and subsequent
  publication within the workshop proceedings (4-8 pages).

- Demonstrations are documented by an extended abstract which
  should not exceed 1 page in total

- Workshop submission is electronic. Submitted papers
  should conform Springer LNCS style and must
  be written in English. Papers will be published on the
  workshop website. Further publication is in discussion
  and depends on submitted papers.

Important Dates:

Workshop paper submission deadline:          July 4th, 2014
Notification of workshop paper acceptance:   July 18th, 2014
Workshop camera ready copy submission:       August 1st, 2014


Organization and Scientific Committee:


Prof. Dr. Dirk Reichardt, Baden-Württemberg Cooperative State University Stuttgart (main contact)
Dr. Joscha Bach, MIT
Dr. Christian Becker-Asano, Freiburg Institute for Advanced Studies
Dr. Hana Boukricha, University of Bielefeld
Dr. Patrick Gebhard, DFKI Saarbrücken
Prof. Dr. Michael Kipp, Hochschule Augsburg
Prof. Dr. Paul Levi, University of Stuttgart
Prof. Dr. John-Jules Charles Meyer, University of Utrecht
Dr. Götz Renner, Daimler AG, Customer Research Center
Prof. Dr. Michael M. Richter, University of Calgary
Dr.-Ing. Björn Schuller, Imperial College London / TU München
Prof. Dr. David Sündermann, DHBW Stuttgart


Please refer to the workshop website for further information:

Workshop Website:       http://www.emotion-and-computing.de
Email:                  info@emotion-and-computing.de

Back  Top

3-3-12(2014-09-25) XLVIII Congresso Internazionale - Società di Linguistica Italiana , Udine, Italy

XLVIII Congresso Internazionale - Società di Linguistica Italiana (SLI) 2014

(Udine, 25-27.9.2014)

 

WORKSHOP

 

Between linguistics and linguistic medical clinic. The role of the linguist

 

 

Workshop topics

- Medical terminology

- The medical discourse and the effectiveness of corporate communication health

- Communicative interaction doctor-patient in multilingual contexts

- Oral language, written language, and specific disabilities

- Grammar diseases: the role of the linguist

- Linguistic symptoms in the context of specific diseases

 

Invited speakers

Charles Antaki

Maria Teresa Guasti

 

Scientific Committee

Grazia Basile

Anna Cardinaletti

Francesca M. Dovetto

Vincenzo Orioles

Franca Orletti

Patrizia Sorianello

 

 

Abstract submission guidelines

Scholars, researchers and PhD students interested in presenting a paper or poster should send an abstract by email to <medcli.sli2014@libero.it>.

The deadline for submission is 20th February 2014.

Notifications of acceptance will be sent by email by 31st March 2014.

Authors must submit an anonymous abstract (.doc/.pdf format) while in the email they should clearly include: name of the author(s), affiliation(s) and email address(es). Abstracts should be no longer than 1000 words including the bibliography.

Conference languages: Italian, English, French and Spanish.

 

Info: <dovetto@unina.it>

 

 

Back  Top

3-3-13(2014-10-05) 16th International Conference on Speech and Computer (SPECOM-2014), Novi Sad, Serbia

SPECOM 2014 - CALL FOR PAPERS

*********************************************************

 

16th International Conference on Speech and Computer (SPECOM-2014)

Venue: Novi Sad, Serbia, 5-9 October 2014

Web: www.specom.nw.ru

 

 

SPECOM NEWS

 

SPECOM this year is organized in parallel with DOGS (The Tenth Conference on Digital Speech and Image Processing), at the same time and in the same place. Participants will be able to attend both conferences.

 

ORGANIZERS

 

The conference is organized by the Faculty of Technical Sciences, University of Novi Sad (UNS, Novi Sad, Serbia), in cooperation with Moscow State Linguistic University (MGLU, Moscow, Russia) and St. Petersburg Institute for Informatics and Automation of the Russian Academy of Science (SPIIRAS, St. Petersburg, Russia).

 

SPECOM conferences

 

The SPECOM conferences have long been organised by SPIIRAS (St. Petersburg) and MGLU (Moscow). In recent years the SPECOM venue has varied significantly: Patras, Greece, 2005; Kazan, Russia, 2011; Plzen, Czech Republic, 2013.

The last conference was organized in parallel with TSD'2013 (The 16th International Conference on Text, Speech and Dialogue) and was a great success, with the benefit of bringing together various research teams. Continuing this tradition, SPECOM'2014 and DOGS'2014 will be organized jointly. Both conferences are devoted to issues of human-machine interaction, and their topics complement each other harmoniously.

Since 2013, thanks to the growing contribution of the University of West Bohemia, Czech Republic, the SPECOM proceedings have been published by Springer-Verlag in their Lecture Notes in Artificial Intelligence (LNAI) series. The LNAI series is listed in all major citation databases such as DBLP, SCOPUS, EI, INSPEC, or COMPENDEX.

 

TOPICS

 

Topics of the conference will include (but are not limited to):

Signal processing and feature extraction

Speech enhancement

Multichannel signal processing

Speech recognition and understanding

Spoken language processing

Spoken dialogue systems

Speaker identification and diarization

Speech forensics and security

Language identification

Text-to-speech systems

Speech perception and speech disorders

Speech translation

Multimodal analysis and synthesis

Audio-visual speech processing

Multimedia processing

Speech and language resources

Applications for human-computer interaction

 

OFFICIAL LANGUAGE

 

The official language of the event will be English. However, papers on processing of languages other than English are strongly encouraged.

 

FORMAT OF THE CONFERENCE

 

The conference program will include presentation of invited papers, oral presentations, and a poster/demonstration sessions. Papers will be presented in plenary or topic oriented sessions.

Social events including a trip to the Krušedol monastery and winemakers on Fruška Gora will allow for additional informal interactions. Details about the social events will be available on the web page.

 

SUBMISSION OF PAPERS

 

Authors are invited to submit a full paper not exceeding 8 pages formatted in the LNCS style (see below). Those accepted will be presented either orally or as posters. The decision on the presentation format will be based upon the recommendation of three independent reviewers. The authors are asked to submit their papers using the on-line submission form accessible from the conference web site.

Papers submitted to SPECOM 2014 must not be under review by any other conference or publication during the SPECOM review cycle, and must not be previously published or accepted for publication elsewhere.

As the reviewing is blind, the paper should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g., 'We previously showed (Smith, 1991) ...', should be avoided. Instead, use citations such as 'Smith previously showed (Smith, 1991) ...'. Papers that do not conform to the requirements above will be rejected without review.

The paper format for the review has to be the PDF file with all required fonts included. Upon notification of acceptance, speakers will receive further information on submitting their camera-ready and electronic sources (for detailed instructions on the final paper format see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0).

 

 

IMPORTANT DATES

 

May 18, 2014 ............. Submission of full papers

June 1, 2014 ............. Notification of acceptance

June 15, 2014 ............ Final papers (camera ready) and registration

October 5-9, 2014 ........ Conference dates

 

The contributions to the conference will be published in proceedings that will be made available to participants at the time of the conference.

 

 

CONFERENCE FEES

 

The conference fee depends on the date of payment and on your status. It includes one copy of the conference proceedings, refreshments/coffee breaks, opening dinner, welcome party, mid-conference social event admissions, and organizing costs. In order to lower the fee as much as possible, meals during the conference, the accommodation, and the conference trip are not included.

 

Full participant:

early registration by June 15, 2014 – RSD 40000 (approx. 350 EUR)

late registration by September 1, 2014 – RSD 43000 (approx. 380 EUR)

on-site registration – RSD 50000 (approx. 440 EUR)

 

Student (reduced):

early registration by June 15, 2014 – RSD 32000 (approx. 280 EUR)

late registration by September 1, 2014 – RSD 35000 (approx. 310 EUR)

on-site registration – RSD 40000 (approx. 350 EUR)

 

The payment may be refunded up until September 15, at a cost of RSD 6500. No refund is possible after this date. All costs are in Serbian Dinars (RSD); see e.g. http://www.xe.com/ucc/ for the current exchange rate.

At least one of the authors has to register and pay the registration fee by June 15, 2014 for their paper to be included in the conference proceedings. Only one paper of up to 8 pages is included in the regular registration fee. The additional paper and page charge is RSD 5000 per page. Any additional paper is treated as extra pages. An extra page charge is RSD 5000 per page. An author with more than one paper pays the additional paper rates unless a co-author has also registered and paid the full registration fee. In the case of uncertainty, feel free to contact the organising committee for clarification.

 

VENUE

 

The conference will be organized in Hotel Park, Novi Sad, Serbia (http://hotelparkns.com).

Novi Sad is the second largest city in Serbia, with a population of 231,798 inhabitants. It is located in the southern part of the Pannonian Plain, on the border of the Bačka and Srem regions, on the banks of the Danube river and the Danube-Tisa-Danube Canal, facing the northern slopes of the Fruška Gora mountain. The city was founded in 1694, when Serb merchants formed a colony across the Danube from the Petrovaradin fortress, a Habsburg strategic military post. In the 18th and 19th centuries, it became an important trading and manufacturing centre, as well as a centre of Serbian culture of that period, earning the nickname Serbian Athens. Today, Novi Sad is an industrial and financial centre of the Serbian economy, as well as a major cultural centre.

The University of Novi Sad was founded on 28 June 1960. Today it comprises 14 faculties located in the four major towns of the Autonomous Province of Vojvodina: Novi Sad, Subotica, Zrenjanin, and Sombor. The University of Novi Sad is now the second largest among six state universities in Serbia. The main University Campus, covering an area of 259,807m², provides the University of Novi Sad with a unique and beautiful setting in the region and the city of Novi Sad. Having invested considerable efforts in intensifying international cooperation and participating in the process of university reforms in Europe, the University of Novi Sad has come to be recognized as a reform-oriented university in the region and on the map of universities in Europe.

The Faculty of Technical Sciences (Fakultet Tehničkih Nauka, FTN, www.ftn.uns.ac.rs) with 1,200 employees and more than 12,000 students is the largest faculty at UNS. FTN offers engineering education within 71 study programmes. As a research and scientific institution, FTN has 13 departments and 31 research centres. FTN also publishes 4 international journals and organises 16 scientific conferences on various aspects of engineering, including the conference DOGS which is dedicated to the area of speech technologies where FTN has the leading position in the Western Balkan region.

 

ACCOMMODATION

 

The organizing committee has arranged accommodation at reasonable prices in Hotel Park itself, which is situated near the city centre. Discounted rooms have been reserved for the conference days.

 

ADDRESS

 

All correspondence regarding the conference should be addressed to:

SPECOM Secretariat

E-mail: specom@iias.spb.su

Phone/Fax: +7 812 328 7081

Fax: +7 812 328 4450 (please mark faxed material with 'SPECOM' in capital letters at the top)

SPECOM-2014 conference web site: www.specom.nw.ru

 


3-3-14(2014-10-14) 2nd INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH PROCESSING

2nd INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH PROCESSING

SLSP 2014

Grenoble, France

October 14-16, 2014

Organised by:

Équipe GETALP

Laboratoire d’Informatique de Grenoble

Research Group on Mathematical Linguistics (GRLMC)

Rovira i Virgili University

http://grammars.grlmc.com/slsp2014/

**********************************************************************************

AIMS:

SLSP is a yearly conference series aimed at promoting and displaying excellent research on the wide spectrum of statistical methods that are currently in use in computational language or speech processing. It aims at attracting contributions from both fields. Though there exist large, well-known conferences and workshops hosting contributions to any of these areas, SLSP is a more focused meeting where synergies between subdomains and people will hopefully happen. In SLSP 2014, significant room will be reserved for young scholars at the beginning of their career and particular focus will be put on methodology.

VENUE:

SLSP 2014 will take place in Grenoble, at the foot of the French Alps.

SCOPE:

The conference invites submissions discussing the employment of statistical methods (including machine learning) within language and speech processing. The list below is indicative and not exhaustive:

phonology, morphology

syntax, semantics

discourse, dialogue, pragmatics

statistical models for natural language processing

supervised, unsupervised and semi-supervised machine learning methods applied to natural language, including speech

statistical methods, including biologically‐inspired methods

similarity

alignment

language resources

part‐of‐speech tagging

parsing

semantic role labelling

natural language generation

anaphora and coreference resolution

speech recognition

speaker identification/verification

speech transcription

text‐to‐speech synthesis

machine translation

translation technology

text summarisation

information retrieval

text categorisation

information extraction

term extraction

spelling correction

text and web mining

opinion mining and sentiment analysis

spoken dialogue systems

author identification, plagiarism and spam filtering

STRUCTURE:

SLSP 2014 will consist of:

invited talks

invited tutorials

peer‐reviewed contributions

INVITED SPEAKERS:

to be announced

PROGRAMME COMMITTEE:

Sophia Ananiadou (Manchester, UK)

Srinivas Bangalore (Florham Park, US)

Patrick Blackburn (Roskilde, DK)

Hervé Bourlard (Martigny, CH)

Bill Byrne (Cambridge, UK)

Nick Campbell (Dublin, IE)

David Chiang (Marina del Rey, US)

Kenneth W. Church (Yorktown Heights, US)

Walter Daelemans (Antwerpen, BE)

Thierry Dutoit (Mons, BE)

Alexander Gelbukh (Mexico City, MX)

Ralph Grishman (New York, US)

Sanda Harabagiu (Dallas, US)

Xiaodong He (Redmond, US)

Hynek Hermansky (Baltimore, US)

Hitoshi Isahara (Toyohashi, JP)

Lori Lamel (Orsay, FR)

Gary Geunbae Lee (Pohang, KR)

Haizhou Li (Singapore, SG)

Daniel Marcu (Los Angeles, US)

Carlos Martín‐Vide (Tarragona, ES, chair)

Manuel Montes‐y‐Gómez (Puebla, MX)

Satoshi Nakamura (Nara, JP)

Shrikanth S. Narayanan (Los Angeles, US)

Vincent Ng (Dallas, US)

Joakim Nivre (Uppsala, SE)

Elmar Nöth (Erlangen, DE)

Maurizio Omologo (Trento, IT)

Barbara H. Partee (Amherst, US)

Gerald Penn (Toronto, CA)

Massimo Poesio (Colchester, UK)

James Pustejovsky (Waltham, US)

Gaël Richard (Paris, FR)

German Rigau (San Sebastián, ES)

Paolo Rosso (Valencia, ES)

Yoshinori Sagisaka (Tokyo, JP)

Björn W. Schuller (London, UK)

Satoshi Sekine (New York, US)

Richard Sproat (New York, US)

Mark Steedman (Edinburgh, UK)

Jian Su (Singapore, SG)

Marc Swerts (Tilburg, NL)

Jun'ichi Tsujii (Beijing, CN)

Renata Vieira (Porto Alegre, BR)

Dekai Wu (Hong Kong, HK)

Feiyu Xu (Berlin, DE)

Roman Yangarber (Helsinki, FI)

Geoffrey Zweig (Redmond, US)

ORGANISING COMMITTEE:

Laurent Besacier (Grenoble, co‐chair)

Adrian Horia Dediu (Tarragona)

Benjamin Lecouteux (Grenoble)

Carlos Martín‐Vide (Tarragona, co‐chair)

Florentina Lilica Voicu (Tarragona)

SUBMISSIONS:

Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including any appendices) and should be prepared according to the standard format for Springer Verlag's LNAI/LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0).

Submissions have to be uploaded to:

https://www.easychair.org/conferences/?conf=slsp2014

PUBLICATIONS:

A volume of proceedings published by Springer in the LNAI/LNCS series will be available by the time of the conference.

A special issue of a major journal will later be published, containing peer-reviewed extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.

REGISTRATION:

The period for registration is open from January 16, 2014 to October 14, 2014. The registration form can be found at:

http://grammars.grlmc.com/slsp2014/Registration.php

DEADLINES:

Paper submission: May 7, 2014 (23:59h, CET)

Notification of paper acceptance or rejection: June 18, 2014

Final version of the paper for the LNAI/LNCS proceedings: June 25, 2014

Early registration: July 2, 2014

Late registration: September 30, 2014

Submission to the post‐conference journal special issue: January 16, 2015

QUESTIONS AND FURTHER INFORMATION:

florentinalilica.voicu@urv.cat

POSTAL ADDRESS:

SLSP 2014

Research Group on Mathematical Linguistics (GRLMC)

Rovira i Virgili University

Av. Catalunya, 35

43002 Tarragona, Spain

Phone: +34 977 559 543

Fax: +34 977 558 386

ACKNOWLEDGEMENTS:

Departament d’Economia i Coneixement, Generalitat de Catalunya

Laboratoire d’Informatique de Grenoble

Universitat Rovira i Virgili


3-3-15(2014-10-16) CfP MediaEval 2014 Multimedia Benchmark Evaluation, Barcelona (SP)

--------------------------------------------------
Call for Participation
MediaEval 2014 Multimedia Benchmark Evaluation
http://www.multimediaeval.org
Early registration deadline: 1 May 2014
--------------------------------------------------

MediaEval is a multimedia benchmark evaluation that offers tasks promoting research and innovation in areas related to human and social aspects of multimedia. MediaEval 2014 focuses on aspects of multimedia that include speech and audio. Participants carry out one or more of the tasks offered and submit runs to be evaluated. They then write up their results and present them at the MediaEval 2014 workshop.

The tasks that focus on speech are:

*QUESST: Query by Example Search on Speech Task (ex SWS)*
*Search and Hyperlinking*

The entire list of tasks and their descriptions is below.

For each task, participants receive a task definition, task data and accompanying resources (dependent on task) such as shot boundaries, keyframes, visual features, speech transcripts and social metadata. In order to encourage participants to develop techniques that push forward the state-of-the-art, a 'required reading' list of papers will be provided for each task. Participation is open to all interested research groups. To sign up, please click the “MediaEval 2014 registration site” link at:

http://www.multimediaeval.org/mediaeval2014

The following tasks are available to participants at MediaEval 2014:

*Synchronization of multi-user Event Media (New!)*
This task requires participants to automatically create a chronologically-ordered outline of multiple image galleries corresponding to the same event, in which the data collections are synchronized and aligned along a common time axis, or merged in the correct order.

*C@merata: Question Answering on Classical Music Scores (New!)*
In this task, systems take as input a noun phrase (e.g., 'harmonic perfect fifth') and a short score in MusicXML (e.g., J.S. Bach, Suite No. 3 in C Major for Cello, BWV 1009, Sarabande) and return an answer stating the location of the requested feature (e.g., 'Bar 206').

*Retrieving Diverse Social Images Task*
This task requires participants to automatically refine a ranked list of Flickr photos with landmarks using provided visual and textual information. The objective is to select only a small number of photos that are equally representative matches but also diverse representations of the query.

*Search and Hyperlinking*
This task requires participants to find video segments relevant to an information need and to provide a list of useful hyperlinks for each of these segments. The hyperlinks point to other video segments in the same collection and should allow the user of the system to explore the collection with respect to the current information need in a non-linear fashion. The task focuses on television data provided by the BBC and real information needs from home users.

*QUESST: Query by Example Search on Speech Task (ex SWS)*
The task involves searching FOR audio content WITHIN audio content USING an audio content query. This task is particularly interesting for speech researchers in the area of spoken term detection or low-resource speech processing.

*Visual Privacy*
This task requires participants to implement privacy filtering solutions that provide an optimal balance between obscuring information that personally identifies people in a video, and retaining information that allows viewers to otherwise interpret the video.

*Emotion in Music (an Affect Task)*
This task aims at detecting the emotional dynamics of music from its content. Given a set of songs, participants are asked to automatically generate continuous emotional representations in terms of arousal and valence.

*Placing: Geo-coordinate Prediction for Social Multimedia*
This task requires participants to estimate the geographical coordinates (latitude and longitude) of multimedia items (photos, videos and accompanying metadata), as well as predicting how “placeable” a media item actually is. The Placing Task integrates all aspects of multimedia: textual meta-data, audio, image, video, location, time, users and context.

*Affect Task: Violent Scenes Detection*
This task requires participants to automatically detect portions of movies depicting violence. Participants are encouraged to deploy multimodal approaches (audio, visual, text) to solve the task.

*Social Event Detection in Web Multimedia*
This task requires participants to discover, retrieve and summarize social events, within a collection of Web multimedia. Social events are events that are planned by people, attended by people and for which the social multimedia are also captured by people.

*Crowdsourcing: Crowdsorting Multimedia Comments (New!)*
This task asks participants to combine human computation (i.e., input from the crowd) with automatic computation to carry out classification. The classification involves sorting timed-comments in music, i.e., comments that users have made at certain points in a song, into categories according to their type (e.g., useful vs. non-useful and informative vs. affective).

Tasks marked 'New!' are the 2014 Brave New Tasks. If you sign up for these tasks, please be aware that you will be asked to keep in close touch with the task organizers concerning the details of the task over the course of the benchmarking cycle. We ask for extra-tight communication in order to ensure that these tasks have the flexibility they need to reach their goals.

MediaEval 2014 Timeline
(dates vary slightly from task to task, see the individual task pages for the individual deadlines: http://www.multimediaeval.org/mediaeval2014)

April-May: Registration and return usage agreements.
May-June: Release of development/training data.
June-July: Release of test data.
Mid-Sept.: Participants submit their completed runs.
Mid-Sept.-End-Sept.: Evaluation of submitted runs. Participants write their 2-page working notes papers.
16-17+18 October: MediaEval 2014 Workshop, Barcelona, Spain

We ask you to register by 1 May, when the first task will release its data set. After that point, late registration will be possible, but we encourage teams to register as early as they can.

Contact
For questions or additional information please contact Martha Larson m.a.larson@tudelft.nl or visit http://www.multimediaeval.org

MediaEval 2014 Organization Committee:

Martha Larson at Delft University of Technology and Gareth Jones at Dublin City University act as the overall coordinators of MediaEval. Individual tasks are coordinated by a group of task organizers, who form the MediaEval Organizing Committee. It is the collective efforts of this group of people that makes MediaEval possible. The complete list of MediaEval organizers is at:

http://www.multimediaeval.org/who

A large number of organizations and projects make a contribution to MediaEval organization, including the projects (alphabetical): AXES (http://www.axes-project.eu), CUbRIK (http://www.cubrikproject.eu/), CNGL (http://www.cngl.ie), CrowdRec (http://crowdrec.eu), Glocal (http://www.glocal-project.eu), LinkedTV (http://www.linkedtv.eu), Media Mixer (http://mediamixer.eu), Mucke (http://www.chistera.eu/projects/mucke), Promise (http://www.promise-noe.eu), Quaero (http://www.quaero.org), Sealinc Media (http://www.commit-nl.nl), SocialSensor (http://www.socialsensor.org), and VideoSense (http://www.videosense.eu).


3-3-16(2014-10-16) Journées de pausologie in Montpellier, France (deadline extension)

 

Journées de Pausologie  http://itic.univ-montp3.fr/pausologie/
Taking stock of the pause


The Journées de Pausologie welcome original and innovative work on the pause carried out in phonetics, psycholinguistics and general linguistics.

 

From a purely formal point of view, a speech sequence can be described as a succession of sounds interspersed with silent phases. While phonetics has extensively described these sound sequences in terms of their articulatory characteristics, their acoustic dimension and their perceptual consequences, pauses have received considerably less attention, even though a number of studies (e.g. Goldman-Eisler, 1968) have shown that brief interruptions are a necessary part of speech production.

 

This essential character of the pause is explained in particular by the fact that it reflects both respiratory movement and substantial cognitive activity. Indeed, the pause allows the speaker to catch their breath, but also to plan the content of the message in order to structure the utterance and stage it, as in political speeches for example (e.g. Duez, 1999). The pause is also one of the cues that reveal the end of a speaking turn and signal to the interlocutor that they may take the floor (Sacks et al., 1974). In writing, the prosodic, but also syntactic and semantic, functions of the pause are traditionally marked by punctuation marks, whose interpretation has varied throughout history (Catach, 1994; 2001).

 

The cognitive dimension of the pause mentioned above also makes it possible to exploit this rhythmic parameter in clinical linguistics, insofar as the pause, viewed as a disfluency, reveals an individual's language abilities: the location and duration of pauses can serve as cues to identify pathological difficulties in lexical access (Gayraud et al., 2011) or to distinguish an ordinary disfluency from stuttering (Starkweather, 1987; Hirsch et al., 2012), respectively.

 

Submissions should address one of the following themes:

 

  1. The pause as a contribution to the establishment of a phonostyle;
  2. Pause and rhythm in normal/pathological speech;
  3. The semantic and syntactic repercussions of the pause;
  4. The pause as a turn-taking cue in interaction.

 

The link with the topic of the conference must be made explicit in the abstract. Moreover, proposals that do not fall directly within one of the themes listed above may also be accepted, provided that their connection with the theme of the Journées is made clear.

 

Bibliography:

 

Catach N., (1994) La ponctuation : histoire et système, collection Que sais-je ?, n° 2818, Paris, PUF.

Catach N. (2001) Histoire de l'orthographe française, éd. posthume réalisée par Renée Honvault, avec la collab. de Irène Rosier-Catach, collection Lexica, n° 9, Paris, Champion.

Duez D. (1999), La fonction symbolique des pauses dans la parole de l'homme politique, Faits de langues, vol. 13, p. 91-97.

Gayraud, F., Lee H.R., Barkat-Defradas, M. (2010), Syntactic and lexical context of pauses and hesitations in the discourse of Alzheimer patients and healthy elderly subjects, Clinical Linguistics & Phonetics, vol. 25(3):198-209 (DOI : 10.3109/02699206.2010.521612).

Goldman-Eisler F. (1968) Psycholinguistics. Experiments in spontaneous speech, New York, Academic Press.

Hirsch F., Monfrais-Pfauwadel M.C., Crevier-Buchman L., Sock R., Fauth C., Pialot H. (2012) Using nasovideofibroscopic data to observe abnormal laryngeal behavior in stutterers, Proceedings of the 7th World Congress on Fluency Disorders, 2-5 juillet, Tours.

Sacks H., Schegloff E A., Jefferson G. (1974), A Simplest Systematics for the Organization of Turn-Taking for Conversation, Language, n° 50, 4, p. 696-735.

Starkweather C. (1987), Fluency and Stuttering. Englewood Cliffs: Prentice Hall.

 

Scientific Committee:

 

Barkat-Defradas Mélissa, Praxiling CNRS UMR5267-Université de Montpellier

Bres Jacques, Praxiling CNRS-Université de Montpellier

Delais-Roussare Elisabeth, CNRS-Université Paris 7 Paris Diderot,

Dodane Christelle, CNRS-Université de Montpellier

Ferré Gaëlle, Laboratoire de Linguistique de Nantes

Gayraud Frédérique, Université Lyon 2 & CNRS (Dynamique du Langage UMR5596)

Ghio Alain, Laboratoire Parole et Langage UMR 7309 CNRS - Université Aix-Marseille

Goldman Jean-Phillipe, Université de Genève

Hirsch Fabrice, Praxiling CNRS UMR5267-Université de Montpellier

Kleiber Georges, Université de Strasbourg, EA 1339 Lilpa

Rochet-Capellan Amélie, GIPSA Lab CNRS UMR 5216, Grenoble

Simon Anne-Catherine, Université Catholique de Louvain

Sock Rudolph, Université de Strasbourg, EA 1339 Lilpa

Steuckardt Agnès, Praxiling CNRS UMR5267-Université de Montpellier

Vaxelaire Béatrice, Université de Strasbourg, EA 1339 Lilpa

 

Organizing Committee:

 

Barkat-Defradas Mélissa, Université Paul Valéry, UMR5267 Praxiling

Bellemouche Hacène, Université Paul Valéry, UMR5267 Praxiling

Didirkova Ivana, Université Paul Valéry, UMR5267 Praxiling

Dodane Christelle, Université Paul Valéry, UMR5267 Praxiling

Hirsch Fabrice, Université Paul Valéry, UMR5267 Praxiling

Maturafi Lavie, Université Paul Valéry, UMR5267 Praxiling

Sauvage Jérémi, Université Paul Valéry, UMR5267 Praxiling

 

Schedule:

The deadline for submitting proposals (500-word abstract, excluding bibliography) has been extended to June 30, 2014.

Abstracts should be sent to fabrice.hirsch@univ-montp3.fr AND melissa.barkat@univ-montp3.fr by that date.

Notification to authors: July 15, 2014

Journées d'Etudes de Pausologie: October 16-17, 2014

 

 

 

 

Submission:

 

Submissions to the Journées de Pausologie take the form of abstracts written in French, with a maximum length of 500 words (excluding bibliography), in Times New Roman, 12pt, single-spaced. Abstracts must be submitted in PDF format to the following addresses: fabrice.hirsch@univ-montp3.fr AND melissa.barkat@univ-montp3.fr. For anonymization purposes, the PDF file should contain only the title of the proposal, the abstract and the bibliography. The authors' names and affiliations should appear in the e-mail but not in the PDF file.

 

A full paper will be requested after the Journées with a view to publication.

   

For any questions, write to: fabrice.hirsch@univ-montp3.fr


3-3-17(2014-11-03) CfP ACM Multimedia 2014 - Area on Music, Speech, and Audio Processing in Multimedia, Orlando, Florida, USA
Call for short and long paper contributions for ACM Multimedia 2014 - Area on Music, Speech, and Audio Processing in Multimedia
November 3-7, 2014, Orlando, Florida, USA
(For general information and information on other areas check http://www.acmmm.org/2014/)

As a core part of multimedia data, the acoustic modality is of great importance as a source of information that is orthogonal to other modalities like video or text. This allows richer information to be extracted when performing content analysis, and provides a rich means of communicating information. We are seeking strong technical submissions revolving around music, speech and audio processing in multimedia. One topic of interest is submissions performing analysis of acoustic signals in order to extract information from multimedia content (e.g. what notes are being played, what is being said, or what sounds appear) or its context (e.g. language spoken, age and gender of the speaker, localization using sound). Another topic of interest is submissions performing synthesis of acoustic content for multimedia purposes (e.g. speech synthesis, singing voices, acoustic scene synthesis). Furthermore, we are also interested in ways to represent acoustic data as multimedia, for example in symbolic form (e.g. closed captioning of speech), in the form of sensor input and visual images (e.g. recordings of gestures in musical performances), or others. A further topic of interest is applications that involve the acoustic modality. The inclusion of acoustics opens up interesting possibilities for novel multimedia interfaces and user interactions. In addition, contextual, social and affective aspects play an important role when using acoustics, as can be seen, for example, in the consumption and enjoyment of music and in the sound design of cinematic productions.

All submissions should maintain a clear relation to multimedia: there should either be an explicit relation to multimedia items, applications or systems, or an application of a multimedia perspective in which information sources from different modalities are considered.

Topics of interest include, but are not limited to:
· Multimedia audio analysis and synthesis
· Multimedia audio indexing, search, and retrieval
· Music, speech, and audio annotation, similarity measures, and evaluation
· Multimodal and multimedia approaches to music, speech, and audio
· Multimodal and multimedia context models for music, speech, and audio
· Computational approaches to music, speech, and audio inspired by other domains (e.g. computer vision, information retrieval, musicology, psychology)
· Multimedia localization using acoustic information
· Social data, user models and personalization in music, speech, and audio
· Music, audio, and aural aspects in multimedia user interfaces
· Algorithms and applications of music, speech, and audio
· New and interactive musical instruments, systems and other music, speech, and audio applications
· Novel interaction interfaces using/with music, speech, and audio
· Music, speech, and audio coding, transmission, and storage for multimedia applications

The deadline for long papers is March 31, 2014, and for short papers April 14, 2014. For other deadlines please check http://www.acmmm.org/2014/important_dates.html
 
 

3-3-18(2014-11-19) ALBAYZIN 2014 SEARCH ON SPEECH EVALUATION, Gran Canaria, Spain
ALBAYZIN 2014 SEARCH ON SPEECH EVALUATION

The Spanish Thematic Network on Speech Technology (RTTH) and the ISCA Special Interest Group on Iberian Languages (SIG-IL) are pleased to announce the ALBAYZIN 2014 Search on Speech Evaluation, which will be carried out as part of Iberspeech 2014, a biennial event gathering Spanish researchers in speech technology. This year's event will take place in Las Palmas de Gran Canaria (Spain) on November 19-21, 2014 (see http://iberspeech2014.ulpgc.es for details).

Research groups worldwide are invited to participate in this evaluation. Here we provide just the key points; the full evaluation plan can be found at:

http://iberspeech2014.ulpgc.es/index.php/albayzin/search-on-speech-evaluation

**TASKS**

The ALBAYZIN 2014 Search on Speech evaluation involves searching audio content for a list of terms/queries. The evaluation focuses on retrieving the audio files that contain any of those terms/queries. Four different tasks are defined:

1) KEYWORD SPOTTING (KWS), where the input to the system is a list of terms, which is known when processing the audio; hence word-based recognizers can be effectively used to hypothesize detections.

2) SPOKEN TERM DETECTION (STD), where the input to the system is a list of terms (as in the KWS task), but the terms/queries are unknown when processing the audio. This is the same task as in the NIST STD 2006 evaluation [1] and Open Keyword Search 2013 [2].

3) QUERY-BY-EXAMPLE SPOKEN TERM DETECTION (QbE STD), where the input to the system is an acoustic example per query; hence no prior knowledge of the correct word/phone transcription of each query can be assumed. This task must generate a set of occurrences for each query detected in the audio files, along with their timestamps, as in the STD task. QbE STD is the same task as those proposed in MediaEval 2011, 2012 and 2013 [3].

4) QUERY-BY-EXAMPLE SPOKEN DOCUMENT RETRIEVAL (QbE SDR), where the input to the system consists of several acoustic examples per query; again, no prior knowledge of the correct word/phone transcription of each query can be assumed. This task must generate an output score for each of the provided queries, reflecting the probability that the query appears in each audio file; no timestamp information is required. Formally, given a spoken example of a query q and a spoken document x (whose transcriptions are unknown), a QbE SDR system must carry out some kind of detection procedure and output a score s ∈ R: the higher (the more positive) the score, the higher the likelihood that q appears in x. Note that systems are neither required to make a strong decision about whether or not q appears in x, nor to provide the time marks of the place (or places) where q appears. Systems are just required to produce a score, which must be computed by automatic means, with no human supervision. This is the same task as that proposed in MediaEval 2014 Query-by-Example Search on Speech (QUESST) [4].
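
As an illustration only (a minimal sketch, not the evaluation's method or any participant's system), a QbE SDR scorer can be built from subsequence dynamic time warping over precomputed acoustic features; the MFCC-like feature arrays and the Euclidean frame cost below are assumptions made purely for illustration:

    import numpy as np

    def subsequence_dtw(query, doc):
        # Length-normalised subsequence DTW distance between a query
        # (m x d feature matrix) and a document (n x d feature matrix).
        # A free start point on the document axis lets the query match
        # anywhere inside the document, as the task definition requires.
        m, n = len(query), len(doc)
        D = np.full((m + 1, n + 1), np.inf)
        D[0, :] = 0.0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = np.linalg.norm(query[i - 1] - doc[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[m, 1:].min() / m  # best end point, normalised by query length

    def qbe_sdr_score(query_feats, doc_feats):
        # Higher (less negative) score = higher likelihood that the
        # query occurs somewhere in the document, per the task contract.
        return -subsequence_dtw(query_feats, doc_feats)

    # Toy usage with random 'MFCC-like' features (13 coefficients per frame):
    q = np.random.randn(40, 13)   # 40-frame spoken query example
    x = np.random.randn(300, 13)  # 300-frame spoken document
    print(qbe_sdr_score(q, x))

Real systems typically replace the raw Euclidean cost with phone-posteriorgram distances and calibrate scores across queries, but the score-only output contract is exactly the one described above.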

**REGISTRATION**

Interested groups must register for the evaluation before July 15th 2014, by contacting the organizing team at:

javiertejedornoguerales@gmail.com
luisjavier.rodriguez@ehu.es

with CC to the Chairs of Iberspeech 2014 (iberspeech2014@ulpgc.es), and providing the following information:

    Research group (name and acronym)
    Institution (university, research center, etc.)
    Contact person (name)
    Email

**SCHEDULE**

•      June 30, 2014: Release of training and development data

•      July 15, 2014: Registration deadline

•      September 3, 2014: Release of evaluation data

•      September 30, 2014: Deadline for the submission of system outputs and description papers

•      October 15, 2014: Results distributed to participants

•      November 19-21, 2014: Evaluation Workshop at Iberspeech 2014


For more information, please follow the link to the Albayzin 2014 Search on Speech Evaluation:

http://iberspeech2014.ulpgc.es/index.php/albayzin/search-on-speech-evaluation

**REFERENCES**

[1] http://www.itl.nist.gov/iad/mig/tests/std/2006/index.html

[2] http://www.nist.gov/itl/iad/mig/openkws13.cfm

[3] Florian Metze, Xavier Anguera, Etienne Barnard, Marelie Davel and Guillaume Gravier. 'Language Independent Search in Mediaeval's Spoken Web Search Task'. Computer Speech and Language, Special Issue on Information Extraction & Retrieval, 2014.

[4] http://multimediaeval.pbworks.com/w/page/79432139/QUESST2014

3-3-19(2014-12-01) CfP IEEE Global Conference on Signal and Information Processing - Atlanta Georgia 2014
IEEE GlobalSIP’14 – Call for Symposium Proposals
IEEE Global Conference on Signal and Information Processing - Atlanta Georgia 2014

Technical Program Chairs: Douglas Williams, Timothy Davidson, and Ghassan AlRegib
General Chairs: Geoffrey Li and Fred Juang

The IEEE Global Conference on Signal and Information Processing (GlobalSIP) is a recently launched flagship conference of the IEEE Signal Processing Society. GlobalSIP'14 will be held in Atlanta, Georgia, USA, during the week of December 1, 2014. The conference will focus broadly on signal and information processing, with an emphasis on up-and-coming signal processing themes. The conference will feature world-class speakers, tutorials, exhibits, and technical sessions consisting of poster or oral presentations. GlobalSIP'14 will comprise co-located symposia selected competitively based on responses to this call for symposium proposals. Symposium topics may include, but are not limited to:

  • Signal processing in communications and networks, including green communication and optical communications
  • Image and video processing
  • Selected topics in speech and language processing
  • Acoustic array signal processing
  • Signal processing in security applications
  • Signal processing in finance
  • Signal processing in energy and power systems
  • Signal processing in genomics and bioengineering (physiological, pharmacological, and behavioral)
  • Neural signal processing
  • Selected topics in statistical signal processing
  • Seismic signal processing
  • Graph-theoretic signal processing
  • Machine learning and human machine interfaces
  • Compressed sensing, sparsity analysis, and applications
  • Big data processing, heterogeneous information processing, and informatics
  • Radar and array processing including localization and ranging techniques
  • Multimedia transmission, indexing and retrieval, and playback challenges
  • Hardware and real-time implementations
  • Other novel and significant applications of selected areas of signal processing

Symposium proposals should include: the title of the symposium; the length of the symposium (one day or two days); the projected selectivity of the symposium; paper length requirements (submission: 2 to 6 pages; final: 4-6 pages; invited papers may be longer); names, addresses, and short CVs (up to 250 words) of the organizers, including the general organizers and the technical chairs; an up-to-two-page description of the technical issues that the symposium will address (including timeliness and relevance to the signal processing community); names of (potential) technical program committee members; names of (potential) invited speakers (up to 2 for one-day symposia and 4 for two-day ones); and a draft call-for-papers. Please package everything in a single pdf file. More detailed information can be found at http://renyi.ece.iastate.edu/globalsip2014/cfs.html

Symposium proposals should be emailed to Doug Williams (doug.williams@ece.gatech.edu) and Geoffrey Li (liye@ece.gatech.edu) according the following timeline:

November 8, 2013: Symposium proposals due
November 22, 2013: Symposium selection decision notification
November 29, 2013: Final version of the call-for-papers for the accepted symposia due

Tentative timeline for paper submission:
May 16, 2014: Paper submission deadline (regular and invited)
June 27, 2014: Review results announced
September 5, 2014: Camera-ready regular and invited papers due

 


3-3-20(2014-12-03) GlobalSIP Symposium: Machine Learning Applications in Speech Processing
GlobalSIP Symposium: Machine Learning Applications in Speech Processing: Submission deadline June 16

This is a reminder of the upcoming submission deadline of June 16 for the Machine Learning Applications in Speech Processing symposium of the IEEE SPS GlobalSIP conference (Atlanta, Georgia, December 3-5, 2014).

The symposium is accepting papers that, as the title suggests, apply machine learning methods in interesting ways to speech processing tasks.

For more information, please see
    http://www.ieeeglobalsip.org/symposium/mlasp.html


3-3-21(2014-12-07) 3rd Dialog State Tracking Challenge (DSTC3).

We are pleased to announce the opening of the third Dialog State Tracking Challenge (DSTC3). Complete information, including the challenge handbook, training data, evaluation scripts, and baseline trackers are available on the DSTC3 website:

http://camdial.org/~mh521/dstc/

The Dialog State Tracking Challenge (DSTC) is a research challenge focused on improving the state of the art in tracking the state of spoken dialog systems. State tracking refers to accurately estimating the user's goal as a dialog progresses. Accurate state tracking is desirable because it provides robustness to errors in speech recognition, and helps reduce ambiguity inherent in language within a temporal process like dialog.
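
As a toy illustration of the tracking loop (a minimal sketch under assumed data structures, not the challenge's actual interface or baseline): a tracker consumes, turn by turn, SLU hypotheses of the form (slot, value, confidence) and maintains a belief over the user's goal, reading out the most likely value per slot.

    from collections import defaultdict

    def update_belief(belief, slu_hyps):
        # Fold one turn's SLU hypotheses (slot, value, confidence) into the
        # belief; evidence accumulates as p_new = 1 - (1 - p_old) * (1 - conf).
        for slot, value, conf in slu_hyps:
            prev = belief[slot].get(value, 0.0)
            belief[slot][value] = 1.0 - (1.0 - prev) * (1.0 - conf)
        return belief

    def tracked_goal(belief):
        # Read out the highest-scoring value for each slot.
        return {slot: max(vals, key=vals.get) for slot, vals in belief.items()}

    # Toy dialog: the user first asks for Indian food, then corrects to Italian.
    belief = defaultdict(dict)
    update_belief(belief, [('food', 'indian', 0.6), ('area', 'north', 0.4)])
    update_belief(belief, [('food', 'italian', 0.7)])
    print(tracked_goal(belief))  # {'food': 'italian', 'area': 'north'}

Challenge entries are of course free to replace this hand-set accumulation rule with discriminative or generative models learned from the labelled corpora.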

In this challenge, participants are given labelled corpora of dialogs to develop state tracking algorithms. The trackers will then be evaluated on a common set of held-out dialogs which are released, un-labelled, during a one week period. This is a corpus-based challenge: participants do not need to implement a speech recognizer, a semantic parser, or an end-to-end dialog system.

The first DSTC concluded in 2013, with 9 teams participating and a total of 27 entries; 9 papers were presented at SIGDIAL 2013, advancing the state of the art in several dimensions. DSTC2 introduced a completely new dataset, in a new domain (restaurant information), with more complicated and dynamic dialog states that may change throughout the dialog. DSTC2 concluded a few months ago, again with 9 participating teams (about half of them new); results have been submitted and will be presented at a special session at SIGDIAL 2014.

DSTC3 will focus on the task of adapting and expanding to a new domain, when there is a lot of labelled data in a smaller domain. The 'smaller domain' is the restaurants domain from DSTC2; the 'new extended domain' is a larger tourist information domain: DSTC3 includes restaurants and adds pubs and coffee shops, and more detail (slots) for restaurants relative to the DSTC2 data.

Participants are encouraged to submit papers describing their work to SLT 2014, whose deadline will be approx. 20 July. The organisers are awaiting confirmation of a proposed special session at the conference.

DSTC3 schedule:

- 4 April 2014 : Labelled tourist information seed set released

- 9 June 2014 : Unlabelled tourist information test set released

- 16 June 2014 : Tracker output on tourist information test set due

- 23 June 2014 : Results on tourist information test set given to participants

- 20 July 2014 : SLT paper deadline (approximate)

- 7-10 Dec 2014 : SLT workshop (Lake Tahoe, Nevada, USA)

The training data, scoring scripts, baselines, domain ontology and database are all available for public download. Prospective participants are strongly encouraged to join the mailing list, to ensure you receive notifications of updates to data or scripts, and are included in discussions about the challenge. To join, email listserv@lists.research.microsoft.com with 'subscribe DSTC' in the body of the message (without quotes).

Feel free to direct questions to the organizers. We hope you will consider participating!

DSTC3 organizers

Matt Henderson (lead) - Cambridge University [matthen@gmail.com]

Blaise Thomson - Cambridge University [brmt2@cam.ac.uk]

Jason D. Williams - Microsoft Research [jason.williams@microsoft.com]

DSTC3 advisory board

Bill Byrne - University of Cambridge

Paul Crook - Microsoft Research

Maxine Eskenazi - Carnegie Mellon University

Milica Gasic - University of Cambridge

Helen Hastie - Herriot Watt

Kee-Eung Kim - KAIST

Sungjin Lee - Carnegie Mellon University

Oliver Lemon - Herriot Watt

Olivier Pietquin - SUPELEC

Joelle Pineau - McGill University

Deepak Ramachandran - Nuance Communications

Brian Strope - Google

Steve Young - University of Cambridge

 

3-3-22(2014-12-07)The 2014 IEEE Spoken Language Technology Workshop (SLT 2014) , South Lake Tahoe, California/Nevada, USA

The 2014 IEEE Spoken Language Technology Workshop (SLT 2014) will be held in South Lake Tahoe, on the California/Nevada border, on December 7-10, 2014. The main theme of the workshop will be 'machine learning in spoken language technologies'. One of our goals is to increase both intra- and inter-community interaction, by means of (inter alia)

  • keynote/guest speakers from machine learning community (e.g. Neural Information Processing Systems (NIPS) and others);

  • online panel discussions before/during the conference;

  • miniSIGs - small discussion groups to get organized before/during /after the panel discussions or as independent SIG meetings;

  • highlight sessions, where 3-5 best papers will be presented orally.

 

Following tradition from the last two SLT workshops in 2010 (Berkeley, CA) and 2012 (Miami, FL), we are looking forward to hosting challenges and special or themed sessions.

Submission of papers in all areas of spoken language technology is encouraged, with emphasis on the following topics:

·      Speech recognition and synthesis

·      Spoken language understanding

·      Spoken dialog systems

·      Spoken document summarization

·      Machine translation for speech

·      Question answering from speech

·      Speech data mining

·      Spoken document retrieval

·      Spoken language databases

·      Multimodal processing

·      Human/computer interaction

·      Educational and healthcare applications

·      Assistive technologies

·      Natural Language Processing



Important Deadlines

Paper Submission Monday, July 21, 2014

Notification of Acceptance Friday, September 5, 2014

Demo Submission September 2014

Demo Acceptance October 2014

Early registration deadline October 17, 2014

Workshop December 7-10, 2014



Submission Procedure

 

Prospective authors are invited to submit full-length, 4-6 page papers, including figures and references, to the SLT 2014 website. All papers will be handled and reviewed electronically. Please note that the submission dates for papers are strict deadlines.

 

 


3-3-23(2014-12-10) 4th International Conference on Acoustics and Vibration (ISAV2014), Tehran, Iran

 

 

The Iranian Society of Acoustics and Vibration (ISAV) has the great pleasure of inviting you to the 4th International Conference on Acoustics and Vibration (ISAV2014). ISAV2014 is jointly organized by ISAV and Iran University of Science and Technology and will be held in Tehran, Iran, on 10-11 December 2014. Following three successful conferences in the last three years (ISAV2011, ISAV2012 and ISAV2013), we look forward to welcoming you to another memorable and exciting ISAV conference in 2014. Through keynote lectures and oral and poster presentations, the conference will present an overview of the latest developments in theoretical and applied acoustics and vibration. We cordially invite you to submit your papers for the oral and poster sessions through the conference website www.isav.ir/2014. We also invite sponsors and exhibitors to participate in the conference exhibition, where they can present their latest scientific and industrial achievements, advertise their new products and services, and learn about the latest developments in acoustics and vibration. The deadline for paper submission is 22 June 2014. We look forward to seeing you at ISAV2014 in December 2014 in Tehran.

Sincerely,

ISAV2014 Secretariat


3-3-24(2014-12-23) CfP International Conference on Human Machine Interaction, New Delhi India

Call for papers

International Conference on Human Machine Interaction 2014 23 – 25, December 2014 http://intconfhmi.com

In association with SETIT, Sfax University, Tunisia, and the ASDF (Association of Scientists, Developers and Faculties) Chennai Chapter, we will organize the International Conference HMI 2014, which will be held in New Delhi, India.

Human Machine Interaction (HMI) is an annual research conference aimed at presenting current research. The idea of the conference is for scientists, scholars, engineers and students from universities all around the world and from industry to present ongoing research activities, and hence to foster research relations between universities and industry. HMI 2014 is co-sponsored by the Association of Scientists, Developers and Faculties and SETIT, Sfax University, Tunisia, and technically co-sponsored by many other universities and institutes.
The HMI 2014 conference proceedings will be published in the ASDF Proceedings as one volume, will be included in the Engineering & Technology Digital Library, indexed by EBSCO, WorldCat and Google Scholar, and sent for review by Ei Compendex and ISI Proceedings. Selected papers will be recommended for publication in journals.

Area of Submission

 

  • Active Vision
  • Agents and Multi-Agent Systems
  • Applications of Perception
  • Artificial Intelligence
  • Brain Machine Interfaces
  • Cognitive Engineering
  • Collaborative Design and Manufacturing
  • Collaboration Technologies and Systems
  • Computer Graphics
  • Computer Vision
  • Cooperative Design
  • Dimensionality Reduction
  • Distributed Intelligent Systems
  • Ergonomics
  • Fuzzy Systems
  • Health Care
  • Human Centered Transportation System
  • Human Factors
  • Human Perception
  • Hybrid Intelligent System Design
  • Image Analysis
  • Intelligent Transportation
  • Knowledge Representation
  • Machine Learning
  • Material Appearance Modeling Medical Imaging
  • Mental Workload
  • Multimedia
  • Multiview Learning
  • Next Generation Network
  • Network Security and Management
  • Ontologies
  • Patient Safety
  • Pattern Recognition
  • Perceptual Factors
  • Physiological Indicators
  • Production Planning and Scheduling
  • Protocol Engineering
  • Semi-Supervised Learning
  • Service-Oriented Computing
  • Simulator Training
  • Systems Integration and Collaboration
  • Systems Safety and Security
  • Team Performance
  • Video Processing
  • Virtual Reality
  • Visualization

Topics of interest for HMI is widely declared for the above, but not limited to.

Conference Registration Fees Rebate (Discount)

We are pleased to inform you that the organizing committee of HMI2014 has allocated financial support for all participants from developing or emerging countries. This support, amounting to 150 US dollars, is available to help participants attend HMI2014.

You can find more details in: http://intconfhmi.com/register.html

  • REGISTRATION without discount
    Author Registration (Full Paper): 250 USD
    Author Registration (Short Paper / Poster): 200 USD
    Listener Registration: 200 USD
    Extra Pages: 15 USD

  • REGISTRATION with discount
    Author Registration (Full Paper): 100 USD
    Author Registration (Short Paper / Poster): 50 USD
    Listener Registration: 50 USD
    Extra Pages: 15 USD

 

We are waiting for seeing you in India.

NB: A select number of post-conference excursions will take place over 5 days.

Examples: a 1-day tour to the Taj Mahal, Agra Fort and Mathura by AC bus (25 USD per person); a 1-day tour to Qutub Minar, Parliament, the Lotus Temple, India Gate, Gandhi Smriti, the Red Fort, Humayun's Tomb and Rajghat (25 USD per person).

 

Best Regards

Mohamed Salim BOUHLEL
General Co-Chair, HMI2014
Head of Research Unit: Sciences & Technologies of Image and Telecommunications (Sfax University)
GSM: +216 20 200005

 


3-3-25(2015-03-02) LATA 2015
LATA 2015

Nice, France

March 2-6, 2015

Organized by:

CNRS, I3S, UMR 7271
Nice Sophia Antipolis University

Research Group on Mathematical Linguistics (GRLMC)
Rovira i Virgili University

http://grammars.grlmc.com/lata2015/

****************************************************************************************

AIMS:

LATA is a conference series on theoretical computer science and its applications. Following the tradition of the diverse PhD training events in the field developed at Rovira i Virgili University in Tarragona since 2002, LATA 2015 will reserve significant room for young scholars at the beginning of their career. It will aim at attracting contributions from classical theory fields as well as application areas.

VENUE:

LATA 2015 will take place in Nice, the second largest French city on the Mediterranean coast. The venue will be the University Castle at Parc Valrose.

SCOPE:

Topics of either theoretical or applied interest include, but are not limited to:

algebraic language theory
algorithms for semi-structured data mining
algorithms on automata and words
automata and logic
automata for system analysis and programme verification
automata networks
automata, concurrency and Petri nets
automatic structures
cellular automata
codes
combinatorics on words
computational complexity
data and image compression
descriptional complexity
digital libraries and document engineering
foundations of finite state technology
foundations of XML
fuzzy and rough languages
grammars (Chomsky hierarchy, contextual, unification, categorial, etc.)
grammatical inference and algorithmic learning
graphs and graph transformation
language varieties and semigroups
language-based cryptography
parallel and regulated rewriting
parsing
patterns
power series
string and combinatorial issues in bioinformatics
string processing algorithms
symbolic dynamics
term rewriting
transducers
trees, tree languages and tree automata
unconventional models of computation
weighted automata

STRUCTURE:

LATA 2015 will consist of:

invited talks
invited tutorials
peer-reviewed contributions

INVITED SPEAKERS:

to be announced

PROGRAMME COMMITTEE:

Andrew Adamatzky (West of England, Bristol, UK)
Andris Ambainis (Latvia, Riga, LV)
Franz Baader (Dresden Tech, DE)
Rajesh Bhatt (Massachusetts, Amherst, US)
José-Manuel Colom (Zaragoza, ES)
Bruno Courcelle (Bordeaux, FR)
Erzsébet Csuhaj-Varjú (Eötvös Loránd, Budapest, HU)
Aldo de Luca (Naples Federico II, IT)
Susanna Donatelli (Turin, IT)
Paola Flocchini (Ottawa, CA)
Enrico Formenti (Nice, FR)
Tero Harju (Turku, FI)
Monika Heiner (Brandenburg Tech, Cottbus, DE)
Yiguang Hong (Chinese Academy, Beijing, CN)
Kazuo Iwama (Kyoto, JP)
Sanjay Jain (National Singapore, SG)
Maciej Koutny (Newcastle, UK)
Antonín Kučera (Masaryk, Brno, CZ)
Thierry Lecroq (Rouen, FR)
Salvador Lucas (Valencia Tech, ES)
Veli Mäkinen (Helsinki, FI)
Carlos Martín-Vide (Rovira i Virgili, Tarragona, ES, chair)
Filippo Mignosi (L’Aquila, IT)
Victor Mitrana (Madrid Tech, ES)
Ilan Newman (Haifa, IL)
Joachim Niehren (INRIA, Lille, FR)
Enno Ohlebusch (Ulm, DE)
Arlindo Oliveira (Lisbon, PT)
Joël Ouaknine (Oxford, UK)
Wojciech Penczek (Polish Academy, Warsaw, PL)
Dominique Perrin (ESIEE, Paris, FR)
Alberto Policriti (Udine, IT)
Sanguthevar Rajasekaran (Connecticut, Storrs, US)
Jörg Rothe (Düsseldorf, DE)
Frank Ruskey (Victoria, CA)
Helmut Seidl (Munich Tech, DE)
Ayumi Shinohara (Tohoku, Sendai, JP)
Bernhard Steffen (Dortmund, DE)
Frank Stephan (National Singapore, SG)
Paul Tarau (North Texas, Denton, US)
Andrzej Tarlecki (Warsaw, PL)
Jacobo Torán (Ulm, DE)
Frits Vaandrager (Nijmegen, NL)
Jaco van de Pol (Twente, Enschede, NL)
Pierre Wolper (Liège, BE)
Zhilin Wu (Chinese Academy, Beijing, CN)
Slawomir Zadrozny (Polish Academy, Warsaw, PL)
Hans Zantema (Eindhoven Tech, NL)

ORGANIZING COMMITTEE:

Sébastien Autran (Nice)
Adrian Horia Dediu (Tarragona)
Enrico Formenti (Nice, co-chair)
Sandrine Julia (Nice)
Carlos Martín-Vide (Tarragona, co-chair)
Christophe Papazian (Nice)
Julien Provillard (Nice)
Pierre-Alain Scribot (Nice)
Bianca Truthe (Giessen)
Florentina Lilica Voicu (Tarragona)

SUBMISSIONS:

Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including any appendices, references, etc.) and should be prepared according to the standard format for Springer Verlag's LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0).

Submissions have to be uploaded to:

https://www.easychair.org/conferences/?conf=lata2015

PUBLICATIONS:

A volume of proceedings published by Springer in the LNCS series will be available by the time of the conference.

A special issue of a major journal will later be published, containing peer-reviewed, substantially extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.

REGISTRATION:

The period for registration is open from July 21, 2014 to March 2, 2015. The registration form can be found at:

http://grammars.grlmc.com/lata2015/Registration.php

DEADLINES:

Paper submission: October 10, 2014 (23:59 CET)
Notification of paper acceptance or rejection: November 18, 2014
Early registration: November 25, 2014
Final version of the paper for the LNCS proceedings: November 26, 2014
Late registration: February 16, 2015
Submission to the journal special issue: June 6, 2015

QUESTIONS AND FURTHER INFORMATION:

florentinalilica.voicu@urv.cat

POSTAL ADDRESS:

LATA 2015
Research Group on Mathematical Linguistics (GRLMC)
Rovira i Virgili University
Av. Catalunya, 35
43002 Tarragona, Spain

Phone: +34 977 559 543
Fax: +34 977 558 386

ACKNOWLEDGEMENTS:

Nice Sophia Antipolis University
Rovira i Virgili University

3-3-26(2015-08-31) Joint Conference PEVOC & MAVEBA 2015, Firenze, Italia

Joint Conference PEVOC & MAVEBA 2015:  August 31 - September 4, 2015, Palazzo degli Affari, Piazza Adua 1, Firenze, Italy
http://pevoc-maveba.dinfo.unifi.it/


3-3-27(2016-00-00) Bids invitation for the conference Speech Prosody 2016 (SProSIG)
The Advisory Board of SProSIG, the Speech Prosody Special Interest Group, invites bids from sites interested in hosting its flagship conference Speech Prosody in 2016.

The bid process will proceed as follows:

(1) Optional: Groups interested in submitting a bid are invited to express interest in an e-mail to the current SProSIG Secretary and/or President (e.g., by replying to this e-mail).

(2) Optional: Each group interested in hosting the conference is invited to give a presentation on May 23, 2014 at the Speech Prosody 2014 conference in Dublin.

(3) Required: Each bid should then be formalized in a written document, mailed to the secretary of SProSIG by June 15, 2014. These documents will be posted at sprosig.isle.illinois.edu for all SProSIG members to read.

(4) Selection of the site for Speech Prosody 2016 will then be conducted using an on-line ballot at http://sprosig.isle.illinois.edu. Each current member of SProSIG will be allowed to vote.

Bids to host Speech Prosody 2016 should include the following information:

(a) Names and affiliations of members of the organizing committee.

(b) Information about institutional support for the conference if any.

(c) Tentative location of the conference (city and, if possible, venue). Oral and written presentations of the bid should highlight attributes that make both city and site suitable for hosting an international conference, including transportation to/from and within the city, lodging and dining options near the proposed venue, facilities in the proposed venue for a 300-person oral session and a 40-poster poster session, and any other attributes likely to be of interest to the members of SProSIG.

(d) Tentative dates of the proposed conference (typically four days in late May 2016)

(e) Proposed theme of the conference and/or proposed new session topics that will be included, along with existing session topics of the Speech Prosody conference, in the Conference Call for Papers.


3-3-28(2016-00-00) Speech Prosody Boston, USA
Speech Prosody 2016 will be held in Boston, USA. Congratulations to Dr. Veilleux, Dr. Barnes, Dr. Shattuck-Hufnagel and Alejna Brugos for presenting an outstanding bid; we look forward to an outstanding conference in Boston in 2016!

3-3-29Announcing the Master of Science in Intelligent Information Systems

Carnegie Mellon University

 

The Master of Science in Intelligent Information Systems (MIIS) is a degree designed for students who want to rapidly master advanced content-analysis, mining, and intelligent information technologies prior to beginning or resuming leadership careers in industry and government. Just over half of the curriculum consists of graduate courses. The remainder provides direct, hands-on, project-oriented experience working closely with CMU faculty to build systems and solve problems using state-of-the-art algorithms, techniques, tools, and datasets. A typical MIIS student completes the program in one year (12 months) of full-time study at the Pittsburgh campus. Part-time and distance education options are available to students employed at affiliated companies. The application deadline for the Fall 2013 term is December 14, 2012. For more information about the program, please visit http://www.lti.cs.cmu.edu/education/msiis/overview.shtml


3-3-30CALL FOR PROPOSALS ICASSP 2019
CALL FOR PROPOSALS
ICASSP 2019

 

The IEEE Signal Processing Society is accepting proposals for the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP®). SPS Members are invited to submit a proposal to host ICASSP. If you are interested in submitting a proposal please contact Nicole Allen to get the forms and guidelines.

ICASSP is the world’s largest and most comprehensive technical conference focused on signal processing applications. The series is sponsored by the IEEE Signal Processing Society and has been held annually since 1976. The conference features world-class speakers, tutorials, exhibits, and over 120 lecture and poster sessions. ICASSP is a cooperative effort of the IEEE Signal Processing Technical Committees:

  • Audio and Acoustic Signal Processing
  • Bio Imaging and Signal Processing
  • Design and Implementation of Signal Processing Systems
  • Image, Video, and Multidimensional Signal Processing
  • Information Forensics and Security
  • Machine Learning for Signal Processing
  • Multimedia Signal Processing
  • Sensor Array and Multichannel
  • Signal Processing for Communications and Networking
  • Signal Processing Theory and Methods
  • Speech and Language Processing
  • Standing Committee on Industry DSP Technology

To submit a proposal, send a notice of intent to bid to the Vice President – Conferences and the Conference Services Coordinator using the email addresses shown below. Include in the notice your contact information and the proposed location. The Signal Processing Society Conference Services staff will issue the proposal submission form and guidelines upon receipt of the letter of intent. The form must be completed and the proposal submitted to the Conference Services staff by 21 March 2014. Proposals will be assessed by the Conference Board Executive Subcommittee. Accepted bidding teams [finalists] will be invited to present at the Conference Board meeting held at ICASSP 2014, May 4-9, 2014 in Florence, Italy.

Professor Wan-Chi Siu
IEEE Signal Processing Society
VP-Conferences (2012 – 2014)
enwcsiu@polyu.edu.hk
Nicole Allen
IEEE Signal Processing Society
Conference Services Coordinator
n.allen@ieee.org
Back  Top

3-3-31Master in linguistics (Aix-Marseille) France

Master's in Linguistics (Aix-Marseille Université): Linguistic Theories, Field Linguistics and Experimentation (TheLiTEx). TheLiTEx offers advanced training in linguistics, presenting in an original way the links between corpus linguistics and scientific experimentation on the one hand, and laboratory and field methodologies on the other. Building on a common set of courses offered in the first year, TheLiTEx proposes two paths: Experimental Linguistics (LEx) and Language Contact & Typology (LCT).

LEx studies language, speech and discourse through scientific experimentation and quantitative modeling of linguistic phenomena and behavior. It takes a multidisciplinary approach that borrows its methodologies from the human, physical and biological sciences and its tools from computer science, clinical approaches, engineering, etc. Courses include semantics, phonetics/phonology, morphology, syntax, pragmatics, and prosody and intonation, as well as the interfaces between these linguistic levels and their interactions with the real world and the individual, from a biological, cognitive and social perspective. In the second year, more specialized courses are offered, such as Language and the Brain and Laboratory Phonology.

LCT aims at understanding the world's linguistic diversity, focusing on language contact, change and variation (European, Asian and African languages, creoles, sign languages, etc.). From a linguistic and sociolinguistic perspective, it addresses issues of field linguistics, taking into account both the human and the socio-cultural dimensions of language (speakers, communities). It also focuses on documenting rare and endangered languages and on developing a critical reflection on linguistic minorities, and it provides expertise and intervention models (language policy and planning) to train students in managing contact phenomena and their impact on speakers, languages and societies.

More info at: http://thelitex.hypotheses.org/678

Back  Top

3-3-32NEW MASTER IN BRAIN AND COGNITION AT UNIVERSITAT POMPEU FABRA, BARCELONA

A new, one-year Master in Brain and Cognition will begin in the academic year 2014-15 in Barcelona, Spain, organized by the Universitat Pompeu Fabra (http://www.upf.edu/mbc/).

The core of the master's programme is composed of the research groups at UPF's Center for Brain and Cognition (http://cbc.upf.edu). These groups are directed by renowned scientists in areas such as computational neuroscience, cognitive neuroscience, psycholinguistics, vision, multisensory perception, human development and comparative cognition. Students will be exposed to the ongoing research projects at the Center for Brain and Cognition and will be integrated into one of its main research lines, where they will conduct original research for their final project.

The application period is now open. Please visit the Master web page or contact luca.bonatti@upf.edu for further information.

Back  Top

3-3-33Research in Interactive Virtual Experiences at USC CA USA

REU Site: Research in Interactive Virtual Experiences

--------------------------------------------------------------------

 

The Institute for Creative Technologies (ICT) offers a 10-week summer research program for undergraduates in interactive virtual experiences. A multidisciplinary research institute affiliated with the University of Southern California, the ICT was established in 1999 to combine leading academic researchers in computing with the creative talents of Hollywood and the video game industry. Having grown to encompass 170 faculty, staff, and students in a diverse array of fields, the ICT represents a unique interdisciplinary community brought together by a core unifying mission: advancing the state of the art in creating virtual reality experiences so compelling that people react as if they were real.

 

Reflecting the interdisciplinary nature of ICT research, we welcome applications from students in computer science as well as many other fields, such as psychology, art/animation, interactive media, linguistics, and communications. Undergraduates will join a team of students, research staff, and faculty in one of several labs focusing on different aspects of interactive virtual experiences. In addition to participating in seminars and social events, students will prepare a final written report and present their projects to the rest of the institute at the end-of-summer research fair.

 

Students will receive $5,000 over ten weeks, plus an additional $2,800 stipend for housing and living expenses. Non-local students can also be reimbursed for travel expenses of up to $600. The ICT is located in West Los Angeles, just north of LAX and only 10 minutes from the beach.

 

This Research Experiences for Undergraduates (REU) site is supported by a grant from the National Science Foundation. The site is expected to begin summer 2013, pending final award issuance.

 

Students can apply online at: http://ict.usc.edu/reu/

Application deadline: March 31, 2013

 

For more information, please contact Evan Suma at reu@ict.usc.edu.

Back  Top


