ISCApad #293 |
Tuesday, November 08, 2022 by Chris Wellekens |
3-1-1 | (2023-08-20) Call for Special Sessions/Challenges, Interspeech 2023, Dublin, Ireland
We are delighted to announce the launch of the Call for Special Sessions/Challenges for INTERSPEECH 2023 in Dublin, Ireland, in August 2023.
Submissions are encouraged covering interdisciplinary topics and/or important new emerging areas of interest related to the main conference topics. Submissions related to the special focus of the conference’s theme, Inclusive Spoken Language Science and Technology, are particularly welcome. Apart from supporting a particular theme, special sessions may also have a different format from a regular session.
Check out https://www.interspeech2023.org/special-sessions-challenges/ for more information, including how to submit your proposal.
Important Dates
- Proposals of special sessions/challenges due: 9th November 2022
- Notification of pre-selection: 14th December 2022
- Final list of special sessions: 17th May 2023
For all updates on INTERSPEECH 2023, refer to our website at https://www.interspeech2023.org/
3-1-2 | (2023-08-20) INTERSPEECH 2023 First Call for Papers
INTERSPEECH is the world’s largest and most comprehensive conference on the science and technology of spoken language processing. INTERSPEECH conferences emphasise interdisciplinary approaches addressing all aspects of speech science and technology, ranging from basic theories to advanced applications.
INTERSPEECH 2023 will take place in Dublin, Ireland, from August 20-24th 2023 and will feature oral and poster sessions, plenary talks by internationally renowned experts, tutorials, special sessions and challenges, show & tell, exhibits, and satellite events.
The theme of INTERSPEECH 2023 is Inclusive Spoken Language Science and Technology – Breaking Down Barriers. Whilst it is not a requirement to address this theme, we encourage submissions that: report performance metric distributions in addition to averages; break down results by demographic; employ diverse data; evaluate with diverse target users; report barriers that could prevent other researchers adopting a technique, or users from benefitting. This is not an exhaustive list, and authors are encouraged to discuss the implications of the conference theme for their own work.
Papers are especially welcome from authors who identify as being under-represented in the speech science and technology community, whether that is because of geographical location, economic status, race, age, gender, sexual orientation or any other characteristic.

Paper Submission
INTERSPEECH 2023 seeks original and innovative papers covering all aspects of speech science and technology. The working language of the conference is English, so papers must be written in English. The paper length is up to four pages in two columns, with an additional page for references only. Submitted papers must conform to the format defined in the author’s kit provided on the conference website (https://www.interspeech2023.org/), and may optionally be accompanied by multimedia files. Authors must declare that their contributions are original and that they have not submitted their papers elsewhere for publication. Papers must be submitted electronically and will be evaluated through rigorous peer review on the basis of novelty and originality, technical correctness, clarity of presentation, key strengths, and quality of references. The Technical Programme Committee will decide which papers to include in the conference programme using peer review as the primary criterion, with secondary criteria of addressing the conference theme and diversity across the programme as a whole.

Scientific Areas and Topics
INTERSPEECH 2023 embraces a broad range of science and technology in speech, language and communication, including – but not limited to – the following topics:
Technical Program Committee Chairs
Simon King - University of Edinburgh, UK
Kate Knill - University of Cambridge, UK
Petra Wagner - University of Bielefeld, Germany

Contact
For all queries relating to this call for papers, please email: tpc-chairs@interspeech2023.org

Important Dates
- Paper Submission Deadline: March 1st, 2023
- Paper Update Deadline: March 8th, 2023
- Paper Acceptance Notification: May 17th, 2023
- Final Paper Upload and Paper Presenter Registration Deadline: June 1st, 2023
3-1-3 | (2023-08-20) Interspeech 2023, Dublin, Ireland
ISCA has reached the decision to hold INTERSPEECH 2023 in Dublin, Ireland (Aug. 20-24, 2023).
3-1-4 | (2024-07-02) 12th Speech Prosody Conference @Leiden, The Netherlands Dear Speech Prosody SIG Members,
Professor Barbosa and I are very pleased to announce that the 12th Speech Prosody Conference will take place in Leiden, the Netherlands, July 2-5, 2024, and will be organized by Professors Yiya Chen, Amalia Arvaniti, and Aoju Chen. (Of the 303 votes cast, 225 were for Leiden, 64 for Shanghai, and 14 indicated no preference.)
I'd also like to remind everyone that nominations for SProSIG officers for 2022-2024 are still being accepted this week, using the form at http://sprosig.org/about.html, sent to Professor Keikichi Hirose. If you are considering nominating someone, including yourself, feel free to contact me or any current officer to discuss what's involved and what help is most needed.
Nigel Ward, SProSIG Chair Professor of Computer Science, University of Texas at El Paso CCSB 3.0408, +1-915-747-6827 nigel@utep.edu https://www.cs.utep.edu/nigel/
3-1-5 | (2024-09-01) Interspeech 2024, Jerusalem, Israel
The ISCA conference committee has decided that Interspeech 2024 will be held in Jerusalem, Israel, from September 1 to September 5, 2024.
3-1-6 | ISCA INTERNATIONAL VIRTUAL SEMINARS
Now is the time of year when seminar programmes get fixed up, so please direct the attention of whoever organises your seminars to the ISCA INTERNATIONAL VIRTUAL SEMINARS scheme (introduction below). There is now a good choice of speakers: see https://www.isca-speech.org/iscaweb/index.php/distinguished-lecturers/online-seminars

ISCA INTERNATIONAL VIRTUAL SEMINARS
A seminar programme is an important part of the life of a research lab, especially for its research students, but it is difficult for scientists to travel to give talks at the moment. However, presentations may be given online and, paradoxically, it is thus possible for labs to engage international speakers whom they would not normally be able to afford.
Speakers may pre-record their talks if they wish, but they don't have to. It is up to the host lab to contact speakers and make the arrangements. Talks can be state-of-the-art or tutorials. If you make use of this scheme and arrange a seminar, please send brief details (lab, speaker, date) to education@isca-speech.org. If you wish to join the scheme as a speaker, all we need is a title, a short abstract, a one-paragraph biography and contact details. Please send them to education@isca-speech.org.

PS. The online seminar scheme is now up and running, with 7 speakers so far:
Jean-Luc Schwartz, Roger Moore, Martin Cooke, Sakriani Sakti, Thomas Hueber, John Hansen and Karen Livescu.
3-1-7 | ISCA Workshop - Remembering Sadaoki Furui

Program:
14:00 - Welcome (Chair: Sebastian Möller)
14:10 - Tatsuya Kawahara (Chair: Sebastian Möller)
14:25 - Audio excerpt from IEEE, plus photo collage (Chair: Isabel Trancoso)
14:30 - Shri Narayanan (Chair: Isabel Trancoso)
14:45 - Video excerpt from Saras Institute (Chair: Julia Hirschberg)
14:50 - Karen Livescu (Chair: Julia Hirschberg)
15:05 - Video excerpt from Maui workshop (Chair: John Hansen)
15:10 - Jean-François Bonastre (Chair: Roger Moore)
15:25 - Panel session open to anyone who registers (Chair: Roger Moore)
16:15 - Closing
Organizing Team:
Julia Hirschberg, Columbia University
Sebastian Möller, TU Berlin
Roger Moore, University of Sheffield
Isabel Trancoso, INESC-ID/IST, Univ. Lisbon
3-1-8 | Speech Prosody courses Dear Speech Prosody SIG Members,
3-2-1 | (2023-01-07) SLT-CODE Hackathon Announcement, Doha, Qatar
Have you ever asked yourself how your smartphone recognizes what you say and who you are?
Have you ever thought about how machines recognize different languages?
If that is your case, join us for a two-day speech and language technology hackathon. We will answer these questions and build fantastic systems with the guidance of top language and speech scientists in a collaborative environment.
The two-day speech and language technology hackathon will take place during the IEEE Spoken Language Technology (SLT) Workshop in Doha, Qatar, on January 7th and 8th, 2023. This year's Hackathon will be inspiring, momentous, and fun. The goal is to build a diverse community of people who want to explore and envision how machines understand the world's spoken languages.
During the Hackathon, you will be exposed to (but not limited to) speech and language toolkits such as ESPNet, SpeechBrain, K2/Kaldi, Huggingface and TorchAudio, as well as commercial APIs such as Amazon Lex, and you will get hands-on experience with this technology.
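The hackathon's opening question of how a phone recognizes what you say comes down first to feature extraction. As a minimal, hedged sketch (not part of the official hackathon materials, and written in plain NumPy rather than the toolkits named above), this is the log-power spectrogram computation most ASR front ends start from:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def log_power_spectrogram(x, frame_len=400, hop=160):
    """Hann-windowed FFT power in dB: the usual first step of an ASR front end."""
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return 10.0 * np.log10(power + 1e-10)

# A 440 Hz tone sampled at 16 kHz stands in for one second of recorded speech.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)

S = log_power_spectrogram(tone)
print(S.shape)          # (98, 201): 98 frames of 25 ms, 201 frequency bins
print(np.argmax(S[0]))  # 11: bin 11 * (16000/400) Hz = 440 Hz, the tone's frequency
```

Real systems then map these bins onto a mel filterbank and feed the result to a neural acoustic model; the toolkits listed above wrap all of this (and much more) for you.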
At the end of the Hackathon, every team will share their findings with the rest of the participants. Selected projects will have the opportunity to be presented at the SLT workshop.
The Hackathon will be at the Qatar Computing Research Institute (QCRI) in Doha, Qatar (GMT+3). In-person participation is preferred; however, remote participation is possible by joining a team with at least one person being local.
More information on how to apply, together with important dates, is available on our website https://slt2022.org/hackathon.php.
Interested? Apply here: https://forms.gle/a2droYbD4qset8ii9 The deadline for registration is September 30th, 2022.
If you have immediate questions, don't hesitate to contact our hackathon chairs directly at hackathon.slt2022@gmail.com.
3-3-1 | (2022-11-14) IberSPEECH 2022, Grenada, Spain
3-3-2 | (2022-11-14) CfP SPECOM 2022, Gurugram, India (updated)
The conference has been relocated to India.
********************************************************************
SPECOM-2022 – FINAL CALL FOR PAPERS
********************************************************************
24th International Conference on Speech and Computer (SPECOM-2022)
November 14-16, 2022, KIIT Campus, Gurugram, India
Web: www.specom.co.in
ORGANIZER
The conference is organized by KIIT College of Engineering as a hybrid event both in Gurugram/New Delhi, India and online.
CONFERENCE TOPICS
SPECOM attracts researchers, linguists and engineers working in the following areas of speech science, speech technology, natural language processing, and human-computer interaction:
Affective computing
Audio-visual speech processing
Corpus linguistics
Computational paralinguistics
Deep learning for audio processing
Feature extraction
Forensic speech investigations
Human-machine interaction
Language identification
Multichannel signal processing
Multimedia processing
Multimodal analysis and synthesis
Sign language processing
Speaker recognition
Speech and language resources
Speech analytics and audio mining
Speech and voice disorders
Speech-based applications
Speech driving systems in robotics
Speech enhancement
Speech perception
Speech recognition and understanding
Speech synthesis
Speech translation systems
Spoken dialogue systems
Spoken language processing
Text mining and sentiment analysis
Virtual and augmented reality
Voice assistants
OFFICIAL LANGUAGE
The official language of the event is English. However, papers on processing of languages other than English are strongly encouraged.
FORMAT OF THE CONFERENCE
The conference program will include presentations of invited talks, oral presentations, and poster/demonstration sessions.
SUBMISSION OF PAPERS
Authors are invited to submit full papers of 8-14 pages formatted in the Springer LNCS style. Each paper will be reviewed by at least three independent reviewers (single-blind), and accepted papers will be presented either orally or as posters. Papers submitted to SPECOM must not be under review by any other conference or publication during the SPECOM review cycle, and must not have been previously published or accepted for publication elsewhere. Authors are asked to submit their papers using the online submission system: https://easychair.org/conferences/?conf=specom2022
PROCEEDINGS
SPECOM Proceedings will be published by Springer as a book in the Lecture Notes in Artificial Intelligence (LNAI/LNCS) series listed in all major international citation databases.
IMPORTANT DATES (extended!)
August 16, 2022 .................. Submission of full papers
September 13, 2022 ........... Notification of acceptance
September 20, 2022 ........... Camera-ready papers
September 27, 2022 ........... Early registration
November 14-16, 2022 .......Conference dates
GENERAL CHAIR/CO-CHAIR
Shyam S Agrawal - KIIT, Gurugram
Amita Dev - IGDTUW, Delhi
TECHNICAL CHAIR/CO-CHAIRS
S.R. Mahadeva Prasanna - IIT Dharwad
Alexey Karpov - SPC RAS
Rodmonga Potapova - MSLU
K. Samudravijaya - KL University
CONTACTS
All correspondence regarding the conference should be addressed to SPECOM 2022 Secretariat
E-mail: specomkiit@kiitworld.in
Web: www.specom.co.in
3-3-3 | (2022-11-17) Journée d’Études 'A l'épreuve de l'archive : les premières enquêtes phonographiques', Sorbonne Nouvelle, Paris
We are pleased to invite you to the study day (Journée d’Études) entitled 'A l'épreuve de l'archive : les premières enquêtes phonographiques', which will be held on 17 November 2022, starting at 9 a.m., at the Maison de la recherche of the Université Sorbonne Nouvelle Paris 3 (4, rue des Irlandais, 75005 Paris).
3-3-4 | (2022-11-30) Third workshop on Resources for African Indigenous Languages (RAIL), Potchefstroom, South Africa
Final call for papers
3-3-5 | (2022-12-13) CfP 18th Australasian International Conference on Speech Science and Technology (SST2022), Canberra, Australia

SST2022: CALL FOR PAPERS
The Australasian Speech Science and Technology Association is pleased to call for papers for the 18th Australasian International Conference on Speech Science and Technology (SST2022). SST is an international interdisciplinary conference designed to foster collaboration among speech scientists, engineers, psycholinguists, audiologists, linguists, speech/language pathologists and industrial partners.

- Location: Canberra, Australia (remote participation options will also be available)
- Dates: 13-16 December 2022
- Host Institution: Australian National University
- Deadline for tutorial and special session proposals: 8 April 2022
- Deadline for submissions: 17 June 2022
- Notification of acceptance: 31 August 2022
- Deadline for upload of revised submissions: 16 September 2022
- Website: www.sst2022.com

Submissions are invited in all areas of speech science and technology, including:
- Acoustic phonetics
- Analysis of paralinguistics in speech and language
- Applications of speech science and technology
- Audiology
- Computer assisted language learning
- Corpus management and speech tools
- First language acquisition
- Forensic phonetics
- Hearing and hearing impairment
- Languages of Australia and Asia-Pacific (phonetics/phonology)
- Low-resource languages
- Pedagogical technologies for speech
- Second language acquisition
- Sociophonetics
- Speech signal processing, analysis, modelling and enhancement
- Speech pathology
- Speech perception
- Speech production
- Speech prosody, emotional speech, voice quality
- Speech synthesis and speech recognition
- Spoken language processing, translation, information retrieval and summarization
- Speaker and language recognition
- Spoken dialog systems and analysis of conversation
- Voice mechanisms, source-filter interactions

We are inviting two categories of submission: 4-page papers (for oral or poster presentation, and publication in the proceedings), and 1-page detailed abstracts (for poster presentation only). Please follow the author instructions in preparing your submission. We also invite proposals for tutorials, as 3-hour intensive instructional sessions to be held on the first day of the conference. In addition, we welcome proposals for special sessions, as thematic groupings of papers exploring specific topics or challenges. Interdisciplinary special sessions are particularly encouraged. For any queries, please contact sst2022conf@gmail.com.
3-3-6 | (2023-01-04) SIVA workshop @ Waikoloa Beach Marriott Resort, Hawaii, USA. CALL FOR PAPERS: SIVA'23
3-3-7 | (2023-01-04) Workshop on Socially Interactive Human-like Virtual Agents (SIVA'23), Waikoloa, Hawaii

CALL FOR PAPERS: SIVA'23
Workshop on Socially Interactive Human-like Virtual Agents
From expressive and context-aware multimodal generation of digital humans to understanding the social cognition of real humans

Submission (opens July 22, 2022): https://cmt3.research.microsoft.com/SIVA2023
SIVA'23 workshop: January 4 or 5, 2023, Waikoloa, Hawaii, https://www.stms-lab.fr/agenda/siva/detail/
FG 2023 conference: January 4-8, 2023, Waikoloa, Hawaii, https://fg2023.ieee-biometrics.org/

OVERVIEW
Due to the rapid growth of virtual, augmented, and hybrid reality, together with spectacular advances in artificial intelligence, the ultra-realistic generation and animation of digital humans with human-like behaviors is becoming a massive topic of interest. This complex endeavor requires modeling several elements of human behavior: the natural coordination of multimodal behaviors including text, speech, face, and body, and the contextualization of behavior in response to interlocutors of different cultures and motivations. The challenges in this topic are thus twofold: generating and animating coherent multimodal behaviors, and modeling the expressivity and contextualization of the virtual agent with respect to human behavior, as well as understanding and modeling how virtual agent behavior adapts to increase human engagement. The aim of this workshop is to connect traditionally distinct communities (e.g., speech, vision, cognitive neurosciences, social psychology) to elaborate and discuss the future of human interaction with human-like virtual agents. We expect contributions from the fields of signal processing, speech and vision, machine learning and artificial intelligence, perceptual studies, and cognitive science and neuroscience.

Topics will range from multimodal generative modeling of virtual agent behaviors, and speech-to-face and posture 2D and 3D animation, to original research topics including style, expressivity, and context-aware animation of virtual agents. Moreover, controllable real-time virtual agent models can be used as state-of-the-art experimental stimuli and confederates to design novel, groundbreaking experiments to advance understanding of social cognition in humans. Finally, these virtual humans can be used to create virtual environments for medical purposes including rehabilitation and training.

SCOPE
Topics of interest include but are not limited to:

+ Analysis of Multimodal Human-like Behavior
- Analyzing and understanding of human multimodal behavior (speech, gesture, face)
- Creating datasets for the study and modeling of human multimodal behavior
- Coordination and synchronization of human multimodal behavior
- Analysis of style and expressivity in human multimodal behavior
- Cultural variability of social multimodal behavior

+ Modeling and Generation of Multimodal Human-like Behavior
- Multimodal generation of human-like behavior (speech, gesture, face)
- Face and gesture generation driven by text and speech
- Context-aware generation of multimodal human-like behavior
- Modeling of style and expressivity for the generation of multimodal behavior
- Modeling paralinguistic cues for multimodal behavior generation
- Few-shot or zero-shot transfer of style and expressivity
- Slightly-supervised adaptation of multimodal behavior to context

+ Psychology and Cognition of Multimodal Human-like Behavior
- Cognition of deep fakes and ultra-realistic digital manipulation of human-like behavior
- Social agents/robots as tools for capturing, measuring and understanding multimodal behavior (speech, gesture, face)
- Neuroscience and social cognition of real humans using virtual agents and physical robots

IMPORTANT DATES
- Submission deadline: September 12, 2022
- Notification of acceptance: October 15, 2022
- Camera-ready deadline: October 31, 2022
- Workshop: January 4 or 5, 2023

VENUE
The SIVA workshop is organized as a satellite workshop of the IEEE International Conference on Automatic Face and Gesture Recognition 2023. The workshop will be collocated with the FG 2023 and WACV 2023 conferences at the Waikoloa Beach Marriott Resort, Hawaii, USA.

ADDITIONAL INFORMATION AND SUBMISSION DETAILS
Submissions must be original and not published or submitted elsewhere. Short papers of 3 pages excluding references encourage submission of early research in original emerging fields. Long papers of 6 to 8 pages excluding references promote the presentation of strongly original contributions, positional or survey papers. The manuscript should be formatted according to the Word or LaTeX template provided on the workshop website. All submissions will be reviewed by 3 reviewers. The reviewing process will be single-blind. Authors will be asked to disclose possible conflicts of interest, such as cooperation in the previous two years. Moreover, care will be taken to avoid reviewers from the same institution as the authors. Authors should submit their articles as a single PDF file on the submission website no later than September 12, 2022. Notification of acceptance will be sent by October 15, 2022, and the camera-ready version of the papers, revised according to the reviewers' comments, should be submitted by October 31, 2022. Accepted papers will be published in the proceedings of the FG 2023 conference. More information can be found on the SIVA website.

DIVERSITY, EQUALITY, AND INCLUSION
The format of this workshop will be hybrid, online and onsite. This format is intended to accommodate travel restrictions and COVID sanitary precautions, to promote inclusion in the research community (travel costs are high, and online presentations will encourage research contributions from geographical regions which would normally be excluded), and to consider ecological issues (e.g., CO2 footprint). The organizing committee is committed to paying attention to equality, diversity, and inclusivity in the consideration of invited speakers. This effort extends from the organizing committee and the invited speakers to the program committee.

ORGANIZING COMMITTEE
🌸 Nicolas Obin, STMS Lab (Ircam, CNRS, Sorbonne Université, ministère de la Culture)
🌸 Ryo Ishii, NTT Human Informatics Laboratories
🌸 Rachael E. Jack, University of Glasgow
🌸 Louis-Philippe Morency, Carnegie Mellon University
🌸 Catherine Pelachaud, CNRS - ISIR, Sorbonne Université
3-3-8 | (2023-01-16) Advanced Language Processing School (ALPS), Grenoble, France
3-3-9 | (2023-04-02) Sixth International Workshop on Narrative Extraction from Texts (Text2Story'23) , Dublin, Ireland ++ CALL FOR PAPERS ++
**************************************************************************** Sixth International Workshop on Narrative Extraction from Texts (Text2Story'23)
Held in conjunction with the 45th European Conference on Information Retrieval (ECIR'23)
April 2nd, 2023 - Dublin, Ireland
Website: https://text2story23.inesctec.pt
****************************************************************************

++ Important Dates ++
- Submission deadline: January 23rd, 2023
- Acceptance Notification Date: March 3rd, 2023
- Camera-ready copies: March 17th, 2023
- Workshop: April 2nd, 2023
++ Overview ++ Recent years have seen a continuously evolving stream of information, making it unmanageable and time-consuming for an interested reader to track and process it all and to keep up with the essential information and the various aspects of a story. Automated narrative extraction from text offers a compelling approach to this problem. It involves identifying the subset of interconnected raw documents, extracting the critical narrative story elements, and representing them in an adequate final form (e.g., timelines) that conveys the key points of the story in an easy-to-understand format. Although information extraction and natural language processing have made significant progress towards an automatic interpretation of texts, the problem of automated identification and analysis of the different elements of a narrative present in a document (set) still presents significant unsolved challenges.
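As a toy illustration of the timeline representation mentioned above (a hypothetical sketch, not a tool associated with the workshop or its datasets), one can extract sentences mentioning ISO-formatted dates and order them chronologically:

```python
import re

ISO_DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")  # ISO dates only, for simplicity

def toy_timeline(sentences):
    """Keep sentences that mention an ISO date, ordered chronologically.

    ISO date strings sort chronologically as plain strings, so no parsing is needed.
    """
    events = [(m.group(0), s) for s in sentences if (m := ISO_DATE.search(s))]
    return sorted(events)

news = [
    "The outbreak was declared a pandemic on 2020-03-11.",
    "The first cases were reported on 2019-12-31.",
    "This sentence carries no date and is dropped.",
]
timeline = toy_timeline(news)
print([d for d, _ in timeline])  # ['2019-12-31', '2020-03-11']
```

Real narrative extraction must of course also resolve relative dates ('last Tuesday'), coreference, and event salience, which is exactly where the open challenges described in the overview lie.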
++ List of Topics ++ In the sixth edition of the Text2Story workshop, we aim to bring to the forefront the challenges involved in understanding the structure of narratives and in incorporating their representation in well-established models, as well as in modern architectures (e.g., transformers) which are now common and form the backbone of almost every IR and NLP application. It is hoped that the workshop will provide a common forum to consolidate the multi-disciplinary efforts and foster discussions to identify the wide-ranging issues related to the narrative extraction task. To this end, we encourage the submission of high-quality and original submissions covering the following topics:
++ Dataset ++ We challenge interested researchers to consider submitting a paper that makes use of the tls-covid19 dataset (published at ECIR'21) within the scope and purposes of the Text2Story workshop. tls-covid19 consists of a number of curated topics related to the Covid-19 outbreak, with associated news articles from Portuguese and English news outlets and their respective reference timelines as gold standard. While it was designed to support timeline summarization research tasks, it can also be used for other tasks, including the study of news coverage about the COVID-19 pandemic. A script to reconstruct and expand the dataset is available at https://github.com/LIAAD/tls-covid19. The article itself is available at this link: https://link.springer.com/chapter/10.1007/978-3-030-72113-8_33
++ Submission Guidelines ++
We invite two kinds of submissions:
Submissions will be peer-reviewed by at least two members of the programme committee. The accepted papers will appear in the proceedings published at CEUR workshop proceedings (indexed in Scopus and DBLP) as long as they don't conflict with previous publication rights.
++ Workshop Format ++ Authors of accepted papers will be given 15 minutes for oral presentations.
++ Organizing Committee ++
Ricardo Campos (INESC TEC; Ci2 - Smart Cities Research Center, Polytechnic Institute of Tomar, Tomar, Portugal)
Alípio M. Jorge (INESC TEC; University of Porto, Portugal)
Adam Jatowt (University of Innsbruck, Austria)
Sumit Bhatia (Media and Data Science Research Lab, Adobe)
Marina Litvak (Shamoon Academic College of Engineering, Israel)

++ Proceedings Chairs ++
João Paulo Cordeiro (INESC TEC & Universidade da Beira do Interior)
Conceição Rocha (INESC TEC)

++ Web and Dissemination Chairs ++
Hugo Sousa (INESC TEC & University of Porto)
Behrooz Mansouri (Rochester Institute of Technology)

++ Program Committee ++
Álvaro Figueira (INESC TEC & University of Porto)
Andreas Spitz (University of Konstanz)
Antoine Doucet (Université de La Rochelle)
António Horta Branco (University of Lisbon)
Arian Pasquali (CitizenLab)
Bart Gajderowicz (University of Toronto)
Brenda Santana (Federal University of Rio Grande do Sul)
Bruno Martins (IST & INESC-ID, University of Lisbon)
Daniel Loureiro (Cardiff University)
Dennis Aumiller (Heidelberg University)
Dhruv Gupta (Norwegian University of Science and Technology)
Dyaa Albakour (Signal UK)
Evelin Amorim (INESC TEC)
Henrique Cardoso (INESC TEC & University of Porto)
Ismail Altingovde (Middle East Technical University)
João Paulo Cordeiro (INESC TEC & University of Beira Interior)
Kiran Bandeli (Walmart Inc.)
Luca Cagliero (Politecnico di Torino)
Ludovic Moncla (INSA Lyon)
Marc Finlayson (Florida International University)
Marc Spaniol (Université de Caen Normandie)
Moreno La Quatra (Politecnico di Torino)
Nuno Guimarães (INESC TEC & University of Porto)
Pablo Gamallo (University of Santiago de Compostela)
Pablo Gervás (Universidad Complutense de Madrid)
Paulo Quaresma (Universidade de Évora)
Paul Rayson (Lancaster University)
Ross Purves (University of Zurich)
Satya Almasian (Heidelberg University)
Sérgio Nunes (INESC TEC & University of Porto)
Simra Shahid (Adobe's Media and Data Science Research Lab)
Udo Kruschwitz (University of Regensburg)
++ Contacts ++ Website: https://text2story23.inesctec.pt For general inquiries regarding the workshop, reach the organizers at: text2story2023@easychair.org
3-3-10 | (2023-06-04) CfP ICASSP 2023, Rhodes Island, Greece
3-3-11 | (2023-06-04) CfSatellite Workshops ICASSP 2023, Rhodes Island, Greece
3-3-12 | (2023-06-12) 13th International Conference on Multimedia Retrieval, Thessaloniki, Greece
ICMR2023 – ACM International Conference on Multimedia Retrieval
3-3-13 | (2023-06-15) JPC (Journées de Phonétique Clinique) 2023, Toulouse, France

JPC 2023 - Toulouse, 15-17 June 2023
First Call for Papers

Since their creation in 2005, the Journées de Phonétique Clinique (JPC) have been organized regularly on a biennial basis. After a previous edition organized in Belgium in 2019 by our colleagues of the phonetics laboratory of the Université de Mons (under the aegis of the Institut de Recherche en Sciences et Technologies du Langage), the JPC return to France in 2023 (after cancellation in 2021) for their 9th edition. Co-organized by the Institut de Recherche en Informatique de Toulouse (IRIT), the Laboratoire de Neuro-Psycho-Linguistique (LNPL) and the Centre Hospitalo-Universitaire de Toulouse, as well as by the Laboratoire Informatique d'Avignon (LIA), the event will be held at the Université de Toulouse from 15 to 17 June 2023.

An international scientific meeting, the Journées de Phonétique Clinique are primarily intended to bring together and foster exchanges between researchers, clinicians, computer scientists, engineers, phoneticians and any other professionals interested in the functioning of speech, voice and language. The JPC welcome experts as well as young researchers and students from the clinical (medicine, speech-language pathology), psychological, computer science and language science fields. The production and perception of speech, voice and language in children and adults, whether healthy or affected by a pathology, are the core domains of the JPC. They are approached from varied points of view, enabling the sharing of knowledge and the opening of new avenues for reflection, research and collaboration. For this ninth edition, the theme of speech measurement will be highlighted. It fits within a conceptual framework with multiple facets: perceptual analyses, automatic signal processing, and characterization of intelligibility, of speech disorders, of dysfluencies affecting speech rate, of prosody, and more. Its clinical relevance is essential: evaluating the disorder, its functional consequences and its impact on quality of life is paramount for the follow-up of patients with neurological, oncological and other pathologies. Three plenary lectures are planned around the theme of the conference. A round table and workshops will also be part of the program of this new edition. Proposals for papers (abstracts of 400 words, excluding title, authors and references) should address the following issues (non-exhaustive list):

Particular attention will be paid to proposals targeting the theme of speech measurement.

Important dates:
- 20 January 2023 → Deadline for abstract submission via SciencesConf: https://jpc2023.sciencesconf.org/
3-3-14 | (2023-07-15) MLDM 2023 : 18th International Conference on Machine Learning and Data Mining, New York,NY, USA MLDM 2023 : 18th International Conference on Machine Learning and Data Mining
Contact: icphs2023@guarant.cz
3-3-16 | (2024-05-13) 13th International Seminar on Speech Production, Autrans, France 13th International Seminar on Speech Production, 13-17 May 2024, in Autrans, France
It is time for the next International Seminar on Speech Production.
After the launch in 1988 in Grenoble, followed by editions in Leeds (1990), Old Saybrook (1993), Autrans (1996), Kloster Seeon (2000), Sydney (2003), Ubatuba (2006), Strasbourg (2008), Montreal (2011), Cologne (2014), Tianjin (2017) and a virtual edition in 2020, the 13th ISSP will come back (close) to Grenoble.
After a very successful virtual ISSP in 2020 (Haskins Labs), we are ready again for an in-person meeting in a very beautiful location in the mountains of Autrans (of course we will provide an option to attend virtually).
Take out your calendars and mark 13-17 May 2024 for the 13th International Seminar on Speech Production, co-organized by several laboratories in France.
More information including the website and important dates will be provided soon.
We are looking forward to meeting you in Autrans in 2024!
The organizing committee, Cécile Fougeron & Pascal Perrier together with Jalal Al-Tamimi, Pierre Baraduc, Véronique Boulanger, Mélanie Canault, Maëva Garnier, Anne Hermes, Fabrice Hirsch, Leonardo Lancia, Yves Laprie, Yohann Meynadier, Slim Ouni, Rudolph Sock, Béatrice Vaxelaire
Follow us on twitter @issp2024!
Claire PILLOT-LOISEAU
. Associate Professor (Maître de Conférences HDR) in Phonetics . Head of the University Diploma in Phonetics Applied to the French Language (DUPALF)
Laboratoire de Phonétique et Phonologie UMR 7018 (LPP)
Université Sorbonne Nouvelle, département Institut de Linguistique et de Phonétique Générales et Appliquées (ILPGA)
. 4, rue des Irlandais, 75005 PARIS (Laboratoire)
. 8, Avenue de Saint Mandé, 75012, PARIS (Université)