ISCApad #260
Monday, February 10, 2020, by Chris Wellekens
7-1 IEEE JSTSP Special Issue on Compact Deep Neural Networks with Industrial Applications (updated)
ACM Transactions on Internet Technology (TOIT)
Special Section on Computational Modeling and Understanding of Emotions in Conflictual Social Interactions
Call for Papers: https://toit.acm.org/pdf/ACM-ToIT-CfP-ECSI-ext.pdf
Paper Submission Deadline: 31st March 2019
Author Guidelines and Templates: https://toit.acm.org/authors.cfm
Paper Submission Website: https://mc.manuscriptcentral.com/toit
The expression of social, cultural and political opinions in social media often features a strong affective component, especially when it occurs in highly-polarized contexts (e.g., in discussions on political elections, migrants, civil rights, and so on). In particular, hate speech is recognized as an extreme, yet typical, expression of opinion, and it is increasingly intertwined with the spread of defamatory, false stories. Current approaches for monitoring and circumscribing the spread of these phenomena mostly rely on simple affective models that do not account for emotions as complex cognitive, social and cultural constructs behind linguistic behavior.
In particular, moral emotions hold potential for advancing sentiment analysis in social media, especially since they provide insights into the motivations behind hate speech. Understanding these affective dynamics is also important for modelling human behavior in social settings that involve other people and artificial agents, as well as for designing socially aware artificial systems.
How can we include finer-grained accounts of emotions in computational models of interpersonal and social interactions, with the goal of monitoring and dealing with conflicts in social media and agent interactions? How can we leverage recent advances in machine learning and reasoning techniques to design more effective computational models of interpersonal and social conflict? We invite contributions that address these questions by presenting enhanced computational models and processing methods.
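As a point of reference for what the call terms 'simple affective models', here is a minimal, hypothetical sketch of a surface-level classifier (assuming scikit-learn; the corpus, labels and task framing are placeholders). Such a model captures lexical cues only and none of the cognitive, social or cultural structure of emotion that this special section targets.

    # Hypothetical baseline: bag-of-words + logistic regression for a toy
    # stance/hate-speech labelling task. Corpus and labels are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["example post one", "example post two"]   # placeholder corpus
    labels = [0, 1]                                     # 0 = neutral, 1 = hateful (toy labels)

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # surface lexical features only
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    print(model.predict(["another example post"]))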
INDICATIVE TOPICS OF INTEREST
- Computational models of emotions
  - Moral emotions (e.g. contempt, anger and disgust) in conflictual social interactions
  - Affective dynamics in human-human and human-agent conflictual interactions
  - Interplay of emotions in conflictual interactions
  - Dimensional and categorical emotion models in conflict representation
- Automatic processing of affect in polarized debates on social media
  - Stance and hate speech detection
  - Affect in online virality and fake news detection
  - Opinions and arguments on highly controversial topics
  - Linguistic and multimodal corpora for affect analysis in conflictual interactions
  - Figurative and rhetorical devices in social contrasts
- Applications
  - Conflict detection and hate speech monitoring in political debates
  - Conflict-aware and conflict-oriented conversational agents
  - Integration of social cues in human-agent interaction strategies
  - Conflict-aware agents in pedagogical and coaching applications
SUBMISSION FORMAT AND GUIDELINES
Author guidelines for preparation of manuscript and submission instructions can be found at: https://toit.acm.org/authors.cfm
Please select 'Computational Modeling and Understanding of Emotions in Conflictual Social Interactions' under the Manuscript Type dropdown on the Manuscript Central website.
Submission: 31 Mar 2019
First decision: 1 July 2019
Revision: 15 Aug 2019
Final decision: 1 Oct 2019
Final manuscript: 1 Nov 2019
Publication date: 1 Mar 2020
SPECIAL SECTION EDITORS
Chloé Clavel, Institut-Mines-Telecom, Telecom-ParisTech, LTCI, France
http://clavel.wp.mines-telecom.fr
Rossana Damiano, Università degli Studi di Torino, Italy
http://www.di.unito.it/~rossana/
Viviana Patti, Università degli Studi di Torino, Italy
Paolo Rosso, Universitat Politècnica de València, Spain
http://users.dsic.upv.es/~prosso
ACM TOIT Editor-in-Chief
Ling Liu, Department of Computer Science, Georgia Institute of Technology
CONTACT
Please send any queries about this CfP to ecsi.toit@gmail.com.
Special Issue on Speech & Dementia, Computer Speech and Language
The CfP is online here:
https://www.journals.elsevier.com/computer-speech-and-language/call-for-papers/special-issue-on-speech-dementia
Organizers:
Heidi Christensen (University of Sheffield, GBR),
Frank Rudzicz (University of Toronto, CAN),
Johannes Schröder (University of Heidelberg, DEU),
Tanja Schultz (University of Bremen, DEU)
TIPA - Travaux interdisciplinaires sur la parole et le langage (Interdisciplinary work on speech and language)
https://journals.openedition.org/tipa/?lang=en
TIPA is an open-access journal hosted on the online platform 'OpenEdition Journals'; submission and publication are free of charge. Evaluation is double-blind, carried out by a scientific committee.
HOW DOES THE BODY CONTRIBUTE TO DISCOURSE AND MEANING?
Coordination: Brahim AZAOUI (Montpellier University, LIRDEF) & Marion TELLIER (Aix-Marseille University, LPL)
Research on the body, taken in a broad sense (gaze, manual gestures, proxemics, etc.), has recently seen renewed interest in various fields of the human sciences. Since the praxeological shift in linguistics in the 1950s, driven in particular by speech act theory, interactional linguistics (Mondada, 2004, 2007; Kerbrat-Orecchioni, 2004) has given it a place in its work. Similarly, didactics has gradually recognized its importance in the teaching and learning process (Sime, 2001, 2006; Tellier, 2014 & 2016), thanks in particular to numerous studies carried out in social semiotics (Jewitt, 2008; Kress et al., 2001), education sciences (Pujade-Renaud, 1983), psychology and cognitive sciences (Stam, 2013), and linguistics (Aden, 2017; Colletta, 2004; Tellier, 2008, 2014; Azaoui, 2015, 2019; Gullberg, 2010).
However, although this field of study is gaining interest, as shown by the number of articles, books and PhD dissertations dedicated to it, few French journals have devoted an issue to it.
This issue of the TIPA journal seeks to contribute to the understanding and dissemination of this theme by collecting contributions that address the following question: how does the body of speakers co-construct discourse and meaning in didactic speech? The term 'didactic speech' refers to any situation where the discourse of the interlocutors aims to make somebody know or learn something. This conception is inspired by Moirand's work on the notion of didacticity (1993), which makes it possible to distinguish discourses whose primary intention is didactic, such as those produced in school situations, from those which are not didactic but have a didactic intent. Such speech can therefore take place in contexts other than the classroom, whether in face-to-face or distant interactions (e.g. videoconferencing) or in asymmetric interactions in which an expert must adapt his or her speech to explain to or convince a non-expert (doctor/patient, parent/child, professional/client, political speech, etc.).
The articles proposed will pertain to a theoretical framework that considers speech and the body as being in constant interaction, where the study and understanding of one makes the functioning of the other explicit, or as being part of the same cognitive process (McNeill, 2005; Kendon, 2004). The authors will indicate which of the following three areas their contribution focuses on:
Didactic discourse, including interactions outside the school context: to what extent is it possible to consider a continuum of pedagogical gestures (Azaoui, 2014, 2015; Tellier, 2015) and to qualify gestures made outside the classroom context as pedagogical (Azaoui, 2015)? The work of McNeill and Kendon has highlighted the coverbal dimension of certain gestures carried out in interaction; what about other non-verbal phenomena? To what extent could proxemics be described as coverbal (Azaoui, 2019a)? How do facial signals (eyebrow movements, eye or lip movements) contribute to the construction of meaning in an exchange, for example by signalling understanding or non-understanding (Allwood & Cerrato, 2003)? How does recent work on motion capture shed new light on discourse and the construction of meaning?
Most of our knowledge about the use of gestures and the body in general comes from work based on the analysis of the practices of teachers and other professionals (Tellier & Cadet, 2014; Azaoui, 2016; Mondada, 2013; Saubesty & Tellier, 2015) or learners (Colletta, 2004; Stam, 2013; Gullberg, 2010). The analysis can also be approached from the point of view of the recipients of the multimodal discourse, by focusing on their verbal comments on the gestures (or the body in general) or on their meta-gestural activity, that is, the gestures made to 'talk' about the observed gestures (Azaoui, 2019b).
Contributions in this line of analysis will help to increase our understanding of the link between bodily activity and speech. These proposals could focus in particular on the way the body participates in the organisation of exchanges, in turn-taking, or in moments of meaning construction, such as explanatory sequences or the resolution of misunderstandings, for example by showing how the body's movements are articulated with speech (or not) to explain or to give feedback to an interlocutor on his or her speech.
The relationship between body and speech can finally be considered from the point of view of training. For the past thirty years or so, calls for training in bodily activity have been made (Calbris & Porcher, 1989) and subsequently taken up by a number of researchers (Cadet & Tellier, 2007; Tellier & Cadet, 2014; Azaoui, 2014; Tellier & Yerian, 2018). If the hands of apprentices are said to be intelligent (Filliettaz, St Georges & Duc, 2008), it seems that, to a certain extent, all professionals use their body to organise or carry out their activity. Training in and through kinaesthetics therefore seems necessary, if only to raise awareness. But can we train in kinaesthetics? If so, how?
Some existing work is based on self-confrontation interviews, leaving room for teachers to verbalize their gestural or, more generally, kinaesthetic practices (Gadoni & Tellier, 2014, 2015; Azaoui, 2014, 2015). Papers may focus on this process of awareness raising through video (used as stimulated recall). In addition, papers analysing the implementation and/or impact of training schemes intended for the teaching profession are also welcome in this issue. Such proposals may concern any professional field in which interaction is asymmetrical, such as doctor-patient relations, communication with young children, communication with people with comprehension difficulties (pathological or not), professional-client relations, etc.
April 1st, 2019: first call for papers
June 3rd, 2019: second call for papers
August 31st, 2019: submission of the paper (version 1)
November 15th, 2019: Notification to authors: acceptance, proposal for amendments (of version 1) or refusal
January 15th, 2020: submission of the amended version (version 2)
March 15th, 2020: Committee feedback (regarding the final version)
April 15th, 2020: publication
Please send 3 files in electronic form to: lpl-tipa@univ-amu.fr, marion.tellier@univ-amu.fr, brahim.azaoui@umontpellier.fr
- a .doc file containing, in addition to the body of the article, the title, name and affiliation of the author(s)
- two anonymous files, one in .doc format and the other in .pdf format.
For more details, please visit the 'instructions to authors' page at https://journals.openedition.org/tipa/222
Azaoui, B. (2015). Polyfocal classroom interactions and teaching gestures. An analysis of non verbal orchestration. Proceedings “Gestures and speech in interaction (GESPIN)”, Nantes, 2-4 septembre 2015.
Azaoui, B. (2019b). Ce que les élèves voient et disent du corps de leur enseignant: analyse multimodale de leur discours. Dans V. Rivière & N. Blanc (dirs.), Observer l’activité multimodale en situations éducatives : circulations entre recherche et formation. Lyon : ENS Editions.
Calbris, G. & Porcher, L. (1989). Geste et communication. Paris : Didier.
Colletta, J.-M. (2004). Le développement de la parole chez l’enfant âgé de 6 à 11 ans. Liège : Mardaga.
Filliettaz, L., St Georges, I. & Duc, B. (dirs.) (2008). Cahiers de la section des sciences de l’éducation, no 117, « Vos mains sont intelligentes ! Interactions en formation professionnelle initiale ». Université de Genève : Faculté de psychologie et des sciences de l’éducation.
Jewitt, C. (2008). Multimodality and literacy in school classrooms. Review of Research in Education, 32, 241–267.
Kendon, A. (2004). Gesture: Visible action as utterance. Cambridge: Cambridge University Press.
McNeill, D. (2005). Gesture and thought. Chicago, USA: University of Chicago Press.
Mondada, L. (2013). Embodied and Spatial Resources for Turn-Taking in Institutional Multi-Party Interactions: Participatory Democracy Debates. Journal of Pragmatics, 46, 39-68.
Stam, G. (2013). Second language acquisition and gesture. In C. A. Chapelle (Ed.), The encyclopedia of applied linguistics. Oxford, England: Blackwell.
Tellier, M. & Cadet, L. (2014). Le corps et la voix de l’enseignant : théorie et pratique. Paris : Maison des langues.
Tellier, M. & Yerian, K. (2018). Mettre du corps à l’ouvrage : Travailler sur la mise en scène du corps du jeune enseignant en formation universitaire. Les Cahiers de l’APLIUT, n°37(2).
Special issue of Language and Speech
The goal of this special issue of Language and Speech is to highlight recent work exploring sociolinguistic variation in prosody. The papers will be based in part on talks and poster presentations from the recent “Experimental and Theoretical Advances in Prosody” conference (ETAP4, etap4.krisyu.org) which featured a special theme entitled: “Sociolectal and dialectal variability in prosody.” Note, however, that this is an open call and submissions are not restricted to papers presented at the conference.
As in many language fields, studies of prosody have focused on majority languages and dialects and on speakers who hold power in social structures. The goal of this special issue is to diversify prosody research in terms of the languages and dialects being investigated, as well as the social structures that influence prosodic variation. The issue brings together prosody researchers and researchers exploring sociological variation in prosody, with a focus on the prosody of marginalized dialects and prosodic differences based on gender and sexuality.
We invite proposals for papers that will:
• Establish the broad questions in sociolinguistics that would especially benefit from prosodic research
• Address the theoretical and methodological challenges and opportunities that come with studying sociolinguistic prosodic variation
• Share best practices for engaging in prosodic research of understudied languages and social groups to address linguistic bias
We especially encourage proposals for papers that focus on the prosody of marginalized dialects and prosodic differences based on gender and sexuality.
The editors of the special issue will be Meghan Armstrong-Abrami, Mara Breen, Shelome Gooden, Erez Levon, and Kristine Yu. The editors will review submitted paper proposals and then invite authors of selected proposals to submit full papers. Each invited paper will be handled by one of the editors and reviewed by 2-3 additional reviewers. Based on the reviews, the editors plan to accept 10-12 papers for publication in this special issue. Please note that publication is not guaranteed in this special issue. Invited papers will undergo the standard Language and Speech review process and be subject to Language and Speech author guidelines (https://us.sagepub.com/en-us/nam/journal/language-and-speech#submission-guidelines). Invited papers can be short reports or full reports (see author guidelines, section 1.2).
Paper proposals in PDF format must be sent to etap4umass@gmail.com by May 15, 2019. Proposals must be no more than one page (with an additional page allowed for figures, examples, and references). Authors of selected proposals will be invited to submit their full papers, which will be due by December 15, 2019. Acceptance/rejection notifications will be sent in early 2020 together with editorial comments. We hope to submit the final proofs to the journal by summer 2020.
IT-Information Technology
Call for papers
Special Issue:
Affective Computing, Deep Learning & Health
Scope of the Journal: IT - Information Technology is a strictly peer-reviewed scientific journal and the oldest German journal in the field of information technology. Today, the major aim of IT - Information Technology is to highlight ongoing, newsworthy areas in information technology and informatics and their applications, presenting these topics with a holistic view. It addresses scientists, graduate students, and experts in industrial research and development.
Aim of the Special Issue: The analysis of human behaviours and emotions based on affective computing techniques has received considerable attention in the literature in recent years. The main aim of this interest is to endow computers with the human traits of adequately recognising and responding to emotion or affect. One particularly interesting field for applying affective computing technologies is healthcare. In clinical psychology and psychotherapy settings, affective computing can be used to provide objective diagnostic information, accurately track changes in patients' mood or emotion regulation in therapy, or enable virtual therapists to empathise with and appropriately respond to their patients' needs. As in most areas relying heavily on Artificial Intelligence, deep learning solutions are the pre-eminent approach in many affective computing applications.
This special issue solicits papers which contribute ideas, methods and case studies on how affective computing technologies can aid healthcare, including, but not limited to, solutions utilising deep learning.
Authors are asked to kindly submit their manuscript online at: http://www.editorialmanager.com/itit/.
Call for papers: Pluricentric Languages in Speech Technology
Pluricentric languages (PLCLs) are a common type among the languages of the world. Presently, 43 languages have been identified as belonging to this category, including English, Spanish, Portuguese, Bengali, Hindi and Urdu. A language is identified as pluricentric if it is used in at least two nations where it has an official function and if it forms national varieties with specific linguistic and pragmatic features of their own. In addition to variation at the level of national standard varieties, there is also so-called 'second-level variation' at regional and local levels, which is often used in diglossic speech situations where code-switching is a salient feature, with two or more varieties being used within the same utterance. The amount of linguistic variation in pluricentric languages is considerable and poses a challenge for speech recognition in particular and human language technology in general.
The topic of pluricentric languages overlaps in some respects with that of low-resourced languages. In contrast to 'low-resourced' languages, pluricentric languages may already have plenty of resources (e.g., English, French, German), but variant-sensitive or variant-independent technology is likely to be absent. In contrast to work in the field of dialect recognition, the 'non-dominant' varieties of pluricentric languages are standard languages in the respective countries and are thus also printed and spoken in the media, in parliaments and in legal texts.
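As a toy illustration (not taken from the call) of what 'variant-sensitive' processing must handle at the lexical level, the sketch below tags the tokens of a constructed Austrian/German German utterance against two hypothetical variety word lists; real systems would of course also need to model acoustic, morphosyntactic and pragmatic variation.

    # Toy variety tagger: label each token with the national variety whose
    # (hypothetical, hand-made) word list it matches. Lists are placeholders.
    AT_WORDS = {"jänner", "marille", "topfen"}   # Austrian German examples
    DE_WORDS = {"januar", "aprikose", "quark"}   # German German examples

    def tag_varieties(utterance):
        tags = []
        for token in utterance.lower().split():
            if token in AT_WORDS and token not in DE_WORDS:
                tags.append((token, "AT"))
            elif token in DE_WORDS and token not in AT_WORDS:
                tags.append((token, "DE"))
            else:
                tags.append((token, "common"))   # shared or unknown vocabulary
        return tags

    print(tag_varieties("im Jänner gibt es Marille und Quark"))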
The motivation for this special issue is the observation that pluricentric languages have so far mainly been described linguistically but not sufficiently been dealt with in the field of speech technology. This is particularly the case with the so-called “non-dominant varieties”. Given the current state of research in the field, we are especially interested in contributions which:
– investigate methods for creating speech and language resources, with a special focus on “non-dominant varieties” (e.g., Scots, Saami, Karelian Finnish, Tajik, Frisian, as well as diverse American and African languages: Aymara, Bambara, Fulfulde, Tuareg, etc.).
– develop speech technologies such as speech recognition, text-to-speech and speech-to-speech for the national varieties of pluricentric languages; on the level of standard varieties and on the level of so-called “informal speech”.
– investigate novel statistical methods for speech and language technology needed to deal with small data sets.
– study the (automatic) processing of speech for code-switched speech in national varieties of pluricentric languages.
– investigate methods on how to use speech technology to aid sociolinguistic studies.
– present empirical perception and production studies on the phonetics and phonology of national varieties of pluricentric languages.
– present empirical perception and production studies on learning a pluricentric language as a second language and on developing computer aided language learning (CALL) tools for pluricentric languages.
– study effects on speech technology on language change for pluricentric languages (e.g., compare developments of non-dominant varieties in comparison of dominant varieties for which speech and language technologies are available).
This special issue is inspired by the Interspeech satellite workshop “Pluricentric Languages in Speech Technology”, held in Graz on September 14, 2019 (http://www.pluricentriclanguages.org/ndv-interspeech-workshop-graz-2019/?id=0). The special issue invites contributions from participants of the workshop as well as from others working in related areas. Papers of an interdisciplinary nature are especially welcome! Manuscript submission to this Virtual Special Issue is possible between December 1, 2019 and November 30, 2020.
Editors:
Rudolf Muhr (University of Graz, Austria), rudolf.muhr@uni-graz.at
Barbara Schuppler (Graz University of Technology, Austria), b.schuppler@tugraz.at
Tania Habib (University of Engineering and Technology Lahore, Pakistan), tania.habib@uet.edu.pk
Call for Papers: Special issue on Advances in Automatic Speaker Verification Anti-spoofing
We would like to announce the Call for Papers for a Special Issue on Vocal Accommodation in Speech Communication in the Journal of Phonetics, co-edited by Jennifer Pardo, Elisa Pellegrino, Volker Dellwo and Bernd Möbius.
We especially invite contributions which:
- examine and ideally compare instances of vocal accommodation in human-human and human-computer interactions according to their underlying mechanisms (e.g. an automatic perception-production link) and social functions (e.g. to signal social closeness or distance; to become more intelligible; to sound dominant, trustworthy or attractive) (a minimal sketch of one common convergence measure follows this list);
- investigate the effect of task-specific and talker-specific characteristics (gender, age, personality, linguistic and cultural background, role in the interaction) on the degree and direction of convergence towards human and computer interlocutors;
- integrate articulatory and/or perceptual/neurocognitive/multimodal data into the analysis of vocal accommodation in interactive and non-interactive speech tasks;
- investigate the contribution of short- and long-term accommodation in human-human and human-computer interactions to the diffusion of linguistic innovation and ultimately to language variation and change;
- explore the implications of accommodation for human and machine speaker recognition, language learning technologies, and speech rehabilitation.
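As referenced in the first item above, here is a minimal sketch of one common way convergence is quantified: the distance between interlocutors' per-turn mean F0 and its trend over the interaction. The F0 values below are invented placeholders; actual studies use richer acoustic-prosodic feature sets and appropriate statistical models.

    # Hypothetical convergence measure on per-turn mean F0 of two speakers:
    # a negative trend of the inter-speaker distance suggests convergence.
    import numpy as np

    f0_speaker_a = np.array([210.0, 205.0, 200.0, 198.0, 196.0])  # Hz, per turn (placeholder)
    f0_speaker_b = np.array([120.0, 128.0, 140.0, 150.0, 158.0])  # Hz, per turn (placeholder)

    distance = np.abs(f0_speaker_a - f0_speaker_b)      # per-turn dissimilarity
    turns = np.arange(len(distance))
    slope, intercept = np.polyfit(turns, distance, 1)   # linear trend of the distance

    print(f"per-turn F0 distance: {distance}")
    print(f"trend: {slope:.1f} Hz per turn ({'convergence' if slope < 0 else 'divergence'})")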
Important Dates and Timeline
Deadline for submission of 1-page abstract: 31 July 2019
Invitation for full paper submission: 31 August 2019
Deadline for submission of full paper: 31 December 2019
Further information: https://www.journals.elsevier.com/journal-of-phonetics/call-for-papers/call-for-papers-vocal-accommodation-in-speech-communication
With best wishes from the editors,
Jennifer Pardo (pardoj@montclair.edu), Elisa Pellegrino (elisa.pellegrino@uzh.ch), Volker Dellwo (volker.dellwo@uzh.ch), Bernd Möbius (moebius@coli.uni-saarland.de)
In the past years, thanks to disruptive advances in deep learning, significant progress has been made in speech processing, language processing, computer vision, and applications across multiple modalities. Despite the superior empirical results, however, important issues remain to be addressed. Both theoretical and empirical advances are expected to drive further performance improvements, which in turn will generate new opportunities for in-depth studies of emerging learning and modeling methodologies. Moreover, many problems in artificial intelligence involve more than one modality, such as language, vision, speech and heterogeneous signals, and techniques developed for different modalities can often be successfully cross-fertilized. It is therefore of great interest to study multimodal modeling and learning approaches across more than one modality. The goal of this special issue is to bring together a diverse but complementary set of contributions on emerging deep learning methods for problems across multiple modalities.
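To make the notion of multimodal modeling concrete, below is a minimal late-fusion sketch (an assumed toy architecture, not one prescribed by the call), assuming PyTorch: speech and text embeddings are projected into a shared space, concatenated, and classified. All dimensions and the task are placeholders.

    # Toy late-fusion classifier over two modalities (speech and text embeddings).
    import torch
    import torch.nn as nn

    class LateFusionClassifier(nn.Module):
        def __init__(self, speech_dim=512, text_dim=768, hidden=256, n_classes=4):
            super().__init__()
            self.speech_proj = nn.Linear(speech_dim, hidden)   # project speech embedding
            self.text_proj = nn.Linear(text_dim, hidden)       # project text embedding
            self.head = nn.Sequential(nn.ReLU(), nn.Linear(2 * hidden, n_classes))

        def forward(self, speech_emb, text_emb):
            fused = torch.cat([self.speech_proj(speech_emb),
                               self.text_proj(text_emb)], dim=-1)  # concatenate modalities
            return self.head(fused)

    model = LateFusionClassifier()
    logits = model(torch.randn(8, 512), torch.randn(8, 768))  # batch of 8 placeholder examples
    print(logits.shape)  # torch.Size([8, 4])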
Prospective authors should follow the instructions given on the IEEE JSTSP webpages and submit their manuscript to the web submission system.
Optimization is now widely reckoned as an indispensable tool in signal processing and machine learning. Although convex optimization remains a powerful, and by far the most extensively used, paradigm for tackling signal processing and machine learning applications, we have witnessed a shift in interest to non-convex optimization techniques over the last few years. On the one hand, many signal processing and machine learning applications, such as dictionary recovery, low-rank matrix recovery, phase retrieval, and source localization, give rise to well-structured non-convex formulations that exhibit properties akin to those of convex optimization problems and can be solved to optimality more efficiently than their convex reformulations or approximations.
On the other hand, for some contemporary signal processing and machine learning applications, such as deep learning and sparse regression, the use of convex optimization techniques may not be adequate or even desirable. Given the rapidly growing yet scattered literature on the subject, there is a clear need for a special issue that introduces the essential elements of non-convex optimization to the broader signal processing and machine learning communities, provides insights into how the structure of the non-convex formulations of various practical problems can be exploited in algorithm design, showcases some notable successes in this line of study, and identifies important research issues motivated by existing or emerging applications. This special issue aims to address the aforementioned needs by soliciting tutorial-style articles with pointers to available software whenever possible.
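As a concrete toy instance of the 'well-structured non-convex formulations' mentioned above, the sketch below (with an invented problem instance and step size) recovers a low-rank positive semidefinite matrix by running plain gradient descent on its non-convex factored parameterization, which in this benign setting reaches a (near) global optimum from a generic initialization.

    # Recover a rank-r PSD matrix M = A A^T via gradient descent on the
    # non-convex factored objective f(U) = 0.5 * ||U U^T - M||_F^2.
    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 30, 2
    A = rng.standard_normal((n, r))
    M = A @ A.T                                   # ground-truth rank-r matrix

    U = 0.1 * rng.standard_normal((n, r))         # random initialization
    eta = 0.25 / np.linalg.norm(M, 2)             # step size scaled to the spectral norm of M

    for _ in range(2000):
        U -= eta * 2.0 * (U @ U.T - M) @ U        # gradient of the factored loss

    rel_err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
    print(f"relative recovery error: {rel_err:.2e}")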
White paper due: August 1, 2019
Invitation notification: September 1, 2019
Manuscript due: November 1, 2019
First review to authors: January 1, 2020
Guest Editors:
Call for submissions: https://tal-61-1.sciencesconf.org/
TAL Journal: regular issue
2020 Volume 61-1
Editors: Cécile Fabre, Emmanuel Morin, Sophie Rosset and Pascale Sébillot
Deadline for submission: 15/11/2019
--
TOPICS
The TAL journal launches a call for papers for an open issue of the journal. We invite papers in any field of natural language processing, including:
- lexicon, syntax, semantics, discourse and pragmatics;
- morphology, phonology and phonetics;
- spoken and written language analysis and generation;
- logical, symbolic and statistical models of language;
- information extraction and text mining;
- multilingual processing, machine translation and translation tools;
- natural language interfaces and dialogue systems;
- multimodal interfaces with language components;
- language tools and resources;
- system evaluation;
- terminology, knowledge acquisition from texts;
- information retrieval;
- corpus linguistics;
- use of NLP tools for linguistic modeling;
- computer assisted language learning;
- applications of natural language processing.
Whatever the topic, papers must stress the natural language processing aspects.
'Position statement' or 'State of the art' papers are welcome.
LANGUAGE
Manuscripts may be submitted in English or French. Submissions in English are accepted only if one of the co-authors is a non-French-speaking person.
THE JOURNAL
TAL (http://www.atala.org/revuetal - Traitement Automatique des Langues / Natural Language Processing) is an international journal published by ATALA (French Association for Natural Language Processing) since 1960 with the support of CNRS (National Centre for Scientific Research). It has moved to an electronic mode of publication, with printing on demand.
IMPORTANT DATES
Deadline for submission: 15/11/2019
Notification to authors after first review: 29/02/2020
Notification to authors after second review: 2/05/2020
Publication: September 2020
SUBMISSION FORMAT
Papers should strictly be between 20 and 25 pages long.
TAL performs double-blind review: it is thus necessary to anonymise the manuscript and the name of the pdf file and to avoid self references.
Style sheets are available for download on the Web site of the journal
(http://www.atala.org/content/instructions-aux-auteurs-feuilles-de-style-0).
Authors who intend to submit a paper are encouraged to upload their contribution via the menu 'Paper submission' (PDF format). To do so, you will need an account on the sciencesconf platform. To create an account, go to the site http://www.sciencesconf.org and click on 'create account' next to the 'Connect' button at the top of the page. To submit, come back to the page (soon available) http://tal-61-1.sciencesconf.org/, connect to your account and upload your submission.
Extended deadline to September 15th for the IEEE JSTSP special issue on deep learning across multiple modalities described above.
Prospective authors should follow the instructions given on the IEEE JSTSP webpages and submit their manuscript to the web submission system.