ISCApad #243 |
Sunday, September 23, 2018 by Chris Wellekens |
7-1 | Special Issue on Multimodal Interaction in Automotive Applications, Springer Journal on Multimodal User Interfaces

Multimodal Interaction in Automotive Applications
=================================================
With the smartphone becoming ubiquitous, pervasive distributed computing is becoming a reality. Aspects of the Internet of Things increasingly find their way into our daily lives. Users interact multimodally with their smartphones, and expectations of natural interaction have risen dramatically in recent years. Moreover, users have started to project these expectations onto all kinds of interfaces they encounter in daily life. Car manufacturers do not yet fully meet these expectations, since automotive development cycles are still much longer than those of the software industry. The clear trend, however, is that manufacturers add technology to cars to deliver on their vision and promise of a safer drive. Multiple modalities are already available in today's dashboards, including haptic controllers, touch screens, 3D gestures, voice, secondary displays, and gaze. In fact, car manufacturers are aiming for a personal assistant with a deep understanding of the car and the ability to meet driving-related demands and non-driving-related needs. For instance, such an assistant can naturally answer any question about the car and help schedule service when needed. It can find the preferred gas station along the route or, even better, plan a stop and ensure arrival in time for a meeting. It understands that a perfect business meal involves more than finding a sponsored restaurant: it includes unbiased reviews, availability, budget, and trouble-free parking, and it notifies all invitees of the meeting time and location. Moreover, multimodality can be a source for fatigue detection. The main goal of multimodal interaction and driver assistance systems is to ensure that the driver can focus on the primary task of driving safely.
This is why the biggest innovations in today's cars have happened in the way we interact with integrated devices such as the infotainment system. For instance, voice-based interaction has been shown to be less distracting than interaction with a visual-haptic interface, but it is only one piece of how we interact multimodally in today's cars, shifting away from the GUI as the only means of interaction. This shift also demands additional effort to establish a mental model for the user: with a plethora of available modalities requiring multiple mental maps, learnability decreases considerably. Multimodality may also help here to decrease distraction. In this special issue we will present the challenges and opportunities of multimodal interaction to help reduce cognitive load and increase learnability, as well as current research that has the potential to be employed in tomorrow's cars. We especially invite researchers, scientists, and developers to submit contributions that are original and unpublished and have not been submitted to any other journal, magazine, or conference. We expect at least 30% novel content.
We are soliciting original research related to multimodal smart and interactive media technologies in areas including, but not limited to, the following:
* In-vehicle multimodal interaction concepts
* Multimodal Head-Up Displays (HUDs) and Augmented Reality (AR) concepts
* Reducing driver distraction, cognitive load, and demand with multimodal interaction
* (Pro-active) in-car personal assistant systems
* Driver assistance systems
* Information access (search, browsing, etc.) in the car
* Interfaces for navigation
* Text input and output while driving
* Biometrics and physiological sensors as a user-interface component
* Multimodal affective intelligent interfaces
* Multimodal automotive user-interface frameworks and toolkits
* Naturalistic/field studies of multimodal automotive user interfaces
* Multimodal automotive user-interface standards
* Detecting and estimating user intentions employing multiple modalities
Guest Editors
=============
Dirk Schnelle-Walka, Harman International, Connected Car Division, Germany
Phil Cohen, Voicebox, USA
Bastian Pfleging, Ludwig-Maximilians-Universität München, Germany
Submission Instructions
=======================
1-page abstract submission: Feb 5, 2018
Invitation for full submission: March 15, 2018
Full submission: April 28, 2018
Notification of acceptance: June 15, 2018
Final article submission: July 15, 2018
Tentative publication: ~Sept 2018
Companion website: https://sites.google.com/view/multimodalautomotive/
Authors are requested to follow instructions for manuscript submission to the Journal of Multimodal User Interfaces (http://www.springer.com/computer/hci/journal/12193) and to submit manuscripts at the following link: https://easychair.org/conferences/?conf=mmautomotive2018.
7-2 | Special Issue of JSTSP on Far-Field Speech Processing in the Era of Deep Learning: Speech Enhancement, Separation and Recognition
Summary: Acoustic source localization is a well-studied topic in signal processing, but most traditional methods incorporate simplifying assumptions such as a point source, free-field propagation of the sound wave, static acoustic sources, time-invariant sensor constellations, and simple noise fields. However, these assumptions may be seriously violated in a range of emerging applications, such as audio recording with mobile devices (e.g. cell phones, extreme cameras, and robots), video conferencing on the go, and recording for 3D reproduction and virtual reality. In these applications, the environment is extremely challenging, with spatially distributed sources, reverberation, complex noise fields, multiple concurrent speakers, interferences, and time-varying source and sensor positions.
The proposed special issue aims to present recent advances in the development of signal processing methods for localization and tracking of acoustic sources, along with the associated theory and applications. To address the challenges raised by real-life environments, novel methods that use modern array-processing, speech-processing, and data-inference tools become a necessity.
As these challenges involve both audio processing and sensor arrays, this proposal is timely and relevant to researchers from both the acoustic signal processing and array processing domains. The guest editors therefore come from both communities and include current and past chairs of the respective technical committees (Audio and Acoustic Signal Processing – AASP, and Sensor Array and Multichannel – SAM). This special issue follows successful special sessions at major conferences: 'Learning-based Sound Source Localization and Spatial Information Retrieval' (ICASSP 2016), 'Speaker Localization in Dynamic Real-Life Environments' (ICASSP 2017), 'Acoustic Scene Analysis and Signal Enhancement Using Microphone Arrays' (EUSIPCO 2017), and 'Acoustical Signal Processing for Hearables' (EUSIPCO 2017).
Prospective authors should follow the instructions given on the IEEE JSTSP webpage, and submit their manuscript through the web submission system.
Audio signal processing is currently undergoing a paradigm change, where data-driven machine learning is replacing hand-crafted feature design. This has led some to ask whether audio signal processing is still useful in the 'era of machine learning.' There are many challenges, new and old, including the interpretation of learned models in high dimensional spaces, problems associated with data-poor domains, adversarial examples, high computational requirements, and research driven by companies using large in-house datasets that is ultimately not reproducible.
This special issue aims to promote progress, systematization, understanding, and convergence of applying machine learning in the area of audio signal processing. Specifically, we are interested in work that demonstrates novel applications of machine learning techniques in the area of sound and music signal processing, as well as methodological considerations of merging machine learning with audio signal processing. We seek contributions in, but not limited to, the following topics:
This call addresses audio signal processing for speech, acoustic scenes, and music.
Prospective authors should follow the instructions given on the IEEE JSTSP webpages and submit their manuscript with the web submission system.
Topical Issue on Intelligent Methods for Textual Information Retrieval
============================
Call for Papers: www.degruyter.com/page/1831
Submission deadline: 20th February 2019
============================
EDITED BY
Guest Editors:
Adrian-Gabriel Chifu, Aix-Marseille Université, France
Sébastien Fournier, Aix-Marseille Université, France
Advisory Editor:
Patrice Bellot, Aix-Marseille Université / CNRS, France
============================
DESCRIPTION
Machine learning approaches for intelligent text mining and retrieval are actively
studied by researchers in natural language processing, information retrieval and other
related fields. While supervised methods usually attain much better performance than
unsupervised methods, they also require annotated data, which is not always available or
easy to obtain. Hence, we encourage the submission of supervised, unsupervised or hybrid
methods for intelligent text retrieval tasks. Methods studying alternative learning
paradigms, e.g. semi-supervised learning, weakly supervised learning, zero-shot learning,
and transfer learning, are also very welcome.
This thematic special issue covers three research areas: natural language processing,
computational linguistics and information retrieval. The submissions may address, but are
not limited to, the following topics:
- information retrieval
- information extraction
- query processing
- word sense disambiguation/discrimination
- machine learning in NLP
- sentiment analysis and opinion mining
- contradiction and controversy detection
- social media
- summarization
- text mining
- text categorization and clustering
Submitted papers will undergo a peer-review process before they can be accepted.
Notification of acceptance will be communicated as the review process progresses.
============================
HOW TO SUBMIT
Before submission authors should carefully read the Instruction for Authors:
www.degruyter.com/view/supplement/s22991093_Instruction_for_Authors.pdf
Manuscripts should be written in TeX or LaTeX (strongly recommended), using the journal's
LaTeX template. Please note that we do not accept papers in plain TeX format. Text files
may also be submitted as a standard document (.DOC), which is acceptable if submission in
LaTeX is not possible. For an initial submission, the authors are strongly advised to
upload their entire manuscript, including tables and figures, as a single PDF file.
All submissions to the Topical Issue must be made electronically via the online
submission system Editorial Manager:
www.editorialmanager.com/opencs/
All manuscripts will undergo the standard peer-review procedure (single-blind, with at
least two independent reviews). When entering your submission via the online submission
system, please choose the article type 'TI on Information Retrieval'.
The deadline for submission is 20th February 2019, but individual papers will be reviewed
and published online on an ongoing basis.
Contributors to the Topical Issue will benefit from:
- NO submission and publication FEES
- indexing by Clarivate Analytics – Web of Science (ESCI) and SCOPUS
- fair and constructive peer review provided by experts in the field
- no space constraints
- a convenient, web-based paper submission and tracking system – Editorial Manager
- free language assistance for authors from non-English-speaking regions
- fast online publication upon completion of the publishing process
- better visibility due to Open Access
- long-term preservation of the content (articles archived in Portico)
- extensive post-publication promotion for selected papers
In case of any questions, please contact Dr. Justyna Żuk, Managing Editor, at
Justyna.Zuk@degruyteropen.com. We are looking forward to your submission.