ISCA - International Speech Communication Association



ISCApad #229

Monday, July 10, 2017 by Chris Wellekens

7 Journals
7-1CfP Neurocomputing: Special Issue on Machine Learning for Non-Gaussian Data Processing
 
Neurocomputing: Special Issue on Machine Learning for Non-Gaussian Data Processing 
 
With the widespread growth of sensing and computing, an increasing number of industrial applications and an ever-growing amount of academic research generate massive multi-modal data from multiple sources. The Gaussian distribution is ubiquitously used in statistics, signal processing, and pattern recognition. However, not all the data we process are Gaussian distributed. Recent studies have found that explicitly exploiting the non-Gaussian characteristics of data (e.g., data with bounded support, data with semi-bounded support, and data with an L1/L2-norm constraint) can significantly improve the performance of practical systems. Hence, it is of particular importance and interest to study non-Gaussian data thoroughly, together with the corresponding non-Gaussian statistical models (e.g., the beta distribution for bounded support data, the gamma distribution for semi-bounded support data, and the Dirichlet/vMF distributions for data with an L1/L2-norm constraint).

To analyze and understand such non-Gaussian data, the development of related learning theories, statistical models, and efficient algorithms becomes crucial. The scope of this special issue is to provide theoretical foundations and ground-breaking models and algorithms to address this challenge.
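As a minimal illustration of why matching the model to the data's support matters (a sketch using NumPy and SciPy; the synthetic data and parameter values are chosen only for demonstration), a bounded-support sample is fit markedly better by a beta distribution than by a Gaussian:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic bounded-support data on (0, 1), e.g. normalized features
data = rng.beta(2.0, 5.0, size=5000)

# A Gaussian fit ignores the bounded support of the data
mu, sigma = stats.norm.fit(data)
ll_norm = stats.norm.logpdf(data, mu, sigma).sum()

# A beta fit respects the bounded support (location/scale fixed to [0, 1])
a, b, loc, scale = stats.beta.fit(data, floc=0, fscale=1)
ll_beta = stats.beta.logpdf(data, a, b, loc, scale).sum()

print(f"Gaussian log-likelihood: {ll_norm:.1f}")
print(f"Beta log-likelihood:     {ll_beta:.1f}")
```

Under the same caveat, an analogous comparison could be made with a gamma fit (`stats.gamma`) for semi-bounded data.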

We invite authors to submit articles addressing aspects ranging from case studies of particular problems with non-Gaussian distributed data to novel learning theories and approaches, including (but not limited to):
  • Machine Learning for Non-Gaussian Statistical Models
  • Non-Gaussian Pattern Learning and Feature Selection
  • Sparsity-aware Learning for Non-Gaussian Data
  • Visualization of Non-Gaussian Data
  • Dimension Reduction and Feature Selection for Non-Gaussian Data
  • Non-Gaussian Convex Optimization
  • Non-Gaussian Cross Domain Analysis
  • Non-Gaussian Statistical Model for Multimedia Signal Processing
  • Non-Gaussian Statistical Model for Source and/or Channel Coding
  • Non-Gaussian Statistical Model for Biomedical Signal Processing
  • Non-Gaussian Statistical Model for Bioinformatics
  • Non-Gaussian Statistical Model in Social Networks
  • Platforms and Systems for Non-Gaussian Data Processing
Timeline
SUBMISSION DEADLINE: Oct 15, 2016
ACCEPTANCE DEADLINE: June 15, 2017
EXPECTED PUBLICATION DATE: Sep 15, 2017

Guest Editors

Associate Professor
Zhanyu Ma
Beijing University of Posts and Telecommunications (BUPT)

Professor
Jen-Tzung Chien
National Chiao Tung University (NCTU)

Associate Professor
Zheng-Hua Tan
Aalborg University (AAU)

Senior Lecturer
Yi-Zhe Song
Queen Mary University of London (QMUL)

Postdoctoral Researcher
Jalil Taghia
Stanford University

Associate Professor
Ming Xiao
KTH Royal Institute of Technology

7-2CFP: Machine Translation Journal/Special Issue on Spoken Language Translation (updated)
******* CFP: Machine Translation Journal ********

** Special Issue on Spoken Language Translation **

http://www.springer.com/computer/artificial/journal/10590
 

Guest editors:

Alex Waibel (Carnegie Mellon University / Karlsruhe Institute of Technology)

Sebastian Stüker (Karlsruhe Institute of Technology)

Marcello Federico (Fondazione Bruno Kessler)

Satoshi Nakamura (Nara Institute of Science and Technology)

Hermann Ney (RWTH Aachen University)

Dekai Wu (The Hong Kong University of Science and Technology)

 ---------------------------------------------------------------------------

Spoken language translation (SLT) is the science of automatic translation of spoken language.  It may be tempting to view spoken language as nothing more than language (as in text) with an added spoken verbalization preceding it.  Translation of speech could then be achieved by simply applying automatic speech recognition (ASR, or 'speech-to-text') before applying traditional machine translation (MT).

Unfortunately, such an overly simplistic approach does not address the complexities of the problem.  Not only do speech recognition errors compound with errors in machine translation, but spoken language also differs considerably in form, structure and style, rendering the naive combination of two text-based components ineffective.  Moreover, automatic spoken language translation systems serve different practical goals than voice interfaces or text translators, so integrated systems and their interfaces have to be designed carefully and appropriately (mobile, low-latency, audio-visual, online/offline, interactive, etc.) around their intended deployment.

Unlike written texts, human speech is not segmented into sentences, contains no punctuation, is frequently ungrammatical, and contains many disfluencies and sentence fragments.  Conversely, spoken language carries information about the speaker (gender, emotion, emphasis, social form and relationships) and, in the case of dialog, there are discourse structure, turn-taking, and back-channeling across languages to be considered.

SLT systems, therefore, need to consider a host of additional concerns related to integrated recognition and translation performance, use of social form and function, prosody, suitability and (depending on deployment) effectiveness of human interfaces, and task performance under various speed, latency, context and language resource constraints.

Due to continuing improvements in the underlying ASR and MT components as well as in integrated system designs, spoken language systems have become increasingly sophisticated: they can handle increasingly complex sentences, more natural environments, and discourse and conversational styles, leading to a variety of successful practical deployments.

In light of 25 years of successful research and transition into practice, the MT Journal dedicates a special issue to the problem of Spoken Language Translation.  We invite submissions of papers that address issues and problems pertaining to the development, design and deployment of spoken language translation systems.  Papers on component technologies and methodology as well as on system designs and deployments of spoken language systems are equally encouraged.
 
---------------------------------------------------------------------------

Submission guidelines:

- Authors should follow the 'Instructions for Authors' available on the MT Journal website:

http://www.springer.com/computer/artificial/journal/10590

- Submissions must be limited to 25 pages (including references)

- Papers should be submitted online directly on the MT journal's submission website: http://www.editorialmanager.com/coat/default.asp, indicating this special issue under 'article type'
 

Important dates (Modified)

- Paper submission: August 30th 2016.

- Notification to authors: October 15th 2016.

- Camera-ready*: January 15th 2017.

* tentative - depending on the number of review rounds required



 

7-3CfP IEEE JSTSP Special Issue on Spoofing and Countermeasures for Automatic Speaker Verification (extended deadline)


 Call for Papers
IEEE Journal of Selected Topics in Signal Processing
Special Issue on
Spoofing and Countermeasures for Automatic Speaker Verification

Automatic speaker verification (ASV) offers a low-cost and flexible biometric solution to
person authentication. While the reliability of ASV systems is now considered sufficient
to support mass-market adoption, there are concerns that the technology is vulnerable to
spoofing, also referred to as presentation attacks. Replayed, synthesized and converted
speech spoofing attacks can all produce convincing, high-quality speech signals that are
representative of other, specific speakers and thus present a genuine threat to the
reliability of ASV systems.

Recent years have witnessed a movement in the community to develop spoofing
countermeasures, or presentation attack detection (PAD) technology to help protect ASV
systems from fraud. These efforts culminated in the first standard evaluation platform
for the assessment of spoofing and countermeasures of automatic speaker verification,
the Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof),
which was held as a special session at Interspeech 2015.

This special issue is expected to present original papers describing the very latest
developments in spoofing and countermeasures for ASV. The focus of the special issue
includes, but is not limited to the following topics related to spoofing and
countermeasures for ASV:

- vulnerability analysis of previously unconsidered spoofing methods;
- advanced methods for standalone countermeasures;
- advanced methods for joint ASV and countermeasure modelling;
- information theoretic approaches for the assessment of spoofing and countermeasures;
- spoofing and countermeasures in adverse acoustic and channel conditions;
- generalized and speaker-dependent countermeasures;
- speaker obfuscation, impersonation, de-identification, disguise, evasion and adapted
countermeasures;
- analysis and comparison of human performance in the face of spoofing;
- new evaluation protocols, datasets, and performance metrics for the assessment of
spoofing and countermeasures for ASV;
- countermeasure methods using other modalities or multimodality that are applicable to
speaker verification.

Also invited are submissions of exceptional quality with a tutorial or overview nature.
Creative papers outside the areas listed above but related to the overall scope of the
special issue are also welcome. Prospective authors can contact the Guest Editors to
ascertain interest in such topics.

Prospective authors should visit
http://www.signalprocessingsociety.org/publications/periodicals/jstsp/ for submission
information. Manuscripts should be submitted at
http://mc.manuscriptcentral.com/jstsp-ieee and will be peer reviewed according to
standard IEEE processes.

Important Dates:
- Manuscript submission due: August 15, 2016 (extended)
- First review completed: October 15, 2016
- Revised manuscript due: December 1, 2016
- Second review completed: February 1, 2017
- Final manuscript due: March 1, 2017
- Publication date: June, 2017

Guest Editors:
Junichi Yamagishi, National Institute of Informatics, Japan, email: jyamagis@nii.ac.jp
Nicholas Evans, EURECOM, France, email: evans@eurecom.fr
Tomi Kinnunen, University of Eastern Finland, Finland, email: tomi.kinnunen@uef.fi
Phillip L. De Leon, New Mexico State University & VoiceCipher, USA, email:
pdeleon@nmsu.edu
Isabel Trancoso, INESC-ID, Portugal, email: Isabel.Trancoso@inesc-id.pt


7-4Special Issue on Biosignal-based Spoken Communication in the IEEE/ACM Transactions on Audio, Speech, and Language Processing
Call for Papers 
Special Issue on Biosignal-based Spoken Communication 
in the IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP) 

Speech is a complex process emitting a wide range of biosignals, including, but not limited to, acoustics. These biosignals, stemming from the articulators, the articulator muscle activities, the neural pathways, or the brain itself, can be used to circumvent limitations of conventional speech processing in particular, and to gain insights into the process of speech production in general. Research on biosignal-based speech capturing and processing is a wide and very active field at the intersection of various disciplines, ranging from engineering, electronics and machine learning to medicine, neuroscience, physiology, and psychology. Consequently, a variety of methods and approaches are being thoroughly investigated, all aiming towards the common goal of creating biosignal-based speech processing devices and applications for everyday use, as well as for spoken communication research purposes. We aim to bring together studies covering these various modalities, research approaches, and objectives in a special issue of the IEEE/ACM Transactions on Audio, Speech, and Language Processing entitled Biosignal-based Spoken Communication. 

For this purpose we will invite papers describing previously unpublished work in the following broad areas: 
  • Capturing methods for speech-related biosignals: tracing of articulatory activity (e.g. EMA, PMA, ultrasound, video), electrical biosignals (e.g. EMG, EEG, ECG, NIRS), acoustic sensors for capturing whispered / murmured speech (e.g. NAM microphone), etc. 
  • Signal processing for speech-related biosignals: feature extraction, denoising, source separation, etc. 
  • Speech recognition based on biosignals (e.g. silent speech interface, recognition in noisy environment, etc.). 
  • Mapping between speech-related biosignals and speech acoustics (e.g. articulatory-acoustic mapping) 
  • Modeling of speech units: articulatory or phonetic features, visemes, etc. 
  • Multi-modality and information fusion in speech recognition 
  • Challenges of dealing with whispered, mumbled, silently articulated, or inner speech 
  • Neural Representations of speech and language 
  • Novel approaches in physiological studies of speech planning and production 
  • Brain-computer-interface (BCI) for restoring speech communication 
  • User studies in biosignal-based speech processing 
  • End-to-end systems and devices 
  • Applications in rehabilitation and therapy 

Submission Deadline: November 2016 
Notification of Acceptance: January 2017 
Final Manuscript Due: April 2017 
Tentative Publication Date: First half of 2017 

Editors: 
Tanja Schultz (Universität Bremen, Germany) tanja.schultz@uni-bremen.de (Lead Guest Editor) 
Thomas Hueber (CNRS/GIPSA-lab, Grenoble, France) thomas.hueber@gipsa-lab.fr 
Dean J. Krusienski (ASPEN Lab, Old Dominion University) dkrusien@odu.edu 
Jonathan Brumberg (Speech-Language-Hearing Department, University of Kansas) brumberg@ku.edu

7-5CSL special issue 'Recent advances in speaker and language recognition and characterization'

    Computer Speech and Language Special Issue on 
  Recent advances in speaker and language recognition and characterization

                            Call for Papers

The goal of this special issue is to highlight the current state of research 
efforts on speaker and language recognition and characterization. New ideas
about features, models, tasks, datasets and benchmarks are emerging, making this
a particularly exciting time.

In the last decade, speaker recognition (SR) has gained importance in the
field of speech science and technology, with new applications beyond forensics,
such as large-scale filtering of telephone calls, automated access through
voice profiles, speaker indexing and diarization, etc. Current challenges
involve the use of increasingly short signals to perform verification,
the need for algorithms that are robust to all kinds of extrinsic variability,
such as noise and channel conditions, while allowing for a certain amount of
intrinsic variability (due to health issues, stress, etc.), and the development
of countermeasures against spoofing and tampering attacks. On the other hand,
language recognition (LR) has also witnessed a remarkable interest
from the community as an auxiliary technology for speech recognition,
dialogue systems and multimedia search engines, but especially for large-scale
filtering of telephone calls. An active area of research specific to LR is
dialect and accent identification. Other issues that must be dealt with
in LR tasks (such as short signals, channel and environment variability, etc.)
are basically the same as for SR.

The features, modeling approaches and algorithms used in SR and LR
are closely related, though not equally effective, since these two tasks
differ in several ways. In the last couple of years, and after
the success of Deep Learning in image and speech recognition,
the use of Deep Neural Networks both as feature extractors
and classifiers/regressors is opening new exciting research horizons. 

Until recently, speaker and language recognition technologies were mostly
driven by NIST evaluation campaigns: Speaker Recognition Evaluations (SRE)
and Language Recognition Evaluations (LRE), which focused on large-scale
verification of telephone speech. In recent years, other initiatives
(such as the 2008/2010/2012 Albayzin LRE, the 2013 SRE in Mobile Environment,
the RSR2015 database or the 2015 Multi-Genre Broadcast Challenge)
have widened the range of applications and the research focus.
Authors are encouraged to use these benchmarks to test their ideas.

This special issue aims to cover state-of-the-art work; in addition,
to provide readers with background on the topic,
we will invite one survey paper, which will undergo peer review.
Topics of interest include, but are not limited to: 

o Speaker and language recognition, verification, identification
o Speaker and language characterization
o Features for speaker and language recognition
o Speaker and language clustering
o Multispeaker segmentation, detection, and diarization
o Language, dialect, and accent recognition
o Robustness in channels and environment
o System calibration and fusion
o Speaker recognition with speech recognition
o Multimodal speaker recognition
o Speaker recognition in multimedia content
o Machine learning for speaker and language recognition
o Confidence estimation for speaker and language recognition
o Corpora and tools for system development and evaluation
o Low-resource (lightly supervised) speaker and language recognition
o Speaker synthesis and transformation
o Human and human-assisted recognition of speaker and language
o Spoofing and tampering attacks: analysis and countermeasures
o Forensic and investigative speaker recognition
o Systems and applications

Note that all papers will go through the same rigorous review process
as regular papers, with a minimum of two reviewers per paper.
 
Guest Editors

Eduardo Lleida             University of Zaragoza, Spain
Luis J. Rodríguez-Fuentes  University of the Basque Country, Spain

Important dates

Submission deadline:                September 16, 2016
Notifications of final decision:    March 31, 2017
Scheduled publication:              April, 2017



7-6IEEE CIS Newsletter on Cognitive and Developmental Systems
 
Dear colleagues,

I am happy to announce the release of the latest issue of the IEEE CIS Newsletter on Cognitive and Developmental Systems (open access).
This is a biannual newsletter addressing the sciences of developmental and cognitive processes in natural and artificial organisms, from humans to robots, at the crossroads of cognitive science, developmental psychology, machine intelligence and neuroscience. 

It is available at: http://goo.gl/KBA9o6

Featuring dialog:
=== 'Moving Beyond Nature-Nurture: a Problem of Science or Communication?'
== Dialog initiated by John Spencer, Mark Blumberg and David Shenk
with responses from: Bob McMurray, Scott Robinson, Patrick Bateson, Eva Jablonka, Stephen Laurence and Eric Margolis, Bart de Boer, Gert Westermann, Peter Marshall, Vladimir Sloutsky, Dan Dediu, Jedebiah Allen and Mark Bickhard, Rick Dale and Anne Warlaumont and Michael Spivey.
== Topic: In spite of numerous scientific discoveries supporting the view of development as a complex multi-factored process, the discussions of development in several scientific fields and in the general public are still strongly organized around the nature/nurture distinction. Is this because there is not yet sufficient scientific evidence, or is this because the simplicity of the nature/nurture framework is much easier to communicate (or just better communicated by its supporters)? Responses show a very stimulating diversity of opinions, ranging from defending the utility of keeping the nature/nurture framing to arguing that biology has already shown its fundamental weaknesses for several decades.

Call for new dialog:
=== 'What is Computational Reproducibility?'
== Dialog initiated by Olivia Guest and Nicolas Rougier
==  This new dialog initiation explores questions and challenges related to openly sharing computational models, especially when they aim to advance our understanding of natural phenomena in the cognitive, biological or physical sciences: What is computational reproducibility? How should codebases be distributed and included as a central element of mainstream publication venues? How can we ensure computational models are well specified, reusable and understandable? Those of you interested in reacting to this dialog initiation are welcome to submit a response by November 10th, 2016. The length of each response must be between 600 and 800 words including references (contact pierre-yves.oudeyer@inria.fr).

Let me remind you that all issues of the newsletter are open access and available at: http://icdl-epirob.org/cdsnl

I wish you a stimulating reading!

Best regards,

Pierre-Yves Oudeyer,

Editor of the IEEE CIS Newsletter on Cognitive and Developmental Systems
Chair of the IEEE CIS AMD Technical Committee on Cognitive and Developmental Systems
Research director, Inria
Head of Flower project-team
Inria and Ensta ParisTech, France

7-7CfP *MULTIMEDIA TOOLS AND APPLICATIONS* Special Issue on 'Content Based Multimedia Indexing'


                      Call for Papers

           *MULTIMEDIA TOOLS AND APPLICATIONS*

  Special Issue on 'Content Based Multimedia Indexing' **********************************************************

Multimedia indexing systems aim at providing user-friendly, fast and accurate access to
large multimedia repositories. Various tools and techniques from different fields such as
data indexing, machine learning, pattern recognition, and human computer interaction have
contributed to the success of multimedia systems. In spite of significant progress in the
field, content-based multimedia indexing systems still show limits in accuracy,
generality and scalability. The goal of this special issue is to bring forward recent
advancements in content-based multimedia indexing. In addition to multimedia and social
media search and retrieval, we wish to highlight related and equally important issues
that build on content-based indexing, such as multimedia content management, user
interaction and visualization, media analytics, etc. The special issue will also feature
contributions on application domains, e.g., multimedia indexing for health or for
e-learning.

Topics of interest include, but are not limited to, the following:

    Audio, visual and multimedia indexing;
    Multimodal and cross-modal indexing;
    Deep learning for multimedia indexing;
    Visual content extraction;
    Audio (speech, music, etc.) content extraction;
    Identification and tracking of semantic regions and events;
    Social media analysis;
    Metadata generation, coding and transformation;
    Multimedia information retrieval (image, audio, video, text);
    Mobile media retrieval;
    Event-based media processing and retrieval;
    Affective/emotional interaction or interfaces for multimedia retrieval;
    Multimedia data mining and analytics;
    Multimedia recommendation;
    Large scale multimedia database management;
    Summarization, browsing and organization of multimedia content;
    Personalization and content adaptation;
    User interaction and relevance feedback;
    Multimedia interfaces, presentation and visualization tools;
    Evaluation and benchmarking of multimedia retrieval systems;
    Applications of multimedia retrieval, e.g., medicine, lifelogs, satellite imagery,
video surveillance.

Submission guidelines

All papers should be full journal-length versions and follow the guidelines set out
by Multimedia Tools and Applications: http://www.springer.com/journal/11042.

Manuscripts should be submitted online at https://www.editorialmanager.com/mtap/ choosing
'CBMI 2016' as article type. When uploading your paper, please ensure that your
manuscript is marked as being for this special issue.

Information about the manuscript (title, full list of authors, corresponding author's
contact, abstract, and keywords) should also be sent to the corresponding editors (see
information below).

All papers will be peer-reviewed following MTAP's regular paper reviewing
procedures, ensuring the journal's high standards.

Important dates

    *Manuscript Due: now postponed to September 30, 2016*
    First Round Decisions: November 1, 2016
    Revisions Due: December 20, 2016
    Final Round Decisions: Feb. 1, 2017
    Publication: First semester 2017

Guest editors
  Guillaume Gravier, IRISA & Inria Rennes, CNRS, France
  Yiannis Kompatsiaris, Information Tech. Institute, CERTH, Greece


7-8Special issue CSL on Recent advances in speaker and language recognition and characterization

             Computer Speech and Language Special Issue on 
  Recent advances in speaker and language recognition and characterization

                            Call for Papers

            -----------------------------------------------
            SUBMISSION DEADLINE EXTENDED TO OCTOBER 9, 2016
            -----------------------------------------------

 
Guest Editors

Eduardo Lleida             University of Zaragoza, Spain
Luis J. Rodríguez-Fuentes  University of the Basque Country, Spain

Important dates

Submission deadline (EXTENDED!!!):    OCTOBER 9, 2016
Notifications of final decision:      March 31, 2017
Scheduled publication:                April, 2017


7-9Special issue of Advances in Multimedia on EMERGING CHALLENGES AND SOLUTIONS FOR MULTIMEDIA SECURITY

SPECIAL ISSUE -- CALL FOR PAPERS

EMERGING CHALLENGES AND SOLUTIONS FOR MULTIMEDIA SECURITY

Special Issue of Hindawi's Advances in Multimedia, indexed in Web of Science

Today's societies are becoming more and more dependent on open networks such as the Internet, where commercial activities,
business transactions, government services, and entertainment services are realized. This has led to the fast development of new
cyber threats and numerous information security issues that are exploited by cyber criminals. The inability to provide trusted,
secure services in contemporary computer network technologies could have a tremendous socioeconomic impact on global enterprises as
well as on individuals.

In recent years, rapid development in digital technologies has been augmented by progress in the field of multimedia
standards and the mushrooming of multimedia applications and services penetrating and changing the way people interact, communicate,
work, entertain, and relax. Multimedia services are becoming more significant and popular and they enrich humans' everyday life.
Currently, the term multimedia information refers not only to text, image, video, or audio content but also to graphics, flash, web,
3D data, and so forth. Multimedia information may be generated, processed, transmitted, retrieved, consumed, or shared in various
environments. The lowered cost of reproduction, storage, and distribution, however, also invites much motivation for large-scale
commercial infringement.

The above-mentioned issues have generated new challenges related to protection of multimedia services, applications, and digital
content. Providing multimedia security is significantly different from providing typical computer information security, since
multimedia content usually involves large volumes of data and requires interactive operations and real-time responses. Additionally,
ensuring digital multimedia security must also signify safeguarding of the multimedia services. Different services require different
methods for content distribution, payment, interaction, and so forth. Moreover, these services are also expected to be 'smart' in
the environment of converged networks, which means that they must adapt to different network conditions and types as multimedia
information can be utilized in various networked environments, for example, in fixed, wireless, and mobile networks. All of these
make providing security for multimedia even harder to perform.

This special issue intends to bring together a diverse group of international researchers, experts, and practitioners who are currently
working in the area of digital multimedia security. Researchers from both academia and industry are invited to contribute their work
to extend the existing knowledge in the field. The aim of this special issue is to present a collection of high-quality research
papers that will provide a view on the latest research advances, not only on secure multimedia transmission and distribution but also
on multimedia content protection.

Potential topics include, but are not limited to:

- Emerging technologies in digital multimedia security
- Digital watermarking
- Fingerprinting in multimedia signals
- Digital media steganology (steganography and steganalysis)
- Information theoretic analysis of secure multimedia systems
- Security/privacy in multimedia services
- Multimedia and digital media forensics
- Quality of Service (QoS)/Quality of Experience (QoE) and their relationships with security
- Security of voice and face biometry
- Multimedia integrity verification and authentication
- Multimedia systems security
- Digital rights management
- Digital content protection
- Tampering and attacks on original information
- Content identification and secure content delivery
- Piracy detection and tracing
- Copyright protection and surveillance
- Forgery detection
- Secure multimedia networking
- Multimedia network protection, privacy, and security
- Secure multimedia system design, trusted computing, and protocol security

Authors can submit their manuscripts via the Manuscript Tracking System at http://mts.hindawi.com/submit/journals/am/adms/.

Manuscript Due:            Friday, 2 December 2016
First Round of Reviews:        Friday, 24 February 2017
Publication Date:            Friday, 21 April 2017

Lead Guest Editor:

Wojciech Mazurczyk, Warsaw University of Technology, Warsaw, Poland

Guest Editors:

Artur Janicki, Warsaw University of Technology, Warsaw, Poland

Hui Tian, National Huaqiao University, Xiamen, China

Honggang Wang,
University of Massachusetts Dartmouth, Dartmouth, USA

Top

7-10Cfp Speech Communication Virtual Special Issue: Multi-laboratory evaluation of forensic voice comparison systems under conditions reflecting those of a real forensic case (forensic_eval_01)

CALL FOR PAPERS:
 
Speech Communication
Virtual Special Issue
 
Multi-laboratory evaluation of forensic voice comparison systems under conditions reflecting those of a real forensic case (forensic_eval_01)
 
Guest Editors: Geoffrey Stewart Morrison & Ewald Enzinger
 
There is increasing pressure on forensic laboratories to validate the performance of forensic analysis systems before they are used to assess strength of evidence for presentation in court (including pressure from the recently released report by the President’s Council of Advisors on Science and Technology, PCAST). In order to validate a system intended for use in casework, a forensic laboratory needs to evaluate the degree of validity and reliability of the system under forensically realistic conditions.
 
This Speech Communication Virtual Special Issue will consist of papers reporting on the results of testing forensic voice comparison systems under conditions reflecting those of one actual forensic voice comparison case. A set of training and test data representative of the relevant population and reflecting the conditions of this particular case has been released, and operational and research laboratories are invited to use these data to train and test their systems.
 
Details are provided in the introductory paper, which is available on the Virtual Special Issue webpage: http://www.sciencedirect.com/science/journal/01676393/vsi/10KTJHC7HNM
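Validation results in this line of work are commonly summarised with the log-likelihood-ratio cost (Cllr), a metric that penalises misleading likelihood-ratio output. The sketch below is a minimal, self-contained illustration; the LR values are hypothetical and do not come from the evaluation itself:

```python
import math

def cllr(lr_same, lr_diff):
    """Log-likelihood-ratio cost. `lr_same` holds likelihood ratios from
    same-speaker comparisons, `lr_diff` from different-speaker comparisons.
    Lower is better; a system that always outputs LR = 1 scores exactly 1.0."""
    # Penalty for same-speaker trials: large when the LR is misleadingly small.
    pen_same = sum(math.log2(1.0 + 1.0 / lr) for lr in lr_same) / len(lr_same)
    # Penalty for different-speaker trials: large when the LR is misleadingly large.
    pen_diff = sum(math.log2(1.0 + lr) for lr in lr_diff) / len(lr_diff)
    return 0.5 * (pen_same + pen_diff)

# Hypothetical output of a well-calibrated, discriminating system.
print(cllr([100.0, 50.0], [0.01, 0.02]))  # well below 1.0
```

Both calibration and discrimination contribute to the score, which makes Cllr a convenient single number for comparing systems against the uninformative reference value of 1.0.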

Top

7-11Journal of Ambient Intelligence and Smart Environments (JAISE) - Thematic Issue on: Human-centred AmI: Cognitive Approaches, Reasoning and Learning (HCogRL)

===========================
CALL FOR PAPERS
===========================


JAISE-HCogRL Thematic Issue Proposal

 

Journal of Ambient Intelligence and Smart Environments (JAISE) - Thematic Issue on:

Human-centred AmI: Cognitive Approaches, Reasoning and Learning (HCogRL)

 

 

SCOPE 

This thematic issue focuses on the design and development of Ambient Intelligence systems that take cognitive issues into account when modelling users, apply human-understandable representations or reasoning, or learn users' preferences and adapt to them. This call expects contributions from an international audience on recent developments and experiments that:

  • Present innovative and state-of-the-art designs,
  • Describe real-world or virtual/simulated deployments,
  • Share experiences, insights, best-practices and lessons-learned,
  • Report the results of technical and social evaluations,
  • Discuss and highlight the key challenges and future developments within the domain.

In terms of real applications, proactive, adaptive and real-time solutions for the end user are a big challenge, especially when complex activities and multiple users are considered. The quality, cost, efficiency and reliability of the systems are important, but so is their usability, for which taking cognitive issues into account is fundamental.

In this thematic issue we want to bring together human-centred Ambient Intelligence (AmI) systems that use and apply cognitive approaches in their design. We are keen to encourage the submission of papers from a wide variety of backgrounds and perspectives that report designs and developments in both research and industry.

 

The topics of interest include, but are not limited to: 

-       Human-Computer Interaction

-       Emotion detection and/or prediction

-       Human-centred interfaces

-       Handling preferences of the individual users and user groups

-       Logic and Reasoning

-       Automated reasoning: deductive, probabilistic, diagnostic, causal, qualitative reasoning, etc.

-       Automated learning (e.g. Support Vector Machines, Neural Networks)

-       Knowledge Representation and Cognition (e.g. Qualitative representations, Ontologies and representation of common sense, etc.)

-       Computational Linguistics, Natural Language Processing & Understanding

-       Embodied vs. ambient intelligence

-       Context awareness 

-       Mobile and wearable computing

-       Ubiquitous computing

-       Computational Creativity

-       Decision Support Systems

-       Applications in health care: assisted living, fall detection, elderly care, patient monitoring

-       Applications in smart homes: home safety, energy efficiency, entertainment, ambience, multimedia

-       Applications in smart buildings, smart cities, living-labs, smart classrooms, smart cars, in safety, energy efficiency, services

-       Robotics applied to Smart Environments

-       Intuitive user interface design

-       Usability of AmI systems

-       Evaluation of cognitively driven AmI systems

-       Methodological open questions on AmI and Cognition

 

SUBMISSION 

Papers should be 12 pages or longer (papers of up to 30 pages can be accepted, but authors should aim for between 15 and 20 pages) and adhere to the guidelines given on the Web page: http://www.mstracker.com/submit1.php?jc=jaise

 

When submitting, authors should highlight that the submission is for this particular special issue (HCogRL). 

 

 

Tentative DATES

 

- 1 November 2016: submission deadline
- 1 March 2017: notifications to the authors
- 1 June 2017: final camera ready version submission

 

Publication Date September 2017

 

 

EDITORS

 

  • Zoe Falomir (Guest Editor), Bremen Spatial Cognition Centre (BSCC), Universität Bremen, Germany, zfalomir@informatik.uni-bremen.de
  • Juan Antonio Ortega (Guest Editor), Universidad de Sevilla, Spain, jortega@us.es
  • Natividad Martínez (Guest Editor), Reutlingen University, natividad.martinez@reutlingen-university.de
  • Hans Guesgen (Editorial Board), Massey University, H.W.Guesgen@massey.ac.nz

 

We encourage potential authors to contact the editors to notify them of their intention to submit a manuscript for the special issue, or with any questions they might have regarding the scope of the special issue or the editorial schedule.

Top

7-12CfP Special Issue of Hindawi's Advances in Multimedia: EMERGING CHALLENGES AND SOLUTIONS FOR MULTIMEDIA SECURITY

EMERGING CHALLENGES AND SOLUTIONS FOR MULTIMEDIA SECURITY

Special Issue of Hindawi's Advances in Multimedia, indexed in Web of Science

Today's world's societies are becoming more and more dependent on open networks such as
the Internet, where commercial activities,
business transactions, government services, and entertainment services are realized. This
has led to the fast development of new
cyber threats and numerous information security issues which are exploited by cyber
criminals. The inability to provide trusted
secure services in contemporary computer network technologies could have a tremendous
socioeconomic impact on global enterprises as
well as on individuals.

In recent years, rapid development in digital technologies has been augmented by progress
in the field of multimedia standards and by the mushrooming of multimedia applications and
services, which are penetrating and changing the way people interact, communicate,
work, entertain, and relax. Multimedia services are becoming more significant and popular,
and they enrich people's everyday lives. Currently, the term multimedia information refers
not only to text, image, video, or audio content but also to graphics, flash, web,
3D data, and so forth. Multimedia information may be generated, processed, transmitted,
retrieved, consumed, or shared in various environments. The lowered cost of reproduction,
storage, and distribution, however, also creates strong incentives for large-scale
commercial infringement.

The above-mentioned issues have generated new challenges related to the protection of
multimedia services, applications, and digital content. Providing multimedia security is
significantly different from providing typical computer information security, since
multimedia content usually involves large volumes of data and requires interactive
operations and real-time responses. Additionally, ensuring digital multimedia security
must also mean safeguarding the multimedia services themselves. Different services require
different methods for content distribution, payment, interaction, and so forth. Moreover,
these services are also expected to be 'smart' in the environment of converged networks,
which means that they must adapt to different network conditions and types, as multimedia
information can be utilized in various networked environments, for example, in fixed,
wireless, and mobile networks. All of this makes providing security for multimedia even
harder.

This special issue intends to bring together a diverse group of international researchers,
experts, and practitioners who are currently working in the area of digital multimedia
security. Researchers from both academia and industry are invited to contribute their work
to extend the existing knowledge in the field. The aim of this special issue is to present
a collection of high-quality research papers that will provide a view on the latest
research advances, not only on secure multimedia transmission and distribution but also
on multimedia content protection.

Potential topics include, but are not limited to:

- Emerging technologies in digital multimedia security
- Digital watermarking
- Fingerprinting in multimedia signals
- Digital media steganology (steganography and steganalysis)
- Information theoretic analysis of secure multimedia systems
- Security/privacy in multimedia services
- Multimedia and digital media forensics
- Quality of Service (QoS)/Quality of Experience (QoE) and their relationships with
security
- Security of voice and face biometry
- Multimedia integrity verification and authentication
- Multimedia systems security
- Digital rights management
- Digital content protection
- Tampering and attacks on original information
- Content identification and secure content delivery
- Piracy detection and tracing
- Copyright protection and surveillance
- Forgery detection
- Secure multimedia networking
- Multimedia network protection, privacy, and security
- Secure multimedia system design, trusted computing, and protocol security

Authors can submit their manuscripts via the Manuscript Tracking System at
http://mts.hindawi.com/submit/journals/am/adms/.

Manuscript Due: Friday, 2 December 2016
First Round of Reviews: Friday, 24 February 2017
Publication Date: Friday, 21 April 2017

Lead Guest Editor:

Wojciech Mazurczyk, Warsaw University of Technology, Warsaw, Poland

Guest Editors:

Artur Janicki, Warsaw University of Technology, Warsaw, Poland
Hui Tian, National Huaqiao University, Xiamen, China
Honggang Wang, University of Massachusetts Dartmouth, Dartmouth, USA

Top

7-13CfP Special Issue of Speech Communication on *REALISM IN ROBUST SPEECH AND LANGUAGE PROCESSING*
Speech Communication
 
Special Issue on *REALISM IN ROBUST SPEECH AND LANGUAGE  PROCESSING*
 
*Deadline: May 31st, 2017*   (For further information see attached)
 
How can you be sure that your research has actual impact in real-world applications? This is one of the major challenges currently faced in many areas of speech processing, with the migration of laboratory solutions to real-world applications, which is what we address by the term 'Realism'. Real application scenarios involve several acoustic, speaker and language variabilities which challenge the robustness of systems. As early evaluations in practical target scenarios are rarely feasible, many developments are actually based on simulated data, which raises concerns about the viability of these solutions in real-world environments.
 
Information about which conditions are required for a dataset to be realistic, and experimental evidence about which ones are actually important for the evaluation of a certain task, are sparsely found in the literature. Motivated by the growing importance of robustness in commercial speech and language processing applications, this Special Issue aims to provide a venue for research advancements, recommendations for best practices, and tutorial-like papers about realism in robust speech and language processing.
 
Prospective authors are invited to submit original papers in areas related to the problem of realism in robust speech and language processing, including: speech enhancement, automatic speech, speaker and language recognition, language modeling, speech synthesis and perception, affective speech processing, paralinguistics, etc. Contributions may include, but are not limited to:
 
 -   Position papers from researchers or practitioners for best practice recommendations and advice regarding different kinds of real and simulated setups for a given task
 -   Objective experimental characterization of real scenarios in terms of acoustic conditions (reverberation, noise, sensor variability, source/sensor movement, environment change, etc)
 -   Objective experimental characterization of real scenarios in terms of speech characteristics (spontaneous speech, number of speakers, vocal effort, effect of age, non-neutral speech, etc)
 -   Objective experimental characterization of real scenarios in terms of language variability 
 -   Real data collection protocols
 -   Data simulation algorithms
 -   New datasets suitable for research on robust speech processing
 -   Performance comparison on real vs. simulated datasets for a given task and a range of methods
 -   Analysis of advantages vs. weaknesses of simulated and/or real data, and techniques for addressing these weaknesses
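As a toy example of the data simulation topic above, a common recipe is to scale a noise signal so that it sits at a chosen signal-to-noise ratio relative to the clean speech before adding it. A minimal sketch (the sine wave is a stand-in for a real recording; realistic simulations would also convolve the speech with a room impulse response):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # Scale the noise so the speech-to-noise power ratio equals snr_db,
    # then add it to the clean signal (1-D arrays of equal length).
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)  # stand-in for speech
noise = rng.standard_normal(16000)
noisy = mix_at_snr(clean, noise, snr_db=10.0)
```

Whether such additive mixing is "realistic enough" for a given task is exactly the kind of question this Special Issue invites evidence on.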
 
Papers written by practitioners and industry researchers are especially welcome. If there is any doubt about the suitability of your paper for this special issue, please contact us before submission.
 
 
*Submission instructions: *
Manuscript submissions shall be made through EVISE at https://www.evise.com/profile/#/SPECOM/login
Select article type 'SI:Realism Speech Processing'
 
 
*Important dates: *
March 1, 2017: Submission portal open 
May 31, 2017: Paper submission
September 30, 2017: First review
November 30, 2017: Revised submission
April 30, 2018: Completion of revision process
 
 
*Guest Editors: *
Dayana Ribas, CENATAV, Cuba
Emmanuel Vincent, Inria, France
John Hansen, UTDallas, USA
Top

7-14CfP IEEE Journal of Selected Topics in Signal Processing: Special Issue on End-to-End Speech and Language Processing

CALL FOR PAPERS

IEEE Journal of Selected Topics in Signal Processing

Special Issue on End-to-End Speech and Language Processing

End-to-end (E2E) systems have achieved competitive results compared to conventional hybrid Hidden Markov-deep neural network model-based automatic speech recognition (ASR) systems. Such E2E systems are attractive because they do not require initial alignments between input acoustic features and output graphemes or words. Very deep convolutional networks and recurrent neural networks have also been very successful in ASR systems due to their added expressive power and better generalization.

ASR is often not the end goal of real-world speech information processing systems. Instead, an important end goal is information retrieval, in particular keyword search (KWS), which involves retrieving speech documents containing a user-specified query from a large database. Conventional keyword search uses an ASR system as a front-end that converts the speech database into a finite-state transducer (FST) index containing a large number of likely word or sub-word sequences for each speech segment, along with associated confidence scores and time stamps. A user-specified text query is then composed with this FST index to find the putative locations of the keyword along with confidence scores. More recently, inspired by E2E approaches, ASR-free keyword search systems have been proposed with limited success. Machine learning methods have also been very successful in Question-Answering, parsing, language translation, analytics and deriving representations of morphological units, words or sentences.

Challenges such as the Zero Resource Speech Challenge aim to construct systems that learn an end-to-end Spoken Dialog (SD) system, in an unknown language, from scratch, using only information available to a language learning infant (zero linguistic resources). The principal objective of the recently concluded IARPA Babel program was to develop a keyword search system that delivers high accuracy for any new language given very limited transcribed speech, noisy acoustic and channel conditions, and limited system build time of one to four weeks. This special issue will showcase the power of novel machine learning methods not only for ASR, but for keyword search and for the general processing of speech and language.

Topics of interest in the special issue include (but are not limited to):

- Novel end-to-end speech and language processing
- Query-by-example search
- Deep learning based acoustic and word representations
- Question answering systems
- Multilingual dialogue systems
- Multilingual representation learning
- Low and zero resource speech processing
- Deep learning based ASR-free keyword search
- Deep learning based media retrieval
- Kernel methods applied to speech and language processing
- Acoustic unit discovery
- Computational challenges for deep end-to-end systems
- Adaptation strategies for end-to-end systems
- Noise robustness for low resource speech recognition systems
- Spoken language processing: speech-to-speech translation, speech retrieval, extraction, and summarization
- Machine learning methods applied to morphological, syntactic, and pragmatic analysis
- Computational semantics: document analysis, topic segmentation, categorization, and modeling
- Named entity recognition, tagging, chunking, and parsing
- Sentiment analysis, opinion mining, and social media analytics
- Deep learning in human computer interaction

Dates:

- Manuscript submission: April 1, 2017
- First review completed: June 1, 2017
- Revised Manuscript Due: July 15, 2017
- Second Review Completed: August 15, 2017
- Final Manuscript Due: September 15, 2017
- Publication: December 2017

Guest Editors:

- Nancy F. Chen, Institute for Infocomm Research (I2R), A*STAR, Singapore
- Mary Harper, Army Research Laboratory, USA
- Brian Kingsbury, IBM Watson, IBM T.J. Watson Research Center, USA
- Kate Knill, Cambridge University, U.K.
- Bhuvana Ramabhadran, IBM Watson, IBM T.J. Watson Research Center, USA

Top

7-15Travaux Interdisciplinaires sur la Parole et le Langage, TIPA

The TIPA editorial team is pleased to announce the publication of the journal's latest issue on Revues.org:

Travaux Interdisciplinaires sur la Parole et le Langage, TIPA n° 32 I 2016:
Conflit en discours et discours en conflit : approches linguistiques et communicatives
(Conflict in discourse and discourse in conflict: linguistic and communicative approaches)

edited by Tsuyoshi Kida and Laura-Anca Parepa
http://tipa.revues.org

This issue will be complemented by n° 33 I 2017, devoted to the same theme:
Conflit en discours et discours en conflit : approches interdisciplinaires
(Conflict in discourse and discourse in conflict: interdisciplinary approaches)

edited by Laura-Anca Parepa and Tsuyoshi Kida

Top

7-16CfP IEEE Journal of Selected Topics in Signal Processing/ Special Issue on End-to-End Speech and Language Processing

Call for Papers

IEEE Journal of Selected Topics in Signal Processing
Special Issue on End-to-End Speech and Language Processing

End-to-end (E2E) systems have achieved competitive results compared to conventional hybrid Hidden Markov-deep neural network model-based automatic speech recognition (ASR) systems. Such E2E systems are attractive because they do not require initial alignments between input acoustic features and output graphemes or words. Very deep convolutional networks and recurrent neural networks have also been very successful in ASR systems due to their added expressive power and better generalization. ASR is often not the end goal of real-world speech information processing systems. Instead, an important end goal is information retrieval, in particular keyword search (KWS), which involves retrieving speech documents containing a user-specified query from a large database. Conventional keyword search uses an ASR system as a front-end that converts the speech database into a finite-state transducer (FST) index containing a large number of likely word or sub-word sequences for each speech segment, along with associated confidence scores and time stamps. A user-specified text query is then composed with this FST index to find the putative locations of the keyword along with confidence scores. More recently, inspired by E2E approaches, ASR-free keyword search systems have been proposed with limited success. Machine learning methods have also been very successful in Question-Answering, parsing, language translation, analytics and deriving representations of morphological units, words or sentences. Challenges such as the Zero Resource Speech Challenge aim to construct systems that learn an end-to-end Spoken Dialog (SD) system, in an unknown language, from scratch, using only information available to a language learning infant (zero linguistic resources). 
The principal objective of the recently concluded IARPA Babel program was to develop a keyword search system that delivers high accuracy for any new language given very limited transcribed speech, noisy acoustic and channel conditions, and limited system build time of one to four weeks. This special issue will showcase the power of novel machine learning methods not only for ASR, but for keyword search and for the general processing of speech and language.
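The conventional keyword-search pipeline described above can be caricatured with a plain inverted index standing in for the FST index. This is a deliberately simplified sketch, not a real FST implementation, and all identifiers below are invented for illustration:

```python
from collections import defaultdict

# Toy stand-in for the FST index: for each word hypothesis emitted by the
# ASR front-end, store (utterance id, start time, confidence score).
index = defaultdict(list)

def add_hypothesis(word, utt_id, start_time, confidence):
    index[word].append((utt_id, start_time, confidence))

def search(query):
    """Return putative hits for a single-word text query, best score first."""
    return sorted(index.get(query, []), key=lambda hit: -hit[2])

add_hypothesis("keyword", "utt1", 3.2, 0.91)
add_hypothesis("keyword", "utt2", 0.7, 0.55)
add_hypothesis("search", "utt1", 1.0, 0.99)

print(search("keyword"))  # [('utt1', 3.2, 0.91), ('utt2', 0.7, 0.55)]
```

A real system would index full word or sub-word lattices with time spans and compose a query transducer against them; the sketch keeps only the essential word-to-locations-with-confidence mapping.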

Topics of interest in the special issue include (but are not limited to):

  • Novel end-to-end speech and language processing
  • Deep learning based acoustic and word representations
  • Query-by-example search
  • Question answering systems
  • Multilingual dialogue systems
  • Multilingual representation learning
  • Low and zero resource speech processing
  • Deep learning based ASR-free keyword search
  • Deep learning based media retrieval
  • Kernel methods applied to speech and language processing
  • Acoustic unit discovery
  • Computational challenges for deep end-to-end systems
  • Adaptation strategies for end to end systems
  • Noise robustness for low resource speech recognition systems
  • Spoken language processing: speech retrieval, speech to speech translation, extraction, and summarization
  • Machine learning methods applied to morphological, syntactic, and pragmatic analysis
  • Computational semantics: document analysis, topic segmentation, categorization, and modeling
  • Named entity recognition, tagging, chunking, and parsing
  • Sentiment analysis, opinion mining, and social media analytics
  • Deep learning in human computer interaction

Prospective authors should follow the instructions given on the IEEE JSTSP webpages: https://signalprocessingsociety.org/publications-resources/ieee-journal-selected-topics-signal-processing, and submit their manuscript with the web submission system at: https://mc.manuscriptcentral.com/jstsp-ieee.

Important Dates:
- Manuscript submission: April 1, 2017
- First review completed: June 1, 2017
- Revised Manuscript Due: July 15, 2017
- Second Review Completed: August 15, 2017
- Final Manuscript Due: September 15, 2017
- Publication: December 2017

Guest Editors:
- Nancy F. Chen, Institute for Infocomm Research (I2R), A*STAR, Singapore
- Mary Harper, Army Research Laboratory, USA
- Brian Kingsbury, IBM Watson, IBM T.J. Watson Research Center, USA
- Kate Knill, Cambridge University, U.K.
- Bhuvana Ramabhadran, IBM Watson, IBM T.J. Watson Research Center, USA

Top

7-17La langue des signes, c'est comme ça. Revue TIPA

Revue TIPA n°34, 2018
 
Travaux interdisciplinaires sur la parole et le langage

 http://tipa.revues.org/



LA LANGUE DES SIGNES, C'EST COMME ÇA



 

 

Sign language: state of the art, description, formalisation, usages

 

Guest editor
Mélanie Hamm,

Laboratoire Parole et Langage, Aix-Marseille Université


 

'La langue des signes, c'est comme ça' refers to Yves Delaporte's book Les sourds, c'est comme ça (2002), which describes the world of the deaf and French Sign Language with its specificities. One of the particularities of French Sign Language is the specific sign meaning COMME ÇA ('like that')[1], a frequent expression among the deaf that conveys a certain respectful, non-judgmental distance towards what surrounds us. It is with this same outlook, close to simple and precise scientific probity, that we will attempt to approach signed languages.

 

Even though there have been advances in the linguistics of signed languages in general and of French Sign Language in particular, notably since the work of Christian Cuxac (1983), Harlan Lane (1991) and Susan D. Fischer (2008), sign language linguistics remains a little-developed field. Moreover, French Sign Language is an endangered language, threatened with extinction (Moseley, 2010 and Unesco, 2011). But what is this language? How can it be defined? What are its 'mechanisms'? What is its structure? How should it be 'considered', from what angle and with what approaches? This silent language challenges a number of linguistic postulates, such as the universality of the phoneme, and raises many questions to which there are as yet no satisfactory answers. In what ways is it similar to and different from oral languages? Does it belong only to deaf signers? Should it be studied, shared, preserved and documented like any language belonging to the intangible heritage of humanity (Unesco, 2003)? How should it be taught, and with what means? What does history tell us on this subject? What future is there for signed languages? What do those most directly concerned say? A set of open and very contemporary questions.

 

Issue 34 of the journal Travaux Interdisciplinaires sur la Parole et le Langage proposes to take stock of the state of research and of the various works on this very singular language, while avoiding 'confining' it to a single discipline. We are looking for previously unpublished articles on sign languages, and on French Sign Language in particular. They may offer descriptions, formalisations or overviews of the usages of signed languages. Comparative approaches across different sign languages, reflections on variants and variation, sociolinguistic, semantic and structural considerations, and analyses of the etymology of signs may also be the subject of articles. In addition, space will be reserved for possible testimonies from deaf signers.

 

Articles submitted to TIPA are read and evaluated by the journal's review committee. They may be written in French or in English and may include images, photos and videos (see 'consignes aux auteurs' at https://tipa.revues.org/222). A length of 10 to 20 pages is expected for each article, i.e. roughly 35,000 to 80,000 characters or 6,000 to 12,000 words. The recommended average length for each contribution is about 15 pages. Authors are asked to provide an abstract of the article in the language of the article (French or English; 120 to 200 words), a long abstract of about two pages (in the other language: French if the article is in English, and vice versa), and 5 keywords in both languages (French and English). Articles must be in .doc (Word) format and sent to TIPA electronically at the following addresses: tipa@lpl-aix.fr and melanie.hamm@lpl-aix.fr.

                                                                       

 

References

COMPANYS, Monica (2007). Prêt à signer. Guide de conversation en LSF. Angers : Éditions Monica Companys.

CUXAC, Christian (1983). Le langage des sourds. Paris : Payot.

DELAPORTE, Yves (2002). Les sourds, c'est comme ça. Paris : Maison des sciences de l'homme.

FISCHER, Susan D. (2008). Sign Languages East and West. In : Piet Van Sterkenburg, Unity and Diversity of Languages. Philadelphia/Amsterdam : John Benjamins Publishing Company.

LANE, Harlan (1991). Quand l'esprit entend. Histoire des sourds-muets. Translated from the American English by Jacqueline Henry. Paris : Odile Jacob.

MOSELEY, Christopher (2010). Atlas des langues en danger dans le monde. Paris : Unesco.

UNESCO (2011). Nouvelles initiatives de l'UNESCO en matière de diversité linguistique : http://fr.unesco.org/news/nouvelles-initiatives-unesco-matiere-diversite-linguistique.

UNESCO (2003). Convention de 2003 pour la sauvegarde du patrimoine culturel immatériel : http://www.unesco.org/culture/ich/doc/src/18440-FR.pdf.

           

 

Schedule

 

April 2017: call for papers

September 2017: submission of the article (version 1)

October-November 2017: committee feedback: acceptance, requested changes (to version 1), or rejection

End of January 2018: submission of the revised version (version 2)

February 2018: committee feedback (on version 2)

March-June 2018: submission of the final version

May-June 2018: publication

 

Instructions to authors

 

Please send 3 files electronically to tipa@lpl-aix.fr and melanie.hamm@lpl-aix.fr:
- a .doc file containing the title, name(s) and affiliation(s) of the author(s)

- two anonymous files, one in .doc format and the other in .pdf

For further details, authors can follow this link: http://tipa.revues.org/222

 

 

 


[1] See, for example, image 421, page 334, in Companys, 2007, or the photo above.

 

                  


7-18 Speech and Language Processing for Behavioral and Mental Health Research and Applications, Computer Speech and Language (CSL)

                                                                         Call for Papers

                     Special Issue of COMPUTER SPEECH AND LANGUAGE

Speech and Language Processing for Behavioral and Mental Health Research and Applications

The promise of speech and language processing for behavioral and mental health research and clinical applications is profound. Advances in all aspects of speech and language processing, and their integration (ranging from speech activity detection, speaker diarization, and speech recognition to various aspects of spoken language understanding and multimodal paralinguistics), offer novel tools both for scientific discovery and for creating innovative approaches to clinical screening, diagnostics, and intervention support. Given the potential for widespread impact, research sites across all continents are actively engaged in this societally important research area, tackling a rich set of challenges including the inherent multilingual and multicultural underpinnings of behavioral manifestations. The objective of this Special Issue on Speech and Language Processing for Behavioral and Mental Health Applications is to bring together and share these advances in order to shape the future of the field. It will focus on technical issues and applications of speech and language processing for behavioral and mental health. Original, previously unpublished submissions are encouraged within (but not limited to) the following scope:

  • Analysis of mental and behavioral states in spoken and written language 
  • Technological support for ecologically and clinically valid data collection and pre-processing
  • Robust automatic recognition of behavioral attributes and mental states 
  • Cross-cultural, cross-linguistic, cross-domain mathematical approaches and applications 
  • Subjectivity modeling (mental state perception and behavioral annotation) 
  • Multimodal paralinguistics (e.g., voice, face, gesture) 
  • Neural mechanisms, physiological responses, and their interplay with expressed behaviors 
  • Databases and resources to support study of speech and language processing for mental health 
  • Applications: scientific mechanisms, clinical screening, diagnostics, & therapy/treatment support 
  • Example Domains: Autism spectrum disorders, addiction, family and relationship studies, major depressive disorders, suicidality, Alzheimer’s disease

Important Dates

  • Manuscript Due October 31, 2017
  • First Round of Reviews January 15, 2018
  • Second Round of Reviews     April 15, 2018
  • Publication Date June 30, 2018


Guest Editors

  • Chi-Chun Lee, National Tsing Hua University, Taiwan, cclee@ee.nthu.edu.tw
  • Julien Epps, University of New South Wales, Australia, j.epps@unsw.edu.au
  • Daniel Bone, University of Southern California, USA, dbone@usc.edu
  • Ming Li, Sun Yat-sen University, China, liming46@mail.sysu.edu.cn
  • Shrikanth Narayanan, University of Southern California, USA, shri@sipi.usc.edu

 

Submission Procedure

Authors should follow the Elsevier Computer Speech and Language manuscript format described at the journal site https://www.elsevier.com/journals/computer-speech-and-language/0885-2308/guide-for-authors#20000. Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at http://www.evise.com/evise/jrnl/CSL. When submitting their papers, authors must select 'VSI:SLP-Behavior-mHealth' as the article type.

 
