|Author Name||SO Mirai (Keio University) / TAKEBAYASHI Yoshitake (Fukushima Medical University) / SEKIZAWA Yoichi (Senior Fellow, RIETI) / SHIMOJI Takaaki (Smart Medical, Inc.)|
|Creation Date / No.||September 2016 / 16-J-054|
|Research Project||Research Project on Mental Health from the Perspective of Human Capital 2|
Background: Technologies that identify human emotions from the voice have recently been developed and commercialized. The present study examines whether this type of technology can be applied to the diagnosis of depression.
Method: Approximately 2,000 participants were asked to record their voices and complete the PHQ-9 at three time points (T1, T2, and T3), each separated by a two-month interval. Seven acoustic parameters, including pitch, gain, and power, were extracted from the available voice data. We estimated the diagnostic accuracy for depression from these voice parameters, defining a participant as suffering from depression when the PHQ-9 score was 10 or above. After combining the data obtained at T1 and T2, we randomly extracted 70% of the combined data and processed it with the synthetic minority over-sampling technique (SMOTE) to balance the classes. We then generated candidate models through ensemble learning, in which three types of models (bagging, random forest, and boosting) competed with each other. The remaining 30% of the combined data and the data obtained at T3 were used for testing. In addition to sensitivity and specificity, the area under the curve (AUC) was used as the primary criterion of accuracy.
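The pipeline described above can be sketched as follows. This is a hypothetical illustration, not the authors' actual code: the toy data, the simplified SMOTE-style oversampling function (`smote_like`), and all model parameters are assumptions for demonstration; a real analysis would use the extracted voice parameters and a full SMOTE implementation such as the one in the imbalanced-learn library.

```python
import numpy as np
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for the combined T1+T2 data: seven voice parameters per
# participant, with depression (PHQ-9 >= 10) as the minority class.
X = rng.normal(size=(500, 7))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=500) > 1.5).astype(int)

# Random 70/30 split of the combined data; 70% is used for training.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.7, stratify=y, random_state=0)

def smote_like(X, y, k=5, seed=0):
    """Simplified SMOTE-style oversampling: synthesize minority-class
    samples by interpolating between a minority point and one of its
    k nearest minority neighbours (a stand-in for the SMOTE algorithm)."""
    rng = np.random.default_rng(seed)
    minority = X[y == 1]
    n_new = int((y == 0).sum() - (y == 1).sum())
    # Pairwise distances within the minority class, then k nearest neighbours.
    d = np.linalg.norm(minority[:, None] - minority[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]
    base = rng.integers(0, len(minority), n_new)
    nbr = nn[base, rng.integers(0, k, n_new)]
    gap = rng.random((n_new, 1))
    synth = minority[base] + gap * (minority[nbr] - minority[base])
    return np.vstack([X, synth]), np.concatenate([y, np.ones(n_new, dtype=int)])

X_bal, y_bal = smote_like(X_tr, y_tr)

# Three candidate ensemble methods compete; the one with the best AUC
# on the held-out 30% is selected.
models = {
    "bagging": BaggingClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
}
aucs = {name: roc_auc_score(y_te, m.fit(X_bal, y_bal).predict_proba(X_te)[:, 1])
        for name, m in models.items()}
best = max(aucs, key=aucs.get)
```

In this sketch the same held-out AUC comparison would also be run on the T3 data to check how well a model trained on T1+T2 generalizes to a later wave.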
Result: The random forest model performed best. While the AUC based on demographic data alone indicated moderate accuracy, the AUCs based on voice data alone and on voice plus demographic data indicated high accuracy. However, the AUC obtained from the analysis of the T3 data did not show sufficient accuracy.
Interpretation: Although voice analysis technology has high potential for the diagnosis of depression, further innovation appears to be required.