Course:COGS200/2017W1/Group33


E.M.I.I.: Empathetic Machine Intelligence Interface

Abstract

The undeniable link between mental and physical health urges us to consider what can be done within our healthcare system to further improve patients' mental health. As reported by the American Journal of Medical Quality, the existing nursing shortage is expected to keep growing, reaching a shortfall of 260,000 nurses by 2025. This shortage motivates us to consider implementing advanced technology to free up nursing resources. While multiple studies have examined the benefits of human-computer interaction, we provide an in-depth look at how it could be used to augment the hospital environment. We hypothesize that installing empathetic AI units in hospital rooms will increase patients' sense of control, comfort, and general happiness with their stay. This hypothesis will be tested through a three-phase experimental design intended to objectively measure patients' satisfaction with their hospital experience.

Introduction

"Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity" (World Health Organization, 2001). In accordance with the WHO, we believe health is much more complex than simply the lack of physical ailment. There are always things to be done to better maximize the quality of life for humanity. While our healthcare system is the most effective it’s ever been, rapidly evolving technology continues to provide ways in which we can improve upon it. A niche that remains unaddressed for the most part is the emotional need of patients. While this may seem extracurricular, the heavy link between mental and physical health proves otherwise. As stated by the World Health Organization, “there is no health without mental health”. Because of this, investing time and money into mental health is a wise investment for the long run both socially and financially (Mikolajczak, M., Van Bellegem, S. (2017). Those who have experienced extended stays in hospitals know how lonely and draining it can get. Not everyone can have family and friends making frequent visits, and nurses are far too busy to spend sufficient time with patients. Especially for those suffering from chronic illnesses, getting lonely is almost inevitable with the amount of time spent in solidarity. Having something that responds to you and your requests--in any way--can help one feel less alone.

Larson, P. J. (2012) found that cancer patients reported the nurse "being accessible" as among the most important aspects of care. This is an issue easily addressed by AI, whether by notifying a nurse that attention is needed or by bypassing that step and providing direct assistance with its wide range of knowledge and array of functions. In addition, keeping in contact with the outside world is crucial to preventing loneliness. Not all patients have the physical and/or mental capacity to reach out often, and having to wait for a human to assist can be both tedious and demotivating. Humans benefit from consistent feedback and company; staying in solitude for extended periods of time can thus wreak havoc on the human mind and body. Given the strong interaction between physical and mental health, it is illogical for health care not to take both into account when caring for patients. Our paper aims to examine the potential of using an emotionally intelligent AI in hospital rooms to create a healthier, more pleasant hospital experience that improves all aspects of health. Conscious of the fact that human-to-human interaction differs greatly from human-to-computer interaction, we hope to create an emotionally intelligent AI that is sensitive to the fragile mental and physical conditions many hospital patients are in. In doing so, we aim not to replace human interaction, but to augment it with technological support. We define emotional intelligence in this context as our AI responding reasonably and empathetically to patients, taking into account emotionally charged stimuli such as facial movements, voice tone, and other human indicators of emotion. While further research is necessary before claiming that an AI's presence will resolve these issues, we hypothesize that having an emotionally intelligent AI in one's hospital room will improve that person's satisfaction with their hospital stay along with their mental, and thus physical, health.

Our paper draws on multiple disciplines in order to gain a holistic viewpoint on our mission. Although Computer Science and Psychology have the strongest and most prominent impact, we also draw on Linguistics. Computer Science is required for the physical implementation of our project. Psychology is of equal importance for programming our AI to be emotionally responsive; specifically, it tells us how emotions are most commonly conveyed through measures such as facial features and voice tone and volume (also of concern to linguists). Linguistics is crucial both for the machine's language processing and for its ability to respond in the most natural way possible.

Objectives

The objective of this research is to create an emotionally sensitive AI system that can provide assistance to patients in a hospital. The link between mental and physical health urges us to create the best possible environment for patients. We believe that an AI responsive to human emotional cues will increase patients' willingness and desire to interact with the interface. In doing this, we intend to learn how people characterize empathy and how we could implement that quality in our interface. A smaller objective is to collect data on how people feel about communicating with an AI and what would ease this transition; there is no denying that these machines could prove useful in a myriad of environments, making this important information to collect. Another objective of this study is to create an AI that can detect emotion across cultures. While some signals of emotion are universal, others are specific to particular cultures, and we aim for our interface to work with all of these unique facial movements.

Literature Review

Emotional Intelligence

This journal article (Freshwater & Stickley, 2004) discusses the importance of Emotional Intelligence (EI) in nursing: specifically, what it looks like and why it should be given greater weight when selecting nurses. In order for us to successfully mimic an emotionally intelligent nurse, it is imperative to understand what that entails and the common issues nurses experience. This is crucial for understanding the types of EI our assistant should ideally display when we program our emotional AI.

Larson, P. J. (2012) provides critical data on what cancer patients found most important in nursing care. This is excellent input for how to program our AI, since these are patients with a chronic illness who have detailed what can be done to make their experience most pleasant. While a large portion of the data is not yet applicable to our AI, some findings suggest solutions, direct or indirect, for improving healthcare with an AI. Focusing on what patients feel nurses lack is the perfect starting point for creating our AI.

Mental/Physical Health Links

The report "Promoting mental health : concepts, emerging evidence, practice" from the World Health Organization provides a framework for the link between mental and physical health. A primary purpose of the report is to bring to the light the relevance of considering mental health in health care. The report details widely accepted definitions of many important terms for our research, and goes into depth about what they are and how to improve them. This serves as a crucial source for aligning our healthcare thesis with a unified body of knowledge, created by many globally renowned health professionals.

Caring Behaviour

An itemized description of caring behaviours. (Yeakel et al., 2003)

Talseth et al. (1999) made an interesting finding about suicidal psychiatric patients. The study involved interviewing 9 men and 12 women in a psychiatric institution who had thought about taking their own lives at some point. The interviewees were drawn from several wards to form a random sample. Tape-recorded interviews were carried out around two themes, comforting and lack of comforting, with relevant sub-themes including "having time for the patient", "listening to the patient without prejudice", and "accepting the patient's feelings". It was found that patients felt considerably better and less afraid when nurses were around them and talked to them. It was acknowledged that nurses usually have limited time to interact with every single patient. The patients reported that when they were left in isolation, they felt hopeless and had more thoughts about taking their own lives. Having a nurse give them hope by talking and listening to them contributed greatly to making the patients feel valued and wanted.

A similar study was carried out by Yeakel et al. (2003), who investigated nurse caring behaviours and patient satisfaction. An intervention was carried out in which nurses were trained in several aspects such as "provision of a formal education session", "staff identification of goals", and "incorporation of goals into performance management". A total of 477 patients were admitted to the hospital before and after the intervention, and the objective was to measure patient satisfaction in both periods. Six satisfaction items were used (see the accompanying diagram) with a scoring system on a scale of 1 to 6. The results showed that patients admitted post-intervention found the nurses to be more caring than patients admitted pre-intervention. Patient satisfaction was hence higher for nurses who showed caring behaviours, including allowing the patient to talk about their feelings and disease.

This paper from Bickmore, Pfeifer, and Jack (2009) explores the use of an animated, empathetic nurse interface with patients with poor health literacy. Its results support our hypothesis: most patients preferred receiving their discharge information from the agent rather than from their doctor or nurse. While this study only involved those with inadequate health literacy, we believe the results would replicate across a wider range of patients. This general nurse interface is not far off from what we would like to do with our project, and hearing of a success story in this area will help us convince investors and get our AI functioning faster.

Designing Systems to Simulate Empathy

A possible framework exists in the form of Caffe for deep learning neural networks (Jia et al., 2014). Jia et al. (2014) mention that the Caffe framework was designed with image recognition in mind, but has further applicability to speech processing and neuroscience. If this is the case, Caffe can streamline the programming process in multiple languages (C++, Python, MATLAB) by giving the neural network a common backbone to work from and add parts to.
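As a rough sketch of what using Caffe as a common backbone might look like, the snippet below loads a hypothetical pretrained emotion-recognition network through Caffe's Python bindings and runs a single forward pass. The file names and blob names are placeholders and depend on the particular model definition; they are not part of Caffe itself.

```python
# Minimal sketch of using Caffe's Python bindings (pycaffe) to load a
# pretrained network and run a forward pass. File and blob names below
# are hypothetical placeholders that depend on the model definition.
import numpy as np
import caffe

caffe.set_mode_cpu()  # or caffe.set_mode_gpu() where hardware allows

net = caffe.Net('face_emotion_deploy.prototxt',    # hypothetical architecture file
                'face_emotion_weights.caffemodel', # hypothetical trained weights
                caffe.TEST)

# Feed one preprocessed image (batch x channels x height x width) into the
# input blob and read class probabilities from the output blob.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for a real frame
net.blobs['data'].reshape(*image.shape)
net.blobs['data'].data[...] = image
output = net.forward()
print(output['prob'].argmax())  # index of the most probable emotion class
```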

Attempts have already been made to explore the possibility of artificial intelligences recognizing emotion multi-modally (Lim & Okuno, 2014). Lim and Okuno (2014), in splitting their representation into voice input, gestures, and vocal “musicality”, demonstrate that the consolidation of sensory data provides a better image of the intended emotion to communicate. This is a consideration to be made for constructing any empathetic being.

Cheng et al. (2016) from Google pilot the notion of "wide and deep learning", implemented on top of TensorFlow. Wide learning is used to "exploit the correlation available in historical data", which works much like memorizing rules with direct responses (Cheng et al., 2016). Deep learning is then used for generalizing from data, building a representation from past learning to recognize diverse inputs and handle them sensitively (Cheng et al., 2016). Cheng et al. (2016) assert that this approach is applicable to recommendation systems, which they tested in the Google Play Store. The implication is that if this optimizes how people receive information they search for, it might also optimize delivering appropriate responses to all kinds of user queries.

Vu, Adel, and Schütze (2016) examine the idea of combining convolutional and recurrent neural networks for relation classification. The basis of this is that a recurrent network can feed its output back into itself over time to track patterns across a sequence, while a convolutional network can detect the same patterns anywhere in the input; the authors find that combining the two, using a simple voting scheme as an example, improves classification. Careful consideration must be given to how this will allow other systems to classify their inputs across space and time.
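To make the combination concrete, the sketch below merges a convolutional branch and a recurrent branch over the same embedded input in Keras. This is a simplified fusion rather than the exact voting scheme of Vu et al. (2016), and all vocabulary, sequence, and layer sizes are illustrative assumptions.

```python
# Sketch of combining convolutional and recurrent layers on one input,
# in the spirit of Vu et al. (2016); hyperparameters are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len, n_classes = 10_000, 60, 5  # assumed sizes

tokens = tf.keras.Input(shape=(seq_len,), dtype='int32')
embedded = layers.Embedding(vocab_size, 128)(tokens)

# Convolutional branch: detects local n-gram patterns across the sequence.
conv = layers.Conv1D(64, kernel_size=3, activation='relu')(embedded)
conv = layers.GlobalMaxPooling1D()(conv)

# Recurrent branch: summarizes the sequence while tracking long-range order.
recurrent = layers.Bidirectional(layers.LSTM(64))(embedded)

# Simplified combination: concatenate both views and classify.
merged = layers.concatenate([conv, recurrent])
outputs = layers.Dense(n_classes, activation='softmax')(merged)

model = tf.keras.Model(tokens, outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()
```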

Methods

Procedure

Our approach to this problem is to create an AI with emotional capabilities in order to enhance the experience of patients in hospitals. Before we explain how we are going to do this, we need to clarify what we mean by emotional capabilities: we base our understanding on the concepts of Emotional Intelligence and empathy. In psychology, Emotional Intelligence is described as a person's ability to understand their own feelings, to listen to others and feel with them, and to express their emotions in a productive manner. Goleman, who introduced the concept in 1995, argues that in the context of effective communication a high Emotional Intelligence Quotient (EQ) is more powerful than a high Intelligence Quotient (IQ). According to him, the concept involves five elements: self-awareness, empathy, handling relationships, managing feelings, and motivation.

At this point it is critical to note that this concept is far too broad and complex to integrate completely into our AI system, because it offers a holistic account of human intelligence; at the same time, understanding it is fundamental to our approach. That is why our research focuses on the factor of empathy. Empathy is the capacity to share and understand another's state of mind or emotion, or the ability to "put oneself into another's shoes". It is a powerful communication skill that is often misunderstood, and an important element of communicating with humans. Empathy can also be seen as having an active understanding of the emotions attached to the words used by the patient (Ioannidou & Konstantikaki, 2008).

To implement this in the AI system we want to create, the AI has to be responsive to emotional cues and tones from the patient in a way that makes the system appear empathetic to the patient. Implementing actual empathy in an AI system would require a completely different level of engineering and exceeds the scope of our research.

In this section we describe how we address the problem and which methods we use. Our research is set up in three stages. In Stage 1 we gather data through self-reported surveys of hospital patients to better understand what shapes empathy and which factors are critical for creating it in this context. Stage 2 focuses on the design and development of the AI system based on the insights of the first stage. Finally, in Stage 3, the system will be tested in an experimental design in a real-life hospital environment, where patients are randomly assigned to a human-only treatment (nurses alone) or a mixed treatment (nurses plus the AI system). This three-stage process will be repeated, feeding recovery outcomes and user feedback back into the design, until we have a functioning, emotionally intelligent AI.

Stage 1

In order to bring emotional intelligence into an AI, we need to look closely at the factors that are critical for communication between humans and the technology. It is therefore key to identify how people currently feel about human-machine interaction in the context of a hospital environment, what people pay attention to when they communicate with an AI, and what common issues arise in patient-machine interaction. For that purpose we aim to survey 200 patients in a hospital to gather data and learn more about the target group and the environment. A further purpose is to gain insight into patients' general view of empathy, which factors are important to it, and which emotions arise in the patients' situation.

The questionnaire contains different clusters of questions asking about (1) the patient's current situation and how they feel about it, (2) how pleased they are with the care in the hospital and in which moments they need help, (3) where and how they would use a virtual assistant in the form of an emotionally intelligent AI and how they would feel about that, (4) how they would interact with such a system and what the critical factors for the human-machine interaction are, and (5) how they view empathy and what they consider empathetic. To make the questionnaire as effective as possible, we set up an initial phase to create valuable questions and run a pretest in order to properly shape the questionnaire.

For the design of the AI system it is essential that it can reliably detect and process information. That is why we base our method on physical parameters, the so-called human-based feature extraction method. After defining the target group, patients in a hospital, a pilot survey asks for more detailed information. After analyzing the data, we then classify the different emotion spaces people are in.
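As an illustration of the kind of physical parameters such human-based feature extraction might start from, the sketch below computes two crude acoustic features: short-term energy (a proxy for volume) and zero-crossing rate (a rough proxy for voicing and pitch). It uses plain NumPy, and the frame sizes and synthetic signal are assumptions for demonstration only.

```python
# Rough sketch of simple "human-based" acoustic features: RMS energy
# (loudness proxy) and zero-crossing rate (voicing/pitch proxy), per frame.
import numpy as np

def frame_features(signal: np.ndarray, frame_len: int = 1024, hop: int = 512):
    """Return an array of (rms_energy, zero_crossing_rate) per frame of a mono signal."""
    features = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))                  # loudness proxy
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)  # fraction of sign changes
        features.append((rms, zcr))
    return np.array(features)

# Example with a synthetic 1-second, 16 kHz tone standing in for a recording.
t = np.linspace(0, 1, 16_000, endpoint=False)
synthetic = 0.3 * np.sin(2 * np.pi * 220 * t)
print(frame_features(synthetic).mean(axis=0))  # average (energy, ZCR)
```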

For the analysis of the questionnaire we rely on the help of a research assistant. With this method we hope to gain valuable insights and helpful data for the effective creation of an emotionally intelligent AI system.

Stage 2

The primary concern of Stage 2 is the construction of our empathetic intelligence with careful consideration of the results from Stage 1. To proceed in building a model that seems to “understand” human emotional states and responds appropriately, we will construct a neural network.

The representation of the patient's current emotional state will involve consolidating various sensory data with previously learned representations. We will require the network to consider a large number of inputs, chief among these being patient facial expressions, linguistic data (e.g. tone, volume, meaning), and health data (monitored by existing hospital equipment and healthcare records in the system). Additional sensors will be considered given the results of Stage 1's initial insights and the capabilities of the hospital, considering both how humans best represent others' mental states and what additional insights a machine can take into account. Integrating all of these "senses" will help build a more comprehensive computer model of human emotion, according to Lim and Okuno (2014). To illustrate an extreme example, we can picture the difference between a patient undergoing a heart attack and a patient in hysterics over a joke. Both might exhibit a dramatic change in facial expression and exclaim "I'm dying!". We want to avoid the false positive of calling for help when none is needed, and especially the false negative of not alerting staff when it is absolutely necessary. A whole picture of the patient's well-being is required to understand how best to respond. Considering the leap in heart rate together with the correlation between tone of voice and facial expression is key to identifying which situation should raise more concern; we avoid treating these inputs discretely so as to build a holistic representation of the patient's well-being. The design of the software should follow in a way that best represents this.
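A minimal sketch of such a fused representation is given below, assuming three pre-extracted input vectors (a facial embedding, speech features, and vital signs) and four illustrative patient states; the dimensions and state labels are placeholders rather than design decisions.

```python
# Illustrative sketch of fusing several "senses" into one representation,
# following the multi-modal idea in Lim & Okuno (2014). Sizes are assumed.
import tensorflow as tf
from tensorflow.keras import layers

face = tf.keras.Input(shape=(128,), name='face_embedding')    # from a vision model
speech = tf.keras.Input(shape=(64,), name='speech_features')  # tone/volume features
vitals = tf.keras.Input(shape=(8,), name='vital_signs')       # heart rate, SpO2, etc.

# Each modality gets its own small encoder before fusion.
f = layers.Dense(32, activation='relu')(face)
s = layers.Dense(32, activation='relu')(speech)
v = layers.Dense(16, activation='relu')(vitals)

# Fusion layer: the holistic representation described above.
fused = layers.concatenate([f, s, v])
hidden = layers.Dense(64, activation='relu')(fused)

# Hypothetical output states, e.g. calm / distressed / amused / medical emergency.
state = layers.Dense(4, activation='softmax', name='patient_state')(hidden)

model = tf.keras.Model([face, speech, vitals], state)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()
```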

A visual representation of simple wide, wide and deep, and deep learning networks (Cheng et al., 2016).

The neural network, in software terms, will consist of many blocks of machine learning networks. The same machine learning methods are used in recommender systems (systems that predict a user's rating of a product) and are piloted in smart assistants not unlike Apple's Siri or Amazon's Alexa. Google's answer to these methods is TensorFlow, an open-source framework that supports both "wide" and "deep" learning. Both use layers of artificial neurons (descendants of the McCulloch-Pitts model), connected in different ways for different purposes. "Wide" learning works something like memorized rules as humans use them, whereas "deep" learning is more like building a representation to generalize from (Cheng et al., 2016). If a patient's vitals are critical, for example, there is no need to make a careful consideration of other data: they are simply at risk and require staff attention. A conversation, on the other hand, might require careful consideration of tone, previous topics of conversation, and other, more subtle, measures of emotional state.
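The sketch below shows one way the wide and deep paths could be combined in Keras, assuming a hand-crafted "rule" feature vector for the wide part and a richer context vector for the deep part. The feature names, sizes, and the single "alert" output are illustrative assumptions, not the final design.

```python
# Minimal wide-and-deep sketch in the spirit of Cheng et al. (2016):
# a linear ("wide") path for memorized rules such as critical-vitals
# triggers, and a "deep" path that generalizes over subtler features.
import tensorflow as tf
from tensorflow.keras import layers

wide_in = tf.keras.Input(shape=(20,), name='rule_features')      # e.g. vitals thresholds crossed
deep_in = tf.keras.Input(shape=(100,), name='context_features')  # e.g. tone, topic embeddings

# Wide part: a single linear layer, akin to memorized feature->response rules.
wide = layers.Dense(1)(wide_in)

# Deep part: stacked nonlinear layers that build a generalizing representation.
deep = layers.Dense(64, activation='relu')(deep_in)
deep = layers.Dense(32, activation='relu')(deep)
deep = layers.Dense(1)(deep)

# Joint prediction, e.g. the probability that staff should be alerted.
alert = layers.Activation('sigmoid', name='alert_probability')(layers.add([wide, deep]))

model = tf.keras.Model([wide_in, deep_in], alert)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()
```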

The network will be supra-modal, as it will combine many sensory measures into one "super sense". To even speak of measuring facial responses and the like, we must first consider what certain emotions look like on people's faces and in their language use, and how these might differ across health conditions. Thankfully, this too can be drawn from previous research: Landowska, Brodny, and Wrobel (2017), for example, assert that the placement of sensors affects how accurately data are processed. This, along with Jia et al. (2014)'s article on the deep learning framework Caffe for use in computer vision and language processing, will be taken into account when constructing the software and setting up the hardware. Many computer vision models, natural language processing models, and other models combining sensory data with computer data already exist and are already trained. The question is then one of obtaining them, which will involve downloading open-source software or licensing it otherwise. From there, these work as building blocks in the network; they adjust each other's respective weights and contribute to the holistic picture we require.

The output of this system will be connected to various hospital systems, such as the lights and a speaker. This is yet another natural continuation of the technology used in smart assistants on phones or in smart homes. A software engineer will work with a hardware engineer to make sure all of these systems are integrated with each other.
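As a toy illustration of that output routing, the snippet below maps a classified patient state to hypothetical room-level actions. The state names, actions, and interfaces are placeholders, since the real integration would go through the hospital's own device systems.

```python
# Toy sketch of routing the network's classified state to room systems
# (lights, speaker, staff alerts). State names and actions are hypothetical.
RESPONSES = {
    'calm': {'lights': 'unchanged', 'speaker': None, 'notify_staff': False},
    'lonely': {'lights': 'warm', 'speaker': 'offer_conversation', 'notify_staff': False},
    'distressed': {'lights': 'soft', 'speaker': 'reassure', 'notify_staff': True},
    'medical_emergency': {'lights': 'bright', 'speaker': 'announce_help', 'notify_staff': True},
}

def respond(predicted_state: str) -> dict:
    """Look up the room-level response for a classified patient state."""
    return RESPONSES.get(predicted_state, RESPONSES['calm'])

print(respond('distressed'))
```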

Stage 3

This stage has the purpose of testing our AI system in the real-life environment of a hospital under two different settings: the human-only treatment, in which only nurses treat the patients, versus the mixed treatment, in which nurses treat patients with the help of the AI system. Fifteen patients will be tested in each setting. The experimental design targets long-term patients who stay in hospital for at least four weeks; this time frame is needed to observe the long-term effects of interacting with the system. Within this time frame we initiate different phases to test particular parts of the system, and the AI system is tested in different settings and compared to the human-only treatment.
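For the quantitative comparison between the two arms, one plausible analysis is an independent-samples test on satisfaction scores, sketched below with simulated placeholder data on a 1-6 scale (in the spirit of Yeakel et al., 2003). The choice of test is an assumption and would depend on the distribution of the actual data.

```python
# Hedged sketch of comparing Stage 3 satisfaction scores between the
# human-only and mixed treatment arms (15 patients each). The scores
# below are simulated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
human_only = rng.integers(1, 7, size=15)  # simulated nurse-only satisfaction scores (1-6)
mixed = rng.integers(1, 7, size=15)       # simulated nurse + AI satisfaction scores (1-6)

t_stat, p_value = stats.ttest_ind(mixed, human_only)
print(f"mean (mixed) = {mixed.mean():.2f}, mean (human-only) = {human_only.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```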

Besides analyzing the data gathered by the system itself through its sensors and algorithms, we conduct focused one-on-one interviews with the patients in order to gather qualitative data. This makes it possible to learn about particular functions and issues that came up during the experiment. With the insights from this stage we hope to obtain valuable information with which to improve the AI system. By repeating the three-stage process under different conditions, we hope to continuously improve the functions and the emotional intelligence of the AI.

Equipment and Budget

The proposed cost breakdown for the implementation of the research will be as follows:

Item | Cost per unit ($) | No. of units | Total ($)
Equipment
Sensors* | 700 | 15 | 10,500
Smart Speaker | 100 | 15 | 1,500
Survey Questionnaires | - | - | 50
Human Resources
Research Assistant | 15/hr | 6 months (part-time) | 4,500
Hardware Engineer | 20/hr | 0.5 months | 800
Software Development
Software Developers (x2) | 22/hr | 3 months (full-time) | 9,000
Development Machines | 800 | 2 | 1,600
Grand Total | | | 27,950

*Multiple sensors will be needed but are grouped here as a single line item

The Research Assistant's role will be to collect and analyse the survey data, summarize the focused interviews, and support other research-related activities.

The Hardware Engineer will be responsible for assembling and setting up the equipment, from the smart speaker setup to installing the sensors in the rooms.

The Software Developers will code the software for the AI system and test it once it is set up.


Tasks Timeline

The Gantt chart provided shows the proposed timeline for the activities to be carried out for the research.

A timeline for work on EMII.

Discussion

We set up our research in three stages in order to (1) research the topic and gather data, (2) design the AI system based on current research and the data from Stage 1, and (3) test the system we designed.

In Stage 1 we rely on a survey because it is an effective and reasonable way of probing what emotions patients feel in this environment and which emotions matter for the caring context. Even though we focus on the aspect of empathy within the concept of emotional intelligence, empathy itself is still very broad. That is why we narrow it down to the specific use case in the hospital, which surveys can probe very effectively.

In Stage 2, the choice of machine learning was a simple one. There is an abundance of research to draw on and many similar projects to draw parallels to. Smartphones, smart homes, and recommendation systems already use the principles of machine learning to enhance the user experience, so a neural network seems the most feasible way to simulate empathy. Because empathetic responses require a holistic view of the situation, it is only natural that we use Lim and Okuno (2014)'s idea of multi-modal representation; the integration of multiple "senses" follows from that. From there, the system is either manually improved or improves itself through patient responses.

In Stage 3 it is key to test the system in the real-life environment of the hospital with patients, to see how it functions there and what influence it has. For evaluating the study, focused one-on-one interviews are the best way to gather qualitative data, letting us identify which functions can be improved and which issues arose in the human-computer interaction.

Expected Results

We expect that patients will be more satisfied with the mixed treatment than with the nurse-only treatment. The gap between hospital staff duties and patients' social and assistance needs is large, and a hospital only has so many resources. If the room itself becomes an intelligent resource, this improves healthcare outcomes through the channel of mental health. The project is marketable and adaptable, so over continued iterations of our stages we hope to see improving patient satisfaction and possibly even adoption by hospitals as the system improves.

Limitations, Risks and Mitigating Measures

Power Requirement

The AI system is meant to be connected to a kind of smart speaker, but will need input from several sensors, and all of these data must be integrated to produce a response. Using this kind of technology in every single room of a hospital can consume a lot of energy and computational power. Hospitals already have to deal with large energy consumption in more critical areas such as MRI machines and surgeries, so an additional, substantial demand for energy and data storage could be quite problematic for them. A possible mitigating strategy is to use cloud computing: storing and processing the data in the cloud would reduce the memory and processing load on the hospital's own servers, slightly reducing the on-site energy consumption of the system.

Empathy of AI System

AI systems are perceived to be unable to empathize. The problem is often framed as follows: how can an AI system empathize if it cannot itself be ill and feel the associated pain? Humans can draw on simulation theory, having felt illness themselves, which enables them to express empathy; AI systems cannot. However, this research relies on expressing empathy rather than genuinely experiencing it. In a study involving suicidal psychiatric patients, Talseth et al. (1999) found that the patients were sensitive to nurse behaviours such as "listening without prejudice" and "communicating hope to patients". It can be argued that the nurses may not have fully believed everything they said; the perception of caring was enough to make the patients feel better.

Privacy

Having an AI system that captures patients' data introduces privacy issues. The most important question is whether patients will feel safe knowing that their personal information is being handled by an artificial system, and for how long this information will be held. These are common concerns with the use of such technology. A possible mitigating strategy is to ask for the patient's consent before the AI system is used. Furthermore, if the system is used, the data will be retained for a period of 5 years in case the patient returns to the hospital; after that point, the patient's information will be archived and then deleted.

Constant Monitoring

Patients in hospitals are already constantly monitored with regard to their health, and this AI system will also record their voices and facial expressions. This can be a concern, since patients might not want their movements and words to be recorded; it would create an environment with little privacy. A possible mitigating strategy is to ask for the patient's consent before using the system in their room. A more interesting option, though, is to have the system engage in a way that builds trust with the patient.

Equipment Requirements

The full set of equipment required for the system to function is quite expensive. Each room will need to be equipped with audiovisual sensors, body temperature sensors on the beds, and equipment to integrate existing hospital devices, such as oximeters, into the system. In addition, an interface will be needed to integrate the various inputs and to produce output. A smart speaker can be used for this, but providing every room with one becomes costly.

Semantic Limits

Another concern is where the limits of the AI system lie. Its integration is already a big change in hospital care, and the question arises of whether such systems might one day be designed to do more than just assist the patient, for example regulating or administering medication, or performing emergency injections when a patient is in crisis. To mitigate these risks, the technology will be implemented and tested slowly, and new capabilities will only be added once the technology has become safer and more controllable.

Integration of the AI system in hospital rooms

A major risk lies in integrating the AI system into care. If the system contains bugs, patients can easily become irritated, which would worsen their health instead of improving it; the system is meant to make their stay more comfortable and enjoyable, and errors in its responses could even have serious consequences if not dealt with. To mitigate this risk, the AI system will be integrated seamlessly: it can be rolled out in small incremental phases, with testing at each phase, ensuring that errors and bugs are caught before they escalate into more serious effects.

Conclusion

Going through all of the steps necessary to write this research proposal required us to learn about and incorporate numerous subjects, ranging from optimal characteristics in nurses to neural networks. In doing so, we learned how large the field of empathetic AI is and how much is needed to build it properly. We also gained knowledge of many previous attempts at and studies of similar systems, which acted as guiding hands for our proposal. While there is still a vast amount of information to acquire, our group moved from simply thinking this would be a cool project idea to having a legitimate knowledge base in this field. After formally drafting a research design, we remain confident that implementing this system in hospitals will have a positive effect on patients. It is a large and ambitious task requiring consideration of many moving parts, but the scholarly puzzle pieces for the project exist and the technological requirements are feasible. This, in combination with the research linking mental and physical health, shows how the project could ultimately cut healthcare costs by helping patients get healthy as quickly as possible.

Bibliography

Bickmore, T. W., Pfeifer, L. M., & Jack, B. W. (2009). Taking the time to care. Proceedings of the 27th international conference on Human factors in computing systems - CHI 09. doi:10.1145/1518701.1518891

Cheng, H., Ispir, M., Anil, R., Haque, Z., Hong, L., Jain, V., . . . Chai, W. (2016). Wide & Deep Learning for Recommender Systems. Proceedings of the 1st Workshop on Deep Learning for Recommender Systems - DLRS 2016. doi:10.1145/2988450.2988454

Freshwater, D., & Stickley, T. (2004). The heart of the art: emotional intelligence in nurse education. Nursing Inquiry, 11(2), 91-98. doi:10.1111/j.1440-1800.2004.00198.x

Goleman, D. (1995). Emotional Intelligence. New York City, NY: Bantam Books.

Huang, C. L., Hsiao, S., Hwu, H., & Howng, S. (2012). The Chinese Facial Emotion Recognition Database (CFERD): A computer-generated 3-D paradigm to measure the recognition of facial emotional expressions at different intensities. Psychiatry Research, 200(2-3), 928-932. doi:10.1016/j.psychres.2012.03.038

Hyun, K. H., Kim, E. H., & Kwak, Y. K. (n.d.). Improvement of emotion recognition by bayesian classifier using non-zero-pitch concept. ROMAN 2005. IEEE International Workshop on Robot and Human Interactive Communication, 2005. doi:10.1109/roman.2005.1513797

Hyun, K. H., Kim, E. H., & Kwak, Y. K. (2007). Emotional Feature Extraction Based On Phoneme Information for Speech Emotion Recognition. RO-MAN 2007 - The 16th IEEE International Symposium on Robot and Human Interactive Communication. doi:10.1109/roman.2007.4415195

Ioannidou, F., & Konstantikaki, V. (2008). Empathy and emotional intelligence: What is it really about? International Journal of Caring Sciences, 1(3), 118-123.

Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., . . . Darrell, T. (2014). Caffe. Proceedings of the ACM International Conference on Multimedia - MM 14. doi:10.1145/2647868.2654889

Kostov, V., & Fukuda, S. (2003). Computer Mediated Emotional Intelligence. Journal of Japan Society for Fuzzy Theory and Intelligent Informatics, 15(4), 391-400. doi:10.3156/jsoft.15.391

Lim, A., & Okuno, H. G. (2014). The MEI Robot: Towards Using Motherese to Develop Multimodal Emotional Intelligence. IEEE Transactions on Autonomous Mental Development, 6(2), 126-138. doi:10.1109/tamd.2014.2317513

Matthews, G., Zeidner, M., & Roberts, R. D. (2004). Emotional intelligence: science and myth. Cambridge: MIT Press.

Mikolajczak, M., & Bellegem, S. V. (2017). Increasing emotional intelligence to decrease healthcare expenditures: How profitable would it be? Personality and Individual Differences, 116, 343-347. doi:10.1016/j.paid.2017.05.014

O'Neill, D. (2011). Loneliness. The Lancet, 377(9768), 812. doi:10.1016/s0140-6736(11)60307-3

Promoting Mental Health - World Health Organization. (n.d.). Retrieved November 28, 2017, from http://www.who.int/mental_health/evidence/en/promoting_mhh.pdf

Smith, M. C., Turkel, M. C., & Wolf, Z. R. (2013). Caring in nursing classics: an essential resource. New York: Springer.

Talseth, Lindseth, Jacobsson, & Norberg. (1999). The meaning of suicidal psychiatric in-patients' experiences of being cared for by mental health nurses. Journal of Advanced Nursing, 29(5), 1034-1041. doi:10.1046/j.1365-2648.1999.00990.x

Vu, N. T., Adel, H., Gupta, P., & Schütze, H. (2016). Combining Recurrent and Convolutional Neural Networks for Relation Classification. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. doi:10.18653/v1/n16-1065

Yeakel, S., Maljanian, R., Bohannon, R. W., & Coulombe, K. H. (2003). Nurse Caring Behaviors and Patient Satisfaction. JONA: The Journal of Nursing Administration, 33(9), 434-436. doi:10.1097/00005110-200309000-00002