Privacy concerns regarding AI chatbots
Artificial intelligence, particularly machine learning and deep learning, has evolved rapidly, and there is no doubt that it now plays a crucial role in many aspects of human life. Advances in deep learning have enabled natural language processing (NLP) applications such as chatbots that support people struggling with medical issues, both physical and mental health.
To perform well, AI applications require a vast amount of personal information from customers in order to assist them with their goals. According to Chao [1], the European General Data Protection Regulation (GDPR) limits the information applications may collect to what is necessary, and sensitive data should not be collected unless there is a legal basis for doing so. In particular, chatbots that support people with medical issues (physical and mental health) must take the privacy and security of their patients' information seriously. Furthermore, users of these apps must be able to find out who is accessing their information and where that information is stored.
Health chatbots should also comply with the regulations that govern collecting sensitive information from patients. For example, Canadian health care privacy legislation comprises 14 government jurisdictions (the Federal Government, 10 Provinces, and 3 Territories), each with its own legislative framework for protecting the privacy of personal information ("PI") or personal health information ("PHI"). Most jurisdictions, except for Quebec and Nunavut, have legislation in place specifically dealing with the health sector and the protection of PHI. In some provinces, however, privacy legislation has been deemed substantially similar to the Personal Information Protection and Electronic Documents Act ("PIPEDA") and takes precedence over PIPEDA for health-information activity in those jurisdictions. [2]
If you are considering using chatbots for your business, first secure your application according to established security principles and then follow the applicable privacy regulations. If, instead, you are a customer who wants to use these bots, familiarize yourself with the privacy laws in your country and verify the confidentiality and integrity of the chatbot's communication.
Typically, chatbots employ the following security methods:
- identity authentication using login credentials (username and password).
- two-factor authentication (verifying identity through more than one means).
- encryption (encoding messages so that they cannot be read or modified by third parties).
- self-destructing messages (messages containing sensitive data are destroyed after a set period).
A vulnerability in any of these techniques can lead to the disclosure of sensitive and personal information. [3]
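To make the two-factor-authentication bullet concrete, the snippet below is a minimal, standard-library Python sketch of TOTP (RFC 6238), the time-based one-time password scheme behind most authenticator apps used as a second factor. The function names (`totp`, `verify`) are illustrative, not taken from any particular chatbot framework; a production system would use a vetted library rather than this sketch.

```python
import base64
import hmac
import struct

def totp(secret_b32: str, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = unix_time // step  # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F   # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str, unix_time: int) -> bool:
    # Constant-time comparison avoids leaking the code through timing.
    return hmac.compare_digest(totp(secret_b32, unix_time), submitted)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 seconds
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, 59))  # → 287082
```

Note the use of `hmac.compare_digest` in `verify`: comparing codes with `==` can leak information through timing differences, which is exactly the kind of vulnerability the paragraph above warns about.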
Conclusion:
AI plays an inevitable role in our lives, and chatbots are helping businesses serve their customers 24/7. However, customer privacy is a critical issue that must be acknowledged. As both customers and business providers, we should understand the privacy concerns surrounding chatbots and do our part effectively.