In a recent development, Apple has banned the internal use of ChatGPT and similar AI products, following suit with other companies. Interestingly, the decision comes just as OpenAI has launched ChatGPT as a mobile app for iOS users.

According to an internal Apple document reviewed by The Wall Street Journal, the company has expressed concerns about potential risks associated with AI systems like ChatGPT, particularly the possibility of sensitive internal information being disclosed. Apple has also reportedly placed restrictions on GitHub’s automated coding tool, Copilot. Speculation has been circulating about Apple’s own plans in the field of AI, including the potential development of a language model to compete with ChatGPT and Google Bard.

Apple Bans ChatGPT and Joins Rival Samsung

In banning ChatGPT, Apple joins a growing list of companies, including Amazon and prominent banks such as JPMorgan Chase, Bank of America, and Citigroup, that have prohibited its internal use. These decisions reflect the increasing scrutiny and caution surrounding the adoption of AI technologies in sensitive environments.

Samsung, an Apple rival, has banned the internal use of ChatGPT not once but twice, following incidents that raised concerns. The initial ban was lifted in March, but Korean media soon reported that Samsung employees had turned to ChatGPT for tasks such as fixing source code bugs, troubleshooting software issues, and converting meeting notes into minutes.

In response, Samsung reinstated the ban earlier this month to prevent a repeat of those incidents. As GCHQ, the UK's spy agency, has warned, ChatGPT, Google Bard, and other LLM bots carry an inherent risk: data submitted to these models may be used in training, so confidential business information entered into a prompt could later be disclosed to other users who ask similar questions.

Furthermore, bot providers such as OpenAI and Google have visibility into the queries and content fed to their language models, so closely guarded corporate secrets could be exposed as those providers review the data. Software bugs have also affected ChatGPT's privacy: in March, OpenAI acknowledged a bug in the open-source library redis-py that made portions of some users' conversations visible to others. As Vlad Tushkanov, the lead data analyst at Kaspersky, has emphasized, this is a reminder that LLM chatbots do not offer users real privacy.

These incidents and risks underscore how much ChatGPT and similar chatbots can endanger confidential information, and point to the need for more careful software development and stronger user privacy protections.

Last month, OpenAI introduced a feature that lets ChatGPT users disable chat history. With history disabled, conversations no longer appear in the interface's sidebar and are excluded from the data used to train OpenAI's models. OpenAI clarified that such conversations are still retained for 30 days, may be reviewed by OpenAI when necessary to prevent abuse, and are then permanently deleted.

Additionally, OpenAI announced plans to launch ChatGPT Business, a version aimed at giving businesses greater control over their data; the company says conversations in ChatGPT Business will not be used to train its language models. We reached out to OpenAI for further details on ChatGPT Business, including whether chats are accessible to OpenAI staff and when it will launch, and will update this article with any response received.
