Overcoming the data challenges associated with gen AI deployments
Wed, 3rd Apr 2024

Interest in generative AI tools such as ChatGPT is rising rapidly as business leaders come to understand the power and potential of the technology.

Able to create outputs that were previously the domain of human creativity and intelligence, the tools have the potential to transform many business processes and become valuable assistants for everyone, from writers and creators to programmers and analysts.

The ability of many of these tools to interact using natural language makes them particularly interesting. Users can pose questions or make requests as though they were talking to a human assistant.

Meanwhile, the democratisation of AI through open-source initiatives and APIs has made generative AI accessible to a broader audience and sparked innovation across different industries. It also has the potential to streamline processes and reduce operational costs.
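
To illustrate how accessible this kind of natural-language interaction has become, the short Python sketch below poses a plain-English request to a freely available open-source model via the Hugging Face transformers library. The model name, prompt, and settings are illustrative assumptions only, not a recommendation.

    # A minimal sketch of natural-language interaction with an open-source
    # generative model via the Hugging Face transformers library.
    # The model name and prompt are illustrative assumptions only.
    from transformers import pipeline

    # Load a small, freely available text-generation model.
    generator = pipeline("text-generation", model="distilgpt2")

    # Pose a request in plain English, as one would to a human assistant.
    prompt = "Write a short summary of the key data privacy risks of generative AI:"
    result = generator(prompt, max_new_tokens=80, num_return_sequences=1)

    print(result[0]["generated_text"])

Swapping in a hosted API or a larger model typically changes only a few lines, which is part of why adoption has spread so quickly.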

Initial response
Interestingly, the responses of different businesses to generative AI vary. Some are embracing it enthusiastically for its potential to drive innovation, enhance efficiency, and gain a competitive edge.

Their excitement stems from the promise of generative AI to revolutionise various aspects of business. It presents opportunities for cost savings, streamlined operations, and creative collaboration.

Meanwhile, others are taking a more cautious approach due to concerns surrounding data privacy, regulatory uncertainty, and ethical dilemmas such as deepfake generation, misinformation, and the potential for AI to perpetuate biases.

The pace of adoption
Despite these reservations, the adoption of generative AI is gaining pace. Many businesses are leveraging it for content generation and marketing, automating tasks like writing blog posts and social media updates.

At the same time, customer service benefits are being achieved through generative AI-powered chatbots and virtual assistants that offer round-the-clock support. In the healthcare sector, generative AI is aiding in medical imaging interpretation and improving diagnostic accuracy.

Adoption is also growing rapidly among financial institutions. Here, the technology is being used in areas such as risk assessment, fraud detection, and algorithmic trading. Generative AI’s capabilities are also being harnessed for data analysis.

The education sector is also becoming a rapid adopter of AI, using it for activities such as generating personalised learning content and AI-driven tutoring. This, in turn, is delivering better support to students, which is likely to be reflected in improved results.

The challenges posed by generative AI
While it has much to offer the world, generative AI does come with its own set of challenges. Some models may have inherited biases from their training data, resulting in discriminatory or unfair outcomes.

At the same time, making use of personal data to train AI models also raises concerns about data privacy and the potential misuse of sensitive information. The ability to create convincing fake content, including deepfakes, also introduces ethical dilemmas concerning misinformation, impersonation, and privacy violations.

Also, because generative AI tools can automate a range of tasks, there are concerns about potential job displacement in various industries. Generative AI can also be exploited for malicious purposes, such as supporting cyberattacks or generating fraudulent content, posing security risks.

Designing a secure data architecture for AI
All AI models require large volumes of data during both their training and operational phases. For this reason, businesses must work to design a secure data architecture. This can be achieved by considering these industry best practices:

  • Minimise data collection: Only gather and use the information that is genuinely needed. Holding unnecessary data increases the risk of misuse and privacy breaches.
  • Implement anonymisation: Where possible, remove personally identifiable data to protect individual identities while preserving data utility (a simple sketch follows this list).
  • Enforce robust data encryption: This should cover data both at rest and in transit to guard against unauthorised access.
  • Establish stringent access controls: These controls should limit who can access and modify the data used for AI training and ongoing operations.
  • Adhere to data residency and jurisdiction-specific laws: These must be followed both in the country of operation and in any other jurisdictions where customers are located, to ensure legal compliance.
  • Conduct regular auditing: Thorough auditing and monitoring will enable security teams to promptly detect and respond to unusual activities or security breaches.
  • Implement data retention policies: These policies should include the regular removal of data that is no longer necessary, reducing the risk of data breaches.
  • Conduct employee education: Employees need to be educated about data security and privacy best practices to create a culture of awareness and responsibility.
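
To make a couple of these practices concrete, the sketch below shows one way the anonymisation and encryption-at-rest steps might look in Python, using a SHA-256 hash to pseudonymise identifiers and the cryptography library's Fernet recipe for encryption. The field names, sample records, and inline key handling are placeholder assumptions for illustration rather than a production design.

    # A minimal sketch of anonymising records before AI training and
    # encrypting them at rest. Field names and key handling are
    # illustrative assumptions, not a production design.
    import json
    import hashlib
    from cryptography.fernet import Fernet

    # Fields assumed to contain personally identifiable information.
    PII_FIELDS = {"name", "email", "phone"}

    def anonymise(record: dict) -> dict:
        """Drop direct identifiers and replace the customer ID with a one-way hash
        (strictly speaking, pseudonymisation rather than full anonymisation)."""
        cleaned = {k: v for k, v in record.items() if k not in PII_FIELDS}
        if "customer_id" in cleaned:
            cleaned["customer_id"] = hashlib.sha256(
                str(cleaned["customer_id"]).encode()
            ).hexdigest()
        return cleaned

    # Encrypt the anonymised dataset at rest. In practice the key would come
    # from a secrets manager rather than being generated inline.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    records = [{"customer_id": 42, "name": "Jane Doe",
                "email": "jane@example.com", "spend_last_90_days": 1250.0}]
    anonymised = [anonymise(r) for r in records]

    ciphertext = fernet.encrypt(json.dumps(anonymised).encode())
    with open("training_data.enc", "wb") as f:
        f.write(ciphertext)

In practice, the encryption key would be stored in a dedicated secrets manager, and access to both the key and the encrypted file would sit behind the access controls described above.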

There is no doubt that generative AI-powered tools have a lot to offer businesses of all sizes. Through careful deployment and effective management of data, these benefits can be achieved, and the challenges minimised.