SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

GitLab reveals AI concerns create an adoption dilemma

Wed, 6th Sep 2023

GitLab's findings show that approximately 80% of IT experts are concerned about AI tools accessing private data or intellectual property.

GitLab has released the findings of its 9th Global DevSecOps Report: The State of AI in Software Development. GitLab surveyed more than 1,000 global senior technology executives, developers, and security and operations professionals on their successes, challenges, and priorities for AI adoption. 

The report finds organisations are optimistic about AI, but adoption requires attention to privacy and security, productivity, and training. 

David DeSanto, Chief Product Officer of GitLab, says: "The transformational opportunity with AI goes way beyond creating code."

"According to the GitLab Global DevSecOps Report, only 25% of developers' time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60% of developers' day-to-day work."

"To realise AI's full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software, not just developers, to benefit from the efficiency boost." 

"GitLab's AI-powered DevSecOps platform delivers a privacy-first, single application to help teams deliver secure software faster," says DeSanto.  

Although organisations are enthusiastic about implementing AI, data privacy and intellectual property are key priorities when adopting new tools. 

The survey found that 95% of senior technology executives prioritise privacy and intellectual property protection when selecting an AI tool. Moreover, 32% of respondents were "very" or "extremely" concerned about introducing AI into the software development lifecycle.

Of those, 39% cited they are concerned that AI-generated code may introduce security vulnerabilities, and 48% said they are concerned that AI-generated code may not be subject to the same copyright protection as human-generated code.

Security professionals worry that AI-generated code could introduce more security vulnerabilities, increasing their workload. GitLab found that developers spend only 7% of their time identifying and mitigating security vulnerabilities, and 11% testing code. 

Developers (48%) were significantly more likely than security professionals (38%) to identify faster cycle times as a benefit of AI. Overall, 51% of respondents already see productivity as a key benefit of AI implementation. 

While respondents remain optimistic about their company's use of AI, the data indicates a discrepancy between organisations' and practitioners' satisfaction with AI training resources. 

Despite 75% of respondents saying their organisation provides training and resources for using AI, a roughly equal proportion also said they find resources independently, suggesting that the available resources and training may be insufficient. 

Highlighting the importance of training, 81% of respondents said they require training to successfully use AI in their daily work. Moreover, 65% of those who use or plan to use AI for software development said their organisation has hired or will hire new talent to manage AI implementation. 

When asked what types of resources they use to build AI skills, the top responses were books, articles, and online videos (49%), educational courses (49%), practising with open-source projects (47%), and learning from peers and mentors (47%).

Alexander Johnston, Research Analyst in the Data, AI & Analytics Channel at 451 Research, a part of S&P Global Market Intelligence, says: "Enterprises are seeking out platforms that allow them to harness the power of AI while addressing potential privacy and security risks. There is industry demand for privacy-first, sustainably adopted AI."
