2 minutes to read - Jun 28, 2023

More developers are coding with AI than you think, Stack Overflow survey finds

With 90,000 responses to the company's annual survey, here's where developers are leaning in the generative AI debate.

One of ChatGPT's biggest claims to fame was the chatbot's ability to code. Shortly after being released, people quickly noticed ChatGPT could perform advanced coding tasks such as debugging code. 

As a result, developers, whose job heavily relies on coding, have adopted the technology. 

Of the roughly 90,000 respondents, 70% either use AI tools in their development process or plan to start using them this year. Only 29.4% said they don't use AI tools and don't plan to.

The survey also showed that developers who are learning to code are more likely to use AI tools than professional developers (82% compared to 70%). Still, the fact that a large majority of professionals use these tools suggests their value goes beyond learning to code and extends into day-to-day professional work.

As for personal sentiment toward AI, 77% of all respondents said they have a favorable or very favorable stance on using AI tools as part of their development workflow.

Respondents said the top use cases for AI tools in their workflow are writing code (83%), debugging and getting help (49%), documenting code (35%), learning about a codebase (30%), and testing code (24%).

Despite the positive sentiment and widespread adoption, many developers remain hesitant about the accuracy of these AI tools.

Only 42% of respondents trust the accuracy of the output, while 31% are on the fence and 27% either somewhat or highly distrust it.

This distrust is likely rooted in the hallucinations that AI models are prone to: instances where a model confidently generates incorrect output or misinformation.

The consequences of these hallucinations can be as small as an incorrect answer or significant enough to get OpenAI sued. To get the most out of AI assistance while maintaining accuracy, it is best to have a human work in tandem with the AI.

