Codex vs Programmers: Can the Text Generator Kill Coders?

As OpenAI's own developers state, Codex is not as good at understanding code as it is at generating it.

Codex DNA, a pioneer in automated synthetic biology systems, can encode DNA sequences digitally and retrieve the stored information accurately afterward. That sounds like a great achievement, yet it is OpenAI's Codex technology that is sending jitters through the coding community for its ability to write code all by itself. This is AI-powered coding that can produce a working snippet on cue. In 2021, OpenAI released Codex, a new system that writes code from simple prompts given in plain language. However, experts believe the day when programmers are rendered redundant simply because a system is smart enough to generate code is still far off.
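To make the idea concrete, here is a minimal sketch of what prompting Codex looked like through the openai Python package at the time of the beta. The Completion endpoint and the 'code-davinci-002' model name are assumptions based on that era's API and have since been deprecated, so treat this as an illustration rather than a working recipe:

import openai  # pre-1.0 openai package, assumed for illustration

openai.api_key = "YOUR_API_KEY"

# A plain-language request plus a function signature; Codex fills in the body.
response = openai.Completion.create(
    model="code-davinci-002",   # Codex model name from the beta period
    prompt='"""Return the n-th Fibonacci number."""\ndef fibonacci(n):',
    max_tokens=128,
    temperature=0,              # keep the completion as deterministic as possible
)

print(response.choices[0].text)  # the generated function body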

A developer's job is not confined to writing code

Typically, writing code takes up less than 20% of a developer's time. In the paper 'Evaluating Large Language Models Trained on Code', OpenAI reveals several facts that should be enough to put the unreasonable fears about programmers' redundancy to rest. The paper notes that "engineers don't spend their full day writing code. Instead, they spend much of their time on tasks like conferring with colleagues, writing design specifications, and upgrading existing software stacks." It goes on to say that Codex can help coders produce good code by letting the system handle the grunt work. This shouldn't come as a surprise, because developing a project involves a great deal of trivial and repetitive coding. As for job losses, around 20% of programmers may become redundant if Codex ever succeeds in generating genuinely reliable code, and only on the day a non-coder can collaborate with Codex to turn a spec sheet into a working piece of software. Experts do not see that day arriving any time soon, and there are many reasons why they think so.

Is Codex really a programming application?

Codex is a direct descendant of the GPT-3 model, adapted to generate code from a few simple inputs. Deep learning models are only as good as the data fed to them, and ironically, GPT-3's datasets didn't contain coding samples. It is therefore a stretch to consider Codex a complete programming application in itself. Further, as OpenAI's developers themselves state, Codex is not as efficient at understanding code as it is at generating it. Like other deep-learning language models, Codex merely captures statistical correlations between code fragments. It has also been observed that the model's accuracy drops as the number of variables it has to handle increases. Elaborating on its inability to grasp even basic program structure, the paper states that it "can recommend syntactically incorrect or undefined code and invoke variables and functions from outside the codebase." At times it may even stitch together pieces of code that don't fit. Moreover, the developers themselves reported that Codex succeeds in only 37% of cases.
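The point about undefined names is easy to picture. The snippet below is a hypothetical illustration (not actual Codex output) of the failure mode described above: code that looks plausible but leans on a helper and a variable that simply don't exist in the codebase:

# Hypothetical example of the failure mode described above.
def average_order_value(orders):
    # 'normalize_currency' and 'ORDER_CACHE' are invented names; nothing in the
    # codebase defines them, so this code raises NameError as soon as it runs.
    totals = [normalize_currency(o.total) for o in orders]
    return sum(totals) / len(ORDER_CACHE)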

Can programmers and Codex co-exist?

Though OpenAI's CTO and co-founder Greg Brockman is optimistic about Codex's inclusivity, seeing it as a tool to multiply programmers, experts view the picture from a different vantage point. Beyond assisting programmers in generating quality code, Codex is likely to create a new breed of programmers called 'prompt engineers': people who craft the appropriate prompt for the application so that it generates the intended code, as sketched below. Daniel Jeffries, a podcaster on future technologies, opines that Codex might create human-AI hybrids, 'centaurs', as in chess, that together do things faster and better than either could alone.
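What a prompt engineer actually does is easiest to show by example. The prompts below are purely illustrative (they are not an official Codex recipe): the craft lies in giving the model enough context that it has little left to guess:

# Two ways of asking for the same function; the second is what a
# 'prompt engineer' would iterate toward. Illustrative prompts only.

vague_prompt = "sort the users"

detailed_prompt = (
    '"""Sort a list of user dicts by the "last_login" timestamp, most recent\n'
    'first. Users without a "last_login" key go to the end of the list."""\n'
    "def sort_users(users):"
)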
