A slightly shorter version of this article first appeared in the Times of India on September 24, 2025.
Much ink has been spilled on the career prospects of a CS graduate in an age where generative AI has upended our world. Much of that ink predicts a gloomy outlook. Here I take a decidedly more optimistic view, but realizing the positive outcomes will take deliberate action from both educators and learners.
To state the obvious, the skill of writing “simple” software modules is no longer useful on its own; it has been easily automated away by general-purpose LLMs like ChatGPT or Gemini and by specialized software development tools like Cursor or Amazon’s CodeWhisperer. So a student may be tempted to skip the foundations of programming and software engineering and chase the shiny toys like prompt engineering or LLM fine-tuning. But that would be a mistake if you want to build a meaningful, lasting career in the technology sector.
What is the trouble with AI-generated code?
While individual code blocks can be AI-generated, it is still a human enterprise to create reasonably complex software by piecing those blocks together while keeping the overall system readable, and therefore maintainable. I have examined enough large software packages generated by the latest AI tools to know that such software is the epitome of “spaghetti code”: the interface design is not clean, the flow between modules is not intuitive, and software like that ages poorly. If you change one small part of the specification, can you revise the software? Perhaps, for that one change; but if the software has to last, such changes accumulate, and that is where spaghetti code comes back to bite us.
A legitimate question is: what is the problem with unreadable code if it is only ever going to be revised by AI tools, so that no human needs to read it? The fact is, we are far from trusting such code in production settings, as I have gathered from talking to many colleagues in the software business. For the foreseeable future, humans will be verifying software for functionality, reliability, and security. And the process of translating ambiguous or imperfectly defined specifications into something rigorous enough to write software against remains a messy, human affair.
What can you do in college to future proof your career?
So if you buy my argument above, learn the software fundamentals in college, because they will be your foundation in this frenetic race where new models and architectures seemingly come out every day. Use AI tools generously in your coursework (checking, of course, what your course policy allows). But be on the lookout for the errors AI makes: errors that compromise functionality or violate some reliability or security property. This skill is supremely valuable in the industry today, and college is the place to pick it up.
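To make the error-spotting skill concrete, here is a minimal, hypothetical sketch in Python (my own illustration, not taken from any real tool’s output) of the kind of plausible-looking helper an AI assistant might produce, with a subtle boundary error that a reviewer who knows the fundamentals would catch:

```python
# Hypothetical illustration: a helper that looks correct at a glance
# but has an off-by-one error in the slice bound.

def chunk(items, size):
    """Split `items` into consecutive chunks of length `size`."""
    # A buggy version an assistant might emit: the slice end of
    # i + size - 1 silently drops the last element of every chunk.
    # return [items[i:i + size - 1] for i in range(0, len(items), size)]

    # Corrected version: the slice end must be i + size.
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

The buggy variant passes a casual eyeball test; only someone who reasons carefully about slice boundaries, or writes a test on a small input, will notice the dropped elements.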
The second aspect is contributing to open-source software development. If you can contribute to the fine-tuning of an open-source ML model, or to a method for annotating data samples for training, say, that would be a valuable skill to highlight. It would also sharpen, and showcase, your skill in integrating your code with that of others, which, as argued above, is important.
The third aspect is software testing and performance optimization. Pick up these skills: testing that uncovers blind spots in AI-generated code will remain in demand, and so will optimizing AI-generated software for performance. The latter demands a holistic understanding of algorithms, software infrastructure (such as compilers and runtime environments), and hardware, and it is likely to remain a human endeavor, at least for the near future.
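As a small illustration of what testing for blind spots looks like, here is a hedged sketch (the function and scenario are hypothetical, invented for this example): a score-normalization routine together with the edge-case tests that generated code most often fails.

```python
# Hypothetical example: normalize scores to [0, 1] and test the
# edge cases that AI-generated versions commonly overlook.

def normalize_scores(scores):
    """Linearly scale a non-empty list of scores into the range [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:                      # all-equal input: a naive version
        return [0.0] * len(scores)    # would divide by zero here
    return [(s - lo) / (hi - lo) for s in scores]

# Edge cases worth testing explicitly:
assert normalize_scores([5, 5, 5]) == [0.0, 0.0, 0.0]   # constant input
assert normalize_scores([0, 10]) == [0.0, 1.0]          # two-point range
assert normalize_scores([-2, 0, 2]) == [0.0, 0.5, 1.0]  # negative values
```

The value is less in this particular function than in the habit: for any generated routine, ask what inputs the “happy path” never exercised, and write those tests first.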
The outlook — cloudy or sunny?
Like most things in life, the answer depends on you. The power of AI will render the mediocre software developer obsolete, as it has already started to do. So there will be a greater premium on the higher-skilled software professional who has the skills that the machine does not. There will be an even greater incidence of “winner take all,” where a few software (synonymously, AI) companies become fabulously successful at the expense of many others. And, as an individual, you will feel even more pressure to be “above average.” But everyone being above average is a mathematical impossibility, isn’t it?