
AI's impact on software engineering jobs

· 2 min read
Amaresh Tripathy
Co-Founder and Managing Partner

Anecdotes are great, but what does the data say? Here are some of the insights that emerge from analyzing 20M job postings over 16 months.


Key Findings

  1. Overall software engineering job postings have remained stable
  2. There is a significant shift in required skills
  3. AI/ML skills are increasingly in demand

Detailed Analysis

  • AI and ML Engineers: Showing strongest growth in demand, leading the market
  • Front-end Engineers and Data Engineers: Experiencing significant decline in demand
  • Data Scientists: Demonstrating resilience with stable demand levels

Salary Insights

  • Salary ranges remain relatively flat when adjusted for inflation
  • Current market supply suggests limited potential for significant salary increases in the near term

Most In-Demand Skills

  1. NLP and LLM Technologies

    • Natural Language Processing emerges as the most desired skillset
    • LLM-related skills, particularly chatbot development, showing exponential growth
  2. Programming Languages

    • Rust: Gaining significant momentum in the market
    • React: Taking substantial market share from Angular in front-end development
    • Python: Maintains its position as the de facto language for ML development

Tech Company Hiring Patterns

  • Large tech companies that previously conducted layoffs are now actively hiring again
  • Hiring patterns show balanced recruitment across all roles, not just AI positions
  • Evidence suggests a focus on talent upgrades and a correction of 2021-era hiring practices

Key Takeaway

The data presents a clear message for both new graduates and experienced software engineers: incorporating AI skills into your toolkit is becoming increasingly important for career growth and marketability.

Source: Analysis based on 20M job postings over a 16-month period. Original data

The Data Platforms Battle

· 2 min read
Amaresh Tripathy
Co-Founder and Managing Partner

The enterprise data stack is going through an inflection point, and the battle lines are being redrawn - which happens every decade or so. This time, to no one's surprise, the enterprise AI agenda is going to determine the winners and losers.


The Changing Landscape

The battleground for data platforms has shifted dramatically, now centering on enabling AI applications. Success in this space requires excellence in four critical areas:

  1. Semantic Layer: Understanding the nuances of business data across the enterprise
  2. Application Development Tools: Providing low-code/no-code toolsets for building applications
  3. Orchestration Simplicity: Delivering workflows and interoperability that simplify ecosystem management
  4. AI Capabilities: Offering powerful AI frameworks that deliver results

The Major Players

Four tech giants are aggressively positioning themselves in this space, each making significant moves in AI capabilities over the last twelve months:

Databricks

  • Strong position with Unity Catalog for data context understanding
  • Introduced AI Playground and DBRX
  • Application toolset strategy remains unclear

Microsoft

  • Leverages strong OpenAI partnership
  • Microsoft Fabric provides excellent orchestration
  • Strong application development tools
  • Semantic layer remains a weakness

Salesforce

  • Comprehensive solution for customer applications with Einstein AI
  • Challenge: Limited to Salesforce data ecosystem
  • Faces hurdles with external data integration

Snowflake

  • Under competitive pressure but has comprehensive offerings
  • Cortex AI shows promise
  • Benefits from walled garden approach for orchestration
  • Needs to balance user experience with economics
  • Working to expand app building capabilities for business users

Enterprise Challenges

A significant organizational challenge exists: data teams and application teams typically operate under separate leadership with limited collaboration. As AI applications become the central battleground, this could drive:

  1. New winners and losers in the Data Platform space
  2. Fundamental changes to enterprise IT operating models

Three Things to Get Right When Scaling Your Data Analytics Operation

· 3 min read
Amaresh Tripathy
Co-Founder and Managing Partner

Earlier this year, I discussed how the urgent need for quick, accurate decision-making during the pandemic catapulted data analytics into the spotlight. Suddenly, business leaders demanded more—and faster.

Today, we find ourselves at a pivotal inflection point. The focus is shifting from the art and craft of data analytics to operating at scale across the entire organization.

While innovative experiments have their place, the real challenge now is to demonstrate that these strategies can work seamlessly throughout your business and support a long-term vision.


Achieving scalable data analytics requires a harmonious blend of talent, technology, and processes. Let’s explore these three critical factors in detail.


The Three Critical Factors

1. Talent: Beyond Checking Boxes

Building a high-performing analytics team is akin to assembling a special ops unit—only the best make the cut, and every member is meticulously trained for their role.

  • Selective Recruitment: Much like top-tier universities maintain small class sizes to nurture excellence, your goal should be to attract individuals who not only excel individually but also enhance the overall team dynamic.
  • Focused Training: Invest in ongoing learning so that every team member understands their responsibilities and contributes effectively to collective success.
  • Adaptability: Recognize that data analytics is a relatively new field. Although the talent pool is still growing, identifying and nurturing potential is key to building a cohesive and capable team.

2. Technology Stack: Engineered for Scale

Your technology stack must be designed with scalability at its core. Here are three key considerations:

  1. Embrace Flexibility: Accept that there isn’t a one-size-fits-all solution. As your business scales, your tech environment will evolve. Embrace the learning process—including the inevitable mistakes and unexpected insights.
  2. Learn from Industry Leaders: Look to successful companies like Netflix. Despite being known for entertainment, Netflix is a data-driven powerhouse. Their strategy of partnering with experts (like Amazon for cloud services) enables them to manage exponential growth while focusing on their core strengths.
  3. Prioritize Scalability: Even without a neatly defined roadmap, ensure that every tech decision is rooted in the principle of scale. This approach will help you accommodate growth and drive continuous progress.

3. Networked Decision-Making: Breaking Down Silos

Effective scaling of data analytics cannot occur in isolation. Your organization must operate as an interconnected network rather than a collection of silos.

  • Collaborative Culture: Move away from rigid, top-down hierarchies. Encourage open communication and collaboration across all departments.
  • Cross-Functional Integration: Consider how Netflix’s marketing, tech, production, and support teams work together. Such integration ensures that every new initiative is backed by the necessary expertise—from cloud support to content creation.
  • Empower Through Data: It’s not enough to simply provide access to data; teams must also have the skills to interpret and leverage that information. Data analytics leaders will increasingly serve as facilitators, enabling robust, networked decision-making across the organization.

By homing in on these three critical areas—talent, technology, and networked processes—you’ll be well-equipped to scale your data analytics operations and unlock sustained business value.

Building AI Applications - Learnings over the past year

· 2 min read
Amaresh Tripathy
Co-Founder and Managing Partner
Varun Sharma
Managing Director, AI Engineering

Our team at AuxoAI has spent the past twelve months deep in building enterprise AI applications. They span various functions, and here are our key learnings.


Key Learnings

  1. Integration Challenges
  2. User Adoption Patterns
  3. Performance Optimization
  4. Security Considerations
  5. Process Automation and AI-Native Workflows

Process Automation Evolution

Celonis popularized the concept of the 'happy path' - essentially, how a process (and the software) is supposed to work. In reality, processes look something like the picture below: all over the place, with lots of exceptions.

[Figure: Process flow complexity]

Most of the steps on the less-happy paths result either from policy guardrails that require manual research and summarization or from upstream data quality issues. This intermediate research step creates bottlenecks and generates a lot of non-value-added work in the enterprise.

The Promise of GenAI in Workflow Optimization

The promise of GenAI models is that eventually we will see more AI-enabled workflows that:

  • Automate the research step more effectively than traditional RPA bots
  • Suggest the right guardrails and calibrate the system performance
  • Nudge to correct the upstream data challenges

For instance, when creating a sales order, certain cases trigger a credit-check guardrail that creates friction in the process and requires manual research and intervention.
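
To make that first point concrete, here is a minimal sketch in Python of what an AI-assisted research step for such a credit-check guardrail could look like. Every name in it (CreditEvidence, fetch_credit_evidence, the injected llm callable) is a hypothetical placeholder, not a description of any particular product or of our own implementation.

```python
# Minimal sketch, assuming a generic chat-completion client injected as `llm`.
# All names are hypothetical placeholders for illustration only.
from dataclasses import dataclass

@dataclass
class CreditEvidence:
    customer_id: str
    outstanding_balance: float
    days_past_due: int
    credit_limit: float

def fetch_credit_evidence(customer_id: str) -> CreditEvidence:
    # Placeholder for real ERP / credit-bureau lookups.
    return CreditEvidence(customer_id, 12_500.0, 18, 50_000.0)

def draft_credit_summary(evidence: CreditEvidence, llm) -> str:
    # The model drafts the research summary; a human approver still decides.
    prompt = (
        "Summarize this customer's credit position in three bullet points "
        f"for an order-approval reviewer: {evidence}"
    )
    return llm(prompt)

if __name__ == "__main__":
    evidence = fetch_credit_evidence("CUST-001")
    # Stub callable standing in for any LLM API wrapper.
    print(draft_credit_summary(evidence, llm=lambda p: f"[draft based on: {p}]"))
```

The point of the sketch is the shape of the step: the machine gathers and condenses the evidence, and the human intervention shrinks to a review and a decision.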

AI-Native Approaches

Alternatively - and this is where things get more interesting - you can think of the process itself as dynamic and adaptive. The workflow is in fact a series of decisions, and depending on the type of customer or order, you will take different paths that optimize for process outcome metrics. Not all orders should need the standard sales order generation process. Maybe there is a 'click to buy' equivalent of Amazon's buying experience for certain types of orders or customers. This probabilistic approach to process execution will be the foundation of a new breed of AI-native software.
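
A minimal sketch of the "process as a series of decisions" idea, in Python. The risk model, thresholds, and path names are all assumptions made up for illustration; the only point is that a score, not a fixed flowchart, picks the path.

```python
# Minimal sketch: route each order down a different path based on a model score.
# Thresholds and the toy scoring rule are hypothetical.
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    amount: float
    customer_tier: str  # e.g. "strategic", "standard", "new"

def score_order_risk(order: Order) -> float:
    # Stand-in for a predictive model; returns a 0-1 risk score.
    base = 0.1 if order.customer_tier == "strategic" else 0.4
    return min(1.0, base + order.amount / 1_000_000)

def route_order(order: Order) -> str:
    # Pick a path based on the score instead of a single happy path.
    risk = score_order_risk(order)
    if risk < 0.2:
        return "click_to_buy"        # instant, no manual steps
    if risk < 0.6:
        return "standard_flow"       # normal order generation
    return "manual_credit_review"    # guardrail path with human research

if __name__ == "__main__":
    print(route_order(Order("SO-1001", 25_000, "strategic")))   # click_to_buy
    print(route_order(Order("SO-1002", 900_000, "new")))        # manual_credit_review
```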

Implementation Challenges

Every enterprise software company is going to incorporate these models into its workflows, but the challenge will be how to think about AI-enabled vs. AI-native approaches, which is easier for enterprises to adopt, and what they are willing to pay for.

AI Enabled vs AI Native

· 2 min read
Amaresh Tripathy
Co-Founder and Managing Partner

Celonis popularized the concept of the 'happy path': essentially, how a process (and the software) is supposed to work. In reality, processes often deviate significantly.


Understanding the Difference

  1. AI-Enabled Systems

    • Traditional systems with AI features added
    • Limited integration
    • Bolt-on AI capabilities
  2. AI-Native Systems

    • Built from ground up with AI
    • Deep integration
    • Natural AI workflows

Key Learnings from Building Enterprise AI Applications

At AuxoAI, we've gained valuable insights from twelve months of building enterprise AI applications across various functions and industries. Here are our key learnings from developing probabilistic software systems that integrate data analytics, modeling, UX, and deep engineering:

1. Not Everything Needs GenAI

Systems with zero or very low tolerance for error, especially those without human oversight, aren't ideal starting points for AI implementation. Traditional automation and ML models often suffice; there's no need to force-fit novel technology.

2. POCs Often Miss the Mark

Many Proof of Concepts focus too narrowly on technical validation rather than delivering real business value. The tendency to default to simple document Q&A systems overlooks opportunities to impact critical enterprise metrics.

3. Process Reimagination is Key

The most significant value from AI comes from reimagining processes entirely. This requires:

  • Future Back thinking aligned with Today Forward execution
  • Breaking down the reimagined process vision into manageable steps
  • Regular value delivery while managing organizational change

4. Beyond Simple GenAI

Complex AI applications require:

  • Multiple agents working in concert
  • Sophisticated information exchange systems
  • Advanced engineering beyond basic prompt engineering
  • Reliable integration of multiple components
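
A minimal sketch of what "multiple agents working in concert" can mean in practice: two narrow agents with an explicit message hand-off, wired together by a deliberately simple orchestrator. All names are hypothetical, and the LLM is an injected callable rather than any specific API.

```python
# Minimal sketch: two agents, one explicit hand-off, one fixed-order pipeline.
# Real systems layer retries, validation, and shared state on top of this.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    content: str

def research_agent(task: str, llm) -> Message:
    # First agent: gathers and condenses the facts the task needs.
    notes = llm(f"Collect the key facts needed for: {task}")
    return Message(sender="researcher", content=notes)

def drafting_agent(task: str, research: Message, llm) -> Message:
    # Second agent: turns the researcher's notes into a deliverable.
    draft = llm(f"Using these notes:\n{research.content}\n\nComplete the task: {task}")
    return Message(sender="drafter", content=draft)

def run_pipeline(task: str, llm) -> str:
    research = research_agent(task, llm)
    draft = drafting_agent(task, research, llm)
    return draft.content

if __name__ == "__main__":
    fake_llm = lambda prompt: f"[model output for: {prompt[:60]}...]"
    print(run_pipeline("Summarize the credit terms in the supplier contract", fake_llm))
```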

5. Importance of Auditability

Particularly in RAG (Retrieval-Augmented Generation) implementations, transparency is crucial:

  • Clear audit trails
  • Visible processing steps
  • Direct connections to source documents
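
A minimal sketch of an auditable RAG answer: the response object carries the retrieved chunks and the visible processing steps alongside the generated text, so every claim can be traced back to a source. The retriever and LLM are injected callables, and all names are hypothetical.

```python
# Minimal sketch: a RAG response that keeps its own audit trail.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    doc_id: str
    page: int
    text: str

@dataclass
class AuditedAnswer:
    answer: str
    sources: list[Chunk] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)   # visible processing steps

def answer_with_audit(question: str, retriever, llm) -> AuditedAnswer:
    steps = [f"received question: {question}"]
    chunks = retriever(question)                      # any top-k retriever
    steps.append(f"retrieved {len(chunks)} chunks")
    context = "\n\n".join(f"[{c.doc_id} p.{c.page}] {c.text}" for c in chunks)
    answer = llm(
        "Answer using only the context below, citing [doc p.page].\n"
        f"{context}\n\nQ: {question}"
    )
    steps.append("generated answer from retrieved context")
    return AuditedAnswer(answer=answer, sources=chunks, steps=steps)

if __name__ == "__main__":
    fake_retriever = lambda q: [Chunk("policy-7", 3, "Credit limit is $50k for tier-A customers.")]
    fake_llm = lambda p: "Tier-A customers have a $50k credit limit [policy-7 p.3]."
    result = answer_with_audit("What is the tier-A credit limit?", fake_retriever, fake_llm)
    print(result.answer, result.sources, result.steps, sep="\n")
```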

6. Document Complexity

While basic RAG implementations are straightforward, challenges arise with:

  • Rich formatted documents
  • Images
  • Nested tables
  • Complex document structures

7. Cross-Document Intelligence

High-value use cases often require:

  • Information retrieval from multiple documents
  • Multi-step processing of retrieved information
  • Integration with historical conversation context
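
A minimal sketch of a cross-document, multi-step flow: retrieve from several documents independently, then run a second pass that synthesizes across them together with the prior conversation. The document names, retrievers, and LLM stub are all hypothetical.

```python
# Minimal sketch: two-step, cross-document question answering with history.
def cross_document_answer(question, history, retrievers, llm):
    # `retrievers` maps document name -> search callable;
    # `history` is a list of previous (user, assistant) turns.

    # Step 1: pull candidate passages from every document independently.
    findings = {name: search(question) for name, search in retrievers.items()}

    # Step 2: synthesize across documents and the conversation so far,
    # rather than answering from a single chunk.
    context = "\n".join(f"{name}: {passages}" for name, passages in findings.items())
    dialogue = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in history)
    prompt = (
        "Conversation so far:\n" + dialogue +
        "\n\nEvidence from multiple documents:\n" + context +
        f"\n\nAnswer the new question, reconciling any conflicts: {question}"
    )
    return llm(prompt)

if __name__ == "__main__":
    retrievers = {
        "MSA-2023": lambda q: ["Payment terms: net 60."],
        "Amendment-1": lambda q: ["Payment terms revised to net 45."],
    }
    history = [("Which contract governs ACME?", "The 2023 MSA, as amended.")]
    fake_llm = lambda p: "Net 45, per Amendment-1, which supersedes the MSA's net 60."
    print(cross_document_answer("What are the payment terms?", history, retrievers, fake_llm))
```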

Pitfalls and Promises - AI in Law

· 2 min read
Amaresh Tripathy
Co-Founder and Managing Partner

Arvind Narayanan is the AI critic you may want to follow for two reasons:

  1. He is not trying to be sensational by taking extreme positions
  2. He is a first-rate academic

His work tries to define, in a pragmatic way, the boundaries of where AI can be useful and where you have to be careful.

He has published a solid paper outlining these boundaries in the legal space:

  1. Information processing tasks have high clarity and high observability and are the best use cases to start with (categorizing requests for legal help, e-discovery)
  2. Creativity and reasoning tasks span a range (spotting errors in legal filings is easier; preparing legal filings is harder)
  3. Predictive tasks are fraught with challenges (e.g., legal judgment prediction)

Lots of legal departments are evaluating generative AI, and this is a good paper for them. In fact, we are doing some solid, value-accretive work in the contract extraction space, which helps turn unstructured data into structured information for downstream applications, and our experience has been consistent with these observations.
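
As an illustration of that contract-extraction pattern, here is a minimal sketch that asks a model for a fixed JSON schema and parses it into a typed record for downstream systems. The schema, prompt, and llm callable are assumptions made up for illustration, not our production pipeline.

```python
# Minimal sketch: LLM-based extraction of structured fields from contract text.
# Schema and prompt are hypothetical; `llm` is any chat-completion callable.
import json
from dataclasses import dataclass

@dataclass
class ContractFields:
    counterparty: str
    effective_date: str
    payment_terms: str
    auto_renewal: bool

EXTRACTION_PROMPT = (
    "From the contract text below, return JSON with exactly these keys: "
    "counterparty, effective_date, payment_terms, auto_renewal (true/false).\n\n{text}"
)

def extract_contract_fields(contract_text: str, llm) -> ContractFields:
    raw = llm(EXTRACTION_PROMPT.format(text=contract_text))
    data = json.loads(raw)           # real systems validate and retry here
    return ContractFields(**data)

if __name__ == "__main__":
    fake_llm = lambda p: json.dumps({
        "counterparty": "ACME Corp",
        "effective_date": "2024-01-15",
        "payment_terms": "Net 45",
        "auto_renewal": True,
    })
    print(extract_contract_fields("(contract text here)", fake_llm))
```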

Key Use Cases

  1. Document Review and Analysis
  2. Legal Research
  3. Contract Management
  4. Predictive Analytics

Challenges and Opportunities

  1. Accuracy and Reliability
  2. Ethics and Bias
  3. Data Privacy
  4. Integration with Existing Systems
