The decision to leave a senior executive role at one of the world’s largest chip companies and build a startup is not made lightly — especially when the problem you are targeting sits at the technical and commercial center of the most consequential infrastructure market in decades. That is the position Raja Koduri occupied when he departed Intel in 2023 and founded Oxmiq Labs Inc. in San Francisco. The move was neither accidental nor impulsive. It was the logical endpoint of a career spent watching the same structural failure repeat itself across multiple organizations, multiple product cycles, and multiple billions of dollars in GPU hardware investment.

What Four GPU Programs Teach You That No Classroom Can

Most engineers spend their careers inside a single hardware ecosystem. Raja Koduri has led GPU architecture at four distinct organizations — ATI Technologies, Advanced Micro Devices, Apple, and Intel — each with a different competitive position, a different product philosophy, and a different relationship to the software developers who ultimately determine whether hardware succeeds or stalls.

At ATI, Koduri developed his foundation in graphics architecture at a company that was, at the time, genuinely competitive with NVIDIA in the graphics hardware market. When Advanced Micro Devices acquired ATI, Koduri eventually rose to Senior Vice President and Chief Architect of the Radeon Technologies Group, overseeing AMD’s GPU development across consumer and compute workloads. That role placed him on the front line of the competitive dynamics between AMD and NVIDIA — the technical decisions, the ecosystem trade-offs, and the commercial consequences of those choices over multiple product generations.

His subsequent role in graphics engineering at Apple offered a fundamentally different model: a company that designs its own silicon, controls its own software stack, and treats hardware-software integration not as a goal but as a baseline requirement. Apple does not ship hardware and then build software support. It ships both together, deliberately, as a unified system.

At Intel, as Chief Architect and Executive Vice President of the Architecture, Graphics and Software (IAGS) division, Koduri oversaw the most ambitious discrete GPU program the company had undertaken in its history. The scope of that effort — designing new GPU silicon, building a developer toolchain, establishing market presence from near-zero in the discrete GPU segment — made it one of the most instructive exercises in ecosystem-building in recent semiconductor history. What worked, what did not, and why are questions Koduri is uniquely positioned to answer.

The Structural Problem That Never Changed

Through four programs, across two decades, one variable remained constant: NVIDIA’s software ecosystem — specifically, the CUDA parallel computing platform — functioned as a structural barrier that no amount of hardware capability fully overcame.

NVIDIA introduced CUDA in 2006. In the years that followed, it became the default programming environment for GPU-accelerated workloads. Machine learning researchers built on it. Framework developers built on it. Cloud providers built infrastructure around it. By the time enterprise AI development reached the scale it occupies today, CUDA was not simply a tool — it was the environment in which AI software was conceived, written, tested, and deployed.

For GPU hardware alternatives, this created a problem that hardware alone could not solve. A chip with superior specifications for a specific workload is commercially irrelevant if the developer toolchain required to run that workload does not support it. The switching cost is not measured in dollars — it is measured in the time required to rewrite, retest, and redeploy software that was built for a different execution environment. In most enterprise contexts, that cost is prohibitive.

Koduri did not observe this from a distance. He worked inside organizations that built and shipped competitive GPU hardware, watched it fail to achieve its market potential, and understood precisely where the failure originated. That understanding is the direct foundation of Oxmiq Labs.

RISC-V, CUDA Compatibility, and the Startup Bet

Oxmiq Labs is built on a RISC-V hardware architecture — a choice that signals the company’s orientation from the outset. RISC-V is an open-source instruction set architecture, meaning its foundational hardware design language is not owned, licensed, or controlled by any single commercial entity. For a startup whose central mission is to reduce proprietary lock-in in the GPU ecosystem, building on an open hardware foundation is structurally coherent: it eliminates one layer of vendor dependency before the software layer is even addressed.

The company’s focus is CUDA workload portability. Specifically, Oxmiq Labs works to enable Python-based AI applications — the dominant development environment for machine learning — to run on non-NVIDIA hardware without code modification. The goal is not to replace CUDA, to compete with it on developer experience, or to build a rival ecosystem. The goal is to make CUDA a hardware-agnostic execution standard rather than a hardware-specific dependency.

This framing reflects an important strategic clarity. GPU hardware startups attempting to compete with NVIDIA by offering a better platform are asking developers to change their behavior. Oxmiq’s approach asks nothing of developers at all — it asks the infrastructure to accommodate the code that already exists. That inversion is not a marketing distinction. It is a product architecture decision that determines how the company reaches the market and what its adoption path actually looks like.
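The inversion described above can be pictured in miniature. The following Python sketch is purely illustrative — every name in it is hypothetical and none of it reflects Oxmiq's actual API or implementation. It shows the general shape of a hardware-agnostic dispatch layer: application code keeps requesting the CUDA-style device it was written against, while a registry on the infrastructure side transparently routes that request to whatever backend is actually present.

```python
# Illustrative sketch only -- all names here are hypothetical and do not
# represent Oxmiq's software. The idea: application code written for a
# "cuda" device runs unchanged, because the infrastructure layer resolves
# that device name to whatever compatible hardware is installed.

class Backend:
    """A compute backend that a portability layer can dispatch to."""
    def __init__(self, name):
        self.name = name

    def run(self, workload):
        # Stand-in for actually launching a kernel on this hardware.
        return f"{workload} executed on {self.name}"

# Registry mapping the device string an application uses to real hardware.
_registry = {}

def register_backend(device_name, backend):
    _registry[device_name] = backend

def get_device(device_name):
    # The application asks for "cuda"; the layer decides what that means.
    return _registry[device_name]

# Infrastructure side: map the CUDA-style name to non-NVIDIA silicon.
register_backend("cuda", Backend("riscv-gpu"))

# Application side: unchanged code that was originally written for CUDA.
device = get_device("cuda")
print(device.run("matmul"))
```

The design point the sketch isolates is the one the article makes: the application's side of the interface never changes; only the binding behind the device name does.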

The Assets a Corporate Career Builds — and When They Matter

Building a startup after a long corporate career carries a specific set of advantages that are rarely available to founders at earlier career stages. In Koduri’s case, those advantages are particularly concentrated and particularly relevant to the problem Oxmiq Labs is solving.

The first is technical depth. Designing GPU architectures across multiple organizations and multiple product generations produces an understanding of the hardware-software interface that is not available from the outside. The decisions that determine whether a GPU program succeeds or fails — where to invest engineering resources, which compatibility trade-offs to make, how to sequence a developer ecosystem — are learned by doing, not by studying.

The second is industry relationships. Beyond his operational role at Oxmiq Labs, Raja Koduri serves in advisory and board capacities for leading semiconductor and AI companies. Those positions reflect a professional network built across decades of senior technical leadership. For a company whose path to adoption runs through partnerships with chip manufacturers, cloud providers, and AI platform developers, that network is not incidental — it is a competitive asset.

The third is credibility within the engineering community. Developers and architects who have worked with Radeon hardware, followed Intel’s discrete GPU program, or studied GPU architecture at a technical level know Koduri’s body of work. That reputation reduces the friction of establishing Oxmiq’s technical legitimacy — a meaningful advantage in a market where claims about CUDA compatibility will be evaluated skeptically and tested rigorously.

The Market Timing Behind the Founding Decision

The founding of Oxmiq Labs in 2023 coincided with a specific moment in the AI infrastructure market: the period immediately following the dramatic acceleration of enterprise AI adoption and the corresponding surge in GPU demand that revealed, in sharp relief, the risks of single-vendor compute dependency.

Governments evaluating domestic AI capacity, cloud providers managing compute cost and supply risk, and enterprises building long-term AI programs all faced the same structural constraint — their options were limited less by hardware availability than by software ecosystem lock-in. A technically credible solution to CUDA portability, arriving at this moment, addresses a problem that the market has simultaneously recognized as urgent and failed to solve.

Koduri’s academic background — a bachelor’s degree in electronics and communications engineering from Andhra University and a Master of Technology from the Indian Institute of Technology (IIT) Kharagpur — combined with his professional experience, positions him to engage that problem at the technical level it actually requires. The IIT Kharagpur program, consistently ranked among Asia’s most rigorous engineering graduate programs, grounded his work in the systems-level thinking that GPU architecture demands.

What Comes After the Compatibility Layer

The near-term objective for Oxmiq Labs is clear: establish a reliable, performant compatibility layer that makes non-NVIDIA GPU hardware accessible to the existing CUDA-based AI developer ecosystem. If that objective is achieved, the second-order effects are significant.

Cloud providers gain leverage in GPU procurement negotiations. Enterprises gain flexibility in AI infrastructure design. Chip manufacturers who have invested in non-NVIDIA GPU silicon gain a software path to developer adoption. And the AI compute market — which has operated under a degree of concentration unusual even by the standards of technology infrastructure — gains a structural mechanism for more competitive development.

None of that is guaranteed. Early-stage companies with ambitious technical goals face execution risk independent of the quality of the thesis. But the thesis itself — that CUDA compatibility, built on open RISC-V architecture, is the missing enabler for AI infrastructure diversification — is the product of a career spent at the intersection of the exact technologies involved.

Raja Koduri did not arrive at Oxmiq Labs through a pivot, an opportunistic entry, or a surface-level reading of market dynamics. He arrived there having built the hardware that demonstrated the problem, at the organizations where that problem mattered most.


About Raja Koduri

Raja Koduri is an Indian-American computer engineer, technology executive, and founder with more than two decades of experience in GPU architecture and computing platform development. He holds a bachelor’s degree in electronics and communications from Andhra University and a Master of Technology from the Indian Institute of Technology (IIT) Kharagpur. Koduri has held senior roles at ATI Technologies, Advanced Micro Devices (AMD), Apple, and Intel, where he served as Chief Architect and Executive Vice President of the Architecture, Graphics and Software division. In 2023, he founded Oxmiq Labs Inc., a San Francisco-based GPU software and IP startup focused on enabling CUDA workloads to run on non-NVIDIA hardware through RISC-V-based designs and open software frameworks. He also serves in advisory and board capacities for leading semiconductor and AI companies.

TIME BUSINESS NEWS
