Bayes Group

The ML Research Problem: Why Hedge Funds Keep Losing Machine Learning Talent

Systematic funds are losing ML researchers to big tech within 18 to 24 months of hiring them — and most are funding the very firms they lose talent to in the process. The structural reasons, and what the funds that retain ML talent do differently.

The Revolving Door

The pattern is consistent enough now that it deserves to be named. A systematic fund hires a strong machine learning researcher — typically from academia or an elite technology firm — with a compensation package that represents a genuine premium to their previous role. Within eighteen to twenty-four months, that researcher has returned to big tech, or joined a startup, or moved to a fund with a different profile. The hiring fund loses institutional knowledge it couldn't retain, and is left with a gap in the team and a search process starting from scratch.

This is not a coincidence of individual decisions. It is a structural outcome driven by a genuine misalignment between what systematic funds are offering and what strong ML researchers need in order to stay engaged.

The Misalignment

The best machine learning researchers — the specialists who are genuinely advancing the state of applied ML rather than applying established techniques — are motivated by a combination of intellectual challenge, professional recognition, and the sense that their work matters.

Hedge funds can provide intellectual challenge. The problems in systematic trading — regime detection, non-stationary time series, signal combination under correlation constraints, market microstructure modelling — are genuinely hard, and talented researchers often find them engaging, at least initially.

What hedge funds structurally cannot provide is professional recognition. Research at a systematic fund is confidential. A researcher who spends three years producing genuinely novel ML approaches to financial prediction cannot publish those approaches, cannot present them at conferences, and cannot build the kind of academic and professional reputation that compounds into a career. Meanwhile, a researcher at Google DeepMind or Anthropic is accumulating exactly that kind of compounding reputation, often at comparable or higher total compensation.

The second mismatch is data infrastructure. Elite ML researchers are accustomed to working with extraordinary compute resources, carefully curated large-scale datasets, and research infrastructure built by world-class engineers. Systematic funds, even the largest and most sophisticated, cannot match the scale of infrastructure that a major technology firm provides. Researchers who join with only vague expectations about the fund's data environment frequently find the gap between expectation and reality professionally frustrating.

What the Funds That Retain ML Talent Do Differently

A small number of systematic funds have materially better retention of senior ML researchers than their peers. The common threads are specific and worth examining.

They hire differently. The ML researchers who thrive in systematic finance are not necessarily the most academically prestigious candidates; they are researchers who are explicitly motivated by applied prediction under uncertainty in competitive environments. Funds that hire on the basis of publication record and academic prestige tend to hire researchers who will miss that prestige environment. Funds that instead hire on the basis of demonstrated motivation for the financial prediction problem itself (candidates who have thought carefully about why this problem space is interesting and have a clear answer) retain researchers far more successfully.

They create protected research time. Several funds with strong retention records have formalised a research time allocation — typically 20–30% of a researcher's working time — for work on problems that are adjacent to but not directly part of the fund's proprietary alpha research. This can include publishable academic work in areas that don't directly reveal alpha methodology, open-source infrastructure contributions, or fundamental research with a longer horizon than typical strategy development cycles. This is a meaningful retention investment, not a token gesture.

They compete seriously on infrastructure. Funds that have invested in genuinely high-quality data infrastructure, compute access, and research tooling — and that communicate this investment credibly during the hiring process — have a structurally easier time attracting and keeping researchers who care about their environment. The funds that struggle are often those that talk about infrastructure quality without it being operationally real.

The Hiring Implication

The most common tactical error in hiring ML researchers is to treat them as interchangeable with quantitative researchers in the traditional sense. The interview process, the role design, and the career path all need to be designed for the ML researcher profile, which values different things than a traditional statistical researcher does.

Before opening a search for ML talent, it is worth being honest about two questions internally: what does the fund genuinely offer that a major technology company doesn't, and is that offer credible to a sophisticated researcher who has other options? If the honest answer to either question is uncertain, that uncertainty will show in the hiring process — and researchers who have options will find it.


Bayes Group places machine learning researchers and systematic trading specialists at funds and institutional asset managers globally. If you are building ML research capability and want a candid conversation about the market, reach out.
