(Most) Equity Benchmarks Are Lying to You

Pave Data Lab
May 30, 2023
Katie Rovelstad

The equity benchmarks we have been using to guide candidate and employee equity are flawed: they omit key details about the equity practices that give equity grants their meaning.

There are three primary issues with equity benchmarking:

  1. They lack information about equity practices
  2. They lack context on the type of equity/grant
  3. They lack standardization around how the equity value is calculated & understood

We dive deep into these issues and how to solve them in our most recent guide “(Most) Equity Benchmarks Are Lying to You,” and cover them at a high level below.

Information about equity practices

The structure of equity grants – equity vehicle, vesting duration, vesting interval, vesting structure – influences the value of that grant. And yet, equity benchmarks never include these details. 

Let’s compare two employees at two companies that would fall into the same “benchmark” in most equity benchmarks. They both have these basic benchmarking stats:

  • Company Valuations: $1.5B
  • Intended Grant Value: $100,000
  • Number of Shares: 11,310
  • % Ownership: 0.005%
  • Equity Type: RSUs
  • Vesting Start Date: Jan 2020

However, they differ in their equity practices.

At 2.5 years (the average startup employee tenure), these employees would have vested drastically different amounts:

  • Company A Employee: 7,975 shares
  • Company B Employee: 5,655 shares

This means that at 2.5 years, the employee at Company A has vested 41% more than the employee at Company B – but the equity benchmark would call these two employees’ equity compensation functionally the same!
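As a sketch of how vesting practices change the vested amount, here is a linear monthly schedule with an up-front cliff. The schedules below (4-year vs. 5-year vesting with a 1-year cliff) are hypothetical illustrations, not the actual practices behind the figures above:

```python
from math import floor

def vested_shares(total_shares, months_elapsed, vesting_months, cliff_months=12):
    """Shares vested under a linear monthly schedule with an up-front cliff."""
    if months_elapsed < cliff_months:
        return 0  # nothing vests before the cliff
    vested_fraction = min(months_elapsed, vesting_months) / vesting_months
    return floor(total_shares * vested_fraction)

GRANT = 11_310  # shares, from the example above
TENURE = 30     # months: 2.5 years, the average startup employee tenure

print(vested_shares(GRANT, TENURE, 48))  # 4-year schedule → 7068
print(vested_shares(GRANT, TENURE, 60))  # 5-year schedule → 5655
```

Even with identical grants, the choice of vesting duration alone moves the vested amount by more than a thousand shares at the 2.5-year mark.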

Context on the type of equity/grant

There are several common reasons employees receive equity, including:

  • at hire
  • at promotion
  • for performance
  • at a certain tenure interval

The amount of equity varies wildly depending on the reason, and yet most equity benchmarking providers offer only a single “Total Equity” value.

A rare few will distinguish between “new hire grant” and “refresh grant”, and virtually no benchmarking providers offer equity benchmarks for the different reasons why employees are granted equity, or benchmarks into the most common practices used by companies to calculate these different types of equity. 
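To see why pooling grant types matters, here is a minimal sketch comparing a pooled “Total Equity” benchmark to per-reason benchmarks. The grant values are made up for illustration:

```python
from statistics import median

# Hypothetical grants as (reason, dollar value) pairs — illustrative only.
grants = [
    ("new_hire", 100_000), ("new_hire", 120_000), ("new_hire", 90_000),
    ("refresh", 25_000), ("refresh", 30_000),
    ("promotion", 50_000),
]

# A pooled "Total Equity" benchmark mixes every reason together...
pooled = median(value for _, value in grants)

# ...while a per-reason benchmark keeps them separate.
by_reason = {}
for reason, value in grants:
    by_reason.setdefault(reason, []).append(value)
per_reason = {reason: median(values) for reason, values in by_reason.items()}

print(pooled)      # 70000.0 — matches none of the actual grant types
print(per_reason)  # {'new_hire': 100000, 'refresh': 27500.0, 'promotion': 50000}
```

The pooled median sits nowhere near the typical new-hire grant or the typical refresh grant, which is exactly the distortion a “Total Equity” benchmark bakes in.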

Standardization around the value of equity

When you pull an equity benchmark in dollar terms, are you looking at the value at time of grant, at time of vest, or today? Is it the gross equity value, or percent ownership? Is it over the vesting lifetime, or for 1 year of vesting?

Grant values differ wildly across time depending on a company’s performance and the macroeconomic environment; without an “atomic unit” by which to measure equity value to an employee, it is impossible to compare equity packages. That means that when you look at equity benchmarks, you are never comparing apples to apples.
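A quick sketch of why the valuation date matters: the same share count produces very different dollar “benchmarks” depending on which price you value it at. Both prices below are hypothetical:

```python
def grant_value(shares, share_price):
    """Dollar value of a grant at a given per-share price."""
    return shares * share_price

SHARES = 11_310          # same grant as the earlier example
PRICE_AT_GRANT = 8.84    # hypothetical price implying roughly $100k at grant
PRICE_TODAY = 5.30       # hypothetical post-downturn price

print(round(grant_value(SHARES, PRICE_AT_GRANT), 2))  # ≈ $100,000 at grant
print(round(grant_value(SHARES, PRICE_TODAY), 2))     # ≈ $60,000 "today"
```

Two benchmarks built from the same underlying grant can disagree by 40% simply because one provider values grants at grant date and another at the current price.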

By omitting equity practices and aggregating equity types, equity benchmarking providers aren’t sharing the full truth about equity benchmarks. These omissions can lead to dramatically different equity outcomes and benchmarks. 

If you're interested in learning more about how equity benchmarks are lying to you, or what you can do about it (both in the short term and the long term), check out our guide “(Most) Equity Benchmarks Are Lying to You” where we go deeper into these problems, and share a glimpse into the most common equity practices behind equity benchmarks.

Katie Rovelstad
Operations Leader
Katie is an operations leader at Pave. Prior to joining Pave, Katie held various roles at Segment.

