
Methodology

How we compute the number you see.

TL;DR

Every month, each teammate answers 5 questions (90 seconds). Their answers produce a single score from 1 to 10 called the PSI. Team and org PSI are simple averages. Privacy floors prevent small-team de-anonymization. Reports are deterministic and honest, even when the numbers are bad.

The Parallax Strengths Pulse Index (PSI) is the single number a manager, a teammate, or a CFO ever sees on a Parallax dashboard. This page documents every rule that goes into it: the questions, the weights, the formula, the rollup math, the privacy floors, and the things we will never do to it. Competitors are welcome to copy the rules. They cannot copy the fact that we published them.

Last updated 2026-04-11. Question set: v1.

01 · The instrument

What the Pulse measures.

The Parallax Pulse is a 90-second monthly instrument. It has five questions. It is answered by the teammate, not by the manager. It is designed to measure one specific thing: how often, over the last month, the work this person did actually used what they are naturally good at. Not engagement. Not satisfaction. Not NPS. Strengths utilization over time.

The Pulse is teammate-owned by default. An individual response never leaves Parallax in an org-level rollup unless the teammate explicitly opts in with the shareAsVoice flag. A teammate's own manager cannot see individual responses unless that teammate opts in with shareWithManager. Both flags are per-cycle, per-teammate, and default to off. If neither is set, the response contributes only to a privacy-floored aggregate and nothing else.
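As a minimal sketch of the routing rule above — the flag names `shareAsVoice` and `shareWithManager` come from the text, but the Python names and helper functions are illustrative, not the product's actual API:

```python
from dataclasses import dataclass

@dataclass
class PulseResponse:
    # Both flags are per-cycle, per-teammate, and default to off.
    share_as_voice: bool = False      # shareAsVoice: opt in to org-level rollup
    share_with_manager: bool = False  # shareWithManager: opt in to manager visibility

def visible_to_manager(r: PulseResponse) -> bool:
    return r.share_with_manager

def included_as_org_voice(r: PulseResponse) -> bool:
    return r.share_as_voice

# With neither flag set, the response feeds only the privacy-floored aggregate.
r = PulseResponse()
assert not visible_to_manager(r) and not included_as_org_voice(r)
```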

Parallax detects recurring themes across pulse answers when 3 or more teammates describe similar experiences. Individual answers are never shown. Only the theme and the count surface.
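The three-teammate floor on themes can be sketched as follows — `surface_themes` and the input shape are hypothetical names chosen for illustration, assuming answers have already been tagged with theme labels:

```python
from collections import Counter

def surface_themes(tagged_answers: dict, floor: int = 3) -> dict:
    """Return only {theme: count} pairs that clear the 3-teammate floor.

    `tagged_answers` maps teammate id -> set of theme labels. Individual
    answers never leave this function — only the theme and the count do.
    """
    counts = Counter(theme for themes in tagged_answers.values() for theme in themes)
    return {theme: n for theme, n in counts.items() if n >= floor}

answers = {
    "a": {"unclear priorities"},
    "b": {"unclear priorities"},
    "c": {"unclear priorities", "great onboarding"},
}
# "great onboarding" was mentioned once — it stays below the floor.
assert surface_themes(answers) == {"unclear priorities": 3}
```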

02 · The v1 question set

Five questions. Weights sum to one.

Every cycle copies this set at open time, so changing it in the future does not rewrite the past. Slugs are stable identifiers used in the database and in this document.

  1. Q1 · q1_utilization · likert10 · weight 0.30

    How often did your work this month use what you're naturally good at?

  2. Q2 · q2_visibility · likert10 · weight 0.20

    How often did your manager connect work to what you're good at this month?

  3. Q3 · q3_felt_seen · likert10 · weight 0.25

    I felt seen for what I'm naturally good at this month.

  4. Q4 · q4_held_back · yesno · weight 0.15 · inverts

    There was work I wanted to raise my hand for but held back on.

  5. Q5 · q5_moment · freetext · weight 0.10

    Describe one moment this month when you felt at your best at work.

The same set in a compact table:

slug             type      weight   inverts
────────────────────────────────────────────
q1_utilization   likert10  0.30     no
q2_visibility    likert10  0.20     no
q3_felt_seen     likert10  0.25     no
q4_held_back     yesno     0.15     yes
q5_moment        freetext  0.10     no
────────────────────────────────────────────
sum of weights                      1.00

Weights are validated at cycle-open time and again at cycle-close time. If they ever drift from a sum of exactly 1.00 the cycle refuses to compute a score. No silent rebalancing.
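The invariant can be sketched as stored data plus a check — the slugs and weights come from the table above, while `V1_QUESTIONS` and `validate_weights` are illustrative names, not the shipped schema:

```python
# v1 question set as stored data (slugs, types, weights from the table above).
V1_QUESTIONS = [
    {"slug": "q1_utilization", "type": "likert10", "weight": 0.30, "inverts": False},
    {"slug": "q2_visibility",  "type": "likert10", "weight": 0.20, "inverts": False},
    {"slug": "q3_felt_seen",   "type": "likert10", "weight": 0.25, "inverts": False},
    {"slug": "q4_held_back",   "type": "yesno",    "weight": 0.15, "inverts": True},
    {"slug": "q5_moment",      "type": "freetext", "weight": 0.10, "inverts": False},
]

def validate_weights(questions) -> None:
    # Run at cycle-open and again at cycle-close; refuse to score on drift.
    total = round(sum(q["weight"] for q in questions), 10)
    if total != 1.00:
        raise ValueError(f"weights sum to {total}, not 1.00 — refusing to compute")

validate_weights(V1_QUESTIONS)  # v1 passes; any drift raises instead of rebalancing
```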

03 · The formula

The PSI formula, in full.

Each teammate's PSI is a weighted average of their five responses, normalized to a 1–10 scale and clamped into [1.00, 10.00]. There is no hidden machine learning. There is no smoothing. There is no peer benchmarking adjustment. The pseudo-code below is the whole thing:

for each question q in the cycle:
  raw        = teammate response to q
  normalized = map raw to 1..10 by type:
                 likert10 → raw (1..10)
                 likert5  → raw * 2 (historical backward compat)
                 yesno    → yes=10, no=1
                 freetext → non-empty=10, empty=1
  if q.invertsScore:
    normalized = 11 - normalized   // mirrors across 5.5
  contribution = normalized * q.weight

psi = sum(contribution for every q)
psi = clamp(psi, 1.00, 10.00)

The freetext rule is deliberately presence-only. A short answer and a long answer are worth the same. A grammatically perfect answer and a one-word answer are worth the same. We made this call on purpose: the moment we start scoring the content of the free-text answer we have introduced subjectivity, and this instrument has to be defensible in a room with a skeptical CFO.

Inversion matters. Q4 asks whether the teammate held back on work they wanted to raise their hand for. A “yes” there is bad news for strengths utilization. We mirror yes → 1 across the midpoint to 10 so that every question ends up on the same “higher is better” footing before we weight and sum.

Worked example, computed by hand:

Teammate answers:
  q1 utilization = 8          (likert10)
  q2 visibility  = 6          (likert10)
  q3 felt_seen   = 8          (likert10)
  q4 held_back   = no         (yesno, inverts)
  q5 moment      = "led the planning session"

Normalize:
  q1 → 8
  q2 → 6
  q3 → 8
  q4 → no=1, then inversion 11-1=10  (held_back=no is GOOD)
  q5 → non-empty text → 10

Weight and sum:
   8 * 0.30  =  2.40
   6 * 0.20  =  1.20
   8 * 0.25  =  2.00
  10 * 0.15  =  1.50
  10 * 0.10  =  1.00
  ─────────────────
  total      =  8.10

clamp(8.10, 1.00, 10.00) = 8.10

PSI = 8.10
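The pseudo-code above translates almost line-for-line into runnable Python. This is a sketch reproducing the worked example, not the shipped calculator; `normalize` and `psi` are illustrative names:

```python
def normalize(q: dict, raw) -> float:
    """Map a raw answer onto the 1..10 scale by question type."""
    if q["type"] == "likert10":
        v = raw
    elif q["type"] == "likert5":          # historical backward compat
        v = raw * 2
    elif q["type"] == "yesno":
        v = 10 if raw == "yes" else 1
    elif q["type"] == "freetext":         # presence-only: any non-empty text
        v = 10 if raw.strip() else 1
    else:
        raise ValueError(f"unknown question type {q['type']!r}")
    if q["inverts"]:
        v = 11 - v                        # mirror across 5.5
    return v

def psi(questions: list, answers: dict) -> float:
    score = sum(normalize(q, answers[q["slug"]]) * q["weight"] for q in questions)
    return min(max(score, 1.00), 10.00)   # clamp into [1.00, 10.00]

V1 = [
    {"slug": "q1_utilization", "type": "likert10", "weight": 0.30, "inverts": False},
    {"slug": "q2_visibility",  "type": "likert10", "weight": 0.20, "inverts": False},
    {"slug": "q3_felt_seen",   "type": "likert10", "weight": 0.25, "inverts": False},
    {"slug": "q4_held_back",   "type": "yesno",    "weight": 0.15, "inverts": True},
    {"slug": "q5_moment",      "type": "freetext", "weight": 0.10, "inverts": False},
]
answers = {"q1_utilization": 8, "q2_visibility": 6, "q3_felt_seen": 8,
           "q4_held_back": "no", "q5_moment": "led the planning session"}

# Matches the worked example (tolerance for floating-point summation).
assert abs(psi(V1, answers) - 8.10) < 1e-9
```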

04 · Reading the number

What the number actually means.

A PSI on its own is meaningless. Seven point two out of ten is a different reading at the team level than it is at the org level, and a “good” score in one cycle is not automatically a “good” score in the next. These four bands are the plain-English frame the product uses everywhere a PSI number appears — in the org-insights gauge, on the teammate's /me card, and in every tooltip that sits next to the score. One source of truth so the interpretation stays consistent.

The bands are descriptive, not evaluative. They name the pattern the instrument is picking up so a teammate or a manager can ask the next useful question. They do not rank people against each other, and the lowest band is not a deficiency — it is a read on the match between the work and the person doing it.

  1. Emerging · 1.0–6.0

    Your strengths are showing up sometimes but not reliably across the work.

    The instrument is flagging a gap between what you lead with and the shape of your current work. Not a judgement on you — a read on the match.

    Try: Pick one theme in your top 5. Name one task this week where you can use it on purpose. Notice what changes.

  2. Developing · 6.0–8.0

    Your strengths are showing up regularly, with room to use them on purpose more often.

    Most weeks you can point to a moment where your top 5 carried the work. The difference between 7 and 8 is usually intentional practice — doing what you already do well, but deliberately.

    Try: Pick the one theme that felt most alive last week. Name the next moment it fits. Put it on the calendar.

  3. Embedded · 8.0–9.0

    You're using your strengths most of the time. The practice is working.

    Your day-to-day work is consistently drawing on the patterns you lead with. That reliability is the point — keep the practices that make it stick.

    Try: Name the one routine or ritual that made this reliable. Share it with a teammate who is still figuring theirs out.

  4. Sustained excellence · 9.0–10.0

    You're at the top of what this instrument can measure. Sustained practice, not a plateau.

    Holding here cycle after cycle is the realised version of the Parallax promise — you use your strengths on purpose, every day, and it shows. The risk is taking it for granted.

    Try: Keep what's working visible. Write down the two or three practices you have internalised so the pattern stays reproducible when the work changes.

A note on the top band. A teammate who is holding at 9 or above cycle after cycle is not plateaued — they are at the top of what this instrument is built to measure. “Flat” there is the success state of the practice, not a signal that anything is stuck. The product's Pattern Watch engine uses the same 9.0 threshold so the tone in the narrative never drifts from the tone on the gauge.
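The four bands can be sketched as a lookup. The boundaries come from the list above; treating each upper bound as exclusive (except the top of the scale) is an assumption, since the text lists "6.0–8.0"-style ranges without saying which side is inclusive. `band` is an illustrative name:

```python
# Band boundaries from the list above. The 9.0 threshold is the same one
# Pattern Watch uses, so narrative tone never drifts from the gauge.
BANDS = [
    (1.0, 6.0,  "Emerging"),
    (6.0, 8.0,  "Developing"),
    (8.0, 9.0,  "Embedded"),
    (9.0, 10.0, "Sustained excellence"),
]

def band(psi: float) -> str:
    for lo, hi, name in BANDS:
        # Lower bound inclusive; upper bound exclusive except at 10.0 (assumption).
        if lo <= psi < hi or (hi == 10.0 and psi == 10.0):
            return name
    raise ValueError(f"PSI {psi} outside 1.00..10.00")

assert band(8.10) == "Embedded"   # the worked example from section 03
```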

05 · Partial submissions

Four out of five is zero out of five.

A Pulse with fewer than five answers is not scored at all. We do not impute. We do not fill the missing question with a neutral midpoint. We do not fall back to the previous month's number. The teammate either submitted the full instrument this cycle or they did not contribute a data point this cycle, and that absence is visible in the sample size n next to every rollup.

The form prevents a partial submission from ever being sent. The calculator re-checks the same invariant when the cycle closes, so a bug in the form layer can never corrupt the index.
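The invariant both layers check might look like this — `scoreable` and `REQUIRED_SLUGS` are illustrative names for a sketch of the rule, not the product's code:

```python
REQUIRED_SLUGS = {"q1_utilization", "q2_visibility", "q3_felt_seen",
                  "q4_held_back", "q5_moment"}

def scoreable(answers: dict) -> bool:
    # Four out of five is zero out of five: no imputation, no neutral-midpoint
    # fill, no fallback to last month's number. Either all five answers are
    # present or the teammate contributes no data point this cycle.
    return REQUIRED_SLUGS <= answers.keys()

assert not scoreable({"q1_utilization": 8})  # partial → not scored at all
```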

06 · Rollups and privacy floors

Team and org numbers. Unweighted means.

Team PSI is the unweighted arithmetic mean of the individual PSI scores of every teammate on that team this cycle. Org PSI is the unweighted mean of the individual PSI scores of every teammate in the org this cycle. “Unweighted” here is load-bearing: larger teams do not count for more than smaller teams, and a teammate does not get more weight because they happen to be on three teams. Every voice counts once, equally. Any other rule would quietly make certain humans matter less than other humans, and that is not an instrument I am willing to ship.

Team PSI  = mean( person PSI for every teammate on the team )
Org  PSI  = mean( person PSI for every teammate in the org  )

// unweighted arithmetic mean — every teammate counts equally
// PSI range: 1.00 .. 10.00

if teammates_on_team < 3:  show "not enough responses yet"
if teammates_in_org  < 5:  show "not enough responses yet"

The floors are not soft suggestions. Team PSI with a sample size below three and org PSI with a sample size below five are simply not displayed — the dashboard renders “not enough responses yet” instead of the number. A team of two will never see a team PSI on this product. An org of four will never see an org PSI on this product. The “we might as well just show it” temptation is real and we have pre-committed to ignoring it, because the floor is the only thing that makes anonymity a promise instead of a slogan.
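The rollup and both floors together, as a sketch — `rollup` is an illustrative name; the floor values come from the pseudo-code above:

```python
from statistics import mean

TEAM_FLOOR, ORG_FLOOR = 3, 5

def rollup(person_psis: list, floor: int):
    # Unweighted arithmetic mean — every teammate counts once, equally.
    # Below the privacy floor the number is simply never produced.
    if len(person_psis) < floor:
        return "not enough responses yet"
    return round(mean(person_psis), 2)

# A team of two never sees a team PSI; three clears the floor.
assert rollup([8.1, 6.4], TEAM_FLOOR) == "not enough responses yet"
assert rollup([8.1, 6.4, 7.0], TEAM_FLOOR) == 7.17
```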

07 · Baseline and versioning

Your first cycle is a baseline. Forever.

Every org's first cycle is flagged isBaseline: true and can never be overwritten. Every before-vs-after claim in every report Parallax ever generates runs against that baseline. This is why we cannot cherry-pick — the comparison point is locked before we know whether the program is going to work.

Every cycle also copies the question set it opened with. If a future v2 set ever replaces v1, old cycles keep their v1 questions and their v1 weights. The April 2026 PSI for any teammate, team, or org will be reproducible in April 2030 by feeding the same stored responses back through the same stored rules. PSI history is write-once. We are never going to silently re-score the past.
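The copy-at-open rule can be sketched like this — `open_cycle` and the dict shape are illustrative, assuming only what the text states (a per-cycle deep copy of the question set, and an `isBaseline` flag set once):

```python
import copy

def open_cycle(org: str, question_set: list, is_first_cycle: bool) -> dict:
    # Each cycle stores its own deep copy of the question set at open time,
    # so a future v2 can never rewrite how a v1 cycle was scored.
    return {
        "org": org,
        "questions": copy.deepcopy(question_set),
        "isBaseline": is_first_cycle,  # set on the org's first cycle, never overwritten
    }

v1 = [{"slug": "q1_utilization", "weight": 0.30}]
cycle = open_cycle("acme", v1, is_first_cycle=True)
v1[0]["weight"] = 0.99                          # a future edit to the live set...
assert cycle["questions"][0]["weight"] == 0.30  # ...does not rewrite the past
```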

08 · Pre-commitments

What Parallax will never do.

These are the commitments that make the number on the dashboard worth trusting. They are hardcoded into the product, not footnotes in a contract.

We will never

Silently rewrite a past score.

Historical psi_snapshots are write-once. If we ever discover a calculator bug we will publish a correction with the date and the math, not a quiet re-run.

We will never

Judge the content of your words.

Free-text answers are scored on presence only. Nothing you write is read by a grader, scored for sentiment, or mined for quality. Write one word or three paragraphs — the weight is identical.

We will never

Aggregate below the privacy floor.

Not even to make a dashboard look more alive. A team of two never gets a team PSI. An org of four never gets an org PSI. There is no admin override for this.

We will never

Claim causation where we have correlation.

"Pulse scores moved up after the workshop" is a correlation, and we will say so. We will never put the word "because" into a report template unless the data actually supports it.

We will never

Let an LLM write the quarterly report.

The automated quarterly and annual reports are generated by deterministic rules applied to the snapshots. The ruleset is published. The narrative templates are published. Generative text does not write the customer-facing number.

We will never

Soften bad news.

If the program is flat, the report says flat. If the program is declining, the report says declining, at the top of the page, in plain language, even when the news is bad for Parallax.

09 · Naming

PSI is not the Q12.

The Parallax Strengths Pulse Index is a Parallax-native instrument. It is not the Gallup Q12, it is not derived from the Gallup Q12, and it does not claim any compatibility with it. The Q12 is a proprietary engagement instrument owned by Gallup, and it measures a different thing than PSI does. Engagement asks, roughly, “is this a place you want to keep working?” PSI asks, roughly, “did the work you did this month actually use what you are naturally good at?”

I say this with real respect for the work Gallup does — I am a Gallup-Certified Strengths Coach and I use their instruments every week. PSI is not competing with the Q12. It is measuring something Q12 is not designed to measure: strengths utilization over time, at a monthly cadence, owned by the teammate.

Authored by

Mark Nwokedi

Gallup-Certified Strengths Coach · Founder, Parallax

This methodology is written by a person, not a committee, and it will stay that way. If anything on this page is ever unclear, wrong, or differs from what the product actually does, email me directly at mark@parallaxmodel.com and I will fix it.
