Part 1

What is user research, actually?

A practitioner translation of UXR for an audience that already knows research. Same craft, different rhythm.

~1,500 words · reading time scales with you

The job in one paragraph

User research, or UXR, is the discipline of generating evidence that helps a product team choose what to build, how to build it, and how to know whether it worked. The unit of analysis is a person doing a task with a product. The deliverable is usually a short memo, a workshop, or a number on a dashboard. The audience is rarely other researchers; it is product managers, designers, engineers, and executives who have to make a decision this quarter. UXR is real research — sampling, instruments, validity, and ethics all apply — but it is research bent toward a decision, on a cadence measured in weeks.

Criminology asks why people behave; UXR asks why people clicked. The methods are cousins; the timelines are not.
— The translation, in one line

Criminology ↔ UXR: a concept table

Almost every method in a criminologist's toolkit has a UXR analog. The shape of the work is recognizable; the constraints around it are different. The table below is the fastest orientation we can give you. Each row is expanded in the prose that follows, and the full taxonomy of methods lives in Part 2.

Criminology concept ↔ UXR equivalent: what changes

Ethnography ↔ Contextual inquiry / field study: days in the field, not months; photos and quotes, not field notebooks.
Focus group ↔ Usability session or co-design workshop: task-centered, not opinion-centered; stimuli (mocks, prototypes) in the room.
Semi-structured interview ↔ Generative user interview: five to eight participants, tighter guide, synthesis in a week.
IRB protocol ↔ Consent + privacy + legal review: lighter touch, participant-led; the gate is privacy/legal, not an IRB.
Grounded theory ↔ Affinity diagramming / thematic synthesis: same coding discipline, hours instead of months, often two analysts.
Program evaluation ↔ Impact measurement / product evaluation: outcome metric is a product KPI; effect is reported with caveats, not p-values.
Survey research ↔ Quant UXR / large-N survey: shorter, behavior-focused, often paired with product logs.
Systematic review ↔ Desk research / secondary research: narrower scope, includes industry sources, one-pager output.
Ride-along / observation ↔ Diary study / shadow-and-debrief: one or two weeks, structured artifacts (photos, voice memos), debrief at the end.

Two patterns are worth naming. First, every UXR method trades depth for speed compared with its academic cousin. Second, almost every UXR method is paired with a product artifact — a prototype to react to, a log to triangulate against, a metric to move. The pairing is what turns research into a decision rather than a paper.

The IRB row in the table deserves a longer note because it is the one place where the translation surprises people. Industry has no IRB. It does have privacy review, legal review, and a security review for anything that touches sensitive data, and the bar at a serious company is higher than many academics expect. What it does not have is a months-long pre-registration cycle. A practitioner writes a short consent script, a data-handling plan, and a participant-facing summary; runs them past privacy and legal counsel; and is usually ready to recruit within a week. The ethical substance is the same — informed consent, data minimization, the right to withdraw, fair compensation — but the apparatus is sized to the cadence of the work.

A short glossary

Eight terms you will hear in the first week of any UXR job. None of them are technical in the way criminology can be technical, but each one is doing real work in conversation.

Three week-in-the-life vignettes

The same job title can mean very different days depending on where it sits. Three sketches, drawn from common industry shapes.

The startup researcher

A team of forty. One researcher, who is also the participant recruiter, the consent author, and the analytics partner. Monday is a kickoff with a PM who wants to know whether a new onboarding flow will work for first-time users. By Wednesday the researcher has run five remote interviews with participants recruited through a panel vendor. Thursday is synthesis with a designer over a shared Miro board. Friday is a twenty-minute readout to engineering, ending with three concrete changes. The cycle repeats. The work is broad, the rigor is real but compressed, and the reward is that almost every study changes something visible in the product the following sprint.

The FAANG researcher

A team of forty researchers, embedded across a dozen product surfaces. The week looks less like fieldwork and more like a small academic department. Monday is a research review with peers; Tuesday is a working session with a design partner on an experiment plan; Wednesday is interviews; Thursday is alignment with a quant researcher on a survey instrument; Friday is a strategic readout to a director. Studies are larger (twelve to twenty participants is normal), instruments are more carefully built, and the path from insight to shipped change is longer and more political. Rigor is higher, cadence is slower, leverage per study is enormous.

The civic-tech researcher

A team of fifteen at a non-profit or government-adjacent organization building services for the public. Users include people in difficult circumstances — applying for benefits, navigating a court system, contacting a parole officer. The methods are the closest to criminology: ride-alongs, intercept interviews, partnerships with frontline staff. The tradeoffs are different too. Recruitment is harder, consent is heavier, and the decision-makers are often not in the room. The week may include a stakeholder workshop with caseworkers, a usability test with a screen reader user, and a budget conversation about whether to compensate participants in cash or gift cards. The work feels the most like applied social science of the three.

Generative, evaluative, strategic — with criminology examples

Most UXR teams sort their work into three modes. The labels are imperfect, but the distinctions matter because each mode rewards different criminology muscles.

Generative

You are designing an app for people on probation to check in with their officer. Before anyone sketches a screen, a generative study answers: what does check-in already look like, where does it break down, what does it cost a person to comply? Methods: ten semi-structured interviews, two ride-alongs, a synthesis. A criminologist's interview and field skills transfer almost without translation; what changes is the time budget and the deliverable (a one-page opportunity map, not a journal article).

Evaluative

The team has a working prototype of the check-in app. An evaluative study answers: can a person actually use it? Methods: eight moderated usability sessions, a short survey of perceived burden, a benchmark task-success rate. The closest criminology analog is outcome evaluation of a specific intervention — narrow, instrumented, and tightly tied to a yes-no decision.
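That benchmark task-success rate is just a proportion with an honest interval around it. A minimal sketch, assuming a hypothetical result of six successes out of eight sessions and using the Wilson score interval, a common choice at the small sample sizes typical of usability benchmarks:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion.
    Better behaved than the normal approximation when n is small."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - margin), min(1.0, center + margin))

# Hypothetical benchmark: 6 of 8 participants completed the check-in task.
low, high = wilson_interval(6, 8)
print(f"task success: 6/8 = 75%, 95% CI [{low:.0%}, {high:.0%}]")
```

The interval is wide, roughly 41% to 93%, which is exactly the point: reporting it forces the team to say out loud how much the eight sessions can and cannot support.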

Strategic

Six months in, leadership asks whether the team should also build for parole, for pretrial release, or stay focused. A strategic study answers: what is the shape of the market, who else serves these users, where is the leverage? Methods: a desk-research review, ten expert interviews, a segmentation. This is policy-brief work in a different register, and it is where senior UXR practitioners earn their seniority.

The methods are the easy part. The harder part is choosing the smallest study that will move the next decision.

What stays the same: rigor

A common worry from criminology PhDs considering the pivot is that UXR will require them to abandon rigor. The honest answer is that rigor still matters, but it shows up differently. Sample sizes are smaller, but sampling logic is still defended out loud: a five-person study with the wrong five people answers nothing, and a good UXR practitioner will say so. Coding is faster, but inter-coder agreement is still negotiated when stakes are high. Effects are smaller and reported with confidence intervals, not asterisks, and a serious team will reject a claim that cannot survive a second look. What is gone is the journal-article apparatus — the lit review, the hedging, the discussion section. What remains is the discipline of taking uncertainty seriously and writing it down in a way the next decision-maker can use.
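When inter-coder agreement does get negotiated, the number on the table is often Cohen's kappa, the same chance-corrected statistic used in academic coding work. A minimal sketch with invented data: two hypothetical analysts each assign one code per interview excerpt, and the code labels are illustrative, not from any real codebook:

```python
from collections import Counter

def cohens_kappa(codes_a: list[str], codes_b: list[str]) -> float:
    """Cohen's kappa: agreement between two coders, corrected for the
    agreement expected by chance given each coder's code frequencies."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical: two analysts coded the same ten excerpts.
a = ["burden", "trust", "burden", "access", "trust",
     "burden", "access", "burden", "trust", "burden"]
b = ["burden", "trust", "access", "access", "trust",
     "burden", "access", "burden", "burden", "burden"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```

In practice nobody blocks a sprint on a kappa threshold; the value of running the number is that disagreements surface as codebook conversations before the readout, not after.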