Skip to content

Why I Am Building Glide

Published: at 04:33 AM
0 views

I’ve sat across the table from thousands of candidates. Aerospace engineers. Pharma researchers. Software developers three months into a search that was supposed to take two weeks. I’ve watched people who are clearly excellent at what they do stumble through a process that was never designed to help them succeed.

Recruiting gave me a front-row seat to one of the strangest inefficiencies in modern professional life. The gap between how good someone is and how well they navigate the system that’s supposed to connect them to work.

That gap isn’t small. It’s structural. And between 2024 and 2026, it’s gotten measurably worse. Applications per role have nearly tripled since 2017. Success rates hover around 0.4% per application. Over half of candidates report being ghosted. Analysts now describe the conditions facing white-collar professionals as a “white-collar recession” despite aggregate economic resilience.

Table of contents

Open Table of contents

The system nobody designed

Job searching in 2026 looks almost identical to job searching in 2010. The interfaces are shinier. The platforms are faster. But the underlying model hasn’t changed. You write a resume. You apply to a listing. You wait. You hear nothing. You apply again. You tailor another cover letter. You wait again.

The scale of this dysfunction is now quantifiable. Greenhouse platform data shows the average job opening receives 242 applications, nearly triple the volume from 2017 when unemployment was at a comparable level. That translates to an approximate success probability of 0.4% per application. Workday’s 2024 Global Workforce Report found that job applications grew four times faster than new requisitions, creating what they bluntly called “an employer’s market flooded with job applications.” The ratio of applications per recruiter has reached roughly 500:1, about four times higher than four years earlier.

Somewhere in the middle of every one of those applications, an Applicant Tracking System scans the resume for keywords. Over 39% of Fortune 500 companies use Workday as their primary ATS. SAP SuccessFactors holds another 13%. These systems were built to serve the employer, not the candidate. If the keywords don’t match, you’re gone before a human ever sees your name. HiringThing’s 2025 analysis estimates that candidates submit between 32 and 200+ applications before securing a single offer, with only 0.1–2% of cold online applications converting.

Meanwhile, referral candidates consistently convert at dramatically higher rates despite representing a minority of total applicants. The jobs that most candidates never see, filled through networks, internal transfers, and conversations that happen before a role ever reaches a job board, represent a structural feature of how hiring actually works.

So the system that most job seekers rely on (public listings, keyword-optimized resumes, mass applications) is already operating at 0.4% efficiency, filtered through software that penalizes perfectly qualified people for trivial reasons, in a market where applications are growing four times faster than openings.

That’s not a broken feature. That’s a broken architecture.

What I kept seeing from the inside

As a recruiter, I watched patterns repeat across industries, seniority levels, and geographies. The data that’s emerged since confirms every one of them.

Talented people applying to the wrong jobs. Not because they lacked judgment, but because they lacked information. They didn’t know which companies were actually growing. They didn’t know which roles matched their trajectory. They didn’t know the difference between a company that says “we value growth” and one that actually promotes from within. Meanwhile, 59% of employers have raised experience requirements, further tightening access to white-collar roles for career-switchers and junior talent.

Candidates being ghosted, misled, and mistreated. Greenhouse’s 2024 Candidate Experience Report found that 52–61% of candidates had been ghosted by employers during the interview process. Over half (54%) reported encountering discriminatory interview questions, a 20-percentage-point increase from the previous year. The most common inappropriate topics: age, race, and gender. And 53% of candidates said the responsibilities described in job postings differed significantly from what they found once they started. Another 53% reported receiving heavy praise during hiring only to be lowballed on salary and title in the offer. These bait-and-switch dynamics feed a cycle of distrust that damages both sides.

Candidates burning out before they got started. Average time-to-hire for white-collar roles has stretched past 42 days. During searches that routinely last five to six months, candidates send 32 to 200+ applications, navigate multi-stage interview processes, and receive almost no feedback. Greenhouse found that about 40% of white-collar job seekers in 2024 did not receive any interviews over extended search periods. Historically underrepresented groups were 67% more likely to be ghosted than white candidates in some markets.

Employers struggling just as much. On the other side, Greenhouse data shows a 500:1 applications-to-recruiter ratio. Lean talent acquisition teams are outpaced by volume, leading to shortcuts: keyword filters, heavy reliance on referrals, and minimal candidate communication. Skills shortages in AI, cybersecurity, and data science coexist with surpluses in generic corporate roles. The result is that 20% of candidates reject offers due to a poor interview experience, and high early-tenure turnover erodes the ROI of lengthy hiring processes.

Preparation happening in a vacuum. Candidates would spend hours researching a company before an interview, piecing together information from Glassdoor reviews, LinkedIn posts, news articles, and guesswork. No single place to understand a company’s stability, culture trajectory, department growth, or how employees actually feel about working there.

I kept thinking: this is solvable. Not with another job board. Not with a better search filter. With a fundamentally different approach to how a professional finds, evaluates, and pursues work.

Why another job board is not the answer

The instinct when you see a problem like this is to build a better version of what already exists. Better listings. Better filters. Better recommendations.

But that’s optimizing the wrong thing.

The infrastructure already exists at massive scale. Indeed attracts over 350 million unique visitors per month across 60+ countries. LinkedIn has 830 million professionals, with 40 million searching for jobs every week. ZipRecruiter, StepStone, Seek, Naukri, and dozens of regional platforms serve every geography and industry. These platforms integrate with ATS systems like Workday, Greenhouse, and SuccessFactors that power the back end of nearly every large employer’s hiring process.

And yet the outcomes are getting worse, not better. Critics argue that major platforms like LinkedIn and Indeed don’t fully exploit their rich behavioral data to improve matching, partly because high turnover and repeated applications are lucrative for their business models. AI-assisted application tools now make it trivially easy to auto-fill forms, generate tailored resumes, and mass-apply to dozens of roles, which inflates application counts and deepens the signal-to-noise problem. Greenhouse research indicates that 28% of job seekers use AI to mass apply rather than focus on targeted opportunities. Reports of “ghost jobs,” postings kept open with little or no intent to hire, create additional frustration.

The problem isn’t that job boards are bad at listing jobs. They’re fine at that. The problem is that listing jobs is only a tiny fraction of what a job seeker actually needs.

A person in a career transition needs to understand which roles fit their skill profile. They need to know which companies are stable and which are contracting. They need a resume that speaks the language of each specific job description. They need interview preparation that’s grounded in real company data, not generic advice. They need to track where they are in the process, what’s working, what’s not, and where to invest their limited energy.

They need a system that works with them, not a billboard they stare at.

That’s the difference between a job board and a career platform. A job board is a marketplace. A career platform is an operating system for your professional life during the most stressful transition you’ll face.

So I started building

Glide started from a simple premise. What if everything a job seeker needs existed in one place, and what if intelligence, real contextual AI, connected all of it?

Not AI as a gimmick. Not a chatbot stapled onto a job board. AI as the connective tissue between every stage of the career journey.

Here’s what that looks like, and how it works under the hood.

The architecture

Glide’s AI layer is not a single monolithic model. It’s a distributed service architecture sitting between the application logic and multiple inference backends. Each feature is served by the most appropriate inference pipeline. Large language model calls for generation. Rule-based scoring engines for deterministic calculations. Retrieval-augmented research pipelines for grounding. Lexicon-driven analysis for speed-critical classifications.

flowchart LR
    A["Deterministic (0.1)"] --> A1[Resume Parsing]
    A --> A2[Skill Verification]
    A --> A3[Scoring]
    B["Balanced (0.4)"] --> B1[Interview Prep]
    B --> B2[Resume Tailoring]
    B --> B3[Career Pathways]
    C["Creative (0.7)"] --> C1[Company Assessments]
    C --> C2[Career Advice]
    D["Conversational (0.8)"] --> D1[Career Chat]
    D --> D2[Mock Interviews]

The design principles that hold the whole thing together:

Task-specific model routing. Every AI feature routes to a model configuration optimized for its specific requirements. High-creativity tasks (career advice, company assessments) use higher temperature settings and broader sampling. Precision tasks (resume parsing, skill verification) use near-deterministic configurations with constrained output schemas.

Structured output enforcement. All generative calls that produce data for downstream processing enforce JSON schema constraints at the inference layer. This eliminates parsing ambiguity and ensures type-safe integration with the application database. No “the model sometimes returns markdown instead of JSON” problems.

Multi-stage validation. Generated outputs pass through validation layers before persistence. Schema validation, business rule checks, deduplication, normalization, and in some pipelines, a secondary model call acting as a reviewer.

Retrieval-augmented context. Several features (company assessment, interview preparation, salary insights) perform real-time web research before generation. Research results are injected into the prompt context window, grounding the model’s output in current factual information rather than relying solely on parametric knowledge.

Tiered caching. AI outputs are cached at multiple levels with content-hash-based invalidation. Cache lifetimes are calibrated per feature based on how quickly the underlying data changes.

The platform maintains four inference configurations, each tuned for its task. Deterministic mode runs with minimal randomness for parsing, extraction, and scoring. Balanced mode adds enough variability for structured generation. Creative mode opens up for personalized advice and company assessments. Conversational mode runs warmest for chat and mock interviews.

This isn’t a minor configuration detail. It’s what makes resume parsing feel reliable while career advice feels human. The same model, tuned differently for each task.

The AI/ML model: under the hood

Glide’s intelligence layer is built on top of large language models, but calling it “an LLM wrapper” would be like calling a Formula 1 car “a thing with an engine.” The model is only one component. The real system is the orchestration, the constraints, the validation, and the feedback loops surrounding it.

Model selection and specifications

Glide currently routes inference across multiple model tiers depending on task requirements:

Task CategoryModel TierContext WindowTemperatureAvg LatencyCost per 1K calls
Resume Parsing & ExtractionFrontier128K tokens0.1~3sHigh
Job Scoring & Skill VerificationMid-tier128K tokens0.3~1.5sMedium
Resume Tailoring & Cover LettersFrontier128K tokens0.4~4sHigh
Interview Prep & Mock InterviewsFrontier128K tokens0.7–0.8~3.5sHigh
Career Chat & AdvisoryFrontier128K tokens0.8~2sHigh
Batch Pre-screeningLightweight32K tokens0.1~0.3sLow
Profile SuggestionsMid-tier128K tokens0.4~2sMedium
pie title Distribution of Inference Calls by Model Tier (%)
    "Frontier (GPT-4 class) — 30%" : 30
    "Mid-tier (GPT-4o-mini) — 25%" : 25
    "Lightweight (classifiers) — 45%" : 45

How we train the model

General-purpose LLMs are impressive. They’re also mediocre at career-domain tasks out of the box. A frontier model can write a decent cover letter on the first try. But ask it to extract structured work experience from a resume with non-standard formatting, or to evaluate whether a candidate’s three years of “data analysis” in a pharma context qualifies them for a “data engineering” role in fintech, and it starts making mistakes that a recruiter would catch in seconds.

Prompt engineering gets you 80% of the way. The last 20% requires training the weights.

Glide fine-tunes task-specific model variants using supervised learning on curated, domain-specific datasets. Not one monolithic fine-tune. Multiple specialized adaptations, each targeting a narrow task where we’ve measured a concrete quality gap between prompted base models and what the product requires.

Where we fine-tune and why:

The training pipeline:

flowchart TB
    A[Production Inference Logs] --> B[Candidate Selection]
    B --> C[Human Annotation]
    C --> D[Quality Assurance]
    D --> E[Dataset Assembly]
    E --> F[Fine-tuning Run]
    F --> G[Evaluation Suite]
    G -- Pass --> H[Shadow Deployment]
    G -- Fail --> I[Error Analysis]
    I --> C
    H --> J[A/B Test vs Production]
    J -- Win --> K[Promote to Production]
    J -- Lose --> I

Data collection. Every inference call in production is logged with its input, output, and downstream signals. Did the user accept the extracted resume data or manually correct it? Did the user advance a recommended job to the next pipeline stage or archive it immediately? Did the tailored resume lead to an interview callback? These implicit feedback signals are the raw material for training data.

We don’t use raw production logs directly. A dedicated annotation pipeline selects candidates from the logs: cases where the model’s output diverged from the user’s subsequent action (corrections, rejections, overrides). These represent the model’s actual failure modes in the real distribution, not synthetic edge cases someone imagined in a lab.

Annotation. Human annotators with recruiting domain expertise label each selected example. For resume extraction, annotators verify every extracted field against the original document. For skill classification, annotators assess proficiency based on the full work history context. For relevance scoring, annotators evaluate candidate-job fit on a 4-point scale with written justifications.

Inter-annotator agreement is measured on every batch. If two annotators disagree on more than 15% of labels in a batch, the disagreements go to a senior reviewer for adjudication, and the annotation guidelines get refined. This is tedious. It’s also the difference between a fine-tuned model that actually improves and one that learns from noisy labels and performs worse than the base model.

Training configuration. We fine-tune using LoRA (Low-Rank Adaptation) rather than full-weight updates. LoRA trains a small number of low-rank matrices that modify the attention layers of the base model while keeping the original weights frozen. This has three practical benefits:

Hyperparameters we’ve converged on after extensive experimentation: learning rate 1e-4 with cosine decay, LoRA rank 16 for extraction tasks and rank 32 for classification tasks, batch size 8 with gradient accumulation to an effective batch size of 32, 3-5 epochs with early stopping on validation loss. These aren’t magic numbers. They’re the result of dozens of training runs with systematic ablation.

Evaluation. Every fine-tuned model is evaluated against the same test set that the base model was evaluated against before training began. We track:

Deployment. New fine-tuned models enter production through a staged rollout. First, a shadow deployment where the new model runs in parallel with the production model on live traffic, but its outputs are logged without being shown to users. We compare outputs side by side. If the new model’s outputs are strictly better on a random sample of 500+ comparisons (judged by human reviewers), it advances to an A/B test where 10% of users receive the new model’s outputs. Conversion metrics (user acceptance rate, correction rate, downstream success signals) determine whether the new model graduates to full production.

The entire cycle, from identifying a quality gap to deploying a fine-tuned model, takes 4-6 weeks. Most of that time is annotation and evaluation, not training. The actual GPU hours are a tiny fraction of the total effort.

What we don’t fine-tune. Generation tasks that require high creativity and personalization (career advice, company assessments, interview preparation content) remain on prompted base models. These tasks benefit from the base model’s broad world knowledge and writing fluency, and the output quality is harder to capture in structured training labels. Prompt engineering plus structured output schemas plus retrieval-augmented context gets us to the quality bar we need for these tasks. For now.

We also maintain a rigorous prompt engineering discipline alongside fine-tuning. Even for fine-tuned models, the system prompt matters. The fine-tuned weights handle the domain-specific judgment; the prompt handles the output format, tone, and task framing. They’re complementary, not substitutes.

Evaluation metrics per task:

Retrieval-augmented generation (RAG) pipeline

Several of Glide’s highest-value features depend on real-time information that no pre-trained model can know. Company news from last week. Salary data from this quarter. A CEO departure announced yesterday. This is where the RAG pipeline earns its keep.

flowchart TB
    A[User Query / Feature Trigger] --> B[Query Formulation]
    B --> C[Multi-Source Web Research]
    C --> D[Result Deduplication & Ranking]
    D --> E[Context Assembly]
    E --> F[Prompt Construction with Research Context]
    F --> G[Model Inference]
    G --> H[Output Validation & Post-Processing]

Query formulation is not trivial. A naive approach would pass the user’s question directly as a search query. Glide instead constructs domain-optimized search queries. For a company assessment, the system generates three parallel queries targeting different information facets (culture, workforce movements, strategic direction). For salary insights, queries are parameterized by role title, location, and seniority level.

Multi-source research uses a waterfall pattern across research providers. If the primary provider returns insufficient results, the secondary activates. If both fall short, a tertiary fallback engages. This ensures research-dependent features degrade gracefully rather than failing outright.

Result ranking scores each search result on relevance to the original query, source authority (major publications rank higher), and recency (exponential decay, half-life of ~14 days). Only top-ranked results enter the context window.

Context window management is critical. With 15,000+ tokens of candidate context, job context, and research results competing for space in a 128K context window, the system must prioritize. Research results are truncated to their most relevant paragraphs. Candidate context is compressed to essential fields. The system never exceeds 80% of the context window, leaving headroom for the model’s reasoning and output generation.

Structured output and type safety

Every generative call in Glide that feeds into downstream processing (as opposed to being displayed directly to the user) enforces a JSON schema at the inference layer. This isn’t just “please return JSON.” It’s schema-constrained decoding where the model’s token sampling is restricted to only produce valid JSON conforming to the specified schema.

This eliminates an entire class of bugs:

The schemas are defined as TypeScript interfaces that generate both the inference-layer constraints and the application-layer type definitions. One source of truth, compile-time guarantees, zero parsing ambiguity.

Inference cost economics

AI inference is expensive. At Glide’s scale, uncontrolled inference costs would make the product economically unviable. The cost discipline is baked into the architecture:

StrategyImpact
Batch pre-screening before per-job scoringEliminates 60–70% of jobs before expensive scoring
Content-hash caching on profile-dependent featuresPrevents regeneration on unchanged profiles
Similarity-based cache reusePrevents regeneration on minor profile edits
Tiered model routingSends lightweight tasks to cheaper models
Parallel research queriesReduces wall-clock time (not cost, but UX)
Response caching within sessionsAbsorbs duplicate requests from UI re-renders
Token-aware context compressionReduces input token costs by 20–30% on long contexts
---
config:
    xyChart:
        width: 600
        height: 320
        xAxis:
            labelPadding: 18
        yAxis:
            labelPadding: 18
        plotReservedSpacePercent: 40
---
xychart-beta
    title "Cumulative Cost Reduction by Optimization Layer (% of Baseline)"
    x-axis ["Baseline", "+Prescreen", "+Cache", "+Routing", "+Compress", "+Session"]
    y-axis 0 --> 100
    line [100, 35, 25, 18, 14, 12]

The result is a per-user inference cost that’s sustainable at the current pricing model, with clear scaling economics as user volume grows and we negotiate volume pricing with model providers.

Safety, guardrails, and adversarial robustness

Career data is sensitive. Resumes contain personal information, employment history, compensation expectations. The AI layer is designed with multiple safety boundaries:

Input sanitization: All user-supplied text is scanned for known prompt injection patterns before entering any prompt context. Detected patterns are stripped. User content is wrapped in explicit delimiters with instructions to treat enclosed text as data, not instructions.

Output filtering: Generated content is scanned for personally identifiable information that shouldn’t appear in shareable outputs (e.g., a company assessment shouldn’t contain the user’s salary expectations). PII detected in inappropriate contexts is redacted before display.

Hallucination mitigation: For factual features (company data, salary figures), generated claims are cross-referenced against structured data in the application database. If the model claims a company has 10,000 employees but the database shows 2,000, the structured data wins. The RAG pipeline further grounds outputs in verifiable sources.

Failure graceful degradation: Every AI feature has a non-AI fallback path. If inference fails (timeout, rate limit, provider outage), the system returns cached results, generic high-quality content from curated libraries, or clearly communicates that the feature is temporarily unavailable. The platform never shows a blank screen or a raw error because an API call failed.

Audit logging: Every inference call is logged with input context (sanitized), output, latency, token usage, model version, and cache status. This supports debugging, cost monitoring, quality regression detection, and compliance requirements.

Resume intelligence pipeline

Everything in Glide starts with the resume. You upload it, and the system transforms an unstructured document into a normalized, queryable candidate profile.

The flow looks like this:

flowchart TB
    A[Resume Upload] --> B[Text Extraction + Cleaning]
    B --> C[AI Extraction Model]
    C --> D[Post-Extraction Enrichment]
    D --> E[Candidate Match Profile]
    C --- C1["Extracts: Work Experience, Projects, Education, Certifications, Skills"]
    D --- D1["Project Derivation, Date Validation, Deduplication, Skill Normalization"]

The cleaning stage matters more than it sounds. Resumes arrive in every imaginable format. The system normalizes to UTF-8, strips control characters, collapses redundant whitespace, removes zero-width characters, and detects section boundaries using capitalization patterns, horizontal rules, and whitespace density changes. Without this, the extraction model hallucinates section breaks or merges separate jobs into one.

The extraction model runs in deterministic mode with a system prompt establishing domain expertise context. That framing isn’t decoration. Empirical testing showed it improves extraction accuracy on ambiguous resume formats.

The extraction schema defines five entity types:

Output format is enforced as structured JSON at the inference layer.

After extraction, a secondary enrichment pass runs. If the model fails to extract standalone projects (common in experience-heavy resumes), the system derives project entities from the key projects, achievements, and responsibilities fields within each work experience entry. Every extracted entity is validated against business rules: work experience requires at minimum a role title and organization, date ranges are checked for logical consistency, education requires at minimum an institution name, and duplicate entries are detected and merged.

Skill normalization is its own pipeline. Lowercasing, whitespace trimming, alias resolution (“JS” to “JavaScript”, “ML” to “Machine Learning”), and removal of overly generic terms that don’t carry signal for matching.

Job matching and ranking engine

The matching engine is where Glide earns the “not a job board” distinction. Every incoming job is scored against the user’s profile through a multi-stage pipeline.

flowchart TB
    A[Incoming Jobs] --> B[Stage 1: Candidate Match Profile]
    B --> C[Stage 2: Batch Pre-Screening]
    C -- Filtered --> X[Discarded]
    C -- Pass --> D[Stage 3: Skill Verification]
    D --> E[Stage 4: Composite Scoring]
    E --> F[Stage 5: Pipeline Ranking]
    F --> G[Your Pipeline: 50+ Ranked Jobs]

Stage 1: Candidate match profile construction

Before any jobs are scored, the system builds a Candidate Match Profile. A distilled representation of your qualifications optimized for comparison.

The match profile contains:

The three-tier skill weighting is important. Saying “I know Python” on your profile doesn’t carry the same weight as having three years of Python projects in your work history. The system calibrates this automatically from the resume extraction.

Stage 2: Batch pre-screening

Incoming jobs are processed in batches through a pre-screening filter before full scoring. This is a cost-optimization step.

The pre-screen evaluates three binary signals per job:

Jobs that fail any check are discarded before reaching the scoring stage. This filters out a significant portion of raw scraped results, reducing inference costs downstream.

Stage 3: Skill verification

For jobs that pass pre-screening, the system performs per-skill verification against the job’s requirements. Each required skill is evaluated:

This isn’t keyword matching. It’s structured evaluation. The system doesn’t just check if “React” appears in your profile. It assesses how strong your evidence is for React based on what you’ve actually built.

Stage 4: Composite scoring

The final match score is a weighted combination of five factors:

The weights are configurable per user through a match strictness preference. Strict mode prioritizes skill coverage heavily. Relaxed mode shifts weight toward role alignment and experience, making it easier to discover adjacent opportunities. Your pipeline, your rules.

---
config:
    xyChart:
        width: 600
        height: 320
        xAxis:
            labelPadding: 18
        yAxis:
            labelPadding: 18
        plotReservedSpacePercent: 40
---
xychart-beta
    title "Match Score Weights (%): Strict (bar) vs Relaxed (line)"
    x-axis ["Skills", "Optional", "Proficiency", "Role Fit", "Experience"]
    y-axis 0 --> 50
    bar [40, 10, 20, 15, 15]
    line [20, 10, 15, 30, 25]

Fallback scoring: when skill verification produces a zero score (typically due to sparse job descriptions), a fallback scorer activates using lightweight heuristics to produce a baseline score. Jobs are never silently dropped.

Stage 5: Pipeline ranking

Jobs in the pipeline are ranked using a composite formula that blends match score, skill coverage, proficiency, company reputation, and employee ratings. Company reputation is intentionally weighted higher than its raw scale would suggest. Empirically, seeing a well-rated company at the top of the pipeline builds trust in the system. Trust matters early.

The job pipeline

The pipeline is the heart of the Glide experience. It’s where you manage every opportunity from first discovery through to outcome.

flowchart LR
    A[Pipeline] --> B[Shortlisted]
    B --> C[Applied]
    C --> D[Interview]
    D --> E[Hired]
    A --> F[Archived]
    B --> F
    C --> F
    D --> F

Six stages. Pipeline, Shortlisted, Applied, Interview, Hired, Archived. Glide automatically populates the Pipeline stage with matched opportunities. The platform aims to maintain 50 or more relevant jobs per user at any given time. As you move jobs between stages, Glide tracks every transition and uses them to generate analytics.

Each job in the pipeline shows the title, company name and logo, location, employment type, salary (when available), match score, skill match percentage, and Glassdoor ratings. You can filter by keywords, location, match score, seniority level, and stage. You can sort by date added, match score, or company name.

Glide periodically checks whether listed jobs are still active, so you’re not wasting time on stale postings.

Company stability scoring

This is one of the features I’m most proud of. And it’s entirely rule-based. No model inference. Zero inference cost. Completely reproducible.

The Stability Score rates a company from 0 to 100 on how stable and reliable it is as an employer. The model evaluates eight factors, each with its own weight:

The final score is a weighted average across all factors, mapped to a label: Strong, Moderate, Fair, Weak, or At Risk.

---
config:
    xyChart:
        width: 650
        height: 320
        xAxis:
            labelPadding: 18
        yAxis:
            labelPadding: 18
        plotReservedSpacePercent: 40
---
xychart-beta
    title "Stability Score: Factor Weights (%)"
    x-axis ["Tenure", "Headcount", "YoY", "Age", "Size", "Sentiment", "Funding", "Tier"]
    y-axis 0 --> 25
    bar [22, 16, 14, 12, 10, 10, 9, 7]

No model inference. No hallucination risk. Reproducible, explainable, and zero cost per calculation.

News impact scoring and sentiment

Alongside the Stability Score, the news system evaluates the significance and emotional tone of every article about a company. Also entirely rule-based. No model inference.

The impact score (0-100) is a weighted composite of five signals:

Keyword Severity: Articles are scanned for keywords grouped into severity tiers, from critical events (bankruptcy, mass layoffs) down to routine updates (office openings, awards). The highest-severity keyword found determines the tier score.

Source Authority: The publishing source is classified into credibility tiers. Major wire services and financial publications carry the most weight. Industry blogs and unranked sources carry less.

Content Magnitude: Quantitative signals within the article text (dollar amounts, employee counts, percentages) are extracted and mapped to magnitude bands. “Laid off 5,000 employees” scores higher than “laid off 50 employees.”

Recency: An exponential decay function based on article age. Articles lose relevance over time with a half-life of roughly two weeks.

Linguistic Gravity: Measures the intensity of language through lexical analysis. Sentences containing superlatives, urgency markers, and definitive language score higher than hedged or speculative phrasing.

pie title News Impact Score: Signal Weight Breakdown (%)
    "Keyword Severity — 30%" : 30
    "Source Authority — 25%" : 25
    "Content Magnitude — 20%" : 20
    "Article Recency — 15%" : 15
    "Linguistic Gravity — 10%" : 10

Sentiment classification uses a lexicon-based approach. The article text is tokenized and scanned against positive and negative word dictionaries, producing a sentiment label of positive, negative, mixed, or neutral.

Category assignment uses a priority-ordered keyword scan across categories including Layoffs, Acquisition, Funding, Legal, Financial, Leadership, Hiring, Expansion, and more. The design ensures the most consequential interpretation is chosen when an article spans multiple topics. An article about layoffs that also mentions restructuring gets categorized as “Layoffs,” not the softer “Leadership.”

Company assessment generation (The Rundown)

The Rundown produces a concise, opinionated career perspective on any company. This is a retrieval-augmented generation pipeline.

Before generation, the system performs three parallel web research queries:

  1. Careers and culture: recent articles about workplace culture, employee experience, hiring practices
  2. Workforce movements: layoffs, hiring surges, restructuring, headcount changes
  3. Growth and strategy: strategic direction, market position, product launches, competitive moves

Search results are deduplicated by URL and ranked by relevance. The top results from each query are concatenated into a research context block.

The research context is combined with the company’s structured profile data (headcount, growth rate, funding history, industry, tenure metrics) and passed to the model with a tightly constrained prompt. The output is structured into a strategic overview, a risk and opportunity assessment, and a synthesized recommendation from a career perspective.

The output includes inline semantic highlighting: positive signals wrapped in positive markers, negative signals in negative markers, contextual information in neutral markers. This is achieved through HTML markup in the output, enabling the frontend to render color-coded text without additional NLP processing.

Assessments are cached and periodically refreshed. Company-level strategic narratives change slowly enough that regular updates capture meaningful shifts without unnecessary regeneration.

Company Intel: the full picture

Company Intel is what Glassdoor would look like if it cared about helping you make a career decision rather than selling job ads.

flowchart LR
    A[Company Intel] --> B[Company Profile]
    A --> C[Stability Score]
    A --> D[The Rundown]
    A --> E[Employee Trends]
    A --> F[Talent Flow]
    A --> G[Funding Journey]
    A --> H[News and Sentiment]
    A --> I[Community Reviews]

Company Profiles: complete picture of any company. Identity, industry, size, type, headquarters, founding year, stock ticker for public companies, direct links to website and social profiles.

Key Statistics: employee count, average employee tenure, 12-month headcount growth, total funding raised, number of funding rounds. Immediate snapshot.

Company DNA: growth signal (actively growing, stable, declining), peak headcount and distance from peak, business stage, market tier, culture tags describing what it’s like to work there.

Year-over-Year Growth: headcount changes by year in a visual format, revealing long-term workforce trends.

Employee Trends: headcount over time with both yearly and monthly views. Hiring phases, freezes, layoffs, recovery periods. All visible.

Department Insights: explore individual departments within a company. Their headcount trends, top skills, where employees come from, where they go when they leave.

Department Growth: which departments are expanding and which are contracting over the last 12 months.

Department Comparison: compare up to three departments side by side on size, growth, skills, and trends.

Talent Flow: employee movement patterns. Where the company’s employees came from, where they go when they leave, which companies have the strongest two-way talent exchange.

Funding Journey: timeline of every funding round. Round type, date, amount raised, lead investors.

Company News: most relevant recent news, categorized by topic (layoffs, hiring, expansion, acquisition, funding, leadership, product, financial, legal, security, labour, general) and ranked by impact using the scoring system described above.

Feedback and Reviews: community-sourced ratings and written reviews across seven dimensions. Overall, work-life balance, compensation, job security, management, culture, career growth. All anonymous. Users can rate reviews as helpful.

This is the research that used to take hours, distilled into something you can absorb in minutes. Not vibes. Data.

Resume and CV tailoring engine

One of the most soul-crushing parts of job searching is rewriting your resume for every application. Glide solves this.

The tailoring engine receives your existing CV content alongside the target job description, company name, and job title. Pro users can additionally supply custom context: free-form instructions specifying particular accomplishments, skills, or angles they want emphasized.

The model analyzes the alignment between your experience and the job requirements, then rewrites the resume content to:

The output preserves your factual experience while optimizing its presentation for the specific opportunity. Same career. Better framing.

When requested, a separate generation call produces a cover letter using the same candidate + job context. Three-paragraph structure: connection to the company/role, relevant qualifications and accomplishments, forward-looking closing statement.

Generated content is rendered into downloadable PDFs using a template engine. Six professionally designed themes: Professional, Modern, Executive, Harvard, Tech, and Minimal. Each applies distinct typography, layout, spacing, and visual hierarchy appropriate to different industries and seniority levels.

Free users get 5 resume generations per month. Pro users have unlimited access and all themes.

Interview preparation engine

When you have an interview coming up, Glide doesn’t hand you a list of generic questions. It builds a preparation package grounded in real, current information.

The engine assembles a context package for each interview:

flowchart TB
    A[Candidate Context] --> D[Combined Context]
    B[Job Context] --> D
    D --> E[Company Research: Real-time Web Search]
    E --> F[Job Explanation]
    E --> G[Interview Essentials]
    E --> H[Common Topics]
    E --> I[Recovery Tips]

Four independent generation calls run against the assembled context:

Job Explanation: a personalized explanation of the role tailored to your background. The model runs in creative mode for natural, conversational language. What the role involves, why your background is relevant, what the hiring team is likely prioritizing.

Interview Essentials: core topics, themes, and knowledge areas you should be ready to discuss. Structured as a prioritized list with explanations of why each topic matters for this specific role.

Common Topics: specific questions and discussion points that commonly arise in interviews for this type of role at this type of company. Predicted from patterns in the job description and company research.

Recovery Tips: practical strategies for handling difficult moments during the interview. How to respond when stumped, how to redirect a conversation that’s gone off track, how to recover from a weak answer.

Calendar integration connects with Google Calendar to automatically detect upcoming interview events. Reminders are sent via email one day before each scheduled interview.

Mock interview system

Beyond preparation content, Glide lets you practice the actual experience of being interviewed.

Questions are generated one at a time in a conversational flow. The system maintains full conversation history and adapts each subsequent question based on your previous responses. You respond using your voice through speech input, simulating the real flow of conversation rather than reading and typing.

Six interview modes, each with a distinct generation persona:

The adaptive follow-up logic is worth explaining. After each answer, a follow-up question may be generated. The probability of follow-ups increases as the interview progresses, starting low for early questions and climbing as the conversation develops. This mirrors real interview dynamics where interviewers dig deeper as they identify areas worth exploring.

At the conclusion, a separate model call generates structured feedback. Per-question analysis with qualitative feedback, identified strengths, areas for improvement, and numerical ratings. Plus an overall assessment covering communication clarity, technical depth, behavioral evidence quality, and interview readiness.

The feedback model uses a different inference configuration than the question generator. The evaluator perspective is independent from the interviewer perspective.

Career pathway generation

Glide goes beyond the immediate job search by helping you plan long-term.

Set a target role. The position you ultimately want to reach. Glide generates a personalized career pathway from your current role to that destination.

flowchart LR
    A[Current Role] --> B[Milestone 1]
    B --> C[Milestone 2]
    C --> D[...]
    D --> E[Milestone 9]
    E --> F[Milestone 10: Target Role]
    G[Skill Gaps Identified] -.-> B
    H[Learning Resources] -.-> B
    I[Salary Trajectory] -.-> A
    I -.-> D
    I -.-> F

The model generates exactly 10 milestones, enforced through output validation with retry logic. If the initial generation produces fewer or more than 10, the system re-prompts until the constraint is satisfied.

Each milestone contains:

The pathway also includes estimated total duration in months and an explicit enumeration of skill gaps (skills you currently lack that are critical for the target role).

A supplementary model call generates salary range estimates at three points along the pathway: current role market rate, mid-pathway compensation, and target role market rate. The salary generation uses a multi-source research approach. Web search results from salary databases are retrieved first, then synthesized by the model. A fallback path activates an alternative search provider if the primary source returns insufficient data.

Profile suggestion engine

Think of this as a grammar and style assistant, but for your career profile.

The engine analyzes three profile sections independently:

Each suggestion contains the field being referenced, the original text span, the recommended replacement, and a brief explanation of why the change strengthens the profile.

The engine implements a content-hash-based caching strategy. When your profile content hasn’t changed, cached suggestions return immediately. When content changes, the system performs incremental analysis, only regenerating suggestions for sections that were modified. This dramatically reduces inference costs for users who make small, iterative edits.

If the inference call fails (timeout, rate limit, error), the system returns generic high-quality suggestions from a curated library. You always receive value from the feature, even during service degradation.

Glide Select: talent assessment

Glide Select identifies high-potential candidates for the talent network through AI-generated case studies tailored to your professional background.

Before generating case studies, the system analyzes your resume to extract three classification signals: industry, experience years, and functional domain. These calibrate difficulty and focus.

The system generates 20 case studies per candidate, distributed across 8 assessment areas:

  1. Technical Competency
  2. Professional Judgment
  3. Cognitive Capabilities
  4. Communication and Collaboration
  5. Adaptability and Resilience
  6. Ethical Reasoning
  7. Strategic Thinking
  8. Leadership Potential

Each case study contains a realistic scenario, a specific question, multiple choice options with varying quality levels, the domain aspect being tested, difficulty level calibrated to the candidate’s experience, primary and secondary assessment areas, and specific skills tested.

The difficulty calibration ensures a senior professional receives cases that test strategic and leadership dimensions, while an earlier-career candidate receives cases focused on execution and foundational judgment.

Salary intelligence pipeline

Salary data is one of the most requested and most unreliable categories of information in the career space. Glide’s salary pipeline is built to produce structured compensation insights you can actually trust.

flowchart LR
    A[Stage 1: Research] --> B[Stage 2: Validation]
    B --> C[Stage 3: Synthesis]
    C --> D[Market Rate]
    C --> E[Experience Progression]
    C --> F[Location Comparison]
    C --> G[Skills Premium]
    C --> H[Negotiation Playbook]

Stage 1: Research. The pipeline queries multiple web research providers in a waterfall pattern (primary, secondary, tertiary fallback) to retrieve salary data from compensation databases, job boards, and salary report publications. Research targets data relevant to your role, location, experience level, and skill set.

Stage 2: Validation. Retrieved data passes through a validation model assessing realism (are figures within plausible ranges?), consistency (do multiple sources agree?), outlier detection (should any data points be excluded?), and confidence scoring (low-confidence data points are discarded).

Stage 3: Synthesis. Validated data is synthesized into seven insight categories:

CategoryWhat It Covers
Market RateCurrent median and range for the role
Experience ProgressionHow compensation scales with seniority
Location ComparisonGeographic pay differentials
Company BenchmarksHow specific companies compare
Skills PremiumSalary uplift from specific in-demand skills
Total RewardsBeyond base salary: equity, bonuses, benefits
Negotiation PlaybookData-backed negotiation recommendations

Each insight is scored for relevance to you based on location, role, experience, and skills. High-relevance insights are surfaced prominently. Low-relevance insights are available but not prioritized.

Salary insights are cached and refreshed on a regular cadence, keyed by role, location, and experience. Slow enough market movement to justify the interval, fast enough refresh to prevent staleness.

Analytics and insights

Glide tracks your job search activity and translates it into something most job seekers never have: a feedback loop.

Application and Interview Tracking: how many applications submitted and interviews secured within a date range.

Application-to-Interview Rate: what percentage of applications convert to interviews. A clear measure of how effective your approach is.

Stage Distribution: where your jobs currently sit across pipeline stages. Are you building enough volume at the top, or stuck at a particular stage?

Top Skills: skills appearing most frequently across jobs you’ve been pursuing. What the market is asking for.

Skill Gaps: skills in demand across your target jobs but missing from your profile. Where to invest development time.

Top Industries: which industries your search is concentrated in.

Average Company Ratings: aggregated Glassdoor ratings of your target companies. A sense of the quality of companies you’re pursuing.

Momentum Score: activity level over the past 30 days. Encourages consistent engagement.

Most Successful Industry: which industry has yielded the best results in terms of advancing jobs to later pipeline stages.

Average Time to Interview: how long from application to interview stage. Reveals where friction lives.

Activity Heatmap: search activity over the past year in a visual calendar format. Patterns of engagement and inactivity, easy to spot.

---
config:
    xyChart:
        width: 600
        height: 320
        xAxis:
            labelPadding: 18
        yAxis:
            labelPadding: 18
        plotReservedSpacePercent: 40
---
xychart-beta
    title "Typical Pipeline: Jobs per Stage (per User)"
    x-axis ["Pipeline", "Shortlisted", "Applied", "Interview", "Hired", "Archived"]
    y-axis 0 --> 60
    bar [52, 18, 12, 5, 1, 24]

Prompt engineering and safety

Every prompt in the system follows a structured template:

flowchart TB
    A[1. System Prompt] --> B[2. Context Block]
    B --> C[3. Task Instruction]
    C --> D[4. Output Schema]

System prompts use specific persona definitions because empirical testing showed role-based prompting improves output quality on domain-specific tasks compared to generic instruction-only prompts.

All user-supplied text that enters a prompt context passes through a sanitization layer. Known injection patterns are detected and stripped. Input lengths are bounded. User-supplied content is wrapped in explicit delimiters that instruct the model to treat the enclosed text as data rather than instructions, creating a clear boundary between system instructions and user input.

Caching, refresh, and cost discipline

The AI layer implements tiered caching to balance freshness, cost, and latency. Different features use different strategies depending on how quickly the underlying data changes:

Cost optimization across the system:

Design decisions I keep getting asked about

No embeddings or vector store. The architecture does not use embedding-based retrieval or vector databases. All matching is performed through structured scoring and direct LLM evaluation. The candidate-to-job matching problem is better served by explicit, interpretable scoring factors than by embedding similarity, which can produce matches that are semantically close but professionally irrelevant. “Data Scientist” and “Data Entry” are close in embedding space. They are not close in career space.

Targeted fine-tuning, not blanket fine-tuning. We fine-tune LoRA adapters for specific tasks where we’ve measured a concrete quality gap (resume extraction, skill classification, relevance scoring). Creative generation tasks stay on prompted base models. This gives us domain accuracy where it matters without sacrificing the base model’s fluency and world knowledge where we need it.

Deterministic scoring where possible. Stability Score, News Impact Score, and several other features are implemented as rule-based systems rather than model-inferred scores. Reproducibility, explainability, and zero inference cost for high-frequency calculations.

Structured output over free text. Wherever downstream systems consume AI output, structured JSON schemas are enforced. This eliminates an entire class of parsing bugs and makes the AI layer’s contract with the application layer explicit and testable.

Asymmetric cost sensitivity. The system is aggressive about filtering cheap signals early (batch pre-screening, cache reuse, similarity detection) so that expensive model calls are reserved for high-value, user-facing generation tasks.

The part that matters most

I can describe the architecture. I can explain the scoring models and the caching strategies and the prompt engineering.

But none of that is why I’m building Glide.

I’m building Glide because I watched a senior engineer cry in a coffee shop after six months of searching. Not because she wasn’t good enough. Because the system made her feel like she wasn’t.

I’m building Glide because I’ve seen brilliant people stay in jobs that were slowly hollowing them out, not because they couldn’t leave, but because the process of leaving felt so overwhelming that staying seemed easier.

The numbers behind those stories are staggering. About 40% of white-collar job seekers in 2024 did not receive any interviews over extended search periods despite sending dozens of applications. Average time-to-hire now exceeds 42 days, during which candidates navigate multi-stage processes with minimal feedback. Between 52% and 61% of candidates are ghosted by employers during the interview process. Over half encounter discriminatory questions. Over half find the actual job doesn’t match what was advertised. Underrepresented groups are 67% more likely to be ghosted.

These aren’t edge cases. They’re the median experience.

I’m building Glide because the emotional toll of job searching is treated as an unavoidable cost rather than a design failure. Because sending 32 to 200+ tailored applications, preparing for multiple interviews, and navigating ambiguous outcomes creates a significant emotional load that discourages people from exploring better-fit opportunities. Because 20% of candidates reject offers due to poor interview experiences, meaning even the process of succeeding is failing.

Job searching shouldn’t require six months, a spreadsheet, fourteen browser tabs, three resume versions, and a therapist.

It should require one tool that actually understands you.

What Glide is not

Glide is not a job board. It doesn’t list jobs and leave you to figure out the rest. It works alongside you from the moment you start exploring to the moment you evaluate an offer. And beyond that, with career pathway planning.

Glide is not an automation tool that applies to jobs on your behalf. That approach treats hiring as a numbers game. It’s not. It’s a matching problem, and matching requires understanding both sides deeply.

Glide is not a replacement for human judgment. The AI provides data, structure, and preparation. The decisions are yours. The career is yours.

Glide is a career companion. One that removes the busywork, fills the information gaps, and lets you focus on what actually matters: finding work that fits who you are and where you want to go.

Why now

Analysts increasingly describe the conditions facing college-educated professionals in 2024–2026 as a “white-collar recession,” despite broader economic resilience. Hiring of high-earning professionals dropped to its lowest level since 2014. US job vacancies slid to 7.6 million by December 2024, the weakest level since late 2020. Average monthly job growth fell to about 203,000 over the trailing twelve months, down sharply from the post-pandemic rebound. Applications surged far faster than openings, shifting bargaining power decisively toward employers who now impose stricter requirements, run longer interview processes, and feel less urgency to make offers.

The Robert Walters Global Jobs Index shows the volatility: professional vacancies in January 2025 were 54% higher than December 2024, then dropped 8.3% in February 2025. Sectors like technology, media, and energy saw contraction in white-collar postings while healthcare and construction grew. The US alone accounted for roughly 40% of all vacancies in the index. Global unemployment numbers may look stable at 190–200 million, but that headline masks severe sectoral divergence: white-collar roles in technology and corporate services are under more pressure than blue-collar work.

AI, automation, and digitization are compounding this. The World Economic Forum’s Future of Jobs work shows that these forces are reshaping task compositions in knowledge work, increasing demand for advanced analytical and AI skills while reducing some routine white-collar tasks. Many candidates apply for roles where their skills are only partially aligned, contributing to high application volumes and low conversion. Meanwhile, 60% of workers in one study blamed AI for making the job market more challenging, and candidates worry that AI screening tools may introduce or amplify bias while reducing transparency.

Economic headwinds, high interest rates, elevated corporate caution, geopolitical tension, are causing organizations to defer or slow white-collar hiring, focusing instead on productivity gains from technology rather than headcount growth. Remote and hybrid work have expanded the geographic competition for every role. Skills have a shorter half-life than ever.

Professionals navigating this need more than listings. They need intelligence. They need context. They need a system that adapts as fast as the market does.

Traditional job platforms were built for a slower, simpler world. They assumed you’d search locally, apply to a handful of roles, hear back in a reasonable time, and negotiate from a position of stability. That world is gone.

The tools didn’t keep up. Glide is built for the world that replaced it.

The road ahead

Glide launched with a waitlist because we’re onboarding in small batches. Not as a growth hack. Because the product needs to work deeply for each person before we scale it broadly. Career tools that feel generic are useless. Personalization at this level requires care.

We’re building for the job seeker first. The recruiter tools, the talent network features, the assessment programs. Those matter, and they’re coming. But the foundation is the individual. The person sitting at their laptop at midnight, wondering if they’ll ever hear back.

That person deserves better than what exists.

That’s why I’m building Glide.


Next Post
Where Decisions Disappear