ATS & Recruitment Software.
PyjamaHR is an ATS and recruitment platform that lets HR teams create and post job listings across multiple job boards, source candidates, and track applicants from first contact to hire — all from a single portal. The Job Description field sitting at the heart of the "Create a Job" workflow was a static, unguided input box. It asked recruiters to do something genuinely difficult — write a compelling, compliant, well-structured job description — with no scaffolding, no intelligence, and no safety net. This project rebuilt that workflow around AI assistance, structured guidance, and a fail-safe compliance layer.
Client
PyjamaHR
Year
2022
Industry
Hiring & Recruitment
Role
Product Designer
Challenge
The existing Job Description field had three compounding issues. Structurally, it was a blank text box — no segmentation prompts, no formatting guidance, no indication of what a good job description actually contained. Recruiters were left to figure it out themselves, and the results were inconsistent. From a quality standpoint, research consistently showed a significant gap between how clear hiring managers think their job descriptions are and how clear candidates actually find them — 72% of hiring managers believe they write clearly; only 36% of candidates agree. And from a compliance standpoint, the field offered no mechanism to catch discriminatory language, missing salary information, or keyword gaps before a listing went live. The only AI feature available — "Write with AI" — relied solely on the job title as a prompt, with no tone controls, no segmentation options, and no ability to guide the output meaningfully.
approach
Started broad — audited the existing "Create a Job" flow end-to-end before narrowing focus to the Job Description field specifically, to avoid over-engineering a solution to a symptom rather than the cause
Ran quantitative research across Academia, Google Scholar, Statista, Indeed, LinkedIn, and HRDrive to build an objective picture of what makes job descriptions succeed or fail
Supplemented desk research with ethnographic methods — sitting in on sales calls, running empathy interviews with potential customers (incentivised via product discounts), and hosting 72-hour Reddit Q&A threads to capture real recruiter behaviour
Defined eight critical problem statements from the research synthesis, then grouped them through affinity mapping to identify which solutions would address the most root causes simultaneously
Iterated rapidly on low-fidelity internal prototypes every Friday for four weeks before moving to high fidelity, to avoid fatiguing users with unnecessary complexity early in the process







key decisions
Scaffold the blank page, don't replace it. The temptation with an AI integration is to make the AI do everything. The research pointed in a different direction — recruiters want control over their job descriptions; they just want guardrails. The redesign kept the free-text input field at the centre but layered structured guidance around it: auto-scrolling tips drawn from real research data, segmentation prompts, and tone selection. The AI assists; the recruiter decides.
Make the AI rewrite actually configurable. The existing "Write with AI" feature used only the job title as its prompt — a one-size-fits-all approach that produced generic, untailored output. The redesigned AI rewrite adds tone controls (neutral, professional, culture-forward) and segmentation options (responsibilities, requirements, compensation, culture), and separates the rewrite and segmentation functions so recruiters can use either independently. More inputs mean more relevant outputs, and more relevant outputs mean more confident adoption.
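As a rough sketch of how those inputs might be modelled (the type names and prompt wording here are illustrative assumptions, not PyjamaHR's production code), tone and segmentation become explicit parameters of the rewrite request rather than hidden defaults:

```typescript
// Illustrative model of the configurable rewrite request.
// Type names and prompt wording are assumptions, not the shipped implementation.

type Tone = "neutral" | "professional" | "culture-forward";
type Segment = "responsibilities" | "requirements" | "compensation" | "culture";

interface RewriteRequest {
  jobTitle: string;
  draft: string;       // the recruiter's own text remains the source of truth
  tone: Tone;
  segments: Segment[]; // empty array = rewrite only, keep the existing structure
}

// Builds a prompt from every available input, not just the job title.
function buildRewritePrompt(request: RewriteRequest): string {
  const lines = [
    `Rewrite this job description for the role "${request.jobTitle}".`,
    `Tone: ${request.tone}.`,
  ];
  if (request.segments.length > 0) {
    lines.push(`Organise the output into these sections: ${request.segments.join(", ")}.`);
  } else {
    lines.push("Keep the recruiter's existing structure.");
  }
  return [...lines, "", "Draft:", request.draft].join("\n");
}

// Segmentation is a separate function, so recruiters can restructure a draft
// they are otherwise happy with, without triggering a full rewrite.
function buildSegmentationPrompt(draft: string, segments: Segment[]): string {
  return [
    `Reorganise this draft into the following sections without changing the wording: ${segments.join(", ")}.`,
    "",
    draft,
  ].join("\n");
}
```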
Use data as a coaching layer. Rather than presenting tips as generic best practice, the redesign surfaced live research statistics inline — "25% of job seekers say compensation is the most important part of a job description" next to the compensation field; "candidates decide whether to apply within 14 seconds" at the top of the flow. Framing guidance as evidence rather than instruction respects the recruiter's intelligence and increases the likelihood it gets acted on.
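A simple mental model for that coaching layer is a mapping from form fields to evidence. The statistics below are the ones quoted above; the field keys and component shape are illustrative assumptions:

```typescript
// Illustrative shape for the inline coaching tips; field keys are assumptions,
// the statistics are the ones cited in this case study.

interface CoachingTip {
  anchorField: "header" | "responsibilities" | "requirements" | "compensation";
  stat: string; // framed as evidence rather than instruction
}

const coachingTips: CoachingTip[] = [
  {
    anchorField: "header",
    stat: "Candidates decide whether to apply within 14 seconds.",
  },
  {
    anchorField: "compensation",
    stat: "25% of job seekers say compensation is the most important part of a job description.",
  },
];

// The auto-scrolling tips component cycles through whatever evidence is
// anchored to the field the recruiter is currently editing.
function tipsForField(field: CoachingTip["anchorField"]): CoachingTip[] {
  return coachingTips.filter((tip) => tip.anchorField === field);
}
```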
Build a compliance checkpoint before submission. A pre-submission checklist acts as a fail-safe — prompting recruiters to confirm they've covered required sections, avoided discriminatory language, and included keyword-optimised content before the listing goes live. Critically, the checklist includes the real-world consequences of skipping steps, not just abstract warnings. Accountability increases when the stakes are made concrete.
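In sketch form (the item copy and consequences below are illustrative, and a self-reported format is assumed, as discussed later in the reflections), each confirmation is paired with its concrete stake and submission stays blocked until everything is checked:

```typescript
// Illustrative pre-submission checklist; item copy and consequences are
// assumptions, not the shipped wording. Confirmation is self-reported.

interface ChecklistItem {
  id: string;
  prompt: string;      // what the recruiter confirms
  consequence: string; // the concrete cost of skipping it
  confirmed: boolean;
}

const complianceChecklist: ChecklistItem[] = [
  {
    id: "required-sections",
    prompt: "All required sections, including compensation, are filled in.",
    consequence: "Listings without salary information attract fewer applicants and can be down-ranked by job boards.",
    confirmed: false,
  },
  {
    id: "inclusive-language",
    prompt: "The description has been checked for discriminatory or exclusionary language.",
    consequence: "Biased wording shrinks the candidate pool and creates legal exposure.",
    confirmed: false,
  },
  {
    id: "keywords",
    prompt: "Role-relevant keywords appear in the title and body.",
    consequence: "Listings without them rank lower in job board search results.",
    confirmed: false,
  },
];

// The listing cannot go live until every item has been confirmed.
function canSubmit(items: ChecklistItem[]): boolean {
  return items.every((item) => item.confirmed);
}
```

The same structure could later be driven by an automated scan instead of self-reported confirmations, the evolution flagged in the reflections, without changing how the checkpoint sits in the workflow.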
Keyword generation as a discovery tool, not just an optimisation layer. AI-powered keyword suggestion doesn't just help job listings rank better — it helps recruiters think more precisely about who they're trying to attract. The ability to regenerate keyword sets for different demographic targets turns a technical SEO feature into a genuinely strategic one.
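Conceptually, that regeneration works like requesting the same keyword set for several target audiences and comparing the results. The audience labels and the generateKeywords stub below are assumptions for illustration:

```typescript
// Illustrative sketch of keyword regeneration per target audience.
// Audience labels and generateKeywords are assumptions, not the real API.

type Audience = "early-career" | "senior" | "career-switchers";

interface KeywordRequest {
  jobTitle: string;
  description: string;
  audience: Audience;
}

// Stand-in for the AI call; the model request itself is omitted here.
async function generateKeywords(request: KeywordRequest): Promise<string[]> {
  return [`${request.jobTitle} (${request.audience})`]; // placeholder output
}

// Regenerating for each audience turns keyword SEO into a targeting exercise:
// the recruiter compares the sets and sees who the current wording speaks to.
async function keywordSetsByAudience(
  jobTitle: string,
  description: string
): Promise<Map<Audience, string[]>> {
  const audiences: Audience[] = ["early-career", "senior", "career-switchers"];
  const sets = new Map<Audience, string[]>();
  for (const audience of audiences) {
    sets.set(audience, await generateKeywords({ jobTitle, description, audience }));
  }
  return sets;
}
```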




deliverables
UX research synthesis (quantitative + qualitative)
Persona analysis and problem statement framework
Affinity map and ideation documentation
Redesigned Job Description workflow
Scrolling tips component
Revamped AI rewrite tool with tone and segmentation controls
AI-powered keyword generation tool
Pre-submission compliance checklist
Low- and high-fidelity prototypes
Internal and external test findings
Low-fidelity prototype showcasing the workflow and guardrails for AI-generated job descriptions
impact
AI-assisted job listing creation cut the time clients spent creating listings by 38% overall
Job listings on the platform doubled following implementation of the redesigned workflow
Job listing review failure rate dropped by 28%, directly attributable to the compliance checklist and structured guidance layer
Delivered a workflow that gave recruiters meaningful control while removing the most common failure modes — blank-page paralysis, compliance gaps, and keyword blindness
reflections
The biggest open question is whether the checklist creates genuine compliance behaviour or just checkbox behaviour — recruiters confirming items without actually acting on them. The consequence framing was designed to address this, but it's a hypothesis that needs post-launch behavioural data to validate. I'd track what percentage of listings that pass the checklist still get flagged or underperform, and use that to determine whether the checklist needs to evolve from a self-reported format into an automated scan. The design can accommodate that transition without restructuring the workflow.