
cc-skill:ab-test-designer

Claude Code skill · anthropic · active

last 30 days: 0 calls · 0 of 30 days active
what it does

This skill designs a small-scale A/B test for your marketing assets, landing pages, or emails. It gives you a concrete test plan with a hypothesis, kill criteria, and sample size reality check based on indie traffic levels, not big-tech benchmarks.

who it is for

Anyone running a product business at indie scale who wants to test one change at a time without getting lost in statistical jargon.

when it fires

When you ask about A/B testing, split testing, testing variants, comparing versions, or deciding between two options.

say something like
  • "Design an A/B test for my landing page headline: feature-focused vs. outcome-focused"
  • "I want to test two different email subject lines. Which one will work better?"
  • "Should I test this new ad creative against the current one?"

what you might get back

Test: Email Subject Line / Outcome vs. Feature
Hypothesis: If we shift the subject line from 'New Feature Released' to 'How to Save 3 Hours Weekly,' open rate will improve because outcome-led copy resonates with busy users.
Primary metric: open rate.
Kill criteria: kill B if open rate is below 18% after 100 opens; keep B if above 22% after 100 opens.
Realistic traffic: 500 emails per variant per week.
Test duration: 3-4 days.
If B wins, deploy to all future emails.
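The kill/keep rule in a plan like this is mechanical enough to write down. Here is a minimal sketch using the 18%/22% thresholds from the example above; the function name is illustrative, and I am interpreting "after 100 opens" as a minimum of 100 delivered emails, which is an assumption:

```python
def kill_or_keep(opens_b, sends_b, min_sends=100, kill_below=0.18, keep_above=0.22):
    """Decide the fate of variant B once enough emails have gone out.

    Returns 'wait' until the minimum sample is reached, then 'kill',
    'keep', or 'inconclusive' based on the observed open rate.
    """
    if sends_b < min_sends:
        return "wait"
    rate = opens_b / sends_b
    if rate < kill_below:
        return "kill"
    if rate > keep_above:
        return "keep"
    return "inconclusive"

print(kill_or_keep(15, 100))  # 15/100 = 15% open rate, below 18% -> kill
print(kill_or_keep(25, 100))  # 25% open rate, above 22% -> keep
print(kill_or_keep(20, 100))  # 20% sits between the thresholds -> inconclusive
```

The gap between the kill and keep thresholds is deliberate: a rate landing inside it means the test has not separated the variants yet, so you keep collecting data rather than force a call.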
how to know it worked

You got back a test plan with one variable, a clear hypothesis, specific kill criteria tied to a conversion target, and a timeline estimate that is honest about your traffic level.

is this skill working for me

never used

  • cloud runs: 0 (0 deployments)
  • local fires (Claude Code, optional): 0
  • posts published from drafts: 0
  • attributed signups + revenue: $0.00
Deploy this skill as a cloud automation to put it to work — runs on a schedule, dispatches output to Telegram or email, and logs every result here.

Deploy this skill

no laptop required

Run this skill on a schedule in the cloud. Pipe output to Telegram, email, or any webhook. Every run is logged here with full output, cost, and thumbs feedback.
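Piping a run's output to a webhook is, at bottom, one HTTP POST. A minimal stdlib-only sketch; the payload fields and function names are hypothetical, not this platform's actual API:

```python
import json
import urllib.request


def build_payload(skill_name, output, cost_usd):
    """Shape one run's result as JSON bytes. Field names are illustrative;
    adapt them to whatever your Telegram bridge or email relay expects."""
    return json.dumps({
        "skill": skill_name,
        "output": output,
        "cost_usd": cost_usd,
    }).encode("utf-8")


def post_run_result(webhook_url, skill_name, output, cost_usd):
    """POST the run result to a webhook as application/json."""
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(skill_name, output, cost_usd),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```

Splitting payload construction from the network call keeps the serialization testable without a live endpoint.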

How this performs

Cloud-automation runs, real outputs, and feedback.

No cloud runs yet for this skill.

Once you deploy it (above), every run will show up here with the actual output, exact cost, and a thumbs button so you can rate quality over time.

what this is

Claude Code skill at /Users/olanrewajukareem/.claude/skills/ab-test-designer/SKILL.md. Auto-triggered by description matching when you type in Claude Code, OR embedded into agent system prompts (LeadFlow framework skills are wired this way). Auto-synced from disk every 6h via launchd.

Use this skill when Jay needs to design an A/B test or split test for marketing assets, landing pages, email subject lines, ad creative, pricing pages, or any variant comparison. Triggers on "A/B test", "split test", "test variants", "which version is better", "test this against", or "let's test". Provides hypothesis format, sample size sanity check, variant pairing rules, and kill/keep criteria calibrated to small-N indie scale.

Local Claude Code use (optional)

Install the SKILL.md file locally so this skill is available the next time you open Claude Code on your Mac. Most users skip this and just use cloud deployments above.

Install on your machine

Save this skill to ~/.claude/skills/ab-test-designer/SKILL.md so Claude Code auto-loads it whenever its description matches your prompt.

30-day activity

never used (2026-04-03 to today)

cost model

unit: included
capabilities
{
  "source": "global",
  "description": "Use this skill when Jay needs to design an A/B test or split test for marketing assets, landing pages, email subject lines, ad creative, pricing pages, or any variant comparison. Triggers on \"A/B test\", \"split test\", \"test variants\", \"which version is better\", \"test this against\", or \"let's test\". Provides hypothesis format, sample size sanity check, variant pairing rules, and kill/keep criteria calibrated to small-N indie scale."
}
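The "sample size sanity check" this description promises is, at its core, the standard two-proportion power calculation. A rough sketch under common defaults (two-sided alpha = 0.05, 80% power, the usual normal-approximation formula); this is a generic statistics formula, not anything the skill is confirmed to use internally:

```python
from math import sqrt, ceil


def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate per-variant sample size needed to detect a shift
    from rate p1 to rate p2, using the normal-approximation formula
    for two proportions (defaults: alpha=0.05 two-sided, power=0.80)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)


# Detecting an 18% -> 22% open-rate lift needs on the order of 1,500+
# emails per variant -- which is exactly why indie-scale plans lean on
# kill criteria rather than waiting for full statistical significance.
print(sample_size_per_variant(0.18, 0.22))
```

This is the "reality check" part: at 500 emails per variant per week, a fully powered test of a 4-point lift would run for roughly a month, so small-N plans trade rigor for decisiveness via kill/keep thresholds.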
how it gets called
{
  "method": "embed_in_prompt",
  "skill_path": "/Users/olanrewajukareem/.claude/skills/ab-test-designer/SKILL.md"
}
This tool has never been called in the last 30 days.

That means either: (a) the system has not had a task that matched its strengths, (b) a more cost-effective alternative was picked by the router, or (c) the related automation is toggled OFF on /automations. The Recommender reviews idle tools weekly and may suggest killing it.