Start here
The mental model
Workmark has four things:
- Project — a named directory (with a `wm.ts`) with typed metadata.
- Trait — a named zod schema describing a slice of metadata.
- Command — a TypeScript file declaring args and a handler.
- Handler — a function that gets typed args + a context and returns a result.
Commands declare which traits they need. Projects declare which traits they have. The framework matches them, generates CLI args / VS Code forms / MCP tool schemas, and hands your handler fully-typed data. That's the whole idea.
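The matching step amounts to a filter. A minimal sketch in plain TypeScript (hypothetical shapes — the real framework derives the typed data from zod schemas):

```ts
// A project declares which traits it has; a command declares which it needs.
type Project = { name: string; has: Record<string, unknown> };
type Command = { needs: string[] };

// Eligible projects are those fulfilling every needed trait.
function eligibleProjects(projects: Project[], command: Command): Project[] {
  return projects.filter((p) => command.needs.every((t) => t in p.has));
}

const projects: Project[] = [
  { name: "api", has: { buildable: {}, docker: {} } },
  { name: "web", has: { buildable: {} } },
  { name: "docs", has: {} },
];

// A command needing only `buildable` matches api and web; needing
// `buildable` + `docker` matches api alone.
eligibleProjects(projects, { needs: ["buildable"] }).map((p) => p.name);
```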
Quick start
Install:
```sh
pnpm add -D @ldlework/workmark
```

Write a command (e.g. `.wm/commands/build.ts`):
```ts
/** Build the project */
import { cmd } from "@ldlework/workmark/define";

export default cmd({
  handler: (_, { sh }) => sh("cargo build"),
});
```

Run it:
```sh
wm build
```

That's the simplest case. Projects and traits earn their keep when you have multiple packages or shared config — see below.
Projects
A `wm.ts` file anywhere in the workspace declares a project. The framework discovers them recursively from the root.
```ts
import { defineProject } from "@ldlework/workmark/define";

export default defineProject({
  name: "api",
  tags: ["backend"],
  has: {
    buildable: true,
    docker: { composeFile: "docker-compose.yml", service: "api" },
  },
});
```

`has` is where a project fulfills traits. `tags` are free-form labels for human-readable grouping — they don't show up in commands. Use `tags` for "this is a backend service" documentation; use `has` for "this project supports the build trait."
The root can also have a `wm.ts` that exports multiple projects as an array — useful when each package lives in a flat layout:
```ts
export default [
  defineProject({ name: "api", dir: "packages/api", has: { buildable: true } }),
  defineProject({ name: "web", dir: "packages/web", has: { buildable: true } }),
];
```

Traits
A trait is a named zod schema. Put it in `.wm/traits/`; the filename doesn't matter — the `name` field is the identity.
```ts
import { z } from "zod";
import { defineTrait } from "@ldlework/workmark/define";

/** Projects with a build step. */
export const buildable = defineTrait({
  name: "buildable",
  schema: z.object({
    command: z.string().default("pnpm build"),
    timeout: z.number().default(180_000),
  }),
});
```

When a project writes `has: { buildable: { command: "cargo build" } }`, the framework parses that against the schema at load time and stores the typed result. `has: { buildable: true }` is sugar for "use the defaults."
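The sugar boils down to a normalize-then-merge step. A standalone sketch of what load-time resolution plausibly does for the `buildable` trait (hypothetical helper — the real framework parses through the zod schema itself):

```ts
type Buildable = { command: string; timeout: number };

// Defaults mirror the trait's zod .default() values.
const buildableDefaults: Buildable = { command: "pnpm build", timeout: 180_000 };

// `true` means "use the defaults"; an object overrides them field by field.
function resolveBuildable(declared: true | Partial<Buildable>): Buildable {
  const overrides = declared === true ? {} : declared;
  return { ...buildableDefaults, ...overrides };
}

resolveBuildable(true);                       // all defaults
resolveBuildable({ command: "cargo build" }); // override one field, keep the rest
```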
Traits come in two flavors:
- Data traits — the schema has fields the handler will read (like `buildable.command`, `docker.composeFile`).
- Marker traits — the schema is empty or all-defaults. Used as a filter: "projects that have the `publishable` trait."
Commands
A command lives in `.wm/commands/`. Subdirectories become colon-joined: `commands/docker/up.ts` → `docker:up`.
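The naming rule can be sketched as a one-liner (illustration only; the framework's actual discovery code is not shown here):

```ts
// Map a command file path (relative to .wm/commands/) to its CLI name:
// drop the .ts extension, join path segments with ":".
function commandName(relPath: string): string {
  return relPath.replace(/\.ts$/, "").split("/").join(":");
}

commandName("build.ts");     // → "build"
commandName("docker/up.ts"); // → "docker:up"
```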
```ts
import { cmd } from "@ldlework/workmark/define";
import { buildable } from "../traits/buildable.js";

/** Build one or more packages. */
export default cmd({
  needs: [buildable],
  handler: (_, { traits, sh }) => sh(traits.buildable.command),
});
```

`needs` lists required traits. The framework:
- Filters to projects that fulfill all needed traits.
- Exposes a `project` arg as an enum of their names (CLI / form / MCP).
- Resolves the selection and hands the handler `ctx.project` + `ctx.traits.*`, fully typed.
Select modes
How many projects a command runs against:
```ts
select: "one"          // exactly one project
select: "one-or-many"  // 1+ projects; handler runs per project (default with needs)
select: "all"          // all eligible; no user choice
for: "ghost"           // bound to a specific project; no project arg exposed
```

Args and flags
Both are `Record<string, z.ZodType>`. `args` entries are positional (in declaration order); `flags` are named `--foo`. Descriptions come from zod's `.describe()`.
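Binding raw argv tokens against such a declaration can be sketched as follows (simplified — the real framework validates each value with its zod schema; `bindArgv` is a hypothetical name):

```ts
// Bind argv tokens to declared args (positional, in declaration order)
// and flags (--name or --name=value).
function bindArgv(
  argNames: string[],
  flagNames: string[],
  argv: string[],
): Record<string, string | boolean> {
  const out: Record<string, string | boolean> = {};
  const positional: string[] = [];
  for (const tok of argv) {
    if (tok.startsWith("--")) {
      const [name, value] = tok.slice(2).split("=");
      if (flagNames.includes(name)) out[name] = value ?? true; // bare flag → true
    } else {
      positional.push(tok);
    }
  }
  // Positional tokens fill declared args in declaration order.
  argNames.forEach((name, i) => {
    if (positional[i] !== undefined) out[name] = positional[i];
  });
  return out;
}

bindArgv(["service"], ["force"], ["db", "--force"]);
// service: "db", force: true
```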
```ts
import { z } from "zod";
import { cmd } from "@ldlework/workmark/define";
import { docker } from "../traits/docker.js";

export default cmd({
  needs: [docker],
  args: {
    service: z.string().optional().describe("Service to restart"),
  },
  flags: {
    force: z.boolean().default(false),
  },
  handler: ({ service, force }, { traits, sh }) =>
    sh(`docker compose -f ${traits.docker.composeFile} restart ${service ?? ""}${force ? " --force" : ""}`),
});
```

Aggregating across projects
With `select: "all"` (or `"one-or-many"`) you can aggregate results:
```ts
import { cmd } from "@ldlework/workmark/define";
import { ok, fail } from "@ldlework/workmark/helpers";
import { buildable } from "../traits/buildable.js";

export default cmd({
  needs: [buildable],
  select: "all",
  run: {
    reduce: (results) => {
      const failed = results.filter((r) => !r.ok);
      return failed.length === 0
        ? ok(`${results.length} built`)
        : fail(`${failed.length} failed`);
    },
  },
  handler: (_, { traits, sh }) => sh(traits.buildable.command),
});
```

Handlers
A handler takes `(args, ctx)` and returns a `CallToolResult`.

`args` is your declared args + flags, fully typed from the zod schemas. `ctx` is workmark-provided — it's where `project`, `traits`, and helpers live.
```ts
ctx.project    // the resolved Project (when needs is set)
ctx.traits     // { [traitName]: typed data } (when needs is set)
ctx.workspace  // the full Workspace
ctx.sh(cmd)    // shell exec in the resolved cwd; returns CallToolResult
ctx.sh([a, b]) // sequence: fail-fast, concatenate output
ctx.exec(cmd, { cwd, timeout, env }) // explicit options
ctx.ok(data)   // wrap data as a success result
ctx.fail(err)  // wrap as an error result
ctx.invoke(name, args) // call another command
```

Working directory
`ctx.sh` resolves `cwd` automatically:
- With `needs` → each iteration's `project.dir`.
- Without `needs` → the workspace root.
- Override per-command via `cwd: "project" | "workspace" | (ctx) => absolutePath`.
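The three rules compose into one resolution function. A standalone sketch with hypothetical types (`resolveCwd` is an illustration, not the framework's internal name):

```ts
type Ctx = { project?: { dir: string }; workspaceRoot: string };
type CwdOption = "project" | "workspace" | ((ctx: Ctx) => string) | undefined;

function resolveCwd(ctx: Ctx, cwd: CwdOption, hasNeeds: boolean): string {
  if (typeof cwd === "function") return cwd(ctx);     // explicit function override
  if (cwd === "workspace") return ctx.workspaceRoot;  // explicit workspace override
  if (cwd === "project" || (cwd === undefined && hasNeeds)) {
    return ctx.project?.dir ?? ctx.workspaceRoot;     // with needs: project dir
  }
  return ctx.workspaceRoot;                           // without needs: workspace root
}
```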
Composition
Handlers can invoke other commands by name. The framework detects cycles and returns a clean error.
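Cycle detection of this kind can be sketched by tracking the chain of in-flight command names — re-entering one means a cycle (standalone illustration; the framework's actual API and error shape differ):

```ts
// Each command is a function that may invoke other commands by name.
function makeInvoke(
  commands: Record<string, (invoke: (name: string) => string) => string>,
) {
  const inFlight: string[] = [];
  const invoke = (name: string): string => {
    if (inFlight.includes(name)) {
      // Re-entering an in-flight command: report the cycle cleanly.
      return `error: cycle ${[...inFlight, name].join(" -> ")}`;
    }
    inFlight.push(name);
    try {
      return commands[name](invoke);
    } finally {
      inFlight.pop();
    }
  };
  return invoke;
}

const invoke = makeInvoke({
  check: () => "ok",
  build: (inv) => inv("check"), // build → check: fine
  a: (inv) => inv("b"),
  b: (inv) => inv("a"),         // a → b → a: cycle
});
```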
```ts
handler: async (_, { invoke, fail }) => {
  const check = await invoke("check", {});
  if (check.isError) return fail("check failed — aborting");
  return invoke("build", { project: ["api", "web"] });
}
```

Running
CLI
```sh
wm --help                      # list all commands
wm build --help                # per-command help
wm build api                   # one project
wm build api web               # two projects
wm docker:up api --service=db  # nested group, with a flag
```

VS Code dashboard
Install the `workmark-vsc` extension. The Workspace panel shows every command with an auto-generated form: enums become dropdowns, booleans become checkboxes, required fields are enforced, and the command runs in the integrated terminal.
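The schema-to-control mapping can be sketched as a simple dispatch (hypothetical field spec — the real extension reads the zod schemas directly):

```ts
type FieldSpec =
  | { kind: "enum"; values: string[] } // e.g. the project enum
  | { kind: "boolean" }
  | { kind: "string"; required: boolean };

// One declared field → one form control.
function controlFor(spec: FieldSpec): string {
  switch (spec.kind) {
    case "enum":
      return "dropdown";
    case "boolean":
      return "checkbox";
    case "string":
      return spec.required ? "text (required)" : "text";
  }
}
```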
MCP
Workmark ships a built-in MCP server. Every command is an MCP tool; input schemas are JSON Schema derived from your zod declarations. Point your client at the binary:
```json
{
  "mcpServers": {
    "workspace": {
      "command": "node",
      "args": ["./node_modules/@ldlework/workmark/dist/index.js"]
    }
  }
}
```

AI assistants see your commands with the same validated inputs you see. No separate server to run.
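The schema derivation can be sketched with a toy converter (illustration only — workmark presumably performs a full zod-to-JSON-Schema conversion; this `Field` spec is a hypothetical simplification):

```ts
type Field = {
  type: "string" | "boolean" | "number";
  description?: string;
  optional?: boolean;
  default?: unknown;
};

// Build a JSON Schema object for an MCP tool's input.
function toJsonSchema(fields: Record<string, Field>) {
  const properties: Record<string, object> = {};
  const required: string[] = [];
  for (const [name, f] of Object.entries(fields)) {
    const prop: Record<string, unknown> = { type: f.type };
    if (f.description) prop.description = f.description;
    if (f.default !== undefined) prop.default = f.default;
    properties[name] = prop;
    // A field is required unless marked optional or given a default.
    if (!f.optional && f.default === undefined) required.push(name);
  }
  return { type: "object", properties, required };
}

toJsonSchema({
  service: { type: "string", description: "Service to restart", optional: true },
  force: { type: "boolean", default: false },
});
```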
Reference
Imports
```ts
import {
  cmd,           // declare a command
  defineProject, // declare a project
  defineTrait,   // declare a trait
  projectsOf,    // enum of projects fulfilling a trait (for use in args/flags)
  traitField,    // per-project-data enums (.forProject / .fromArg)
  fromWorkspace, // custom workspace-aware schema
  fromArgs,      // custom invocation-time schema
} from "@ldlework/workmark/define";

import { ok, fail, exec, execAsync } from "@ldlework/workmark/helpers";
import type { Trait } from "@ldlework/workmark/types";
```

Project structure
```
your-workspace/
├── .wm/
│   ├── traits/
│   │   └── buildable.ts   # trait definitions
│   └── commands/
│       ├── build.ts       # wm build
│       └── docker/
│           ├── up.ts      # wm docker:up
│           └── down.ts    # wm docker:down
├── packages/
│   ├── api/
│   │   └── wm.ts          # project definition
│   └── web/
│       └── wm.ts
├── wm.ts                  # optional: root project(s)
└── package.json
```

Command options
```ts
cmd({
  needs?: Trait[],                        // required traits
  select?: "one" | "one-or-many" | "all", // default: "one-or-many" when needs present
  for?: string,                           // bind to a specific project
  args?: Record<string, z.ZodType>,       // positional
  flags?: Record<string, z.ZodType>,      // named
  cwd?: "project" | "workspace" | ((ctx) => string),
  run?: {
    order?: "parallel" | "serial",
    concurrency?: number,
    stopOnFailure?: boolean,
    reduce?: (results) => CallToolResult,
  },
  meta?: { name?: string; label?: string; description?: string },
  handler: (args, ctx) => CallToolResult | Promise<CallToolResult>,
});
```