# jq Query Builder
Write and validate jq filters. The agent uses the jq binary that's pre-installed in the container.
## Workflow
- Get a JSON sample from the user. If they paste 1MB of JSON, take just the first record:

  ```bash
  echo "$INPUT" | jq '.[0] // .'
  ```
- Get the desired output shape from the user. Examples beat descriptions.
- Draft a filter, run it, show the output:

  ```bash
  echo "$INPUT" | jq '<FILTER>'
  ```
- Iterate until the output matches. Show each diff — never silently change the filter.
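Run end to end, the loop looks like this — a sketch with made-up sample data standing in for whatever the user pastes:

```bash
# Hypothetical sample standing in for user-supplied JSON.
INPUT='[{"name":"Ada","active":true},{"name":"Bob","active":false}]'

# Step 1: look at the first record to learn the shape.
echo "$INPUT" | jq '.[0] // .'

# Step 2: build the filter one stage at a time, showing output each time.
echo "$INPUT" | jq '.[] | select(.active)'           # keep active users
echo "$INPUT" | jq '[.[] | select(.active) | .name]' # then project the field
```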
## Common patterns
**Pick a field**
`.users[].email`
**Filter**
`.users[] | select(.active == true)`
**Map / transform**
`.users | map({id, full_name: (.first + " " + .last)})`
**Group by**
`.events | group_by(.user_id) | map({user: .[0].user_id, count: length})`
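For instance, against a hypothetical events array — note that `group_by` sorts by the grouping key, so the output comes back ordered by `user_id`:

```bash
echo '{"events":[{"user_id":1},{"user_id":2},{"user_id":1}]}' |
  jq -c '.events | group_by(.user_id) | map({user: .[0].user_id, count: length})'
# → [{"user":1,"count":2},{"user":2,"count":1}]
```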
**Top N by field**
`.products | sort_by(-.price) | .[0:5]`
**Flatten nested arrays**
`.[] | .items[]`
**Convert to CSV**
`(.[0] | keys_unsorted), (.[] | [.[]]) | @csv`
Use `keys_unsorted` (not `keys`, which sorts alphabetically) so the header order matches the value order, and run with `-r`.
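A worked example on hypothetical rows — `-r` keeps the CSV from being wrapped in JSON string quotes, and `keys_unsorted` keeps the header aligned with the values:

```bash
echo '[{"id":1,"name":"Ada"},{"id":2,"name":"Bob"}]' |
  jq -r '(.[0] | keys_unsorted), (.[] | [.[]]) | @csv'
# → "id","name"
#   1,"Ada"
#   2,"Bob"
```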
**Date/time math**
`.events | map(.timestamp |= fromdateiso8601)`
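`fromdateiso8601` converts an ISO 8601 string to epoch seconds, after which plain arithmetic works (e.g. `.timestamp + 3600`). A sketch with a made-up event:

```bash
echo '{"events":[{"timestamp":"1970-01-01T00:01:00Z"}]}' |
  jq -c '.events | map(.timestamp |= fromdateiso8601)'
# → [{"timestamp":60}]
```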
**Recursive search for any `id` key**
`[.. | objects | .id? | values]`
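`..` walks every value, `objects` keeps only the objects, `.id?` reads the key without erroring, and `values` drops the nulls. On a hypothetical nested document:

```bash
echo '{"id":1,"child":{"id":2,"items":[{"id":3}]}}' |
  jq -c '[.. | objects | .id? | values]'
# → [1,2,3]
```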
## Explain a filter
If the user pastes a complex filter, break it into pipe stages and explain each stage with a 1-sentence narration plus what the data looks like after it.
## Output formatting flags
- `-r`: raw output (strings without quotes)
- `-c`: compact output (one object per line)
- `-s`: slurp input into an array
- `-n`: null input (for synthesizing JSON)
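Two of these in action on made-up inputs:

```bash
# -r strips JSON string quoting from the result.
echo '{"name":"Ada"}' | jq -r '.name'
# → Ada

# -n needs no input at all; useful for synthesizing JSON from scratch.
jq -n -c '{ok: true, retries: 3}'
# → {"ok":true,"retries":3}
```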
## Anti-patterns
- Don't guess at the schema. Always run `jq 'paths(scalars) | join(".")' input.json | sort -u` first to see what fields exist.
- Don't produce a 5-pipe one-liner without testing. Build it stage by stage.
- Don't use `tostring` when the value is already a string — it's a no-op that confuses readers.
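The schema check from the first bullet, run on a hypothetical sample piped via stdin — array indices show up as numeric path segments:

```bash
echo '{"user":{"name":"Ada","tags":["jq"]}}' |
  jq -r 'paths(scalars) | join(".")' | sort -u
# → user.name
#   user.tags.0
```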
## When jq isn't the right tool
- Heavy aggregation across files: use DuckDB.
- Schema-aware transforms: write Python with `json` + dataclasses.
- JSON over 1GB: stream with `jq --stream` or use a real parser.
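`jq --stream` turns the document into `[path, value]` events so the whole input never has to fit in memory. A sketch of pulling a single field out of a large document (sample inlined here for brevity):

```bash
# Each leaf arrives as a two-element [path, value] event;
# select the path we want and emit its value.
echo '{"user":{"name":"Ada","age":36}}' |
  jq -r --stream 'select(length == 2 and .[0] == ["user","name"]) | .[1]'
# → Ada
```

The `length == 2` guard skips the one-element closing events that `--stream` also emits at the end of each object and array.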