
jq Query Builder

Helps you write jq filters by working from a real JSON sample. The agent inspects the structure, drafts a candidate filter, runs it with `jq` (pre-installed in the container), and iterates until the output matches what you want. Also explains existing filters in plain English.


Write and validate jq filters. The agent uses the jq binary that's pre-installed in the container.

Workflow

  1. Get a JSON sample from the user. If they paste 1MB of JSON, take just the first record:

```bash

echo "$INPUT" | jq '.[0] // .'

```

  2. Get the desired output shape from the user. Examples beat descriptions.
  3. Draft a filter, run it, show the output:

```bash

echo "$INPUT" | jq '<FILTER>'

```

  4. Iterate until the output matches. Show each diff — never silently change the filter.
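
The workflow in miniature, with a made-up sample (field names invented for illustration):

```bash
INPUT='{"users":[{"name":"Ada","active":true},{"name":"Bob","active":false}]}'

# Draft 1: too broad — returns every name, not just active users.
echo "$INPUT" | jq '.users[].name'

# Draft 2: add the missing select stage and show the user what changed.
echo "$INPUT" | jq '.users[] | select(.active) | .name'
# → "Ada"
```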

Common patterns

Pick a field

.users[].email
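
With a hypothetical sample (`-r` strips the surrounding quotes):

```bash
echo '{"users":[{"email":"a@x.io"},{"email":"b@x.io"}]}' \
  | jq -r '.users[].email'
# → a@x.io
# → b@x.io
```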

Filter

.users[] | select(.active == true)
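
For example, on made-up data (`-c` keeps each result on one line):

```bash
echo '{"users":[{"name":"Ada","active":true},{"name":"Bob","active":false}]}' \
  | jq -c '.users[] | select(.active == true)'
# → {"name":"Ada","active":true}
```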

Map / transform

.users | map({id, full_name: (.first + " " + .last)})
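
For example (`{id}` is shorthand for `{id: .id}`; sample fields invented):

```bash
echo '{"users":[{"id":1,"first":"Ada","last":"Lovelace"}]}' \
  | jq -c '.users | map({id, full_name: (.first + " " + .last)})'
# → [{"id":1,"full_name":"Ada Lovelace"}]
```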

Group by

.events | group_by(.user_id) | map({user: .[0].user_id, count: length})
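
Note that `group_by` sorts by the grouping key before bucketing. A quick run on made-up events:

```bash
echo '{"events":[{"user_id":1},{"user_id":2},{"user_id":1}]}' \
  | jq -c '.events | group_by(.user_id) | map({user: .[0].user_id, count: length})'
# → [{"user":1,"count":2},{"user":2,"count":1}]
```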

Top N by field

.products | sort_by(-.price) | .[0:5]
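
For instance, top 2 by price on invented data (negating the key gives a descending sort; for non-numeric keys use `sort_by(.price) | reverse` instead):

```bash
echo '{"products":[{"name":"a","price":5},{"name":"b","price":9},{"name":"c","price":7}]}' \
  | jq -c '.products | sort_by(-.price) | .[0:2]'
# → [{"name":"b","price":9},{"name":"c","price":7}]
```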

Flatten nested arrays

.[] | .items[]
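
With a made-up nested array, each inner element streams out individually:

```bash
echo '[{"items":[1,2]},{"items":[3]}]' | jq '.[] | .items[]'
# → 1
# → 2
# → 3
```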

Convert to CSV (run with -r; keys_unsorted keeps the header in the same order as the values)

(.[0] | keys_unsorted), (.[] | [.[]]) | @csv
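
For example, on invented records (`keys_unsorted` rather than `keys`, so the header row matches the value order that `[.[]]` emits):

```bash
echo '[{"id":1,"name":"Ada"},{"id":2,"name":"Bob"}]' \
  | jq -r '(.[0] | keys_unsorted), (.[] | [.[]]) | @csv'
# → "id","name"
# → 1,"Ada"
# → 2,"Bob"
```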

Date/time math

.events | map(.timestamp |= fromdateiso8601)
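
`fromdateiso8601` converts an ISO 8601 string to Unix seconds, after which plain arithmetic works. A sample run on made-up data:

```bash
echo '{"events":[{"timestamp":"1970-01-01T00:01:00Z"}]}' \
  | jq -c '.events | map(.timestamp |= fromdateiso8601)'
# → [{"timestamp":60}]
```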

Recursive search for any "id" key

[.. | objects | .id? | values]
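
`..` walks every value depth-first, `objects` keeps only objects, `.id?` probes without erroring, and `values` drops the nulls from objects that lack the key. On a made-up nested document:

```bash
echo '{"id":1,"child":{"id":2,"meta":{"note":"x"}}}' \
  | jq -c '[.. | objects | .id? | values]'
# → [1,2]
```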

Explain a filter

If the user pastes a complex filter, break it into pipe stages and explain each stage with a 1-sentence narration plus what the data looks like after it.

Output formatting flags

  • `-r` raw output (strings without quotes)
  • `-c` compact output (one object per line)
  • `-s` slurp input into an array
  • `-n` null input (for synthesizing JSON)
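
For example, `-n` with `--arg` synthesizes JSON from nothing but shell values (variable name invented here):

```bash
jq -cn --arg name "Ada" '{user: $name, active: true}'
# → {"user":"Ada","active":true}
```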

Anti-patterns

  • Don't guess at the schema. Always run `jq -r 'paths(scalars) | join(".")' input.json | sort -u` first to see what fields exist.
  • Don't produce a 5-pipe one-liner without testing. Build it stage by stage.
  • Don't use tostring when the value is already a string — it's a no-op that confuses readers.
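
The schema-discovery command from the first point, on a made-up file piped via stdin (array indices appear as numeric path segments):

```bash
echo '{"users":[{"email":"a@x.io","active":true}]}' \
  | jq -r 'paths(scalars) | join(".")' | sort -u
# → users.0.active
# → users.0.email
```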

When jq isn't the right tool

  • Heavy aggregation across files: use DuckDB.
  • Schema-aware transforms: write Python with json + dataclasses.
  • > 1GB JSON: stream with jq --stream or use a real parser.