How to Export macOS Network Usage Data
How to export and analyze macOS network usage data: built-in commands, common formats, and useful one-liners.
- Developer tools
- macOS
- Bandwidth
- Tutorial
You finally caught the spike. For three days you've been wondering why your home internet usage jumped at random hours, and you finally have a tool open at the right moment to see it. Now what? You want this data out of the live UI and into something you can analyze, share with a colleague, or correlate against a server log. The ability to export macOS network usage matters more than people realize, and macOS has decent options once you know where to look.
This post walks through the practical paths: nettop -L for short captures, the unified log for system-side networking events, sampling scripts for CSV, and where ova stores its data on disk.
Why export macOS network usage at all
A few real cases:
- Anomaly investigation — you saw a 2 GB upload at 3 AM, you want to graph the source process across the night.
- Capacity planning — you're on a metered or capped connection (Starlink Roam, hotel Wi-Fi, mobile tether) and want to know what's safe to leave on.
- Performance debugging — your server team is asking when the slow requests started, and you want to overlay client-side network usage on their server logs.
- Bandwidth budgeting per project — you bill clients hourly and want a sanity check on what their build pipelines uploaded yesterday.
- Curiosity — you simply want to look at your own data.
For all of these, "open Activity Monitor and stare" doesn't cut it. You need data on disk, in a format you can manipulate.
Option 1: nettop log mode
The simplest export path is built in. nettop -L <count> runs in log mode for <count> samples, dumping each sample as a line of text, and exits. Combined with -J to choose columns and -s to set interval, you get clean output you can pipe to a file.
```shell
nettop -L 600 -s 1 -P -J bytes_in,bytes_out,interface,state \
    > ~/Desktop/nettop-10min.txt
```

That's 600 samples at one-second intervals — ten minutes of capture. Each sample lists every active process with the columns you asked for.
The output isn't quite CSV — it has a header per sample, blank lines between samples, and process names with spaces. But it's parseable. A short awk or Python script will turn it into a clean table.
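As a sketch of what that parser can look like: the sample layout below is an assumption (nettop's exact columns depend on the -J flags you passed and the macOS version), so adjust the splitting logic to match what you actually captured.

```python
import re

# Hypothetical nettop -L layout (real output may differ): a header line
# per sample, one row per process, blank lines between samples, and the
# process name suffixed with ".<pid>".
SAMPLE = """\
time,,bytes_in,bytes_out,
17:00:01.000000,Google Chrome.512,14200,3100,
17:00:01.000000,cloudd.611,0,88000,

17:00:02.000000,Google Chrome.512,14900,3150,
"""

def parse_nettop_log(text):
    """Return (time, process, pid, bytes_in, bytes_out) rows,
    skipping per-sample headers and blank lines."""
    rows = []
    for line in text.splitlines():
        line = line.strip().rstrip(",")
        if not line or line.startswith("time,"):
            continue
        # Split byte counters from the right so a process name
        # containing spaces survives intact.
        ts, rest = line.split(",", 1)
        name_pid, bytes_in, bytes_out = rest.rsplit(",", 2)
        m = re.match(r"(.+)\.(\d+)$", name_pid)
        name, pid = (m.group(1), int(m.group(2))) if m else (name_pid, None)
        rows.append((ts, name, pid, int(bytes_in), int(bytes_out)))
    return rows
```

From here, feeding the rows to csv.writer or pandas is one more line.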
Limits of nettop logging
- It only captures while running. If you wanted to know what happened yesterday, you're out of luck.
- It reports cumulative-since-process-start by default; you compute deltas yourself.
- Helper processes show as separate rows (no folding).
- The sample format isn't first-class CSV; expect to write a parser.
For ad-hoc captures of a specific time window — "I'm about to push to GitHub, let me capture five minutes around it" — nettop -L is great. For ongoing data, you want something else.
Option 2: the unified log
macOS's unified log captures structured events from system frameworks, including networking. CFNetwork (the URLSession layer) and Network.framework both emit log lines for connection lifecycle, TLS handshake, retries, and failures. You can extract those after the fact.
To see what's there now, query for the last hour:
```shell
log show --last 1h --predicate 'subsystem == "com.apple.CFNetwork"' \
    --info --debug
```

To export to a file:

```shell
log show --last 24h --predicate 'subsystem == "com.apple.CFNetwork"' \
    --style compact > ~/Desktop/cfnetwork-day.log
```

Useful predicates:

- subsystem == "com.apple.CFNetwork" — URLSession requests, TLS, redirects
- subsystem == "com.apple.network" — Network.framework path changes, connection state
- process == "YourApp" — restrict to one app
- eventMessage CONTAINS "443" — text-search inside log messages
The unified log keeps roughly the last several days of system events, depending on volume. It's not designed for byte accounting — it's designed for event auditing. But if your question is "did the connection to api.example.com fail at 14:23," the unified log knows.
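If you want those exported lines in a structured form, a rough parser helps. The pattern below assumes one compact-style line shape; the exact layout varies across macOS versions, so treat it as a starting point rather than a spec.

```python
import re

# Assumed shape of a `log show --style compact` line:
#   <date> <time> <type> <process>[<pid>:<tid>] [<subsystem>:<category>] <message>
LINE_RE = re.compile(
    r"^(?P<date>\S+)\s+(?P<time>\S+)\s+\S+\s+"
    r"(?P<process>[^\[]+)\[(?P<pid>\d+):[^\]]+\]\s+"
    r"\[(?P<subsystem>[^:\]]+)[^\]]*\]\s+(?P<message>.*)$"
)

def parse_compact_line(line):
    """Return a dict of fields, or None if the line doesn't match."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

sample = ("2024-05-01 14:23:07.812 Df cloudd[611:0x1f2] "
          "[com.apple.CFNetwork:Connection] Connection 12: failed to connect")
```

Filter the parsed dicts for words like "failed" or "timeout" in the message field and you have a crude connection-failure timeline.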
log show vs log stream
log show reads the historical log. log stream watches new events live. Use log stream when you want to leave a terminal running and watch events as they happen:
```shell
log stream --predicate 'subsystem == "com.apple.network"' --level debug
```

Pipe to a file with >> to append a rolling capture.
Option 3: a custom sampling script
If you want CSV output of per-process bandwidth — the actual goal for most people — you can build it in 20 lines of shell. The idea: poll every N seconds, diff cumulative byte counts, emit CSV.
```shell
#!/usr/bin/env bash
# Naive per-process bandwidth sampler.
# Note: associative arrays need bash 4+; macOS's stock /bin/bash is 3.2,
# so run this under a newer bash (e.g. from Homebrew).
INTERVAL=5
echo "timestamp,pid,process,delta_in,delta_out"
declare -A prev_in prev_out
while true; do
  ts=$(date +%s)
  while IFS=, read -r pid name in_bytes out_bytes; do
    pi=${prev_in[$pid]:-0}
    po=${prev_out[$pid]:-0}
    di=$((in_bytes - pi))
    do_=$((out_bytes - po))
    if (( di > 0 || do_ > 0 )); then
      echo "$ts,$pid,$name,$di,$do_"
    fi
    prev_in[$pid]=$in_bytes
    prev_out[$pid]=$out_bytes
  done < <(nettop -P -L 1 -J pid,interface,bytes_in,bytes_out 2>/dev/null \
    | awk 'NR>2 {print $2","$1","$3","$4}')
  sleep $INTERVAL
done
```

This is a sketch — production code would handle process exits, helper-process folding, log rotation, and the fact that nettop's output format is annoying to parse — but it shows the shape. You sample, you diff, you emit CSV. Run it under caffeinate or as a launchd agent if you want it to survive sleep.
Option 4: ova's local database
A purpose-built bandwidth monitor saves you from writing the script above. ova keeps a SQLite database in:
```
~/Library/Application Support/ova/
```

The contents are the same time-series data you see in the UI: per-app bytes in and out, sampled at roughly 1 Hz, with helper processes folded under their parent app. It's local, no cloud sync, no telemetry. You own the file.
Because it's SQLite, anything that can read SQLite can read it: the sqlite3 CLI, Python's sqlite3 module, DB Browser for SQLite, or a quick query in DuckDB. You can:
- Export the entire history to CSV with one command
- Run aggregations ("which app used the most bandwidth this week, by hour")
- Join against your own logs (build pipelines, server access logs, calendar events)
- Back it up to your normal backup target
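Anything in that list can be scripted from Python's built-in sqlite3 module. The sketch below assumes the samples table layout used in this post's queries; it builds an in-memory stand-in so it runs anywhere, and you would point connect() at the real database file instead.

```python
import sqlite3

# In-memory stand-in for the real database. For actual data, use:
#   sqlite3.connect("~/Library/Application Support/ova/<file>.sqlite")
# (expanded with os.path.expanduser). Schema is the one assumed by the
# queries in this post: unix-seconds timestamp, app, bytes_in, bytes_out.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE samples (timestamp INTEGER, app TEXT, "
           "bytes_in INTEGER, bytes_out INTEGER)")
db.executemany("INSERT INTO samples VALUES (?,?,?,?)", [
    (1714561200, "Safari", 5_000_000, 200_000),
    (1714561201, "cloudd", 10_000, 90_000_000),
    (1714561202, "Safari", 2_000_000, 100_000),
])

# Total traffic per app, heaviest first.
top = db.execute(
    "SELECT app, SUM(bytes_in + bytes_out) AS total "
    "FROM samples GROUP BY app ORDER BY total DESC"
).fetchall()
```

From here, csv.writer or pandas.read_sql gets you to a file or a DataFrame in one step.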
A typical export to CSV using sqlite3:
```shell
sqlite3 -header -csv \
  ~/Library/Application\ Support/ova/<file>.sqlite \
  "SELECT timestamp, app, bytes_in, bytes_out FROM samples \
   WHERE timestamp > strftime('%s','now','-7 days') \
   ORDER BY timestamp" > ~/Desktop/last-week.csv
```

Practical recipes, joins, and privacy
Once you have a CSV — from any source — a few queries pay for the effort.
Top apps by week
```sql
SELECT app,
       SUM(bytes_in)  / (1024*1024) AS mb_down,
       SUM(bytes_out) / (1024*1024) AS mb_up
FROM samples
WHERE timestamp > strftime('%s','now','-7 days')
GROUP BY app
ORDER BY (mb_down + mb_up) DESC
LIMIT 20;
```

Almost always tells a clear story. Browser at the top, sync apps in the middle, system services at the bottom.
Hourly heatmap
```sql
SELECT strftime('%H', timestamp, 'unixepoch', 'localtime') AS hour,
       SUM(bytes_in + bytes_out) / (1024*1024) AS mb
FROM samples
WHERE timestamp > strftime('%s','now','-30 days')
GROUP BY hour
ORDER BY hour;
```

Shows you when your traffic peaks. For most people: 9 AM, 1 PM, and 4 PM, with a long tail of cloud sync overnight.
Anomaly detection
```sql
SELECT app,
       date(timestamp, 'unixepoch', 'localtime') AS day,
       SUM(bytes_out) / (1024*1024) AS mb_up
FROM samples
GROUP BY app, day
HAVING mb_up > 500
ORDER BY mb_up DESC;
```

Flags any app-day with more than 500 MB of upload. A handful is normal (Time Machine to a network target, photo sync, large file transfers). A whole list of unfamiliar apps is worth investigating.
Combining sources
The strongest workflow uses multiple sources at once.
- ova for byte accounting — what was used, by which app, when
- Unified log for events — when did connections start, fail, retry
- tcpdump for the wire — when something is genuinely mysterious
You can join them on timestamp. If ova shows a 200 MB upload by cloudd at 3:14 AM, the unified log shows what cloudd was syncing, and (if you had a packet capture running) tcpdump shows the destination IP space confirming it was iCloud.
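A timestamp join like that is a few lines of Python. The schemas below are illustrative stand-ins, not ova's or the unified log's real shapes; the only real requirement is that both sources carry a unix timestamp and are sorted by it.

```python
from bisect import bisect_left

# Illustrative inputs: a bandwidth spike from the byte-accounting side,
# and parsed log events (timestamp, message), both sorted by time.
samples = [(1714532040, "cloudd", 200_000_000)]   # the 3:14 AM upload
events = [
    (1714531900, "cloudd: beginning iCloud photo sync"),
    (1714532050, "cloudd: uploaded batch 7/40"),
    (1714539000, "backupd: Time Machine completed"),
]

def events_near(ts, window=300):
    """Return log events within +/- `window` seconds of a timestamp,
    using binary search since `events` is sorted."""
    times = [t for t, _ in events]
    lo = bisect_left(times, ts - window)
    hi = bisect_left(times, ts + window)
    return events[lo:hi]

near = events_near(samples[0][0])
```

For larger datasets the same join is a one-liner in SQL or pandas.merge_asof; the window size is the knob that trades precision for recall.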
A note on privacy
Anything that exports network data can leak information you didn't intend to share. Hostnames, even paths in the unified log, can reveal what services you use. Before sending logs to a colleague or pasting into a chat:
- Strip IP addresses if they identify your home network
- Redact hostnames that reveal personal services
- Remove process names that reveal apps you'd rather not advertise
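A blunt first pass at that redaction can be scripted. The patterns below are deliberately simple and will miss things (IPv6, unusual TLDs), so review the output by hand before sharing.

```python
import re

# Crude redaction patterns: IPv4 addresses and hostnames under a few
# common TLDs. Intentionally over-broad rather than clever.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
HOST = re.compile(r"\b[\w-]+(?:\.[\w-]+)*\.(?:com|net|org|io)\b")

def redact(text):
    """Replace IPs and hostnames with placeholders before sharing."""
    text = IPV4.sub("[ip]", text)    # IPs first, so they can't be
    text = HOST.sub("[host]", text)  # half-matched as hostnames
    return text

out = redact("Connection 12: 192.168.1.44 -> photos.icloud.com failed")
```

Running the whole exported file through this before pasting it anywhere costs seconds and saves awkward conversations.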
This is also why local-only tools matter. ova doesn't ship your data anywhere — it stays on your disk. What you export is your decision.
Wrapping up
To export macOS network usage data, you have several reasonable paths: nettop -L for short captures, the unified log for event auditing, custom sampling scripts for full control, and a local SQLite-backed monitor like ova for ongoing per-app accounting. Pick based on whether you need events or bytes, and how long a window you care about.
For a low-effort path that captures continuously and lets you query later, install ova — about 3 MB, macOS 14+, Apple Silicon and Intel, samples at roughly 1 Hz. The data lives in your ~/Library/Application Support/ova/ directory in SQLite, so any analysis tool you already know can read it.