Jupyter Notebook is the tool of choice for analysts and developers who want to combine code, data, and commentary in a single, shareable document. This guide walks you through a complete end-to-end workflow: installing the FXMacroData Python SDK, setting up your API key, fetching macro indicator series, performing analysis, and producing publication-ready charts — all inside a single notebook. By the end you will have a working, reproducible notebook you can extend with any of the 80+ indicators FXMacroData exposes.
What You Will Build
A self-contained Jupyter Notebook that authenticates against the FXMacroData REST API, pulls policy rate and inflation time series for four G10 currencies, computes real rate spreads, and renders interactive multi-series charts — ready to share or schedule as a recurring report.
Prerequisites
- Python 3.9 or later
- Jupyter Notebook or JupyterLab — install with pip install jupyterlab
- The following packages: requests, pandas, matplotlib, seaborn
- An FXMacroData API key — sign up at /subscribe to get one
You can install all dependencies in one command:
pip install requests pandas matplotlib seaborn jupyterlab
Then launch JupyterLab from your project folder:
jupyter lab
Step 1 — Secure Your API Key
Never hard-code credentials in a notebook file — notebooks are easy to share accidentally, and a leaked API key can be misused. The safest approach is to store the key in an environment variable and read it at runtime.
On Linux/macOS, add this line to your ~/.bashrc or ~/.zshrc (then restart your
terminal or run source ~/.bashrc):
export FXMD_API_KEY="your_actual_api_key_here"
On Windows (PowerShell):
$env:FXMD_API_KEY = "your_actual_api_key_here"
Then read the key at the top of your notebook in a dedicated setup cell:
import os

API_KEY = os.environ.get("FXMD_API_KEY")
if not API_KEY:
    raise EnvironmentError(
        "FXMD_API_KEY environment variable is not set. "
        "See https://fxmacrodata.com/subscribe to get a key."
    )
print("API key loaded ✓")
🔒 Tip: python-dotenv for project-level secrets
If you prefer a .env file per project, install it with pip install python-dotenv and add
from dotenv import load_dotenv; load_dotenv() before the os.environ.get() call.
Keep the .env file in your .gitignore.
Step 2 — Write a Reusable Fetch Helper
Every FXMacroData indicator endpoint follows the same URL structure:
GET https://fxmacrodata.com/api/v1/announcements/{currency}/{indicator}?api_key=YOUR_API_KEY
The JSON response contains a data array where each object has a date field,
a val field, and (for precise timing) an announcement_datetime field.
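Before writing the helper, it can help to preview the parsing steps on a hand-written payload in that shape (the values below are illustrative, not real API output):

```python
import pandas as pd

# A hand-written payload mimicking the documented response shape
# (values are illustrative, not real API output).
payload = {
    "data": [
        {"date": "2022-03-16", "val": "0.25",
         "announcement_datetime": "2022-03-16T18:00:00Z"},
        {"date": "2022-05-04", "val": "0.75",
         "announcement_datetime": "2022-05-04T18:00:00Z"},
    ]
}

df = pd.DataFrame(payload["data"])
df["date"] = pd.to_datetime(df["date"])                # -> datetime64
df["val"] = pd.to_numeric(df["val"], errors="coerce")  # -> float64
df["announcement_datetime"] = pd.to_datetime(
    df["announcement_datetime"], errors="coerce", utc=True
)
print(df.dtypes)
```

The same three conversions appear inside the helper below; doing them eagerly means every downstream cell can rely on proper dtypes.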
The following helper converts that response directly into a pandas DataFrame:
import requests
import pandas as pd
BASE_URL = "https://fxmacrodata.com/api/v1"
from typing import Optional

def fetch_indicator(currency: str, indicator: str,
                    start: Optional[str] = None,
                    end: Optional[str] = None) -> pd.DataFrame:
    """Fetch a macro indicator time series from FXMacroData.

    Parameters
    ----------
    currency : ISO currency code, e.g. 'usd', 'eur', 'gbp'
    indicator : Indicator slug, e.g. 'policy_rate', 'inflation', 'gdp'
    start : Optional start date 'YYYY-MM-DD'
    end : Optional end date 'YYYY-MM-DD'

    Returns
    -------
    pd.DataFrame with columns: date (datetime64), val (float64),
    announcement_datetime (datetime64, nullable),
    currency (str), indicator (str)
    """
    params = {"api_key": API_KEY}
    if start:
        params["start"] = start
    if end:
        params["end"] = end
    url = f"{BASE_URL}/announcements/{currency}/{indicator}"
    resp = requests.get(url, params=params, timeout=15)
    resp.raise_for_status()
    rows = resp.json().get("data", [])
    if not rows:
        return pd.DataFrame(columns=["date", "val", "announcement_datetime",
                                     "currency", "indicator"])
    df = pd.DataFrame(rows)
    df["date"] = pd.to_datetime(df["date"])
    df["val"] = pd.to_numeric(df["val"], errors="coerce")
    if "announcement_datetime" in df.columns:
        df["announcement_datetime"] = pd.to_datetime(
            df["announcement_datetime"], errors="coerce", utc=True
        )
    df["currency"] = currency.upper()
    df["indicator"] = indicator
    return df.sort_values("date").reset_index(drop=True)
The helper raises an HTTPError on 4xx/5xx responses, converts the date column
to a proper datetime64 type, and attaches currency and indicator labels — making downstream
joins trivial.
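Because raise_for_status() surfaces rate limits and transient server errors as exceptions, scheduled runs benefit from a thin retry wrapper. This is a generic sketch, not part of the FXMacroData API — the retryable status codes (429 and 5xx) and the exponential backoff policy are assumptions you may tune:

```python
import time
import requests

def fetch_with_retry(fetch_fn, *args, retries: int = 3,
                     backoff: float = 2.0, **kwargs):
    """Call fetch_fn, retrying on transient HTTP errors (assumed: 429 and 5xx).

    fetch_fn is any callable that raises requests.HTTPError on failure,
    e.g. the fetch_indicator helper above.
    """
    for attempt in range(retries):
        try:
            return fetch_fn(*args, **kwargs)
        except requests.HTTPError as exc:
            status = exc.response.status_code if exc.response is not None else None
            retryable = status == 429 or (status is not None and status >= 500)
            if not retryable or attempt == retries - 1:
                raise  # client errors (4xx other than 429) fail immediately
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
```

Usage is a drop-in wrap: `fetch_with_retry(fetch_indicator, "usd", "policy_rate", start="2022-01-01")`.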
Step 3 — Fetch Your First Series
Let's verify the helper by fetching US Federal Reserve policy rate decisions since 2022:
usd_rate = fetch_indicator("usd", "policy_rate", start="2022-01-01")
usd_rate.head(10)
You should see a DataFrame with columns like:
date val announcement_datetime currency indicator
0 2022-03-16 0.25 2022-03-16T18:00:00+00:00 USD policy_rate
1 2022-05-04 0.75 2022-05-04T18:00:00+00:00 USD policy_rate
2 2022-06-15 1.50 2022-06-15T18:00:00+00:00 USD policy_rate
3 2022-07-27 2.25 2022-07-27T18:00:00+00:00 USD policy_rate
...
Notice the announcement_datetime column — this gives you second-level release precision
for event-driven strategies or backtests that depend on exact announcement timing. Full
documentation for this series lives at /api-data-docs/usd/policy_rate.
Step 4 — Fetch Multiple Currencies
One of the most powerful patterns is to pull the same indicator for several currencies simultaneously and stack the results. The loop below fetches policy rates for the USD, EUR, GBP, and AUD in one go:
currencies = ["usd", "eur", "gbp", "aud"]
START = "2022-01-01"
policy_rates = pd.concat(
    [fetch_indicator(ccy, "policy_rate", start=START) for ccy in currencies],
    ignore_index=True
)
print(f"Fetched {len(policy_rates)} rows across {policy_rates['currency'].nunique()} currencies")
policy_rates.groupby("currency").tail(2)
Available indicator slugs
The full indicator catalogue is at fxmacrodata.com/api-data-docs. Key series for FX analysis include policy_rate, inflation, gdp, unemployment, pmi, and trade_balance. Every series uses the same fetch pattern — swap the currency code and indicator slug.
Step 5 — Combine Multiple Indicators
To compute real interest rates you need both the policy rate and headline inflation for each currency. Fetch inflation alongside policy rates, then concatenate into a single tidy DataFrame:
inflation = pd.concat(
    [fetch_indicator(ccy, "inflation", start=START) for ccy in currencies],
    ignore_index=True
)
# Stack all observations into one long-form DataFrame
macro = pd.concat([policy_rates, inflation], ignore_index=True)
print(macro.groupby(["currency", "indicator"]).size().to_string())
Step 6 — Reshape and Forward-Fill
Central bank decisions are sparse — they happen 6–12 times a year. To align them with other monthly series, build a monthly date spine and forward-fill the last known value:
import numpy as np
# Floor each observation to month start for alignment
macro["month"] = macro["date"].dt.to_period("M").dt.to_timestamp()
# Pivot to wide: one column per currency+indicator combination
wide = (
    macro
    .groupby(["month", "currency", "indicator"])["val"]
    .last()  # latest reading within the month
    .unstack(["currency", "indicator"])
)
# Build a complete monthly date spine and forward-fill
date_spine = pd.date_range(START, pd.Timestamp.today(), freq="MS")
wide = wide.reindex(date_spine).ffill()
wide.tail(3)
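The mechanics of the spine-and-ffill step are easy to verify on a toy series (synthetic dates and values, not API data):

```python
import pandas as pd

# A sparse "decision" series: two observations, months apart (synthetic).
obs = pd.Series(
    [0.25, 0.75],
    index=pd.to_datetime(["2022-03-16", "2022-06-15"]),
)

# Floor to month start, build a complete monthly spine, forward-fill.
monthly = obs.groupby(obs.index.to_period("M").to_timestamp()).last()
spine = pd.date_range("2022-01-01", "2022-08-01", freq="MS")
filled = monthly.reindex(spine).ffill()

print(filled)
# Jan/Feb stay NaN (no rate known yet); Apr/May carry 0.25 forward;
# Jul/Aug carry 0.75 forward.
```

Note that months before the first observation remain NaN — forward-filling never invents a value that was not yet known, which keeps the frame honest for backtesting.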
Step 7 — Compute Real Rate Spreads
A real rate is the nominal policy rate minus headline inflation. A positive spread signals restrictive monetary policy; a negative spread means the central bank is still accommodative relative to consumer price growth. The EUR–USD real rate differential is widely tracked as a medium-term driver of EUR/USD:
for ccy in [c.upper() for c in currencies]:
    try:
        wide[(ccy, "real_rate")] = (
            wide[(ccy, "policy_rate")] - wide[(ccy, "inflation")]
        )
    except KeyError:
        pass  # skip if either series is missing

# EUR minus USD real rate differential
wide[("spread", "eur_usd")] = (
    wide[("EUR", "real_rate")] - wide[("USD", "real_rate")]
)
wide[[("USD", "real_rate"), ("EUR", "real_rate"), ("spread", "eur_usd")]].tail(6)
Step 8 — Visualise with Matplotlib
With the DataFrame ready, a multi-panel chart takes only a few lines. Use a step chart for policy rates — they are discrete staircase decisions, not continuous series:
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
fig, axes = plt.subplots(3, 1, figsize=(13, 11), sharex=True)
fig.patch.set_facecolor("#F8FAFC")
COLORS = {"USD": "#2563EB", "EUR": "#16A34A", "GBP": "#7C3AED", "AUD": "#F97316"}
# Panel 1: Policy rates
ax1 = axes[0]
for ccy in [c.upper() for c in currencies]:
    try:
        series = wide[(ccy, "policy_rate")].dropna()
        ax1.step(series.index, series.values, where="post",
                 label=ccy, color=COLORS[ccy], linewidth=1.8)
    except KeyError:
        pass
ax1.set_ylabel("Policy rate (%)")
ax1.set_title("Central Bank Policy Rates — G4", fontweight="bold")
ax1.legend(loc="upper left", fontsize=9)
ax1.grid(axis="y", alpha=0.3)
# Panel 2: Real rates
ax2 = axes[1]
for ccy in [c.upper() for c in currencies]:
    try:
        series = wide[(ccy, "real_rate")].dropna()
        ax2.plot(series.index, series.values,
                 label=ccy, color=COLORS[ccy], linewidth=1.8)
    except KeyError:
        pass
ax2.axhline(0, color="#94A3B8", linewidth=0.8, linestyle="--")
ax2.set_ylabel("Real rate (%)")
ax2.set_title("Real Interest Rates (Policy Rate − Inflation)", fontweight="bold")
ax2.legend(loc="upper left", fontsize=9)
ax2.grid(axis="y", alpha=0.3)
# Panel 3: EUR–USD spread
ax3 = axes[2]
spread = wide[("spread", "eur_usd")].dropna()
ax3.fill_between(spread.index, spread.values, 0,
                 where=spread.values >= 0,
                 color="#16A34A", alpha=0.30, label="EUR favoured")
ax3.fill_between(spread.index, spread.values, 0,
                 where=spread.values < 0,
                 color="#2563EB", alpha=0.30, label="USD favoured")
ax3.plot(spread.index, spread.values, color="#374151", linewidth=1.6)
ax3.axhline(0, color="#94A3B8", linewidth=0.8, linestyle="--")
ax3.set_ylabel("Spread (pp)")
ax3.set_title("EUR−USD Real Rate Differential", fontweight="bold")
ax3.legend(loc="upper left", fontsize=9)
ax3.grid(axis="y", alpha=0.3)
ax3.xaxis.set_major_formatter(mdates.DateFormatter("%b %Y"))
ax3.xaxis.set_major_locator(mdates.MonthLocator(interval=4))
plt.setp(ax3.xaxis.get_majorticklabels(), rotation=30, ha="right")
fig.tight_layout(h_pad=1.6)
plt.savefig("macro_analysis.png", dpi=150, bbox_inches="tight")
plt.show()
Tip: Use step charts for policy rate series
ax.step(..., where="post") represents central bank decisions correctly as discrete
staircase moves rather than a smooth interpolated line. For all other macro series (inflation, GDP, PMI)
a standard ax.plot() is appropriate.
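The difference is easy to see side by side. This minimal sketch (synthetic dates and rates) draws the same three observations both ways:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import pandas as pd

dates = pd.to_datetime(["2022-03-16", "2022-06-15", "2022-09-21"])
rates = [0.25, 1.50, 3.00]

fig, ax = plt.subplots(figsize=(7, 3))
# where="post": each rate holds flat until the NEXT decision date.
(step_line,) = ax.step(dates, rates, where="post", label="step (correct)")
# A plain plot interpolates between decisions, implying rates that never existed.
(plain_line,) = ax.plot(dates, rates, linestyle="--", label="plot (interpolated)")
ax.legend()
fig.savefig("step_vs_plot.png", dpi=100)
```

Between June and September the step line holds 1.50 while the plain line sweeps through values the central bank never set.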
Step 9 — Check the Release Calendar
Before scheduling recurring fetches, it is useful to know exactly when the next data releases are due. The release calendar endpoint returns upcoming announcement dates for any currency, so you can time your notebook refreshes to run minutes after each release rather than polling on a fixed schedule:
def fetch_calendar(currency: str) -> pd.DataFrame:
    """Fetch upcoming release dates for a currency from the FXMacroData calendar."""
    url = f"{BASE_URL}/calendar/{currency}"
    resp = requests.get(url, params={"api_key": API_KEY}, timeout=15)
    resp.raise_for_status()
    events = resp.json().get("data", [])
    df = pd.DataFrame(events)
    if df.empty:
        return df
    df["release_date"] = pd.to_datetime(df["release_date"], errors="coerce")
    return df.sort_values("release_date").reset_index(drop=True)
# Upcoming USD releases
usd_calendar = fetch_calendar("usd")
upcoming = usd_calendar[usd_calendar["release_date"] >= pd.Timestamp.today()]
print(upcoming[["release_date", "indicator"]].head(10).to_string(index=False))
Step 10 — Save and Schedule
For a repeatable analysis you can export the wide DataFrame to CSV and run the notebook on a schedule
via papermill, which executes notebooks from the command line with parameter injection:
pip install papermill
# At the end of your notebook: persist the dataset
wide.to_csv("macro_data.csv")
print(f"Saved {wide.shape[0]} rows × {wide.shape[1]} columns to macro_data.csv")
# Execute the notebook from the command line (e.g. from cron or a CI job)
papermill macro_analysis.ipynb macro_analysis_output.ipynb \
-p START "2022-01-01"
Pair this with the release calendar so your scheduled job only runs when new indicator data is actually expected — keeping API usage efficient on low-activity days.
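One way to implement that guard is a small check comparing the calendar against the last successful run. The column name mirrors the fetch_calendar helper above, but how you persist last_run (a file, a database row, a CI artifact) is an assumption you would adapt to your scheduler:

```python
import pandas as pd

def new_data_expected(calendar: pd.DataFrame,
                      last_run: pd.Timestamp,
                      now: pd.Timestamp) -> bool:
    """True if any release_date falls in the window (last_run, now].

    calendar is expected to have a 'release_date' column, as returned
    by the fetch_calendar helper above.
    """
    if calendar.empty:
        return False
    released = calendar["release_date"].between(last_run, now, inclusive="right")
    return bool(released.any())

# Synthetic calendar for illustration (not real API data)
cal = pd.DataFrame({
    "release_date": pd.to_datetime(["2024-06-12", "2024-07-11"]),
    "indicator": ["inflation", "inflation"],
})
print(new_data_expected(cal,
                        last_run=pd.Timestamp("2024-06-01"),
                        now=pd.Timestamp("2024-06-30")))  # → True
```

Call this at the top of the scheduled job and exit early when it returns False — the notebook only re-executes when a release has actually landed.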
Complete notebook in one place
All the cells above form a single self-contained notebook. Drop them in order, set
FXMD_API_KEY in your environment, and run all cells — you will have a fully
interactive macro analysis in under two minutes.
Summary
You have built a complete Jupyter Notebook workflow that:
- Authenticates with FXMacroData securely via an environment variable
- Implements a reusable fetch_indicator() helper that converts API responses to pandas DataFrames
- Pulls and stacks policy rate and inflation series for multiple G10 currencies
- Reshapes sparse central bank data onto a regular monthly date spine with forward-filling
- Computes real rate spreads and cross-currency differentials
- Produces a three-panel Matplotlib chart ready for reports or sharing
- Queries the release calendar to time recurring refreshes intelligently
- Exports results to CSV and schedules notebook execution with papermill
As a next step, extend the notebook with additional FXMacroData series such as unemployment, trade balance, and PMI to build a fuller macro scorecard. The same fetch_indicator() helper and reshape workflow apply unchanged — only the indicator slug needs updating.