We publish realist geopolitical analysis backed by our global intelligence datasets: a corpus of 340,000+ dispatches, structured events, and entity profiles, all queryable.
Multi-capital lens · neither Washington’s nor Beijing’s team uniform · every claim cited against primary sources or our own open dataset.
Geopolitical intelligence, as it’s published in 2026, is split into two kinds of bad. The first is the daily feed — Reuters, Bloomberg, NYT World, a hundred regional wires — which moves fast, prioritizes novelty over structure, and dissolves the signal the moment the cycle turns. You can read a thousand dispatches on Iran in a year and still not have a defensible view of where the program is. The second is the long-form shop — Carnegie, RAND, IISS, the good think tanks — which moves slowly, produces depth, but rarely shows its work at the dataset level and almost never publishes a counter-narrative against its own house view.
Neither format solves the reader’s actual problem, which is: given a specific question — will Russia run out of armor by Q3, is the rial’s black-market rate telling us something real, what is China’s posture on Taiwan this week — where do I go to find a piece that (a) is argued at length, (b) cites primary sources in more than one language, (c) tells me where the author might be wrong, and (d) lets me see the dataset behind the claim? The answer today is almost nowhere.
Below that, there’s a structural bias no one names: most English-language geopolitical analysis is written from either a Washington or a Beijing mental model, and readers in Delhi, Tehran, Brasília, Ankara, Johannesburg, Jakarta can feel it. It reads as a team uniform. That matters when the audience for geopolitical analysis is genuinely global — the market for this work is not one capital, and the current offering pretends otherwise.
The thing that was missing wasn’t more writers. It was the scaffolding underneath the writers. Every long-form claim a good analyst makes is, at bottom, a claim over a dataset they assembled in their head from years of reading. If you built that dataset explicitly — dispatches ingested daily, entities extracted and disambiguated, events timestamped and geocoded, publications scored for credibility, author profiles tracked across outlets, all of it queryable — then the writing becomes a different job. Not typing faster. Arguing from structure.
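As a shape, that scaffolding is just a handful of record types. The sketch below is a minimal illustration in Python; the field names are hypothetical stand-ins, not our production schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Dispatch:
    """One ingested article, kept in its original language."""
    dispatch_id: str
    source: str              # outlet identifier, joined to a credibility score
    language: str
    published_at: datetime
    text: str

@dataclass
class Event:
    """A structured event extracted from one or more dispatches."""
    event_id: str
    event_type: str          # e.g. "strike", "sanction", "summit"
    occurred_at: datetime
    lat: float               # geocoded location
    lon: float
    actor_ids: list[str] = field(default_factory=list)     # disambiguated entities
    dispatch_ids: list[str] = field(default_factory=list)  # provenance back to sources
```

The specific fields matter less than the property they buy: every downstream claim can point at rows like these instead of at an analyst’s memory.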
The second thing we saw was that the methodology is the product. Bellingcat proved this: you can publish middling prose and still be the most trusted source in a category if you show how you arrived at every claim. The show-your-work ethic is a competitive moat, not an aesthetic choice. Most think tanks won’t adopt it because their house view is a client-facing asset; they can’t afford to publish the counter-narrative against themselves. A publication with no donors to satisfy can.
The third thing was the multi-capital posture. Not neutrality — neutrality is cowardice dressed as balance — but non-alignment: we don’t owe a view to Washington, to Beijing, to any capital’s team. That posture is cheap to claim and expensive to execute, because it requires reading news in languages other than English, every day, and that requires ingesting every language, not “the big four.” The scaffolding makes that tractable.
GeoMemo today runs on a set of global geopolitical intelligence datasets — 252,483 dispatches, 162,176 structured events, 304 published intelligence reports, and 9,595 tracked author profiles across 3,218 publications scored for credibility. Dispatches are ingested continuously from a few hundred sources in 47 languages. Each dispatch passes through an entity-extraction pipeline that emits geocoded events; those events aggregate into country and conflict profiles; those profiles feed a report generator that produces daily country briefs, weekly op-risk assessments, sovereign and sanctions watchlists, and escalation analyses on demand.
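Here is a toy illustration of the aggregation step, assuming event rows shaped like the records sketched above; the field names and values are made up:

```python
from collections import Counter, defaultdict

# Stand-in rows for the structured events table; fields are illustrative only.
events = [
    {"country": "IR", "event_type": "sanction", "dispatch_id": "d-101"},
    {"country": "IR", "event_type": "protest",  "dispatch_id": "d-102"},
    {"country": "RU", "event_type": "strike",   "dispatch_id": "d-103"},
]

def build_country_profiles(rows):
    """Roll structured events up into per-country profiles, keeping provenance
    so a downstream report can cite the dispatches behind every count."""
    profiles = defaultdict(lambda: {"event_counts": Counter(), "dispatch_ids": set()})
    for ev in rows:
        prof = profiles[ev["country"]]
        prof["event_counts"][ev["event_type"]] += 1
        prof["dispatch_ids"].add(ev["dispatch_id"])
    return dict(profiles)

print(build_country_profiles(events)["IR"]["event_counts"])
# Counter({'sanction': 1, 'protest': 1})
```

Country and conflict profiles built this way carry their dispatch IDs with them, which is what makes row-level citation cheap at report time.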
The generator is a three-model stack. Anthropic’s Claude Sonnet 4.6 is the primary author — the model does the reasoning and writes the report. Claude Haiku is the fallback for cost-sensitive batches and for the composer’s side-car tasks. A Llama model serves as the planner — it decomposes a prompt into the structured queries the generator runs against our corpus before writing. We do not use LLMs to fabricate facts. Every claim in a report has a row-level citation back to either an external dispatch (with a publication credibility score) or a structured record in our own dataset. This is the same rule that applies to our human writers.
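In pseudocode terms, the routing looks roughly like the sketch below. The model identifiers, the `call_model` helper, and `run_queries` are placeholders for illustration, not our actual API surface:

```python
PLANNER = "llama-planner"          # decomposes the prompt into corpus queries
PRIMARY_AUTHOR = "claude-sonnet"   # does the reasoning and writes the report
FALLBACK_AUTHOR = "claude-haiku"   # cheaper model for cost-sensitive batches

def call_model(model, prompt):
    """Placeholder for the real model API call; returns a canned string so the
    routing logic can be read (and run) without network access."""
    return f"[{model}] {prompt[:60]}..."

def run_queries(plan):
    """Placeholder query runner; in practice this hits the dispatches/events
    tables and returns rows that carry their own citation identifiers."""
    return [{"row_id": "ev-0001", "summary": "example structured event",
             "citation": "dispatch d-101, publication score 0.81"}]

def generate_report(prompt, cost_sensitive=False):
    # 1. Planner turns the question into structured queries against the corpus.
    plan = call_model(PLANNER, f"Decompose into dataset queries: {prompt}")
    evidence = run_queries(plan)

    # 2. Author writes only from the retrieved rows; every claim must cite a row.
    author = FALLBACK_AUTHOR if cost_sensitive else PRIMARY_AUTHOR
    return call_model(author, f"Write the report, citing only these rows: {evidence}\nTask: {prompt}")
```

The division of labor is the point: the planner shapes retrieval, the author argues over what was retrieved, and nothing reaches the page without a row behind it.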
Scoring — the credibility of a source, the confidence of a report, the stability of a country — runs through explicit formulas, not model judgment. The publication_scores table combines recency of presence, longevity, byline volume, cross-reference patterns, and known-outlet whitelists into a single number we stand behind and publish. The seven-pillar stability score uses a similar method on country-level inputs. When our scores are wrong — and they are wrong, in specific, known ways — we publish what we know and what we’re fixing.
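As a shape, the composite is a weighted sum over those inputs. The weights, field names, and normalization below are placeholders chosen to show the form, not the published formula:

```python
# Illustrative weights only; the real publication_scores formula is not reproduced here.
WEIGHTS = {"recency": 0.30, "longevity": 0.20, "byline_volume": 0.15,
           "cross_refs": 0.25, "whitelisted": 0.10}

def publication_score(pub):
    """Composite credibility score; each input is assumed pre-normalized to [0, 1]."""
    score = (
        WEIGHTS["recency"]         * pub["recency"]
        + WEIGHTS["longevity"]     * pub["longevity"]
        + WEIGHTS["byline_volume"] * pub["byline_volume"]
        + WEIGHTS["cross_refs"]    * pub["cross_refs"]
        + WEIGHTS["whitelisted"]   * (1.0 if pub["whitelisted"] else 0.0)
    )
    return round(score, 3)

print(publication_score({"recency": 0.9, "longevity": 0.7, "byline_volume": 0.4,
                         "cross_refs": 0.6, "whitelisted": True}))
# 0.72
```

Being explicit is also what makes the failure modes legible enough to publish.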
We do not, today, solve minor-language coverage well. Our 47 languages include the ones with a significant English-reading diaspora; they do not yet include Pashto, Uzbek, Tigrinya, Māori, Quechua, or a dozen others where the news is either behind a state firewall or published in formats our ingester doesn’t parse cleanly. We know where the gap is. We’re hiring for it.
Our stability score over-rewards Gulf monarchies — the formula reads authoritarian stability as stability, full stop, and that’s wrong in ways we can see in the residuals. The fix is not a patch; it’s a rewrite, and it’s on the roadmap for Q3.
The editorial is written by one person today, with contributors incoming. That the masthead has one card is a fact, not a bug; that it should have five by 2027 is the roadmap. We are not hiding this.
Reports are generated by models, edited by humans, and shipped with versioning. We have not yet surfaced a public corrections log; when we do, it will be itemized, not aggregated, and announced in the quarterly “State of the dataset” post on /editorial. That’s the standard we want to meet.
