MOSAICOS
A place for memories

CXPORTER: A SAFE HOLIDAY VAULT MIGRATION SIDE-PROJECT (GO 1.26 + CXP/CXF)

Published: 2026-01-01

During this holiday break I finally got the kind of uninterrupted time that is hard to find the rest of the year: a few quiet mornings, a hot drink, and a small “scratch-your-own-itch” project. I used that window to tackle two things at once.

First: I wanted to migrate my credential data from a couple of legacy providers to Bitwarden. Not because those tools were terrible, but because I’ve been gradually consolidating on Bitwarden and wanted to do it carefully instead of copy/pasting my way into a half-broken vault. Second: I had been looking for a practical excuse to play with Go's experimental runtime/secret feature (available via GOEXPERIMENT=runtimesecret), and "a tool that has to touch secrets by definition" felt like the perfect (and slightly terrifying) use case.

That’s how cxporter was born. It’s a small Go CLI that converts credential exports with two priorities in mind: correctness (no silent data loss) and safety (minimize the time and surface area where plaintext secrets exist). It also ended up being a great excuse to go deeper on the FIDO Alliance’s CXP/CXF work, which I think is one of the more promising attempts to make password-manager portability less painful.

Why migrations are awkward (and a bit risky)

Credential exports are a “worst possible” artifact from a security perspective. They’re high-value, easy to mishandle, and frequently represented as big plaintext blobs that get copied around: into temp folders, into shell history, into cloud sync directories, or into a forgotten downloads folder. Even when you do everything right, the export format itself tends to be loosely defined.

On the correctness side, every provider has its own model: different field names, different ideas of what a “URI” means, different custom-field semantics, and slightly incompatible representations for things like TOTP seeds or attachments. The scary part is that many migration paths fail silently: the import succeeds, but your data is subtly wrong. You don’t notice until the first time you need it.
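
To make the TOTP point concrete: providers disagree on how seeds are written down (lowercase vs uppercase, grouped with spaces, sometimes wrapped in an otpauth:// URI). A minimal sketch of the kind of normalization a converter needs; the helper name and exact rules here are mine, not cxporter's API:

```go
package main

import (
	"encoding/base32"
	"fmt"
	"strings"
)

// normalizeTOTPSecret canonicalizes a TOTP seed the way different exports
// tend to vary: strip grouping spaces, uppercase, then verify the result is
// valid base32 (RFC 4648, typically unpadded). Validating here turns a
// "seed silently mangled" bug into a loud error at conversion time.
func normalizeTOTPSecret(raw string) (string, error) {
	s := strings.ToUpper(strings.ReplaceAll(raw, " ", ""))
	if _, err := base32.StdEncoding.WithPadding(base32.NoPadding).DecodeString(s); err != nil {
		return "", fmt.Errorf("TOTP seed is not valid base32: %w", err)
	}
	return s, nil
}

func main() {
	// Two exports of the same seed, different cosmetic encodings.
	got, err := normalizeTOTPSecret("jbsw y3dp ehpk 3pxp")
	fmt.Println(got, err == nil)
}
```

The point is not the three lines of string handling; it is that every such rule is an explicit, testable mapping decision instead of an accident of whatever the destination importer tolerates.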

CXP vs CXF: the part that made me excited

The FIDO Alliance is working on a set of specifications under the “Credential Exchange” umbrella. At a high level, the goal is to make it realistic for users to move between credential managers without losing information or being locked into proprietary export quirks.

The way I think about it: CXP is the shared data model (what a credential and its fields mean), and CXF is the portable format that carries that model between tools.

Reading the drafts, what stood out to me is that they’re trying to be practical about real vault data, not just a “username + password” toy model. Modern vaults contain a mix of structured and semi-structured material: multiple URIs per item, custom fields, metadata you don’t want to lose (labels, notes), and sometimes authentication-related values that have their own quirks (like shared secrets that need careful encoding/normalization). The CXP/CXF work tries to give that mess a shared vocabulary and a transport so tooling can be more deterministic.

CXP (the model) reads like a contract for meaning: when two tools say “this field is X”, they should agree on what X is supposed to represent. CXF (the format) is the other half: it’s about a portable artifact you can actually generate and consume. I also like the way this separation encourages better engineering: you can validate your mapping at the CXP layer, and treat CXF as “just” serialization/packaging.

The important point (and the reason it helped cxporter) is that a standard exchange model gives you a neutral “middle layer”. Instead of writing N×M converters between providers, you can aim for “provider → CX → provider”. Even when you only care about a single destination (Bitwarden in my case), having an intermediate representation forces you to be explicit about mapping decisions.
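
That "provider → CX → provider" shape can be sketched in a few lines of Go. Everything here is illustrative; the type and interface names are mine, not taken from the CXP/CXF drafts or from cxporter:

```go
package main

import (
	"fmt"
	"strings"
)

// Item sketches a neutral intermediate record in the spirit of a shared
// exchange model; the fields are illustrative, not the spec's.
type Item struct {
	Title string
	URI   string
}

// Importer parses one provider's export into the neutral model.
type Importer interface{ Import(raw string) ([]Item, error) }

// Exporter serializes the neutral model for one destination.
type Exporter interface{ Export(items []Item) (string, error) }

// convert wires any importer to any exporter, so N sources and M targets
// need N+M adapters instead of N×M pairwise converters.
func convert(in Importer, out Exporter, raw string) (string, error) {
	items, err := in.Import(raw)
	if err != nil {
		return "", err
	}
	return out.Export(items)
}

// csvish is a toy source format: one "title,uri" record per line.
type csvish struct{}

func (csvish) Import(raw string) ([]Item, error) {
	var items []Item
	for _, line := range strings.Split(strings.TrimSpace(raw), "\n") {
		title, uri, ok := strings.Cut(line, ",")
		if !ok {
			return nil, fmt.Errorf("malformed line: %q", line)
		}
		items = append(items, Item{Title: title, URI: uri})
	}
	return items, nil
}

// tsvish is a toy destination format: tab-separated records.
type tsvish struct{}

func (tsvish) Export(items []Item) (string, error) {
	var b strings.Builder
	for _, it := range items {
		fmt.Fprintf(&b, "%s\t%s\n", it.Title, it.URI)
	}
	return b.String(), nil
}

func main() {
	out, _ := convert(csvish{}, tsvish{}, "mail,https://example.com")
	fmt.Print(out)
}
```

Adding a new source or destination means writing one adapter against the middle model, and every mapping decision lives in that adapter where it can be reviewed and tested.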

It also changes the tone of a migration project. You stop thinking in terms of “does this JSON parse?” and you start thinking in terms of “is this field meaningfully preserved?” For example: if an export contains multiple URIs with different matching rules, or a custom field with a type hint, or an authentication factor that isn’t just a password, you want those differences to survive the trip.

A security-first migration tool

cxporter is intentionally conservative. The goal isn’t to be the fastest converter on Earth; it’s to be the converter that doesn’t surprise you later. In practice that meant paying attention to both how data is processed and how it is handled.

Design goals

In short: never lose data silently (stop and explain rather than guess at a mapping), keep plaintext secrets alive for the shortest possible time and in the smallest possible scope, keep everything local to the machine, and prefer boring, predictable behavior over speed.

Experimenting with runtime/secret

The part I was most curious about was Go's experimental runtime/secret package (available via the GOEXPERIMENT=runtimesecret build flag). It doesn't solve the "you had to decrypt it at some point" reality of migrations, and it's not a substitute for OS hardening. But it can help with a very common class of mistakes: leaving sensitive data in memory longer than necessary, accidental exposure through debugging, and unintended retention in stack or heap.

Unlike traditional secret management approaches that provide special container types, runtime/secret works by wrapping sensitive computations. The API is minimal: just secret.Do(f func()), which executes a function while ensuring automatic memory erasure afterward—clearing registers, zeroing stack memory, and marking heap allocations for erasure once the garbage collector determines they're unreachable.

In cxporter, this pattern is used to contain sensitive operations during the conversion pipeline:

// Wrap the sensitive conversion in secret.Do so temporaries are erased.
// decryptExport, parseCredentials and convertToBitwarden are cxporter
// internals, shown here for shape only.
var result BitwardenExport
secret.Do(func() {
    // Decrypt and parse the export file
    decrypted := decryptExport(encryptedBytes, masterKey)

    // Convert to the intermediate model
    items := parseCredentials(decrypted)

    // Map to the target format; only the final result escapes the closure
    result = convertToBitwarden(items)

    // On return: registers are cleared, stack memory is zeroed, and heap
    // temporaries are marked for erasure once the GC finds them unreachable
})

The practical benefits are not magical, but they're meaningful: sensitive temporaries are confined to a single, well-defined scope; registers and stack memory are cleared when that scope ends; and heap allocations are flagged for erasure once the garbage collector finds them unreachable, instead of lingering until process exit.

It's important to note the limitations: runtime/secret currently only works on Linux (amd64/arm64), doesn't protect global variables, and can't be used with goroutines inside the wrapped function (it will panic). Heap erasure also depends on garbage collection timing, not immediate cleanup. But for single-threaded, self-contained sensitive operations like credential conversion, it provides a useful layer of protection with minimal code changes.
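
One way to live with those limitations is to hide secret.Do behind a small wrapper selected by build tags, so the same pipeline compiles whether or not the experiment is enabled. A sketch of the fallback side (the wrapper name is hypothetical; a sibling file guarded by the experiment's build tag would forward to secret.Do instead):

```go
package main

import "fmt"

// doSecret is the default (no-experiment) implementation: it just runs f,
// with no erasure guarantees. A second file, constrained with a build tag
// for the runtimesecret experiment, would implement the same function by
// calling secret.Do(f). Callers must keep f single-threaded either way,
// since secret.Do panics if the wrapped function spawns goroutines.
func doSecret(f func()) {
	f()
}

func main() {
	sum := 0
	doSecret(func() {
		// Single-threaded, self-contained sensitive work only.
		sum = 1 + 2
	})
	fmt.Println(sum)
}
```

The nice property of this shape is that the security-sensitive call sites read identically on every platform, and the erasure behavior is an additive upgrade where the experiment is supported.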

This felt especially appropriate because migration tooling is often written as one-off scripts. Scripts are great, but they're also where you tend to accept risks you wouldn't accept in a library: quick logging, quick debugging, quick "just load the whole file". Using runtime/secret nudged me toward better defaults.

Correctness: mapping and validation

The other half of cxporter is correctness. Migration tooling fails in boring ways: a missing URL scheme changes matching behavior, TOTP seeds get mangled by normalization, notes get truncated, attachments disappear, or custom fields quietly drop. The UI says “Import successful” and you only find out months later that something important is missing.

cxporter treats conversion as a schema-mapping problem with explicit validation points: parse the source export into a typed model, map it to the intermediate representation field by field, and verify the result (item counts, required fields, URI schemes, TOTP seeds) before anything is written out.
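
As an illustration, a post-mapping check might look like the following. The types and the two checks are a simplified sketch, not cxporter's actual code:

```go
package main

import (
	"fmt"
	"net/url"
)

// Credential is an illustrative mapped record.
type Credential struct {
	Title string
	URIs  []string
}

// validate fails loudly instead of letting a subtly wrong import succeed.
// It checks that no items were dropped during mapping and that every URI
// still carries a scheme, since a silently dropped scheme can change a
// password manager's URL-matching behavior.
func validate(sourceCount int, mapped []Credential) error {
	if len(mapped) != sourceCount {
		return fmt.Errorf("item count mismatch: source=%d mapped=%d", sourceCount, len(mapped))
	}
	for _, c := range mapped {
		for _, raw := range c.URIs {
			u, err := url.Parse(raw)
			if err != nil || u.Scheme == "" {
				return fmt.Errorf("%q: URI %q lost its scheme", c.Title, raw)
			}
		}
	}
	return nil
}

func main() {
	good := []Credential{{Title: "mail", URIs: []string{"https://example.com"}}}
	fmt.Println(validate(1, good)) // <nil>

	bad := []Credential{{Title: "mail", URIs: []string{"example.com"}}}
	fmt.Println(validate(1, bad) != nil)
}
```

Real validation in a migration tool would cover more (notes, custom fields, attachments), but the principle is the same: each failure mode named above gets a check that aborts the run rather than a warning that scrolls by.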

The end result is a tool that’s intentionally boring. It should either produce an import that matches what you expect, or stop and tell you why it can’t.

What I liked about building it

cxporter ended up being an ideal break project: small enough to ship, but deep enough to stay interesting. I got to combine Go ergonomics with security-minded data handling, and I got to read an emerging standard (CXP/CXF) with an implementer’s eye instead of just skimming headlines.

If you’re planning a migration yourself, my biggest recommendation is still: treat exports as hazardous material. Keep them short-lived, keep them local, and use tooling that makes correctness and safety the default. If that resonates, the project is here: https://github.com/nvinuesa/cxporter.