description: PostgreSQL Logical Decoding replication to dynamic DNS UPDATE
last change: Fri, 23 Dec 2016 16:51:22 +0000


PostgreSQL Logical Decoding replication to dynamic DNS UPDATE.

The tl;dr is pg_recvlogical | nsupdate

The aim of this project is continuous streaming replication of changes from a PostgreSQL IPAM (IP address management) database to the DNS with minimal delay. Instead of coupling the DNS server directly to the database (like BIND-DLZ), we use high-level replication protocols to read a continuous stream of changes from the database and write them to the DNS.


You have a pre-existing database which is exported to the DNS using a periodic batch ETL job. You want to make updates happen instantly, without extensive rework of the user interface code or the database internals.

The idea is to replace the periodic batch job with streaming replication, with no other changes.

To give you confidence that it works correctly, it should be possible to verify that you get the same results from the old and new setups, by pausing replication, running a batch job, and comparing the live DNS (from streamed updates) with the batch job output.


On the PostgreSQL side, there is a Logical Decoding output plugin which translates from the PostgreSQL replication protocol to DNS UPDATE descriptions. These DNS UPDATE descriptions are in the style of the input to nsupdate, except that the boundaries between updates are determined by PostgreSQL transactions.
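For illustration (the exact output is whatever the plugin emits; this sketch just uses nsupdate's standard input syntax, with hypothetical names and addresses), a single transaction that adds one address record and its reverse pointer might decode to something like

    update add www.example.org 3600 IN A 192.0.2.1
    update add 1.2.0.192.in-addr.arpa 3600 IN PTR www.example.org
    send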

There will eventually be an intermediary program which is a PostgreSQL replication client and a DNS UPDATE client. It reads the output from the logical decoding plugin, splits each transaction into a separate DNS UPDATE message per zone, and sends the updates to the master DNS server.
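As a rough sketch of the per-zone splitting step (this is not the project's actual intermediary; the zone list, helper names, and line format are assumptions based on nsupdate's input syntax):

```python
# Hypothetical sketch: group "update add/delete ..." lines from one
# decoded transaction into a separate batch per zone, assuming each
# record's owner name falls under exactly one known zone.

KNOWN_ZONES = ["example.org", "2.0.192.in-addr.arpa"]  # assumed zone list

def zone_of(name, zones=KNOWN_ZONES):
    """Return the longest configured zone that is a suffix of name."""
    name = name.rstrip(".")
    matches = [z for z in zones if name == z or name.endswith("." + z)]
    return max(matches, key=len) if matches else None

def split_transaction(lines):
    """Group 'update add/delete ...' lines by zone: {zone: [lines]}."""
    batches = {}
    for line in lines:
        words = line.split()
        if len(words) >= 3 and words[0] == "update":
            batches.setdefault(zone_of(words[2]), []).append(line)
    return batches

txn = [
    "update add www.example.org 3600 IN A 192.0.2.1",
    "update add 1.2.0.192.in-addr.arpa 3600 IN PTR www.example.org",
]
print(split_transaction(txn))
```

Each resulting batch would then be sent as one DNS UPDATE message to the master server for that zone.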

Logical decoding

PostgreSQL's high-level replication support is called "logical decoding". It is based on decoding the low-level replication stream of write-ahead logs into high-level logical transactions. The main reference documentation is the Logical Decoding chapter of the PostgreSQL manual.

Logical decoding uses output plugins which can write the decoded transactions in whatever format they like. Our plugin decode_nsupdate produces output that looks like the input to nsupdate.

Building the plugin

You need a PostgreSQL development environment installed. On Debian that is the postgresql-server-dev-9.6 package.

You should be able to build the plugin by typing make. (You need GNU make.)

Logical decoding plugin configuration

The configuration consists of a list of translations, each of which consists of a DNS record type, a SQL table name, and a list of SQL table columns. The number of columns depends on the record type. It is stored in a table in the database, like

    CREATE TABLE config
    (       rrtype  TEXT
    ,       tbl     REGCLASS
    ,       cols    NAME ARRAY
    );

Record types mostly match standard DNS resource record types, though with some generalizations to simplify address type handling.

This plugin supports PostgreSQL's built-in inet type, which can hold an IPv4 or an IPv6 address. It also supports the popular ip4r plugin, which provides ip4, ip6, and ipaddress types.

There are two special translation record types, ADDR and PTR, which adapt to the column type of the table that was modified.

For the other translation record types, the plugin is oblivious to the column types, and just prints each value in PostgreSQL's normal output format. In many cases a column is expected to contain a DNS name, so you should ensure its contents make sense when interpreted that way.

You can have multiple translations for the same table, for example ADDR and PTR to update both forward and reverse DNS from one database change. But note that this requires a filter between pg_recvlogical and nsupdate to split transactions that affect multiple zones.
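For example (a hypothetical host table with name and address columns; the required column order for each record type is an assumption, check example.sql for the real conventions), the configuration rows might look like

    -- hypothetical: drive forward and reverse DNS from one table
    INSERT INTO config (rrtype, tbl, cols)
    VALUES ('ADDR', 'host', '{name,addr}')
         , ('PTR',  'host', '{name,addr}');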

Database schema caveats

To replicate from PostgreSQL to DNS UPDATE we need details of exactly which DNS records have been deleted or updated (i.e. replaced). The details that we get from PostgreSQL Logical Decoding depend on how the table's REPLICA IDENTITY is configured.

When the table's primary key covers all the columns needed to produce the DNS records, the default replica identity is enough. Otherwise you probably want to use FULL. (The other option is USING INDEX which might or might not be useful for you.)
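For instance, to make logical decoding report the complete old row on every update and delete (using the hypothetical host table from above):

    ALTER TABLE host REPLICA IDENTITY FULL;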


Testing example

The ansible subdirectory contains some simple playbooks for creating and destroying a test environment on Debian. It creates a new database cluster and grants superuser access to the user that runs the playbook.

The PostgreSQL version number, port number, cluster name, and database name can be configured in ansible/inventory.

To create the test environment
    cd ansible
    ansible-playbook create.yml
    cd ..

Set up the example database schema

    psql -d dns <example.sql

To load the plugin and prepare for replication

    pg_recvlogical --port=5342 --dbname=dns --slot=dns \
                   --create-slot --if-not-exists \
                   --plugin=decode_nsupdate

To allow dynamic DNS UPDATE in your DNS server, it is easiest to configure a zone with update-policy local like

    zone {
        type master;
        file "";
        update-policy local;
    };
To stream changes to your DNS server, use the following pipeline. The middle stage sets the TTLs of the records created by nsupdate and lets you see the replication stream. You can use nsupdate -ld to get a trace of DNS packets. You can freely kill the pipeline and restart it: it will continue from the point where it stopped, and output any changes that happened while it was not running.

    pg_recvlogical --port=5342 --dbname=dns --slot=dns \
                   --start --no-loop --file=- \
                   -o config-table=nsupdate_config |
    (echo 'ttl 3600'; tee /dev/tty) |
    nsupdate -l

You might want to watch your DNS server logs in one window while you do the following in another

    $ psql -d dns
    dns=# insert into cname values ('','localhost');
    INSERT 0 1
    dns=# delete from cname;
    DELETE 1

To stop PostgreSQL from keeping any pending replication data, drop the replication slot

    pg_recvlogical --port=5342 --dbname=dns --slot=dns \
                   --drop-slot

Switching between batch and streaming updates

Here are some notes on how I think this could work...

To verify that streaming updates work correctly, we need to get a batch snapshot of the database that is synchronized with a known point in the streamed updates. The PostgreSQL logical decoding system has pg_logical_emit_message() which we can use to co-ordinate with the stream of DNS UPDATE requests.
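The function takes a transactional flag, a prefix, and a payload, and injects the message into the decoded stream at that point. A sentinel for synchronizing a batch run might look like (the prefix and payload strings here are hypothetical):

    SELECT pg_logical_emit_message(true, 'nsupdate', 'batch-sync-point');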

So, roughly, I think...

It is also necessary to be able to start streaming from a particular point. This bootstrap mode is similar to verification.

2016-12-23  Tony Finch  Split updates per zone (master)
2016-12-23  Tony Finch  Playing with DNS UPDATE from Perl
2016-12-22  Tony Finch  Use pg_logical_emit_message() for co-ordinating between...
2016-12-21  Tony Finch  A few more RR types
2016-12-21  Tony Finch  Avoid emitting an update verb with no RR.
2016-12-21  Tony Finch  Some thoughts about memory allocaton
2016-12-21  Tony Finch  No need to keep cfgtbl around
2016-12-21  Tony Finch  Commentary
2016-12-21  Tony Finch  Support ip4r column types
2016-12-20  Tony Finch  Update column types of config table
2016-12-20  Tony Finch  Add missing example SQL
2016-12-20  Tony Finch  GC old Jsonb gubbins
2016-12-20  Tony Finch  Construct new internal version of configuration table
2016-12-20  Tony Finch  Note that safe switching from batch mode needs sentinel...
2016-12-19  Tony Finch  Notes on safe switching from batch to streaming updates
2016-12-19  Tony Finch  Better rationale