Message queue appenders

This page guides you through message queue appenders that forward log events to a message broker.

Flume Appender

Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store. The Flume Appender takes log events and sends them to a Flume agent as serialized Avro events for consumption.

The Flume Appender supports three modes of operation.

AVRO

It can act as a remote Flume client which sends Flume events via Avro to a Flume Agent configured with an Avro Source.

EMBEDDED

It can act as an embedded Flume Agent where Flume events pass directly into Flume for processing.

PERSISTENT

It can persist events to a local BerkeleyDB data store and then asynchronously send the events to Flume, similar to the embedded Flume Agent but without most of the Flume dependencies.

Usage as an embedded agent will cause the messages to be directly passed to the Flume Channel, and then control will be immediately returned to the application. All interaction with remote agents will occur asynchronously. Setting the type attribute to EMBEDDED will force the use of the embedded agent. In addition, configuring agent properties in the appender configuration will also cause the embedded agent to be used.

Table 1. Flume Appender configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the appender.

Optional

type

enumeration

AVRO

One of AVRO, EMBEDDED or PERSISTENT to indicate which variation of the Appender is desired.

ignoreExceptions

boolean

true

If false, logging exceptions will be forwarded to the caller of the logging statement. Otherwise, they will be ignored.

Logging exceptions are always also logged to the Status Logger.

connectTimeoutMillis

int

0

The connect timeout in milliseconds. If 0 the timeout is infinite.

requestTimeoutMillis

int

0

The request timeout in milliseconds. If 0 the timeout is infinite.

agentRetries

int

0

The number of times the agent should be retried before failing over to a secondary. This parameter is ignored when type="PERSISTENT" is specified (agents are tried once before failing over to the next).

batchSize

int

1

It specifies the number of events that should be sent as a batch.

compress

boolean

false

When set to true the message body will be compressed using gzip.

dataDir

Path

Directory where the Flume write-ahead log should be written. Valid only in EMBEDDED mode when Agent elements are used instead of Property elements.

eventPrefix

String

""

The character string to prepend to each event attribute to distinguish it from MDC attributes.

lockTimeoutRetries

int

5

The number of times to retry if a LockConflictException occurs while writing to Berkeley DB.

maxDelayMillis

int

60000

The maximum number of milliseconds to wait for batchSize events before publishing the batch.

mdcExcludes

String[]

A comma-separated list of MDC keys that should be excluded from the FlumeEvent.

This is mutually exclusive with the mdcIncludes attribute.

mdcIncludes

String[]

A comma-separated list of MDC keys that should be included in the FlumeEvent. Any keys in the MDC not found in the list will be excluded.

This option is mutually exclusive with the mdcExcludes attribute.

mdcRequired

String[]

A comma-separated list of MDC keys that must be present in the MDC. If a key is not present, a LoggingException will be thrown.

mdcPrefix

String

mdc:

A string that should be prepended to each MDC key to distinguish it from event attributes.
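As an illustration of the MDC-related attributes above, the following sketch forwards only two selected context keys and overrides the default MDC prefix (the host, port, and key names are placeholders, not part of the original examples):

```xml
<Flume name="FLUME"
       type="AVRO"
       mdcIncludes="requestId,userId"
       mdcPrefix="ctx:">
  <Rfc5424Layout enterpriseNumber="18060"
                 includeMDC="true"
                 appName="MyApp"/>
  <Agent host="flume.example.org" port="8800"/>
</Flume>
```

With this configuration, only the requestId and userId MDC entries reach the FlumeEvent, each prefixed with ctx: to distinguish them from event attributes.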

Table 2. Flume Appender nested elements
Type Multiplicity Description

Agent

zero or more

An array of Agents to which the logging events should be sent. If more than one agent is specified, the first Agent will be the primary and subsequent Agents will be used in the order specified as secondaries should the primary Agent fail. Each Agent definition supplies the Agent’s host and port.

Agent and Property elements are mutually exclusive.

Filter

zero or one

Allows filtering log events just before they are formatted and sent.

See also appender filtering stage.

FlumeEventFactory

zero or one

Factory that generates the Flume events from Log4j events.

The default factory is the appender itself.

Layout

zero or one

Formats log events. If not provided, Rfc5424 Layout is used.

See Layouts for more information.

Property

zero or more

One or more Property elements that are used to configure the Flume Agent. The properties must be configured without the agent name (the appender name is used for this), and no sources can be configured. Interceptors can be specified for the source using "sources.log4j-source.interceptors". All other Flume configuration properties are allowed. Specifying both Agent and Property elements will result in an error.

When the appender is configured in PERSISTENT mode, the valid properties are:

1. keyProvider, which specifies the name of the plugin that provides the secret key for encryption.

Agent and Property elements are mutually exclusive.
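As a sketch of the Property-based style, an embedded agent with a file channel and two Avro sinks might be configured as below. The channel and sink names, directories, and addresses are illustrative; consult the Flume embedded-agent documentation for the full property set:

```xml
<Flume name="FLUME" type="EMBEDDED">
  <Property name="channels" value="file"/>
  <Property name="channels.file.type" value="file"/>
  <Property name="channels.file.checkpointDir" value="target/file-channel/checkpoint"/>
  <Property name="channels.file.dataDirs" value="target/file-channel/data"/>
  <Property name="sinks" value="agent1 agent2"/>
  <Property name="sinks.agent1.channel" value="file"/>
  <Property name="sinks.agent1.type" value="avro"/>
  <Property name="sinks.agent1.hostname" value="192.168.10.101"/>
  <Property name="sinks.agent1.port" value="8800"/>
  <Property name="sinks.agent2.channel" value="file"/>
  <Property name="sinks.agent2.type" value="avro"/>
  <Property name="sinks.agent2.hostname" value="192.168.10.102"/>
  <Property name="sinks.agent2.port" value="8800"/>
</Flume>
```

Note that no Agent elements appear here: since Agent and Property elements are mutually exclusive, the sink properties take over the role of naming the remote hosts and ports.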

Additional runtime dependencies are required to use the Flume Appender:

  • Maven

  • Gradle

We assume you use log4j-bom for dependency management.

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-flume-ng</artifactId>
  <scope>runtime</scope>
</dependency>

We assume you use log4j-bom for dependency management.

runtimeOnly 'org.apache.logging.log4j:log4j-flume-ng'

To use the Flume Appender PERSISTENT mode, you need the following additional dependency:

  • Maven

  • Gradle

<dependency>
  <groupId>com.sleepycat</groupId>
  <artifactId>je</artifactId>
  <version>18.3.12</version>
  <scope>runtime</scope>
</dependency>
runtimeOnly 'com.sleepycat:je:18.3.12'

If you use the Flume Appender in EMBEDDED mode, you need to add the flume-ng-embedded-agent dependency below and all the channel and sink implementations you plan to use.

See Flume Embedded Agent documentation for more details.

  • Maven

  • Gradle

<dependency>
  <groupId>org.apache.flume</groupId>
  <artifactId>flume-ng-embedded-agent</artifactId>
  <version>1.11.0</version>
  <scope>runtime</scope>
</dependency>
runtimeOnly 'org.apache.flume:flume-ng-embedded-agent:1.11.0'

Agent Addresses

The address of the Flume server is specified using the Agent element, which supports the following configuration options:

Table 3. Agent configuration attributes
Attribute Type Default value Description

host

InetAddress

localhost

The host to connect to.

port

int

35853

The port to connect to.

Flume event factories

Flume event factories are Log4j plugins that implement the org.apache.logging.log4j.flume.appender.FlumeEventFactory interface and allow customizing the way log events are transformed into org.apache.logging.log4j.flume.appender.FlumeEvent instances.

Configuration examples

A sample Flume Appender configured with a primary and a secondary agent, formatting the body using the RFC5424 Layout:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<Flume name="FLUME">
  <Rfc5424Layout enterpriseNumber="18060"
                 includeMDC="true"
                 appName="MyApp"/>
  <Agent host="192.168.10.101" port="8800"/> (1)
  <Agent host="192.168.10.102" port="8800"/> (2)
</Flume>
Snippet from an example log4j2.json
"Flume": {
  "name": "FLUME",
  "Rfc5424Layout": {
    "enterpriseNumber": 18060,
    "includeMDC": true,
    "appName": "MyAPP"
  },
  "Agent": [
    { (1)
      "host": "192.168.10.101",
      "port": "8800"
    },
    { (2)
      "host": "192.168.10.102",
      "port": "8800"
    }
  ]
}
Snippet from an example log4j2.yaml
Flume:
  name: "FLUME"
  Rfc5424Layout:
    enterpriseNumber: 18060
    includeMDC: true
    appName: MyApp
  Agent:
    (1)
    - host: "192.168.10.101"
      port: 8800
    (2)
    - host: "192.168.10.102"
      port: 8800
Snippet from an example log4j2.properties
appender.0.type = Flume
appender.0.name = FLUME

appender.0.layout.type = Rfc5424Layout
appender.0.layout.enterpriseNumber = 18060
appender.0.layout.includeMDC = true
appender.0.layout.appName = MyApp

(1)
appender.0.primary.type = Agent
appender.0.primary.host = 192.168.10.101
appender.0.primary.port = 8800
(2)
appender.0.secondary.type = Agent
appender.0.secondary.host = 192.168.10.102
appender.0.secondary.port = 8800
1 Primary agent
2 Secondary agent

A sample Flume Appender, which is configured with a primary and a secondary agent, compresses the body, formats the body using the RFC5424 Layout, and persists encrypted events to disk:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<Flume name="FLUME"
       type="PERSISTENT"
       compress="true"
       dataDir="./logData">
  <Rfc5424Layout enterpriseNumber="18060"
                 includeMDC="true"
                 appName="MyApp"/>
  <Property name="keyProvider" value="org.example.MySecretProvider"/>
  <Agent host="192.168.10.101" port="8800"/>
  <Agent host="192.168.10.102" port="8800"/>
</Flume>
Snippet from an example log4j2.json
"Flume": {
  "name": "FLUME",
  "type": "PERSISTENT",
  "compress": true,
  "dataDir": "./logData",
  "Rfc5424Layout": {
    "enterpriseNumber": 18060,
    "includeMDC": true,
    "appName": "MyAPP"
  },
  "Property": {
    "name": "keyProvider",
    "value": "org.example.MySecretProvider"
  },
  "Agent": [
    {
      "host": "192.168.10.101",
      "port": "8800"
    },
    {
      "host": "192.168.10.102",
      "port": "8800"
    }
  ]
}
Snippet from an example log4j2.yaml
Flume:
  name: "FLUME"
  type: "PERSISTENT"
  compress: true
  dataDir: "./logData"
  Rfc5424Layout:
    enterpriseNumber: 18060
    includeMDC: true
    appName: MyApp
  Property:
    name: "keyProvider"
    value: "org.example.MySecretProvider"
  Agent:
    - host: "192.168.10.101"
      port: 8800
    - host: "192.168.10.102"
      port: 8800

This example cannot be configured using Java properties.

A sample Flume Appender, which is configured with a primary and a secondary agent, compresses the body, formats the body using the RFC5424 Layout, and passes the events to an embedded Flume Agent:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<Flume name="FLUME"
       type="EMBEDDED"
       compress="true">
  <Rfc5424Layout enterpriseNumber="18060"
                 includeMDC="true"
                 appName="MyApp"/>
  <Agent host="192.168.10.101" port="8800"/>
  <Agent host="192.168.10.102" port="8800"/>
</Flume>
Snippet from an example log4j2.json
"Flume": {
  "name": "FLUME",
  "type": "EMBEDDED",
  "compress": true,
  "Rfc5424Layout": {
    "enterpriseNumber": 18060,
    "includeMDC": true,
    "appName": "MyAPP"
  },
  "Agent": [
    {
      "host": "192.168.10.101",
      "port": "8800"
    },
    {
      "host": "192.168.10.102",
      "port": "8800"
    }
  ]
}
Snippet from an example log4j2.yaml
Flume:
  name: "FLUME"
  type: "EMBEDDED"
  compress: true
  Rfc5424Layout:
    enterpriseNumber: 18060
    includeMDC: true
    appName: MyApp
  Agent:
    - host: "192.168.10.101"
      port: 8800
    - host: "192.168.10.102"
      port: 8800

This example cannot be configured using Java properties.

JMS Appender

The JMS Appender sends the formatted log event to a Jakarta Messaging API destination.

As of Log4j 2.17.0, you need to enable the JMS Appender explicitly by setting the log4j2.enableJndiJms configuration property to true.
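One common way to do this (assuming the standard Log4j property sources) is a log4j2.component.properties file on the application classpath:

```properties
# log4j2.component.properties (on the application classpath)
# Opt in to JNDI-based JMS lookups; disabled by default since 2.17.0.
log4j2.enableJndiJms=true
```

Passing -Dlog4j2.enableJndiJms=true on the command line achieves the same effect.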

Due to breaking changes in the underlying API, the JMS Appender cannot be used with Jakarta Messaging API 3.0 or later.

Table 4. JMS Appender configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the appender.

factoryBindingName

Name

The JNDI name of the ConnectionFactory.

Only the java: protocol is supported.

destinationBindingName

Name

The JNDI name of the Destination, which can be either a Queue or a Topic.

Only the java: protocol is supported.

JNDI configuration (overrides system properties)

factoryName

String

It specifies the fully qualified class name of the InitialContextFactory.

See INITIAL_CONTEXT_FACTORY for details.

urlPkgPrefixes

String[]

A colon-separated list of package prefixes that contain URL context factories.

See URL_PKG_PREFIXES for details.

providerURL

String

A configuration parameter for the InitialContextFactory.

See PROVIDER_URL for details.

securityPrincipalName

String

The name of the principal to use for the InitialContextFactory.

See SECURITY_PRINCIPAL for details.

securityCredentials

String

null

The security credentials for the principal.

See SECURITY_CREDENTIALS for details.

Optional

userName

String

The username for the ConnectionFactory.

password

String

The password for the ConnectionFactory.

ignoreExceptions

boolean

true

If false, logging exceptions will be forwarded to the caller of the logging statement. Otherwise, they will be ignored.

Logging exceptions are always also logged to the Status Logger.

reconnectIntervalMillis

long

5000

The time in milliseconds to wait between attempts to reconnect after the JMS connection fails.

Table 5. JMS Appender nested elements
Type Multiplicity Description

Filter

zero or one

Allows filtering log events just before they are formatted and sent.

See also appender filtering stage.

Layout

one

Used in the mapping process to get a JMS Message.

See Mapping events to JMS messages below for more information.

Mapping events to JMS messages

The mapping between log events and JMS messages has two steps:

  1. First, the layout is used to transform a log event into an intermediary format.

  2. Then, a Message is created based on the type of object returned by the layout:

    String

    Strings are converted into TextMessages.

    MapMessage

    The Log4j MapMessage type is mapped to the JMS MapMessage type.

    Serializable

    Anything else is converted into an ObjectMessage.

Configuration examples

Here is a sample JMS Appender configuration:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<JMS name="JMS"
     factoryBindingName="jms/ConnectionFactory"
     destinationBindingName="jms/Queue">
  <JsonTemplateLayout/>
</JMS>
Snippet from an example log4j2.json
"JMS": {
  "name": "JMS",
  "factoryBindingName": "jms/ConnectionFactory",
  "destinationBindingName": "jms/Queue",
  "JsonTemplateLayout": {}
}
Snippet from an example log4j2.yaml
JMS:
  name: "JMS"
  factoryBindingName: "jms/ConnectionFactory"
  destinationBindingName: "jms/Queue"
  JsonTemplateLayout: {}
Snippet from an example log4j2.properties
appender.0.type = JMS
appender.0.name = JMS
appender.0.factoryBindingName = jms/ConnectionFactory
appender.0.destinationBindingName = jms/Queue

appender.0.layout.type = JsonTemplateLayout

To map your Log4j MapMessage to JMS javax.jms.MapMessage, set the layout of the appender to MessageLayout:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<JMS name="JMS"
     factoryBindingName="jms/ConnectionFactory"
     destinationBindingName="jms/Queue">
  <MessageLayout/>
</JMS>
Snippet from an example log4j2.json
"JMS": {
  "name": "JMS",
  "factoryBindingName": "jms/ConnectionFactory",
  "destinationBindingName": "jms/Queue",
  "MessageLayout": {}
}
Snippet from an example log4j2.yaml
JMS:
  name: "JMS"
  factoryBindingName: "jms/ConnectionFactory"
  destinationBindingName: "jms/Queue"
  MessageLayout: {}
Snippet from an example log4j2.properties
appender.0.type = JMS
appender.0.name = JMS
appender.0.factoryBindingName = jms/ConnectionFactory
appender.0.destinationBindingName = jms/Queue

appender.0.layout.type = MessageLayout

Kafka Appender

This appender is planned to be removed in the next major release! If you are using this library, please get in touch with the Log4j maintainers using the official support channels.

The KafkaAppender logs events to an Apache Kafka topic. Each log event is sent as a ProducerRecord<byte[], byte[]>, where:

  • the key is provided by the byte representation of the key attribute.

  • the value is provided by the byte representation produced by the layout.

This appender is synchronous by default and will block until the record has been acknowledged by the Kafka server. The maximum delivery time can be configured using the Kafka delivery.timeout.ms property. Wrap the appender with an Async Appender or set syncSend to false to log asynchronously.
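For example, an asynchronous setup with a bounded delivery time might look like the sketch below; the broker addresses and the timeout value are placeholders, not recommendations:

```xml
<Kafka name="KAFKA" topic="logs" syncSend="false">
  <JsonTemplateLayout/>
  <!-- Required: the Kafka bootstrap servers to connect to -->
  <Property name="bootstrap.servers" value="kafka-1:9092,kafka-2:9092"/>
  <!-- Kafka producer property bounding the total time to deliver a record -->
  <Property name="delivery.timeout.ms" value="30000"/>
</Kafka>
```

With syncSend="false", send failures are only reported to the Status Logger, so this trades delivery guarantees for latency.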

Table 6. Kafka Appender configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the appender.

topic

String

The Kafka topic to use.

Optional

key

String

The key of the Kafka ProducerRecord.

Supports runtime property substitution and is evaluated in the global context.

ignoreExceptions

boolean

true

If false, logging exceptions will be forwarded to the caller of the logging statement. Otherwise, they will be ignored.

Logging exceptions are always also logged to the Status Logger.

syncSend

boolean

true

If true, the appender blocks until the record has been acknowledged by the Kafka server. Otherwise, the appender returns immediately, allowing for lower latency and significantly higher throughput.

If set to false, any failure to send to Kafka will be reported as an error to the Status Logger and the log event will be dropped. The ignoreExceptions setting will not be effective.

Log events may arrive out of order on the Kafka server.

Table 7. Kafka Appender nested elements
Type Multiplicity Description

Filter

zero or one

Allows filtering log events just before they are formatted and sent.

See also appender filtering stage.

Layout

one

Formats the log event as a byte array using Layout.toByteArray().

See Layouts for more information.

Property

one or more

These properties are forwarded directly to the Kafka producer. See Kafka producer properties for more details.

bootstrap.servers

This property is required.

key.serializer
value.serializer

These properties should not be used.

Additional runtime dependencies are required to use the Kafka Appender:

  • Maven

  • Gradle

<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>3.9.0</version>
</dependency>
runtimeOnly 'org.apache.kafka:kafka-clients:3.9.0'

Configuration examples

Here is a sample Kafka Appender configuration snippet:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<Kafka name="KAFKA"
       topic="logs"
       key="$${web:contextName}"> (1)
  <JsonTemplateLayout/>
</Kafka>
Snippet from an example log4j2.json
"Kafka": {
  "name": "KAFKA",
  "topic": "logs",
  "key": "$${web:contextName}", (1)
  "JsonTemplateLayout": {}
}
Snippet from an example log4j2.yaml
Kafka:
  name: "KAFKA"
  topic: "logs"
  key: "$${web:contextName}" (1)
  JsonTemplateLayout: {}
Snippet from an example log4j2.properties
appender.1.type = Kafka
appender.1.name = KAFKA
appender.1.topic = logs
(1)
appender.1.key = $${web:contextName}

appender.1.layout.type = JsonTemplateLayout
1 The key attribute supports runtime lookups.

Make sure not to let org.apache.kafka log to a Kafka appender at DEBUG level, since that would cause recursive logging:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<Root level="INFO">
  <AppenderRef ref="KAFKA"/>
</Root>
<Logger name="org.apache.kafka"
        level="WARN"
        additivity="false"> (1)
  <AppenderRef ref="FILE"/>
</Logger>
Snippet from an example log4j2.json
"Root": {
  "level": "INFO",
  "AppenderRef": {
    "ref": "KAFKA"
  }
},
"Logger": {
  "name": "org.apache.kafka",
  "level": "WARN",
  "additivity": false, (1)
  "AppenderRef": {
    "ref": "FILE"
  }
}
Snippet from an example log4j2.yaml
Root:
  level: "INFO"
  AppenderRef:
    ref: "KAFKA"
Logger:
  name: "org.apache.kafka"
  level: "WARN"
  additivity: false (1)
  AppenderRef:
    ref: "FILE"
Snippet from an example log4j2.properties
rootLogger.level = INFO
rootLogger.appenderRef.0.ref = KAFKA

logger.0.name = org.apache.kafka
logger.0.level = WARN
(1)
logger.0.additivity = false
logger.0.appenderRef.0.ref = FILE
1 Remember to set the additivity configuration attribute to false.

ZeroMQ/JeroMQ Appender

This appender is planned to be removed in the next major release! Users should consider switching to a third-party ZMQ appender.

The ZeroMQ appender uses the JeroMQ library to send log events to one or more ZeroMQ endpoints.

Table 8. JeroMQ Appender configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the appender.

Optional

ignoreExceptions

boolean

true

If false, logging exceptions will be forwarded to the caller of the logging statement. Otherwise, they will be ignored.

Logging exceptions are always also logged to the Status Logger.

affinity

long

0

The I/O affinity of the sending thread.

See Socket.setAffinity() for more details.

backlog

int

100

The maximum size of the backlog.

See Socket.setBacklog() for more details.

delayAttachOnConnect

boolean

false

Delays the attachment of a pipe on connection.

See Socket.setDelayAttachOnConnect() for more details.

identity

byte[]

It sets the identity of the socket.

See Socket.setIdentity() for more details.

ipv4Only

boolean

true

If set, only IPv4 will be used.

See Socket.setIPv4Only() for more details.

linger

long

-1

It sets the linger period for the socket. The value -1 means infinite.

See Socket.setLinger() for more details.

maxMsgSize

long

-1

Size limit in bytes for inbound messages.

See Socket.setMaxMsgSize() for more details.

rcvHwm

int

1000

It sets the high-water mark for inbound messages.

See Socket.setRcvHWM() for more details.

receiveBufferSize

long

0

It sets the OS buffer size for inbound messages. A value of 0 uses the OS default value.

See Socket.setReceiveBufferSize() for more details.

receiveTimeOut

int

-1

It sets the timeout in milliseconds for receive operations.

See Socket.setReceiveTimeOut() for more details.

reconnectIVL

int

100

It sets the reconnection interval.

See Socket.setReconnectIVL() for more details.

reconnectIVLMax

long

0

It sets the maximum reconnection interval.

See Socket.setReconnectIVLMax() for more details.

sendBufferSize

int

0

It sets the OS buffer size for outbound messages. A value of 0 uses the OS default value.

See Socket.setSendBufferSize() for more details.

sendTimeOut

int

-1

It sets the timeout in milliseconds for send operations.

See Socket.setSendTimeOut() for more details.

sndHwm

int

1000

It sets the high-water mark for outbound messages.

See Socket.setSndHWM() for more details.

tcpKeepAlive

int

-1

A value of:

0

disables TCP keep-alive packets.

1

enables TCP keep-alive packets.

-1

uses the OS default value.

See Socket.setTCPKeepAlive() for more details.

tcpKeepAliveCount

long

-1

It sets the maximum number of keep-alive probes before dropping the connection. A value of -1 uses the OS default.

See Socket.setTCPKeepAliveCount() for more details.

tcpKeepAliveIdle

long

-1

It sets the time a connection needs to remain idle before keep-alive probes are sent. The unit depends on the OS and a value of -1 uses the OS default.

See Socket.setTCPKeepAliveIdle() for more details.

tcpKeepAliveInterval

long

-1

It sets the time between two keep-alive probes. The unit depends on the OS and a value of -1 uses the OS default.

See Socket.setTCPKeepAliveInterval() for more details.

xpubVerbose

boolean

false

If true, all subscriptions are passed upstream.

See Socket.setXpubVerbose() for more details.

Table 9. JeroMQ Appender nested elements
Type Multiplicity Description

Filter

zero or one

Allows filtering log events just before they are formatted and sent.

See also appender filtering stage.

Layout

one

Formats the log event as a byte array using Layout.toByteArray().

See Layouts for more information.

Property

one or more

Only Property elements named endpoint are supported. At least one is required to provide the address of the endpoint to connect to.

See Socket.connect() for more details.

Additional runtime dependencies are required to use the JeroMQ Appender:

  • Maven

  • Gradle

<dependency>
  <groupId>org.zeromq</groupId>
  <artifactId>jeromq</artifactId>
  <version>0.6.0</version>
</dependency>
runtimeOnly 'org.zeromq:jeromq:0.6.0'

Configuration examples

This is a simple JeroMQ configuration:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<JeroMQ name="JEROMQ">
  <JsonTemplateLayout/>
  <Property name="endpoint" value="tcp://*:5556"/>
  <Property name="endpoint" value="ipc://info-topic"/>
</JeroMQ>
Snippet from an example log4j2.json
"JeroMQ": {
  "name": "JEROMQ",
  "JsonTemplateLayout": {},
  "Property": [
    {
      "name": "endpoint",
      "value": "tcp://*:5556"
    },
    {
      "name": "endpoint",
      "value": "ipc://info-topic"
    }
  ]
}
Snippet from an example log4j2.yaml
JeroMQ:
  name: "JEROMQ"
  JsonTemplateLayout: {}
  Property:
    - name: "endpoint"
      value: "tcp://*:5556"
    - name: "endpoint"
      value: "ipc://info-topic"
Snippet from an example log4j2.properties
appender.0.type = JeroMQ
appender.0.name = JEROMQ

appender.0.layout.type = JsonTemplateLayout

appender.0.endpoint[0].type = Property
appender.0.endpoint[0].name = endpoint
appender.0.endpoint[0].value = tcp://*:5556

appender.0.endpoint[1].type = Property
appender.0.endpoint[1].name = endpoint
appender.0.endpoint[1].value = ipc://info-topic