
Incorrect or corrupted CSV log lines when multiple cluster nodes write to the same log file with prudent mode enabled #973

@Dushyant-GitHub

Description


We have a cluster-based application running on multiple nodes. Each node uses Logback to write CSV-formatted logs to the same shared log file.
Under high load (when multiple requests are processed across nodes at the same time), we observe that some log lines become corrupted: entries from different nodes are interleaved or only partially written, leaving broken or truncated CSV records.

It appears that Logback does not correctly serialize concurrent writes from multiple nodes with this configuration.

Configuration Details

We are programmatically configuring Logback as follows:

import ch.qos.logback.classic.AsyncAppender;
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.classic.util.LogbackMDCAdapter;
import ch.qos.logback.core.rolling.RollingFileAppender;
import ch.qos.logback.core.util.FileSize;

LoggerContext context = new LoggerContext();
context.stop();  // reset the context before (re)configuration
context.setMDCAdapter(new LogbackMDCAdapter());

PatternLayoutEncoder encoder = new PatternLayoutEncoder();
encoder.setContext(context);
encoder.setPattern("%msg%n");  // each event is a pre-formatted CSV line
encoder.start();

RollingFileAppender<ILoggingEvent> rollingFileAppender = new RollingFileAppender<>();
rollingFileAppender.setContext(context);
rollingFileAppender.setName(appName);
rollingFileAppender.setAppend(true);
rollingFileAppender.setImmediateFlush(true);
rollingFileAppender.setBufferSize(new FileSize(bufferSize));  // 8192 bytes
rollingFileAppender.setPrudent(true);  // expected to serialize writes across JVMs

CustomTimeBasedRollingPolicy<ILoggingEvent> rollingPolicy = new CustomTimeBasedRollingPolicy<>();
rollingPolicy.setContext(context);
rollingPolicy.setParent(rollingFileAppender);
rollingPolicy.setFileNamePattern(logFilePath);
rollingPolicy.setMaxHistory(retentionPeriod);

String previousLogfilePath = resolvePreviousLogFilePath(logFilePath, interval); // interval = Monthly
rollingPolicy.setTimeBasedFileNamingAndTriggeringPolicy(
    new CustomTimeBasedFileNamingAndTriggeringPolicy<>(previousLogfilePath)
);
rollingPolicy.start();

rollingFileAppender.setRollingPolicy(rollingPolicy);
rollingFileAppender.setEncoder(encoder);
rollingFileAppender.start();

AsyncAppender asyncAppender = new AsyncAppender();
asyncAppender.setName("async_" + appName);
asyncAppender.setContext(context);
asyncAppender.setQueueSize(5000);
asyncAppender.addAppender(rollingFileAppender);
asyncAppender.start();

Logger logger = context.getLogger("ROOT");
logger.addAppender(asyncAppender);
logger.setLevel(Level.INFO);
context.start();
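For context, we detect the corrupted entries with a small validator that counts comma-separated fields per line. This is a sketch, not our production code: the class name, the expected field count of 5, and the naive split on bare commas are illustrative assumptions (a real CSV format may contain quoted commas).

```java
import java.util.ArrayList;
import java.util.List;

public class CsvLogCheck {
    // Illustrative assumption: every well-formed log line has exactly 5 fields.
    static final int EXPECTED_FIELDS = 5;

    /** Returns the 0-based indexes of lines whose field count is wrong. */
    static List<Integer> corruptLineIndexes(List<String> lines) {
        List<Integer> bad = new ArrayList<>();
        for (int i = 0; i < lines.size(); i++) {
            // limit -1 keeps trailing empty fields, so "a,b,c,d," still counts 5
            int fields = lines.get(i).split(",", -1).length;
            if (fields != EXPECTED_FIELDS) {
                bad.add(i);
            }
        }
        return bad;
    }
}
```

Running this over the shared log file (e.g. via Files.readAllLines) flags the interleaved and truncated lines shown in the screenshots below.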

Observed Behavior

  • Some log lines are partially written (only part of the CSV line appears).

  • Some lines are merged or mixed between log entries written from different nodes.

  • This happens only when multiple cluster nodes write to the same file around the same time.

Expected Behavior

Each log line should be written atomically and remain intact (no mixing or truncation), even when multiple nodes are writing to the same file with prudent=true.
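Our expectation is based on how prudent mode is documented to behave: as we understand it, each write acquires an exclusive java.nio file lock, re-seeks to the end of the file (another JVM may have appended in the meantime), writes the encoded event, and releases the lock. A minimal sketch of that pattern outside Logback (the class and method names are ours, not Logback API):

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockedAppend {
    /** Appends one complete line under an exclusive whole-file lock. */
    static void appendLineLocked(Path file, String line) throws Exception {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            try (FileLock lock = ch.lock()) {  // exclusive across processes
                ch.position(ch.size());        // re-seek: another JVM may have appended
                ch.write(ByteBuffer.wrap((line + "\n").getBytes(StandardCharsets.UTF_8)));
                ch.force(false);               // flush data before releasing the lock
            }
        }
    }
}
```

This pattern only serializes writers if the underlying file system honors FileLock across nodes, which is part of what we would like confirmed for NFS.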

Environment

  • Logback version: 1.5.17

  • Java version: Java 21

  • Application server: Clustered setup (multiple JVM nodes writing to a shared log file)

  • File system: NFS

Can you please confirm:

  1. Does prudent=true fully support concurrent writes to the same file from multiple JVMs/nodes?

  2. If not, what is the recommended approach for cluster-safe logging?

Example Screenshots of Corrupted CSV Output

(Screenshots attached to the original issue; image previews not reproduced here.)
