Description
We have a clustered application running on multiple nodes. Each node uses Logback to write CSV-formatted logs to the same shared log file.
Under high load, when multiple requests are processed across nodes at the same time, we observe that some log lines become corrupted: lines get interleaved or partially written, resulting in broken or half-written CSV entries.
It appears that Logback does not correctly handle concurrent writes across multiple nodes with the configuration below.
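For context, each node emits one CSV record per log call, along the lines of the sketch below; the field names here are illustrative placeholders, not our real schema:

import java.time.Instant;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: one CSV record per request (our real fields differ).
class RequestCsvLogger {
    private static final Logger log = LoggerFactory.getLogger("ROOT");

    void logRequest(String nodeId, String requestId, int status) {
        // Each call produces exactly one line in the shared CSV file.
        log.info("{},{},{},{}", nodeId, Instant.now(), requestId, status);
    }
}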
Configuration Details
We are programmatically configuring Logback as follows:
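// Fresh logger context and a pattern encoder that writes the bare message (one CSV line per event)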
context = new LoggerContext();
context.stop();
context.setMDCAdapter(new LogbackMDCAdapter());
encoder = new PatternLayoutEncoder();
encoder.setContext(context);
encoder.setPattern("%msg%n");
encoder.start();
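// Rolling file appender in prudent mode, appending to the shared log file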
RollingFileAppender<ILoggingEvent> rollingFileAppender = new RollingFileAppender<>();
rollingFileAppender.setContext(context);
rollingFileAppender.setName(appName);
rollingFileAppender.setAppend(true);
rollingFileAppender.setImmediateFlush(true);
rollingFileAppender.setBufferSize(new FileSize(bufferSize)); // 8192 bytes
rollingFileAppender.setPrudent(true);
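// Custom time-based rolling policy with bounded history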
rollingPolicy = new CustomTimeBasedRollingPolicy<>();
rollingPolicy.setContext(context);
rollingPolicy.setParent(rollingFileAppender);
rollingPolicy.setFileNamePattern(logFilePath);
rollingPolicy.setMaxHistory(retentionPeriod);
String previousLogfilePath = resolvePreviousLogFilePath(logFilePath, interval); // interval = Monthly
rollingPolicy.setTimeBasedFileNamingAndTriggeringPolicy(
new CustomTimeBasedFileNamingAndTriggeringPolicy<>(previousLogfilePath)
);
rollingPolicy.start();
rollingFileAppender.setRollingPolicy(rollingPolicy);
rollingFileAppender.setEncoder(encoder);
rollingFileAppender.start();
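// Async wrapper so application threads don't block on file I/O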
AsyncAppender asyncAppender = new AsyncAppender();
asyncAppender.setName("async_" + appName);
asyncAppender.setContext(context);
asyncAppender.setQueueSize(5000);
asyncAppender.addAppender(rollingFileAppender);
asyncAppender.start();
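// Attach the async appender to the root logger at INFO level and start the context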
logger = context.getLogger("ROOT");
logger.addAppender(asyncAppender);
logger.setLevel(Level.INFO);
context.start();
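For reference, our understanding of prudent=true is that it serializes appends by taking an exclusive java.nio FileLock around each write, conceptually like the simplified sketch below (this is our reading of the documented behavior, not Logback's actual code):

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

// Conceptual sketch of a prudent-mode append: lock, seek to end, write, release.
void safeAppend(FileOutputStream stream, byte[] bytes) throws IOException {
    FileChannel channel = stream.getChannel();
    FileLock lock = channel.lock(); // exclusive lock on the whole file
    try {
        // Another JVM may have appended since our last write, so reposition to the end.
        channel.position(channel.size());
        stream.write(bytes);
    } finally {
        if (lock.isValid()) {
            lock.release();
        }
    }
}

If that lock is only advisory, or is not honored reliably across machines (a known weak spot of NFS), we suspect writes from different nodes could interleave exactly as we observe.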
Observed Behavior
- Some log lines are partially written (only part of the CSV line appears).
- Some lines are interleaved with entries written from other nodes.
- This happens only when multiple cluster nodes write to the same file at around the same time.
Expected Behavior
Each log line should be written atomically and remain intact (no interleaving or truncation), even when multiple nodes write to the same file with prudent=true.
Environment
- Logback version: 1.5.17
- Java version: 21
- Application server: clustered setup (multiple JVM nodes writing to a shared log file)
- File system: NFS
Can you please confirm:
- Whether prudent=true fully supports concurrent writes across multiple JVMs/nodes?
- If not, is there a recommended approach for cluster-safe logging?
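For what it's worth, the workaround we are evaluating in the meantime is to give each node its own file by embedding a node identifier in the file name, so no two JVMs ever write to the same file. A minimal sketch, where NODE_ID is a hypothetical environment variable we would set per node and logDir is a placeholder:

// Sketch: per-node log files remove the need for cross-machine locking entirely.
String nodeId = System.getenv().getOrDefault("NODE_ID", "node-unknown"); // hypothetical env var
RollingFileAppender<ILoggingEvent> perNodeAppender = new RollingFileAppender<>();
perNodeAppender.setContext(context);
perNodeAppender.setName(appName + "_" + nodeId);
perNodeAppender.setAppend(true);
perNodeAppender.setPrudent(false); // no shared file, so prudent mode is unnecessary
TimeBasedRollingPolicy<ILoggingEvent> perNodePolicy = new TimeBasedRollingPolicy<>();
perNodePolicy.setContext(context);
perNodePolicy.setParent(perNodeAppender);
perNodePolicy.setFileNamePattern(logDir + "/app-" + nodeId + ".%d{yyyy-MM}.csv");
perNodePolicy.setMaxHistory(retentionPeriod);
perNodePolicy.start();
perNodeAppender.setRollingPolicy(perNodePolicy);
perNodeAppender.setEncoder(encoder);
perNodeAppender.start();

Downstream consumers would then need to merge the per-node files, which is why we would prefer a supported single-file option if one exists.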
Example Screenshot of Corrupted CSV Output
[screenshot attachment omitted]