Kafka Streams application opening too many files on kafka servers
I've been working on an application based on the Java kafka-streams API whose goal is to process a stream of data from one Kafka topic and produce it into another topic.
Whenever I start producing messages with the kafka-streams application, file handles keep opening on the Kafka brokers I'm using and are never closed, so the broker eventually hits its open-file limit ("too many open files") and the Kafka and ZooKeeper daemons crash.
I'm using the kafka-streams-1.0.1 API jar for Java, running on JDK 11. The Kafka cluster runs Kafka 1.0.0.
My application's configuration includes the following Kafka producer configs (a sketch of how they might be set follows the list):
batch.size: set to 100,000 (note that the producer API measures batch.size in bytes, not messages)
linger.ms: set to 1,000 milliseconds
buffer.memory: set to the byte equivalent of 5 megabytes
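A minimal sketch of how these settings might be passed through Kafka Streams to its embedded producer; the application id, bootstrap address, and exact byte values are assumptions, and producerPrefix() is what scopes a plain producer config to the Streams app's internal producer:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // hypothetical id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // hypothetical address
// producerPrefix() scopes these to the producer embedded in the Streams app.
props.put(StreamsConfig.producerPrefix(ProducerConfig.BATCH_SIZE_CONFIG), 100_000);  // bytes, per the producer API
props.put(StreamsConfig.producerPrefix(ProducerConfig.LINGER_MS_CONFIG), 1_000);     // milliseconds
props.put(StreamsConfig.producerPrefix(ProducerConfig.BUFFER_MEMORY_CONFIG), 5 * 1024 * 1024); // assuming 5 MiB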
The stream processing itself is very simple and consists of a single line:
stream.map((k,v) -> handle(k,v)).filter((k,v) -> v != null).to(outgoingTopic);
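For context, here is a minimal self-contained sketch of the pipeline around that line; handle() is stubbed and the topic names are placeholders, and only the map/filter/to chain comes from my actual code:

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class StreamApp {
    // Stand-in for handle(k, v); returning a null value marks the record
    // for removal by the filter below.
    static KeyValue<String, String> handle(String k, String v) {
        return KeyValue.pair(k, v);
    }

    public static void main(String[] args) {
        Properties props = new Properties(); // plus the configs shown above
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("incoming-topic"); // placeholder name
        stream.map((k, v) -> handle(k, v))
              .filter((k, v) -> v != null)  // drop records handle() rejected
              .to("outgoing-topic");        // placeholder name
        new KafkaStreams(builder.build(), props).start();
    }
}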
I would appreciate any suggestions.
java apache-kafka apache-kafka-streams
asked yesterday by Stav Saad
Confluent's recommendation is to configure the host kernel to allow 100,000+ open file handles; the default is normally far below that.
– daniu
yesterday
It is configured to that level, yet file handles keep opening.
– Stav Saad
yesterday
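One way to confirm the limit a broker JVM actually inherited (service managers can override shell ulimits) is the HotSpot-specific UnixOperatingSystemMXBean; a minimal sketch:

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCheck {
    public static void main(String[] args) {
        // Cast is HotSpot/Unix-specific; it fails on other JVMs or platforms.
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.println("open file descriptors: " + os.getOpenFileDescriptorCount());
        System.out.println("max file descriptors:  " + os.getMaxFileDescriptorCount());
    }
}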
I don't think Java 11 is officially supported or tested by the Kafka community. Not sure if that's the problem, though.
– cricket_007
yesterday
Java 11 is only the runtime of my own application; it should not affect what happens on the Kafka servers.
– Stav Saad
18 hours ago