Monitoring NiFi – Introduction

Apache NiFi 1.2.0 has just been released with a lot of very cool new features… and I'm taking this opportunity to start a series of articles about monitoring. This is a recurring subject and I often hear the same questions. This series won't provide an exhaustive list of the ways to monitor NiFi (with or without HDF) but, at least, it should get you started!

Here is a quick summary of the subjects that will be covered:

For this series of articles, I will use, as a demo environment, a 4-node HDF cluster (running on CentOS 7):

I'm using HDF to take advantage of Ambari for easier deployment, but this is not mandatory for what I'm going to discuss in these articles (except, obviously, for anything around the Ambari reporting task).

I will not cover how to set up this environment, but if this is something you are looking for, feel free to ask questions (here or on the Hortonworks Community Connection) and to have a look at the Hortonworks documentation about HDF.

Rather than writing a single (very) long article, for the sake of clarity there is one article per listed subject. I'll also try to update the articles over time so they stick as closely as possible to the latest features provided by NiFi.

Also, if you feel that some subjects should be added to the list, let me know and I’ll do my best to cover other monitoring-related questions.

11 thoughts on “Monitoring NiFi – Introduction”

  1. Hi,
    Thank you for such an amazing post. I would like to know:
    Is there a way to monitor details of the CPU, memory, heap, I/O, and threads that each NiFi processor uses?
    Also, how can I take the monitoring a bit further by examining how the queue length at each processor varies with the input rate?
    How do I figure out the upper bound on the input rate at which the flow starts to become overwhelmed (in other words, what's the weakest link in the flow, which essentially determines the max rate the flow can handle)?



    • Hi!
      Regarding monitoring of resources for each processor, there is no easy way. What you can monitor is the duration of each task execution of a processor (which can be a good way to detect a memory leak, for instance), the throughput of the processor, etc. (basically anything you can see by right-clicking on a processor and opening its status history). To see what part of a workflow could be the bottleneck, I'd suggest performing performance tests using the GenerateFlowFile processor. There are too many parameters in play to give a general answer to that question without doing some tests.
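      As a side note, the per-processor statistics behind the status history view are also exposed over NiFi's REST API at `/nifi-api/flow/processors/{id}/status/history`, which makes this kind of monitoring scriptable. Here is a minimal sketch, assuming an unsecured NiFi on localhost; the base URL and processor id are placeholders for your environment, and the metric names used below are assumptions — check the `fieldDescriptors` section of the response for the exact keys available in your NiFi version:

```python
import json
import urllib.request

NIFI = "http://localhost:8080/nifi-api"   # assumed: unsecured local NiFi instance
PROCESSOR_ID = "<processor-uuid>"         # placeholder: copy the id from the UI


def summarize(history, metrics):
    """Average the given metrics over the aggregate snapshots of a
    status-history payload (shape returned by
    GET /flow/processors/{id}/status/history)."""
    snapshots = history["statusHistory"]["aggregateSnapshots"]
    averages = {}
    for metric in metrics:
        values = [snap["statusMetrics"][metric] for snap in snapshots]
        averages[metric] = sum(values) / len(values) if values else 0.0
    return averages


if __name__ == "__main__":
    url = f"{NIFI}/flow/processors/{PROCESSOR_ID}/status/history"
    with urllib.request.urlopen(url) as resp:
        history = json.load(resp)
    # "taskMillis" / "flowFilesIn" are illustrative metric names; list the
    # real ones from the fieldDescriptors in the response.
    print(summarize(history, ["taskMillis", "flowFilesIn"]))
```

      Polling this endpoint periodically and watching the average task duration trend upward is one cheap way to spot the memory-leak-style degradation mentioned above.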

