How do others handle hourly averages?

I am hoping the community can help me out. How do others handle this, and what is best practice for dealing with hourly averages for PI tags?

Here is my setup. We have lots of tags that we seem to need hourly averages for. Examples include the hourly average of the river level, the average MW a meter used, the hourly average of generation out of a unit… The list goes on and on, and so does my tag count. A lot of these tags are used as inputs into bigger analyses (in AF, the Asset Framework), and some also show up in spreadsheets that key people look at (using PI DataLink).

Is it best to have "control", that is, to standardize what an hourly average is? The PI admin would work with the business unit to determine how the business defines an hourly average (time-weighted or event-weighted…) and put that calculation into an AF analysis that runs every hour. Then, whenever a higher-level calculation needs the average, the PI admin would reference that average tag. Business users would also not use the raw tag to compute hourly averages; instead they would seek out the hourly average tag that already exists.
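
To make sure I am using the terms consistently: here is a rough Python sketch, with made-up river-level numbers, of the two definitions as I understand them. The event-weighted average is the plain mean of the recorded values; the time-weighted average weights each value by how long it was in effect.

```python
from datetime import datetime, timedelta

def event_weighted_avg(samples):
    """Event-weighted: plain arithmetic mean of the recorded values."""
    return sum(v for _, v in samples) / len(samples)

def time_weighted_avg(samples, start, end):
    """Time-weighted: each value is weighted by how long it was in effect
    (step interpolation: a value holds until the next one arrives)."""
    total = 0.0
    for (t, v), (t_next, _) in zip(samples, samples[1:] + [(end, None)]):
        total += (t_next - max(t, start)).total_seconds() * v
    return total / (end - start).total_seconds()

# One hour of irregularly spaced samples for a made-up river-level tag.
hour = datetime(2024, 1, 1, 0, 0)
samples = [(hour, 10.0),
           (hour + timedelta(minutes=5), 12.0),
           (hour + timedelta(minutes=50), 20.0)]

print(event_weighted_avg(samples))                                  # 14.0
print(time_weighted_avg(samples, hour, hour + timedelta(hours=1)))  # ~13.17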
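```

With irregularly spaced data the two can differ noticeably, which is why I want one agreed-upon definition up front.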

The other thing I am worried about is having AF run several different analyses every hour, at the top of the hour, for all these hourly average tags. We are a small shop, so I am thinking it would be about 200 tags.

Or am I overthinking this, and should I just create an hourly average whenever it is required in an AF analysis calculation?

Any insight would be greatly appreciated.

  • The analysis service can handle 150,000–200,000 analyses, and possibly more. It's always recommended to do analyses, and especially aggregates, closer to the data source (SCADA -> RTU). That being said, you're not going to have any problems templatizing hourly averages for different asset types and deploying that at a small count, as long as you build them correctly.

    As for using the output in other analyses, always reference the output attribute directly in the downstream analysis itself; it's more efficient than recomputing the average.
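
    The same goes for consumers outside of AF (e.g. the spreadsheet users): point them at the standardized attribute rather than re-aggregating the raw tag. Here is a minimal Python sketch, assuming PI Web API is available in your environment; the base URL and attribute path below are placeholders, not real names.

    ```python
    import requests

    BASE = "https://your-server/piwebapi"  # placeholder PI Web API endpoint
    # Hypothetical AF path to the standardized hourly-average attribute.
    PATH = r"\\MYAF\MyDatabase\Unit1|Hourly Average MW"

    # Resolve the attribute's WebId by path, then read its current value,
    # instead of re-aggregating the raw tag on the client side.
    # (Authentication omitted for brevity.)
    attr = requests.get(f"{BASE}/attributes", params={"path": PATH}).json()
    value = requests.get(f"{BASE}/streams/{attr['WebId']}/value").json()
    print(value["Timestamp"], value["Value"])
    ```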

  • Thank you for the insight. It sounds like I will be updating a few templates this week.

    If anyone else has anything to add to the discussion, I am still all ears.

  • I agree with the recommendations above. You should easily be able to do 200 analyses on the hour. To make them more efficient, make sure you are using PI exception and compression. You might find that calculations start to skip, especially if you have many event-triggered calculations that trigger often. In general, the "EvaluationsToQueueBeforeSkipping" parameter can be increased from 50 to something like 500 to start with.

    My company has implemented thousands of such hourly averages, and depending on your use case, you might be able to cover most needs with a few standard hourly average templates. Often, for the business cases I work with (a ton of environmental compliance), the hourly average is written at the start of the hour.

    One thing to consider is how accurate the data has to be. If you are using a time-weighted average and the next value has not arrived yet, your result could change slightly when you recalculate (see the sketch below). One way to handle this is to offset the trigger by a small amount of time; another is to recalculate the data periodically, which means anything else that used the 1-hour average needs to be recalculated as well. From what I have seen, many business cases, such as situational awareness, do not require this accuracy.
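
    Here is a small Python sketch of that late-data effect, with made-up timestamps and values, showing why a result computed exactly at the top of the hour can disagree with a later recalculation:

    ```python
    from datetime import datetime, timedelta

    def time_weighted_avg(samples, start, end):
        """Step-interpolated time-weighted average over [start, end)."""
        total = 0.0
        for (t, v), (t_next, _) in zip(samples, samples[1:] + [(end, None)]):
            total += (t_next - max(t, start)).total_seconds() * v
        return total / (end - start).total_seconds()

    hour = datetime(2024, 1, 1, 0, 0)
    window = (hour, hour + timedelta(hours=1))

    # What the analysis sees when triggered at the top of the hour:
    # the 00:55 reading has not arrived from the source system yet.
    on_time = [(hour, 10.0), (hour + timedelta(minutes=30), 12.0)]

    # The same hour once the late reading shows up.
    complete = on_time + [(hour + timedelta(minutes=55), 30.0)]

    print(time_weighted_avg(on_time, *window))   # 11.0
    print(time_weighted_avg(complete, *window))  # 12.5
    ```

    Offsetting the trigger by a few minutes gives late values time to land; periodic recalculation corrects the number after the fact but, as noted above, cascades into anything that consumed the average.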