How do others handle hourly averages?

I am hoping the community can help me out. How do others handle this, and what is best practice for dealing with hourly averages for PI tags?

Here is my setup. We have lots of tags that we seem to need hourly averages for. Examples include the hourly average river level, the average MW a meter used, the hourly average generation out of a unit... The list goes on and on, and so does my tag count. A lot of these tags are used as inputs into bigger analyses in AF (Asset Framework), and some also show up in spreadsheets that key people look at using PI DataLink.

Is it best to have "control" or standardize on what an hourly average is, with the PI admin working with the business unit to determine how the business defines an hourly average (time-weighted or event-weighted), and put that calculation into an AF analysis that runs every hour? Then whenever another calculation needs the hourly average, the PI admin would reference that average tag. Business users would likewise not use the raw tag to compute hourly averages; instead they would seek out the hourly average tag that already exists.
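The time-weighted versus event-weighted distinction matters more than it first appears, because the two definitions disagree whenever samples arrive irregularly. A minimal sketch of the difference (sample data and step-interpolation assumption are made up for illustration; this is not PI's internal implementation):

```python
def event_weighted_avg(samples):
    """Simple mean of recorded values: each event counts equally,
    regardless of how long the value was in effect."""
    values = [v for _, v in samples]
    return sum(values) / len(values)

def time_weighted_avg(samples, end_time):
    """Each value is weighted by how long it held (step interpolation:
    a value persists until the next sample arrives)."""
    total = 0.0
    for (t, v), (t_next, _) in zip(samples, samples[1:] + [(end_time, None)]):
        total += v * (t_next - t)
    return total / (end_time - samples[0][0])

# (minutes into the hour, value) -- the value holds until the next sample
samples = [(0, 10.0), (10, 20.0), (50, 20.0)]

print(event_weighted_avg(samples))     # 16.67: (10 + 20 + 20) / 3
print(time_weighted_avg(samples, 60))  # 18.33: (10*10 + 20*40 + 20*10) / 60
```

Because irregular sampling skews the event-weighted result toward whatever happens to be reported most often, agreeing on one definition up front (as suggested above) avoids two tags that both claim to be "the hourly average" but disagree.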

The other thing I am worried about is having AF run several different analyses every hour at the top of the hour for all these hourly average tags. We are small, so I am thinking it would be about 200 tags.

Or am I overthinking this, and should I just create an hourly average whenever it is required in an AF analysis calculation?

Any insight would be greatly appreciated.

  • The analysis service can handle 150,000-200,000 analyses, and possibly more. It's always recommended to do analyses, and especially aggregates, closer to the data source (SCADA -> RTU). That being said, you're not going to have any problems templatizing hourly averages for different asset types and deploying that for a small count, as long as you build them correctly.

     

    As for using the output in other analyses, always directly reference the attribute in the downstream analysis itself because it's more efficient.
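To make the templatizing suggestion concrete, a time-weighted hourly average in an AF analysis could be expressed with the built-in TagAvg function, scheduled periodic every hour, with its output mapped to an attribute backed by a PI point. The attribute name 'Level' below is a placeholder, not from the original post:

```
TagAvg('Level', '*-1h', '*')
```

Defined once on the element template, the same expression then deploys to every asset built from that template, which is what keeps the per-tag maintenance burden low even as the tag count grows.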

