ProControl, Inc.

Process Control Education and Technology

Robert V. Bartman, Ph.D., President

4976 Bear Mountain Drive

Evergreen, CO  80439

office: 303-670-8392

cell: 303-670-9092

e-mail: bob@procontrol.net

 

 

 


Appreciating and Strengthening ARC

 

If your philosophy is applying MVC (constrained multivariable control, such as DMC) everywhere, preferably bringing CV’s and MV’s by the score into one matrix, and perhaps decommissioning one-second stabilizing PID’s in favor of ten-second (or slower) MVC’s talking directly to valves, then read no further.  This is for folks interested in breaking their complex process into separable pieces, each solvable by ARC (Advanced Regulatory Control, fully implementable in a DCS without vendor support), provided there’s no loss of credits for ARC vs. MVC.  This is also meant for those who’ve tried ARC beyond the basic PID level, and hit implementation roadblocks.

 

I’ve broken the discussion into three parts.  This one lists some of the critical skills needed to embark on ARC with assurance – without throwing darts.  Think of these skills (e.g., knowing whether your process understanding is adequate) as a subset of what’s needed to be a versatile Advanced Control Engineer.  The second defines the spectrum of ARC controls relevant to a real-world, two-product distillation tower.  The last critiques the classical ARC elements defined for tower control, finds them wanting in some key ways, and fixes the problems.  We can still call the result ARC if you like;  while it’s smarter, it’s fully implementable in a DCS.

  

ARC’s classical toolset:  PI and PID feedback algorithms (for killing unmeasurable disturbances rapidly, without rocking the boat);  a ratio algo (for steady-state feedforward, primarily to cancel feedrate-change impacts);  a cascade structure incorporating all these algos (to kill a variety of disturbances before they’re seen by the highest-level objective, such as quality control);  capability to add an increment to a PID’s output (for more general feedforward disturbance cancellation);  dynamic delay, lag, and lead blocks (for feedforward synchronization in both incremental and ratio structures, and our version of model-based control);  a selector (for maximizing/minimizing a variable such as feedrate against multiple process constraints, preferably with added logic inhibiting bouncing among selections);  and a control language (such as CL), e.g. to assure that all feedforward blocks initialize in the right sequence during recovery from a signal failure;  or to assess signal validity;  or to adaptively retune PIDs and feedforwards based on changing process conditions;  or to implement targets arising from offline optimization.
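
To make the dynamic-block vocabulary concrete, here’s a minimal sketch (in Python, standing in for CL) of deadtime, lag, and lead/lag elements as they might run at a fixed one-second interval.  The class names and backward-difference forms are my illustrative choices, not any vendor’s function blocks:

    from collections import deque

    DT = 1.0  # assumed control interval, seconds

    class Deadtime:
        """Pure delay: output equals the input from 'delay' seconds ago."""
        def __init__(self, delay, u0=0.0):
            self.buf = deque([u0] * max(1, round(delay / DT)))
        def step(self, u):
            self.buf.append(u)
            return self.buf.popleft()

    class Lag:
        """First-order lag with time constant tau (seconds)."""
        def __init__(self, tau, y0=0.0):
            self.a, self.y = DT / (tau + DT), y0
        def step(self, u):
            self.y += self.a * (u - self.y)
            return self.y

    class LeadLag:
        """(lead*s + 1)/(lag*s + 1): the lead term acts on the input's rate of change."""
        def __init__(self, lead, lag, u0=0.0):
            self.lead, self.filt, self.u_prev = lead, Lag(lag, u0), u0
        def step(self, u):
            du_dt = (u - self.u_prev) / DT
            self.u_prev = u
            return self.filt.step(u + self.lead * du_dt)

We’ll reuse these little blocks in the sketches further below.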

 

Everything in ARC – structure, PID tuning, feedforward disturbance cancellation – is based on process understanding  (along with economics, which help dictate structure).  We can’t get this process understanding by guesswork.  Thus our minimum requirements for a process analysis tool for ARC implementation:

 

1)  “Best Fit” model parameters provided by statistics for each independent variable:  Process Gain, Deadtime, at least two Lagtimes, and Leadtime (to fit overshoots and inverses).  While these classical parameters suffice for the majority of petrochemical responses, additive “fit anything” vectors should also be available, complementing this classical structure.

Standard process analysis tools provide Gain, Deadtime, and one Lagtime.  But these are often inadequate;   feedforward cancellation outcomes, up-front PID/feedforward adequacy studies, and model-based control relieving ARC’s limitations (as we’ll see later) all benefit from modeling a more complex process with additional dynamic parameters. 
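
As a sketch of what this richer parameter set can represent, the toy simulation below steps a gain + deadtime + two-lag + lead model; a negative lead time yields inverse response, a lead larger than the first lag yields overshoot.  All names and numbers here are invented for illustration, not Discover’s internals:

    def step_response(K, Td, T1, T2, Tlead, t_end=60.0, dt=0.1):
        """Unit-step response of K * e^(-Td*s) * (Tlead*s + 1) / ((T1*s + 1)(T2*s + 1))."""
        x1 = x2 = 0.0
        ys = []
        for k in range(int(t_end / dt)):
            u = 1.0 if k * dt >= Td else 0.0   # deadtime applied to the input step
            dx1 = (u - x1) / T1                # first lag
            dx2 = (x1 - x2) / T2               # second lag
            ys.append(K * (x2 + Tlead * dx2))  # lead adds a rate term to the output
            x1 += dx1 * dt
            x2 += dx2 * dt
        return ys

    resp = step_response(K=2.0, Td=3.0, T1=5.0, T2=8.0, Tlead=-4.0)
    print(f"initial dip {min(resp):.2f} (inverse response); settles near {resp[-1]:.2f} (= K)")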

 

2)  “Worst Case” (for PID and feedforward stability) model parameters, yielding a fit just at the margin of model credibility, but with dynamics purposely skewed to cause stability problems.  The murkier the process data, the further these parameters diverge from Best Fit values.  Assessing this alternate Worst Case model should be intuitive, requiring no statistical knowledge – preferably just a mouse click.  So what … ?

 

3)  Easy carryover of these two models into optimal PID and feedforward tuning, then to PID and feedforward case study comparisons assessing adequacy of the current process data.  A good tool should guide us to a “The data’s OK”, or “Not” decision, and if “OK”, to tuning which minimizes vulnerability without being unduly conservative – preferably, all this with the user ignorant of statistics.  If “Not OK”, of course, we either look for another window of ‘opportunistic’ data, or we run a sharper plant test.

 

“Optimal” PID tuning yields minimal PV cycling, with no more than 20% MV overshoot.  Tuning should cover an unlimited range of process dynamics, all PID algos, and all PID control intervals. Variable intervals let us view the serious degradation in constants, in PID structure itself, and in disturbance-killing capacity when we shift loops such as pressures from one-second ARC to much slower MVC frequencies (!)   
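
The toy comparison below (not Discover’s tuning method; every number is invented) illustrates the interval effect: the same velocity-form PID on an assumed first-order-plus-deadtime process, run once a second with Derivative, then at ten seconds with Derivative necessarily dropped and gain cut, the integrated error against a load step telling the story:

    def loop_iae(dt, Kc, Ti, Td, t_end=300.0):
        """IAE of a velocity-form PID holding SP = 0 on an assumed FOPDT process
        (gain 1, tau 10 s, deadtime 5 s) against a load step at t = 50 s."""
        tau = 10.0
        op_buf = [0.0] * max(1, round(5.0 / dt))   # deadtime on controller output
        pv = op = iae = e1 = e2 = 0.0
        for k in range(int(t_end / dt)):
            load = 1.0 if k * dt >= 50.0 else 0.0
            e = 0.0 - pv
            op += Kc * ((e - e1) + dt / Ti * e + Td / dt * (e - 2 * e1 + e2))
            e2, e1 = e1, e
            op_buf.append(op)
            u = op_buf.pop(0)
            pv += dt / tau * ((u + load) - pv)     # first-order process plus load
            iae += abs(e) * dt
        return iae

    print(f"1-s PID : IAE = {loop_iae(1.0,  Kc=1.2, Ti=10.0, Td=2.0):.1f}")
    print(f"10-s PI : IAE = {loop_iae(10.0, Kc=0.4, Ti=20.0, Td=0.0):.1f}")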

 

4)  Ditto for carryover of these two sets of model parameters into an ARC-compatible model-based controller, rather than a PID (for reasons covered below).

 

5)  Multivariable inputs: I’ve needed as many as five to sort things out.  Here it’s essential that the analysis tool lets us know, not just as a simple percentage, but also graphically, how much each independent variable contributes to the overall response.  Without such comparative graphics we may build feedforward controllers based on a “Best Fit” model for a particular MV or Load that’s absolute nonsense (since another variable’s much more significant in securing a good fit). 

 

6)  Use of what we’ve already learned:  As a corollary to (5), a good analysis tool allows the impact of minor contributors (in the current multivariable data) to be backed out using historical, already-known model parameters – not those arising from current “Best Fit” statistics.

 

7)  Graphics:  a good tool promotes comparisons by easy grouping and scale-sharing of both measured  and calculated variables.  

 

The message:  good process analysis software doesn’t simply yield a multivariable model;  it encourages probing this model’s adequacy for ARC.

 

We have a rather nice process analysis and PID/feedforward exploration tool, Discover, which offers much more than this, including the ability to draw continuous tray temperature-based models through discontinuous GC or Lab data, and online Help for every class of process analysis problem in my experience.  http://www.procontrol.net/discover_reviews.htm contains reviews from many corporations.  Our clients enjoy self-sufficiency in quickly solving major tuning problems, as you’ll find in:  http://www.procontrol.net/tower_pressure_retuned_by_discover.htm, and

http://www.procontrol.net/discovers_fcc_tuning_stabilizes_gasoline_blending.htm.

 

 

What you should know for ARC implementation:

 

If you’re daunted at the thought of a three or four-level-deep cascade – or a little unsure of the translation of process parameters into a disturbance-cancelling feedforward controller, and would certainly not use “Lead” there since you don’t trust it – or you can’t assess the tuning compromises needed when imperfect feedforward control exists in concert with a PID above – or if you’re hesitant about using Derivative, and would never use it in any event on levels or pressures – or if you’re a petrochemicals control engineer and don’t know how to optimize a two-product tower both offline and online, or how to configure a tower’s basic controls – or don’t know when DMC’s really needed, and when not – I could go on! – then you’re not ready for ARC played at a competitive level, and you might be content with MVC as an umbrella solution.

 

Or you might take a practical control class, learning where, and how, ARC can capture credits matching or beating those from MVC (on problems suitable for either strategy) with much lower cost and maintenance risk.  Again we have a rather nice one, Revelations on Dynamic Process Analysis, Advanced Control, and Online Optimization, described at www.procontrol.net.

 

A short course in ARC design follows.

 

 

Tower Control / Optimization, utilizing ARC

 

Maximizing most-valuable product yield, via dedicated cascade control 

 

With few exceptions, a feedback controller sits at the top of an ARC control scheme.  In classical ARC, this controller’s either a PI or PID algorithm.  While this PID may output directly to a valve (as with fast pressure or level control aimed at process stabilization), the highest-return loops (often product  quality-oriented) typically have one or more underlying feedback loops, each charged with killing disturbances before they’re seen by the feedback loop just above.

 

As an example, let’s consider a two-product tower with a valuable bottoms product having a max “% Light” spec, and a lower-value top product without any corresponding “% Heavy” spec.  We’re amazed to find an analyzer in the bottoms measuring “% Light”, with its signal both reliable and continuous.  We find a tray temperature transmitter located a few trays up from the reboiler return line, and our process modeling tool confirms that this temperature’s a good inference of the analyzer reading, with the temperature response faster than the analyzer.  (The correlation coefficient will be > 0.98 if the tray’s a good one, our eyeball will be happy with the fit, and no plant test will be needed.  Running analyzer/temperature correlations on randomly-gathered data can be a real mood-booster.  We do lots of this in class, for some reason, using client-donated data.)

 

We translate this knowledge into a three-level ARC cascade:  a top-level analyzer PI outputting to the SP of a tray temperature PID, which outputs to the SP of a reboiler steam flow PI, thence to the reboiler valve.  (Why choose the reboiler as our MV?  How must overhead drum level be controlled?  Why’s the flow a PI, the temperature a PID, and the analyzer a PI?)
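
Here’s the cascade’s wiring in sketch form, with a bare-bones positional PID standing in for the DCS algorithms.  Tag names, tuning values, and ranges are invented; Derivative acts on PV, as usual:

    class PID:
        """Positional PID; Td = 0 gives a PI.  Derivative acts on PV, and the
        integral is clamped to the output range as crude anti-windup."""
        def __init__(self, Kc, Ti, Td=0.0, dt=1.0, lo=0.0, hi=100.0):
            self.Kc, self.Ti, self.Td, self.dt = Kc, Ti, Td, dt
            self.lo, self.hi, self.i, self.pv_prev = lo, hi, 0.0, None
        def step(self, sp, pv):
            e = sp - pv
            self.i = min(max(self.i + self.Kc * e * self.dt / self.Ti, self.lo), self.hi)
            d = 0.0 if self.pv_prev is None else -self.Kc * self.Td * (pv - self.pv_prev) / self.dt
            self.pv_prev = pv
            return min(max(self.Kc * e + self.i + d, self.lo), self.hi)

    # reverse action (negative gain) on the analyzer PI: high "% Light" calls
    # for a higher tray temperature SP, boiling the lights back overhead
    analyzer_pi = PID(Kc=-0.8, Ti=30.0, lo=150.0, hi=200.0)      # out: temp SP, degF
    temp_pid    = PID(Kc=2.0,  Ti=8.0, Td=1.5, lo=0.0, hi=50.0)  # out: steam SP, klb/h
    flow_pi     = PID(Kc=0.6,  Ti=2.0)                           # out: valve OP, %

    def scan(pct_light_sp, pct_light_pv, tray_temp_pv, steam_flow_pv):
        temp_sp = analyzer_pi.step(pct_light_sp, pct_light_pv)  # top level
        flow_sp = temp_pid.step(temp_sp, tray_temp_pv)          # middle level
        return flow_pi.step(flow_sp, steam_flow_pv)             # to the reboiler valve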

 

This cascade system maximizes upgrade of less valuable overhead to more valuable bottoms;  it’s typically our tower’s major money-maker.  We’ll pick on its glaring flaw in the final section.

 

Tower optimization

 

In the absence of process constraints, the reflux/feed ratio, our other degree of freedom, is set by energy/yield optimization.  Modern ARC has enough language capability to implement the outcome of offline optimization based on a fundamental, validated model.  The condensed online ARC version can react to periodic console updates in energy cost and relative product worth, to observed changes in feed composition (how measured, with no feed analyzer?), and to changes in bottoms product spec – anything known to be relevant from the offline step, backed by historical data. 

 

While offline tower optimization for subsequent online application should be a Process Engineer’s job (I think), good luck on that – thus we teach the entire subject from scratch to our Control Engineers.

 

Reflux-related process constraints

 

As needed, the economically-optimal reflux rate can be constrained by an ARC selector to respect reflux-related process constraints, such as a tray hydraulic limitation.  If the reboiler steam valve sometimes goes wide open, we add a 95% valve-open controller as another input to this reflux-maximizing selector, thus keeping our tray temperature in control and our valuable bottoms product on-spec.  Note that the selector’s direction relegates optimization to the lowest priority;  process protection comes first.
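
In sketch form (tag values invented), each constraint controller computes the reflux SP it can live with, and a low select – with a small deadband to inhibit bouncing – decides:

    class LowSelector:
        """Low select with a deadband inhibiting bouncing among selections."""
        def __init__(self, deadband=0.5):
            self.deadband, self.sel = deadband, None
        def step(self, *candidates):
            new = min(candidates)
            if self.sel is None or abs(new - self.sel) >= self.deadband:
                self.sel = new
            return self.sel

    reflux_select = LowSelector(deadband=0.5)
    sp = reflux_select.step(
        120.0,   # economically-optimal reflux SP
        131.0,   # output of the tray hydraulics (delta-P) constraint controller
        112.0,   # output of the 95%-steam-valve-open constraint controller
    )
    print(sp)    # -> 112.0: the reboiler valve constraint is in control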

 

Breaking dynamic process interactions by ARC Feedforward (no matrix needed here!)

 

While there’s an interaction between energy optimization (using the reflux MV) and yield-maximizing quality control (using the steam MV), it’s one-way, and easily broken.  Since changes in reflux affect our tray temperature PID’s PV, we add an incremental feedforward controller adjusting steam flow to cancel reflux’s impact.  Given the relative dynamics, this feedforward cancellation objective’s easily met, and reflux changes shouldn’t disturb tray temperature.
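
In sketch form (reusing Deadtime and LeadLag from the first code block; all gains and dynamics invented), the incremental feedforward scales each reflux move by the ratio of steady-state gains and synchronizes it through the net load-vs.-MV dynamic mismatch:

    K_reflux = -0.8   # assumed reflux gain on tray temperature
    K_steam  =  1.2   # assumed steam gain on tray temperature
    sync  = LeadLag(lead=4.0, lag=6.0)   # assumed net lag/lead mismatch
    delay = Deadtime(2.0)                # assumed net deadtime difference

    class RefluxToSteamFF:
        """Each scan, emit a steam-flow SP increment cancelling the last reflux move."""
        def __init__(self):
            self.reflux_prev = None
        def increment(self, reflux_pv):
            if self.reflux_prev is None:
                self.reflux_prev = reflux_pv
                return 0.0
            d_reflux, self.reflux_prev = reflux_pv - self.reflux_prev, reflux_pv
            # summing these synchronized increments onto the steam SP reproduces
            # the full dynamic compensation, by linearity
            return -(K_reflux / K_steam) * sync.step(delay.step(d_reflux))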

 

Because reflux in this configuration is synonymous with fractionation, and fractionation’s impact on bottoms’ impurity has diminishing returns, the reflux’s process gain in this feedforward controller may require adaptation as a function of current conditions, e.g. reflux/feed.  Nonlinear gain functionality comes primarily from our rigorous offline energy/yield optimization model.  This, plus historical variability in reflux/feed, would alert us to need (or not) for online feedforward tuning gain adaptation.
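
A sketch of that adaptation, with an invented curve standing in for what the offline model would supply – the feedforward’s reflux gain interpolated at the current reflux/feed ratio:

    # (reflux/feed, reflux gain on tray temperature) pairs: an invented
    # illustration of diminishing fractionation returns; the real curve comes
    # from the rigorous offline energy/yield optimization model
    GAIN_CURVE = [(1.0, -1.2), (1.5, -0.8), (2.0, -0.5), (2.5, -0.35)]

    def adapted_reflux_gain(reflux_pv, feed_pv):
        """Linear interpolation of K_reflux at the current reflux/feed."""
        r = reflux_pv / feed_pv
        if r <= GAIN_CURVE[0][0]:
            return GAIN_CURVE[0][1]
        for (x0, y0), (x1, y1) in zip(GAIN_CURVE, GAIN_CURVE[1:]):
            if r <= x1:
                return y0 + (y1 - y0) * (r - x0) / (x1 - x0)
        return GAIN_CURVE[-1][1]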

 

[We teach every known ARC adaptation trick, including shifting of flow-valve characteristics with delta-P.  Adaptive ARC retuning relies on dynamic process understanding, which often can be grown only over time as operations and feedrates change;  our Discover analysis tool has features encouraging such growth.  When adaptation’s needed, but not used, we struggle to reinvent tuning constants, and many bad things happen.  Note relevant comments on DMC in the Addendum to this paper.]

   

Back to our tower:   Feedrate changes, if strong enough to disturb tray temperature, are cancelled by another ARC incremental feedforward controller adjusting the reboiler steam flow SP.  In our tower example, need for this feedforward would be especially strong if the feed were subcooled, dumping spec-violating Lights into the bottoms when its rate increased.

  

Maximizing tower feedrate, recognizing feed vs. fractionation economics

 

If economics or operations warrant, one simple ARC controller can maximize feedrate while respecting scheduler-specified limits, tower physical constraints, and even economics.  Note that our optimal-fractionation controller already handles physical constraints;  its reflux/feed PV will fall below its optimal SP if needed to protect the tower.  Our feedrate-maximization controller’s PV is the current departure of reflux/feed below its optimal SP.  Our feedrate-maximization controller’s SP is the amount of this departure we’re willing to suffer to squeeze in another barrel of feed.   This isn’t a “blind” throughput push;  it’s subject to whatever fractionation (thus valuable product yield) debit, below optimum, we’re willing to tolerate in our feed maximization drive.
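
As a sketch (reusing the PID class from the cascade block; limits and numbers invented), the maximizer’s PV and SP are exactly as just described, with a slow PI raising the feed SP until the reflux/feed shortfall below optimum reaches what we’ve agreed to give up:

    # lo/hi map to the scheduler-specified feedrate limits (invented numbers)
    feed_pi = PID(Kc=40.0, Ti=120.0, lo=800.0, hi=1200.0)

    def feed_max_scan(rf_optimal_sp, reflux_pv, feed_pv, allowed_departure):
        """allowed_departure: how far below the optimal reflux/feed ratio we'll
        let fractionation fall to squeeze in another barrel of feed."""
        departure = max(0.0, rf_optimal_sp - reflux_pv / feed_pv)   # the PV
        # while departure < allowed, the error is positive and the feed SP rises;
        # once constraints pull reflux/feed down far enough, the push stops
        return feed_pi.step(sp=allowed_departure, pv=departure)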

 

Keeping valuable bottoms’ quality in control, despite reboiler limitations

 

If the reboiler steam valve often hangs out at 95% open, and we have no independent control over feedrate, we can use another ARC capability – a configuration switch – to fully open this valve instead, and switch tray temperature control to the reflux, switching temperature back to the reboiler again when appropriate.  (What’s the required logic?)  This switch between two feedback controllers, each tuned for its own MV’s dynamics, wrings the last yield-maximization nickel from our tower when the reboiler’s maxed out.  The net return is the worth of incremental bottoms yield arising from the bit of extra fractionation gained by moving the steam valve from 95% to 100%, minus the cost of the incremental steam consumed.  But given a nonlinear diminishing return for extra fractionation, the economic motivation for such a configuration switch is likely small.  And the control debit’s huge, since reflux dynamics impacting tray temperature are terrible vs. reboiler steam.  Thus, while ARC allows it, we’d likely shun this configuration-change complexity.  You should find a way to shun it as well, I think, inside a DMC matrix.  But this might not be so easy!

 

Handling a condenser limitation

 

Finally:  tower pressure’s always controlled by a simple PID.  What this PID adjusts depends on tower design at minimum;  and in a flooded condenser design, it can also depend on our choice of the MV we’ve already selected for yield maximization.  We cover all the alternatives in class;  here let’s assume the pressure PID simply adjusts a valve releasing incondensables.

 

Handling either an overhead condenser or a reboiler constraint forces thought about What to Give Up, and in what sequence.  Our premise so far’s been that we’ll hold on to yield maximization, i.e. control of valuable bottoms product quality against its max spec limit, to the bitter end.  If we hit a reboiler constraint, we’ll sacrifice fractionation.  Makes sense to treat a condenser constraint the same way … so we’ll add “max 95% open on the vent valve” to our reflux-maximization constraint controller, tasked to achieve optimum fractionation in the absence of reboiler and/or condenser limitations.

 

This strategy for relieving a condenser constraint by reducing fractionation (reflux) works only if a reflux cut reduces vapor into the condenser (i.e., reboiler steam).  Our reflux-to-steam feedforward controller performs this conversion, reducing steam while keeping bottoms quality constant.  But a cautionary note: this requires the feedforward controller to be operational, and vent valve controller dynamics therefore contain embedded feedforward tuning dynamics.  This is unavoidable if we insist on retaining yield maximization as our top priority.  A good ARC analysis/tuning tool handles this apparent dynamic complexity. 

 

The alternative is to sacrifice bottoms quality control (yield maximization) when the pressure control vent valve’s wide open, controlling pressure directly with steam.  But since this risks off-spec bottoms product, I’d consider it only if serious condenser and/or feed composition disturbances couldn’t be handled by directly cutting fractionation (reflux).

 

*********

 

The most important message:  in an ARC system, we know exactly What’s controlling (or overriding) What – it’s evident in each element, which you’ve designed.  Throwing all this into a constrained dynamic matrix instead might be tempting.  But could you force this matrix to dedicate the best MV (steam flow) to the highest profit objective (controlling tray temperature, thus maximizing yield), a dedication at the core of our ARC design?  Nope – and the improved yield we get from that dedicated link alone might tip the balance to ARC.        

 

*********

 

Our Revelations Course provides many in-depth examples of the above ARC technology’s implementation and success in process plants, starting with the testing and process analysis needed to eliminate guesswork.  Just the feedrate-feedforward element above made millions for a very happy chemicals client, reducing tray temperature variability by a factor of twenty in a dynamically-hostile tower of this same configuration.  Its implementation is a Final Exam question.

   

Have you been applauded by your Refinery Manager, and his Technical and Operations Managers, who formed a welcoming committee for you at the control house door?  Implementing another feedforward controller (on a crude preheat furnace) had this outcome.  This is a spectacular feedforward success story;  the seemingly unsolvable process problem had forced the refinery to send most of its mogas, jet, and diesel products to the slop tank, for another go-round later.  Our students solve it and become feedforward converts (I think) for life.

 

After we make a model-based control fix to the above classical ARC scheme (in the section below), its credit-capture should equal anything DMC can attain – even in its most recent version, which fixes a long-standing problem (see the Addendum further below).

 

Let’s consider control scheme separability, and (related) ease of implementation.  If every process issue and economic drive mentioned above is relevant, requiring attention, with ARC we have:

1)  a quality control cascade requiring tuning for each of three feedback controllers;

2)  calculation of the economically-optimum reflux ratio, using equations derived offline;

3)  a constraint controller pushing fractionation (reflux) up to this optimum if achievable, while protecting against hydraulic, reboiler, and condenser overloads;

4)  a feedforward controller trimming reboiler steam to cancel reflux’s impact on tray temperature;

5)  a second feedforward trimming steam to cancel feedrate impacts on tray temperature;

6)  possible adaptive adjustment of the reflux-cancelling feedforward controller gain, and/or the reflux and steam flow proportional gains;  and

7)  a controller maximizing feedrate, recognizing fractionation-vs.-throughput economics.

 

Wow – a lot of stuff!  But each of these separate ARC schemes has an on/off switch, is comprehensible (I think), and is tunable by our plant test and process analysis procedures.

 

On the Interface side, we should automatically turn on the reflux and feedrate feedforward controllers when the Operator turns on their respective maximizers.  (Why?)

 

If there’s a potential challenge in our design, it’s in these two flow maximizers’  each being limited by reboiler, tray loading, and condenser constraints.  We’ve avoided “double dipping” by:  a)  giving the reflux maximizer direct responsibility for these constraints, allowing the reflux ratio to fall below its economic optimum if necessary to protect the tower, and to keep bottoms quality in control … then  b) defining how much of an optimal fractionation sacrifice (on the reflux/feed ratio) we’re willing to accept to push another barrel of feed, and maximizing feedrate to hit this (reduced) reflux/feed fractionation target.  This assures we push the tower against its most limiting physical constraint, without considering such constraints twice. 

 

So, not a problem here – but we probably wouldn’t care to maximize/minimize a third variable subject to these constraints.  A matrix-based LP’s a great tool for handling multivariable constraints, as most of you know.

 

So, when might an MVC tool such as DMC be warranted on this tower, and ARC ruled out?  It’s when:  1)  multiple max/min drives against multiple process constraints become daunting in one-variable-at-a-time ARC, or  2) forgetting constraints, when two control objectives (say a top and bottom temperature on a tower) are each influenced by two MV’s (say reboiler steam and reflux), and the steam and reflux MV’s have different dynamic impacts on at least one of these temperatures, and this MV difference is not the same for the other temperature.  (2) makes the problem not just fully interactive, but truly dynamically unequal, requiring the “D” part of DMC to sort things out.

 

 In our tower case, (2) occurs if the top product has a restrictive max “% Heavy” spec, and honoring a low optimum reflux ratio would violate this spec.  In this case, looking solely at tower economics, the minimum-cost solution is dual quality (or dual temperature) control, which minimizes tower energy consumption.  If our “% Heavy” signal is continuous, and we’re striving for the tightest possible quality control at each end of the tower to truly minimize energy cost – well, Good Luck.  This is a vastly more difficult interactive control problem than our case above, one I’d cheerfully cede to DMC.  We can let DMC work its constrained dynamic decoupling magic in this world of nonlinear fractionation response, varying both a dynamically-inferior variable (reflux) and a better one (steam) to satisfy both quality objectives. 

 

Even with its interactive skills, DMC will not do a perfect decoupling job, though (it just can’t, given MV dynamics), and both qualities will vary.

 

Or we can overfractionate just a bit to assure the top product’s always on-spec (while lobbying for spec relaxation), and achieve tighter control on our key money-making quality (% Light in the bottoms), via ARC.  

 

We needn’t be dumb in how we “overfractionate a bit” – we use an ARC-compatible model-based controller (covered below) for “% Heavy in the top product”.  This controller recognizes the steady-state gains for both reflux and steam impacts on both top and bottoms qualities, and the dynamics for both MV’s impacting % Heavy quality.  Assuming no valid top-section tray temperature or analyzer exists, it’s automatically updated by Lab results.  Its SP is a bit on the conservative side of the actual % Heavy spec limit, since it’s not as dynamically aggressive as DMC might be.  It’s inserted into the ARC scheme in parallel with our optimal reflux controller, followed by a selector which honors top product quality control over optimal fractionation.

 

This is a pretty nifty design, I think.  But I would, since I teach this stuff, along with how to implement it.  Your comments welcome!

 

 

Fixing ARC’s Feedback Control Shortcomings

 

The section above used “classical” ARC to solve a non-trivial, real-world tower control problem.  Here we’ll focus on the most critical element:  the Analyzer-to-Temperature-to-Flow cascade, responsible for maximizing upgrade of low-value top product into higher-value bottoms, despite variations in feedrate, feed composition, feed preheat, fractionation, and sunspots.  You might make a ton of money in your unit using just this much, without all the other stuff above.  But there’s a problem lurking in the analyzer PID (for sure), and perhaps as well in the temperature PID.  We’re going to replace the Analyzer PID with a model-based controller, and will consider doing the same for the Temperature PID.

 

First, a word on behalf of the PID: it’s an absolutely excellent feedback controller, far more capable at handling hostile dynamics than most folks appreciate.  We’re not replacing a PID with a Smith Predictor for deadtime compensation reasons – there’s no practical need, just risk of cycling due to model error if we try to outperform our old friend’s dynamics.  For others’ (amazed) views on what a well-tuned PID can achieve, scroll a little ways down at www.procontrol.net, and read the two unsolicited “Click Here” e-mails.

 

We spend the better part of a day in class illustrating cascade procedures and pitfalls, after coming to know and love PID’s.  Our Prime Directive:  kill disturbances, without threatening process stability.  In one case study sequence, we tune both PID’s in a two-level cascade perfectly (starting where?), with the secondary’s dynamics relatively quick vs. the primary’s.  We confirm that the primary’s SP response is great, then inject a disturbance seen by the secondary’s PV.  Life being easy, the secondary’s PID kills this disturbance nicely, and the primary’s PV hardly moves.

 

We then increase the secondary’s deadtime and lagtime vs. the primary until our scenario matches the dynamics we’d expect from the analyzer primary and temperature secondary in our tower problem above.  We retune both PID’s, and confirm that the analyzer’s SP response still looks great – just slower, reflecting the more sluggish temperature process (and temperature PID tuning) beneath.

 

Now, injecting the same (step) disturbance into the temperature PV triggers a serious cycle in both the analyzer and temperature controllers (why is this cycle double-sided?).  The problem’s simple:  it’s not the analyzer PID’s tuning, but any PID’s inability to let its secondary controller solve its own problems (for the temperature eventually recovers, and with it the analyzer PV later, provided the analyzer PID doesn’t improperly get in the act).  In the real world, unavoidable tower temperature variations (e.g., due to changing feed composition) yield this exact cycling problem, since the analyzer PID primary above “can’t keep its hands off” whenever its own PV moves.  This cycling’s assured to be non-stop, since unmeasurable tower disturbances never stop.

 

The best answer:  replace the analyzer PID with an analyzer DR (Dynamic Reconciliation) Model-Based Controller, whose “% Light in the Bottoms” model is based on the tray temperature PV.  I invented and wrote a book on this tool back in my Exxon days;  it’s simple, well-proven and accepted, and completely implementable in any modern DCS.  This controller ignores temperature-caused variations in the analyzer PV, correcting only true disturbances (and reacting to model error, but this won’t be a problem if our tray temperature is a good one).  You’ll find DR’s many merits touted in:  http://www.procontrol.net/dr_addendum_to_procontrols_courses.htm.
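
DR’s internals live in that book and aren’t reproduced here; the sketch below is only a generic PV-based model controller in the spirit described (my own construction, reusing the Lag class from the first code block).  The “% Light” model rides on the tray temperature PV, so temperature excursions the secondary will cure on its own never move the temperature SP – only the slowly-reconciled residual (true disturbance plus model error) does:

    class PVModelQualityController:
        """Generic PV-based model controller -- NOT ProControl's DR algorithm.
        Outputs the tray temperature SP needed to hold the analyzer at its SP."""
        def __init__(self, gain, tau, dt=1.0, recon_tau=60.0):
            self.K = gain                      # assumed temp -> %Light gain (negative)
            self.lag = Lag(tau)                # assumed temp -> %Light dynamics
            self.fb = dt / (recon_tau + dt)    # a "calming handle" on model error
            self.bias = self.temp_ref = None
        def step(self, analyzer_sp, analyzer_pv, temp_pv):
            if self.temp_ref is None:          # reference everything to the first scan
                self.temp_ref, self.bias = temp_pv, analyzer_pv
            # what the analyzer *should* read, given where temperature has been
            predicted = self.bias + self.K * self.lag.step(temp_pv - self.temp_ref)
            # reconcile the residual slowly: true disturbances move the bias,
            # temperature-caused analyzer swings don't
            self.bias += self.fb * (analyzer_pv - predicted)
            # invert the steady-state model for the required temperature SP
            return self.temp_ref + (analyzer_sp - self.bias) / self.K

    ctlr = PVModelQualityController(gain=-0.05, tau=8.0)   # invented model values

Because the prediction tracks the temperature PV (not the SP), the output stays where it needs to be even when the secondary can’t get there – which is also why cascade windup, discussed further below, disappears.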

   

Similarly, a DR controller should be considered as a replacement for the tray temperature PID if steam flow control’s poor due to valve stick, to our failure to adapt its tuning to shifting delta-P (if relevant), or frankly to our novice colleague’s mistuning.  DR’s invulnerable to secondary tuning problems, provided we’ve a good model relating secondary (steam flow) and primary (temperature) PV’s.  Our clients have generated lots of money by this algorithm replacement, in just “simple” two-level temperature-to-flow cascades.

 

In our example above, we’d also consider a DR tray temperature controller if either relative dynamics, or the undesirability of sharp steam flow SP moves, made perfect feedforward control (in reaction to either feedrate or reflux changes) impossible.  Again, the temperature PID isn’t smart enough to know that (dynamically late) feedforward will eventually restore its PV to SP;  it has no choice but to improperly change the steam flow SP, triggering cycling.  A DR temperature controller based on feedrate and/or reflux, as well as on steam flow, understands this, making even feedforward that’s “a bit late” useful.

 

If making a feedforward-related decision on DR vs. PID sounds vague and complex – no, it’s a simple science.  Our Discover tool, for example, provides:  a) “Best Fit” and “Worst Case” dynamic models for the Load and MV;  b) excellent feedback and feedforward controller tuning for a PID-based solution to cancelling Load disturbances, for both Best Fit and Worst Case models;  and  c) graphical case studies assessing the PID’s compatibility with feedforward.  These will exhibit cycling if the Load arrives before the MV, or if you’ve purposely detuned feedforward aggressiveness to calm MV jumps and protect the process.

 

If there’s no cycling, rerun with a Worst Case assumption, pick our conservative PID tuning if you like (since PID’s are also vulnerable to “model error”), and you’re done – a PID’s fine!  Else, save your graphical PID cycles for subsequent comparison, and repeat (b) and (c) using DR.  Since DR’s model-based, a Worst Case sensitivity study, purposely invoking model error, is a good idea before you abandon the PID in DR’s favor. 

 

Note that DR includes a handle for calming the impact of such error, and another for increasing aggressiveness when “life is easy” dynamically.  While our Discover tool gives a great balance of both (based on process dynamics, and reasonable model error), these supplemental handles are modifiable in your case studies.  This is as close as you’ll come to guesswork.

 

All this decision-making’s achievable in a morning, and almost all of that with well-guided mouse clicks.  Not bad if, two weeks earlier, you weren’t certain what feedforward meant.

 

Another model-based control motivation:  Cascade windup’s impossible with DR.  In our example the tray temperature could be completely out of control due to reboiler and reflux limitations, yet the DR analyzer controller would keep the temperature SP where it needs to be to satisfy the analyzer’s SP.  We’ve an impressive war story on a money-making analyzer-to-analyzer-to-temperature cascade which would have been impossible using PID windup “protection”.

 

Finally:  DR brings discontinuous GC’s, lab data, and validation of both analyzer and lab data into ARC’s realm.

 

You’ll find strong parallels to our PV-based modeling in “Charlie Cutler's Latest Ideas for Multivariable Controllers”, in LinkedIn’s APC Forum, where I’ve put a post.  While I concur with most of Charlie’s ideas (having taught them for decades!), I’ve raised concerns about this “latest” DMC’s takeover of disturbance-killing formerly done by PID’s at a fast frequency.   

 

Hope this ARC example helps.  If you can digest more (and there’s lots more on actually accomplishing all this), and your company’s in a nurturing mood, consider joining us for some life-changing education in beautiful Evergreen, Colorado.

 

Bob Bartman, Ph.D.,  President, ProControl, Inc.

cell:  303-670-9092

e-mail:  bob@procontrol.net

 

If you’re curious about me, and my past Advanced Control leadership with Exxon:

http://www.procontrol.net/Bobs_Profile.htm .

   

 

 

 

Addendum:

 

ProControl’s response to “Charlie Cutler’s Latest Ideas for Multivariable Controllers”, in LinkedIn’s APC (Advanced Process Control Professionals) Forum.    

 

This is a comparison of DMC with our version of ARC, on Charlie’s themes of Process Variable Models and Adaptive Transforms. 

 

Some thoughts, and questions, on “Charlie’s Latest” ideas on DMC.  I’ve advised my “Revelations in Control” Course students to seek MVC tools using PV-based, not SP-based, models from Day 1 of ProControl’s creation (way back in 1986), and fully agree with the controller-mistuning and valve-stick motivations for PV models in Charlie’s paper.

 

There’s one subtle motivation unrecognized in the paper:  the MV controller’s inability to fully cancel disturbances despite being perfectly tuned, these translating into disturbances in the CV above, then back into improper MV setpoint swings.  This “disturbance seepage” upward, causing unwarranted MV SP moves, happens not only in classical SP-based DMC (Charlie’s stalking horse), but as well in any primary-to-secondary ARC cascade, when lag-related dynamics of the MV secondary aren’t a lot faster than those of the CV primary.  Analyzer-to-tray temperature cascades (in ARC terms) are a good example.

 

Just as with MV PID mistuning and valve stick, the outcome is cycling of the entire system, which can be hard to differentiate from the MV control issues in Charlie’s paper.  But here the tuning’s already as-good-as-it-gets for the MV, and let’s say for the primary as well in the absence of this “disturbance seepage” problem.  (We can tune the primary PID in a cascade to look great upon SP change – it will still cycle when the secondary controller can’t adequately kill disturbances.)

 

More generally, in ARC cascade terms, cycling occurs when the “dumb” primary PID won’t let its secondary controller solve its own problems.  The same visualization applies to SP-based DMC.

 

Similar cycling occurs when a feedforward controller’s inserted between a primary PID and a secondary to cancel a measurable disturbance hitting the primary’s CV, and either unlucky process dynamics, or the need to restrict MV moves to protect the process, prevents perfect feedforward cancellation.  Here the primary PID isn’t smart enough to know the feedforward will eventually get the job done – it has to change the MV SP itself (thus the cycles).

 

The top-level solution to all this mess is a CV model based on the MV PV (and on measurable disturbance PV’s, if relevant), as in Charlie’s paper.  In the ARC world this means replacing the primary’s PID (only where warranted) with a simple model-based controller.  We have a proven one (see www.procontrol.net), our Revelations in Control Course has many real-world examples to back it, and our process analysis tool (Discover) supports it. 

 

Our controller ignores what it should ignore (assuming the model’s OK, a general need), and keeps the secondary MV setpoint where it needs to be to meet the primary’s need.  It solves all the above cycling problems, just as yours does, and eliminates ‘windup’, same as yours.  Not being competitive here – with the exception of its GC and Lab handling, our Dynamic Reconciliation model-based controller is meant for ARC applications, not MVC.

 

Here’s where I’m going:  While it’s great having a primary controller (or a DMC system) that’s tolerant to lower-level problems, this doesn’t free us to ignore MV control.  Without arguing dynamics, generally the tighter our MV control, the better our CV outcome.  MV PID’s kill some pretty troublesome disturbances – it’s why they’re there.

 

Which brings me to Charlie’s adaptive control topic, open control valves, and DMC’s coexistence with ARC.

 

As I understand it, the “latest” DMC uses an updated flow-valve characterization curve to adapt the DMC matrix gain for how the valve impacts MV flow.  An accurate gain’s especially critical when the valve’s nearly open, given a potential economic drive to open the valve fully.  At this point, or possibly earlier (?), DMC puts the MV PID controller in manual to hold the valve open.

 

Let me respectfully suggest another approach, keeping the flow controller operational.  As background, I’ve taught the same translation of flow-valve characteristic curves to any combination of upstream/downstream pressures from Day 1, and have a Revelations Course war story on how this adaptive technique rescued a major online optimization project way back in my Exxon days, in the ’80s.  There the flow-valve characteristic was extremely nonlinear, and delta-P variations were about the highest imaginable.

 

But we used the adaptive outcome differently, for:  1) accurately updating the flow’s achievable max limit when the valve’s fully open, and  2) for adaptively retuning the Proportional Gain on the MV flow controller itself.   Together these get the same “open-the-valve” job done in MVC applications, but (to me) in a more straightforward way, with a cleaner operator interface, and with better disturbance suppression.

 

It’s the flow PV’s dynamic response to SP that’s important and preserved by adapting MV PID gain.  It’s this PV vs. SP response that counts in ARC cascades, and as Charlie notes, in SP-based DMC as well.  (Response of the valve itself to an MV PID SP change necessarily changes as we adapt controller gain to a nonlinear, shifting flow-valve characteristic – but the flow controller handles this, not us.)  We’ve “horrible before, great after” class exercises illustrating adaptive control’s stabilization in precisely this world of shifting flow-valve nonlinearities. 
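
A sketch of both adaptive uses (the installed characteristic below – an equal-percentage curve rescaled by the square root of delta-P – and every number in it are invented stand-ins for a curve identified from plant data):

    import math

    DP_REF = 1.0   # delta-P at which the characteristic was identified (assumed)
    R = 50.0       # assumed valve rangeability

    def flow_from_valve(op_pct, dp):
        """Installed characteristic: equal-percentage trim, rescaled by delta-P."""
        return 100.0 * R ** (op_pct / 100.0 - 1.0) * math.sqrt(dp / DP_REF)

    def adapt_flow_loop(op_pct, dp, Kc_ref, g_ref):
        """1) refresh the achievable max-flow limit at 100% valve;
        2) rescale the flow PI gain so loop gain (Kc * valve gain) stays constant.
        g_ref is the local valve gain at which Kc_ref was originally tuned."""
        max_flow = flow_from_valve(100.0, dp)
        lo, hi = max(op_pct - 1.0, 0.0), min(op_pct + 1.0, 100.0)
        g = (flow_from_valve(hi, dp) - flow_from_valve(lo, dp)) / (hi - lo)
        return max_flow, Kc_ref * g_ref / g

    mx, Kc = adapt_flow_loop(op_pct=92.0, dp=1.4, Kc_ref=0.6, g_ref=1.5)
    # to drive the valve fully open without putting the PI in manual, set the
    # flow SP just above mx -- the suggestion developed below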

 

I agree there’s profit in driving certain valves wide open (rather than just 97% open), but in my view it’s regrettable that MV PID’s might be sacrificed to pull this off.  If we know the max limit on a flow when its valve is wide open, despite shifts in valve delta-P, from (1), and know this flow’s PV responds to its SP the same way across a spectrum of valve openings and valve delta-P’s from (2), then it seems we might grant the poor MV flow controller a pardon, and find a way to keep its valve fully open by adjusting its SP to lie just above its maximum (assuming the solution drives it there), not by putting its controller in manual. 

 

ProControl’s suggestion:  Let MVC economics, real or artificial, maximize or minimize flows, not valves.  The flow controller itself takes care of opening the valve, now in a predictably stable way, with the flow vs. valve relationship perhaps not needed in the matrix at all.    

 

A bit more on MV PID’s being placed in manual:  an earlier FCC paper by Charlie, et al., containing the same PV-based and adaptive control material, suggests that DMC place all MV PID’s in manual, always, to facilitate driving valves wide open, and to eliminate problems with those mistuned PID controllers.  He states that DMC can do a reasonable MV control job itself, given its new ten-second frequency.  Perhaps this idea’s been scrapped (although there were only 27 MV’s in the FCC case to be placed in manual), but I’ve some further thoughts on consequences if not.

 

The current paper brings up valve-stick, always a nasty complication.  I suspect a PI controller running each second can handle this problem better than DMC running at ten seconds, or slower.  And I’d have one-second flow controllers always ready to neutralize those delta-P shifts.

 

But I’m especially troubled at DMC, not a PID, ever handling pressure and level controls, whose performance suffers most upon slowdown (due to their integrating nature).  For views on what a well-tuned pressure controller PID running once-a-second can mean to a process, consider:  http://www.procontrol.net/tower_pressure_retuned_by_discover.htm

 

and similarly for level:

http://www.procontrol.net/discovers_fcc_tuning_stabilizes_gasoline_blending.htm

 

Buried in all the graphics and propaganda:  it took a full-blown PID, not just a PI, running once a second to pull off these stability enhancements.  Slowing control to a DMC frequency forces removal of essential Derivative, and a serious reduction in controller gain – yielding significantly higher pressure and level variability.   For some processes this matters, as we illustrate in class.

 

In my Perfect World, if tight pressure or level control had process importance, a functional PID would always be in charge.  If the pressure’s normal valve were forced open for optimization reasons, by DMC acting on other MV’s, then I’d like to see DMC switch the PID pressure controller’s OP to an alternate in-range valve, with PID tuning consistent with this new valve’s response, and the controller still running – and of course to switch the OP back when this makes sense.  Such switching is in ARC’s toolkit.  Not sure if it’s made the cut in DMC.

 

Guess I’d just generally prefer to see DMC “play well”, not the opposite, with a regulatory ARC world better suited to killing disturbances.  I realize the frustration implicit in the paper’s portrayal of many mistuned PID curves, and appreciate that “off with all their PID heads” might be a tempting response on DMC’s side, but it needn’t be this way, and won’t be if plant sites offer their people the right ARC control tools and education.

 

Comments / corrections / rebuttals welcome!   bob@procontrol.net