Stock-flow consistent (SFC) models become complex and hence rather intractable once they seek to incorporate more features of reality. Solving such models numerically for preselected parameter values can help to overcome this problem. But how should the parameters be selected given that there often exists a host of economically plausible values?

In order to address this problem, this paper suggests using a Monte Carlo approach to examine which combinations of parameters and starting values (feasibility regions) produce economically meaningful equilibria for the short and long run, and whether the long-term equilibria thus identified are in fact stable. In addition, we undertake a sensitivity analysis for all parameters which allows us to gauge the extent to which model results are driven by certain parameters and starting values.


1 INTRODUCTION

Stock-flow consistent (SFC) models are an active area of research in post-Keynesian economics and many researchers consider them to provide an alternative paradigm to standard textbook macro models. Their main advantage is – as the name suggests – that they fully take into account the implications of flows for the development of stocks and vice versa – something which standard macro-models usually do not do.

Moreover, SFC models allow the analyst to distinguish clearly between short- and long-run equilibria, where a short-run equilibrium is defined in terms of market clearing (or more broadly, in terms of the mutual consistency of spending and production plans), whereas a long-term equilibrium is considered to be reached whenever endogenous variables such as growth rates or capacity utilization become constants; that is, when they do not change from one period to the next.

However, using SFC models for theoretical analysis is made rather complicated by the fact that these models become complex and hence intractable once they seek to incorporate more features of reality. Thus analytical solutions are difficult, if not impossible, to obtain. This is true in particular once non-linearity is introduced into the model and hence the possibility of multiple equilibria emerges.

Solving the model numerically for preselected parameter values can help to overcome the first problem and this approach is therefore frequently used by modellers. Accordingly, the researcher selects one or several sets of parameter values which are economically plausible and then evaluates the model using these values for both short- and long-run equilibria.

Alas, this approach leads to another difficulty: how should parameters be selected in the first place? After all, for most parameters there exists a host of economically plausible values and, by implication, there is also an infinite number of (in that sense) plausible parameter combinations in the n-dimensional hyperspace of assumptions. Picking just one such combination would therefore seem rather arbitrary while the properties of the model for other parameter configurations would remain basically unknown. This implies that any conclusion that is derived from the model on the basis of some randomly selected parameter configuration has to be taken with a whole trainload of salt.

However, with modern computer power, it is now possible to process large amounts of data within short periods of time. This makes it possible to explore the properties of models in a much more systematic fashion than was hitherto the case. Thus, rather than picking specific assumptions out of the blue, we make only very general assumptions about parameters in this paper. That is, only an economically/logically plausible range is determined for each parameter. We then use a Monte Carlo approach to examine which combinations of parameters and starting values (feasibility regions) produce economically meaningful equilibria for the short and long run, and whether the long-term equilibria thus identified are in fact stable. In addition we undertake a sensitivity analysis for all parameters which allows us to gauge the extent to which model results are driven by certain parameters and starting values.

We do so for two different models. The first model has been presented in Dos Santos/Zezza (2008), while the second has been developed by the authors of the present paper. The Dos Santos–Zezza model has been chosen because it constitutes a kind of benchmark case for SFC macro models and because its structure is still relatively simple. Thus the received method for examining this kind of model takes us relatively far. Nevertheless we examined some of the claims Dos Santos and Zezza made for their model and found that these claims are not fully supported by the results of the approach we have adopted.

The second model we have examined is a modification of the Dos Santos–Zezza model, to which we have added a couple of features that we consider relevant for sustainability analysis, in particular the possibility of inheriting a durable consumption good from previous generations and the inclusion of a pension system. This model has been described and analysed in more detail in a companion paper (Rosenbaum/Ciuffo 2015).

2 MODELS AND METHODS

In the present section, we will briefly present the two models which we have investigated. However, we will refrain from explaining the underlying rationale in much detail, as doing so would exceed the available space and go beyond the purpose of this paper.

Moreover, we shall discuss in more detail the method used for exploring the domain of the models, and for the sensitivity and stability analyses.

2.1 A benchmark stock-flow consistent model

Note first that, technically, the two models can be seen as two systems of difference equations having the objective of determining, for a given closed economy, the evolution, over time, of the output–capital ratio (u_{t}), the growth in the stock of capital (g_{t}), the government-bills-to-capital ratio (b_{t}) and the households’-wealth-to-capital ratio (vh_{t}). In the modified version, there is a fifth variable, namely the stock of a durable consumer good in relation to the capital stock (q_{t}) that characterizes the system.

The system of difference equations of the original model can be written as follows (Dos Santos/Zezza 2008):

The symbols and their meaning are reported in Table 1. The table also gives the initial ranges we have chosen for the parameters and starting values. Following the above equations, capacity utilization is essentially a function of the set of parameters that determines distribution, and investment and consumption spending. The rate of accumulation (that is, the growth rate of the capital stock) then depends on capacity utilization and interest rates. Government debt (relative to the capital stock in the previous period) is equal to the (relative) debt in the previous period plus the deficit in the current period, defined as the difference between autonomous expenditures and tax revenues. Finally, relative household wealth equals household wealth in the previous period plus whatever income households receive minus their spending on consumption.

Table 1

Input variables and parameters of the original version of the SFC model (Dos Santos/Zezza 2008)

As can be seen from Table 1, we impose only very mild restrictions on the initial parameter values in order to exploit the flexibility allowed by the model as far as possible without simultaneously creating an excessive amount of data. Thus the accelerator effect of profits was assumed to be equal to or below 1, and the same holds for the exogenous accelerator effect. The mark-up rate was assumed to lie between 0 and 1, and the bank's mark-up on the interest on government bills (itself assumed to lie between 0 and 0.5) was assumed to lie between 0 and 0.1.^{1} Autonomous government expenditures and autonomous growth in the stock of capital were set between 0 and 1.

For the remaining parameters, the upper and lower bounds have been chosen against the background of the economic meaning of each parameter. Thus the tax rate has to be between 0 per cent and 100 per cent, implying a parameter value between 0 and 1. The same considerations hold for the dividends-to-profit ratio and the propensity to consume out of household wealth. In each case, values beyond 1 (100 per cent) simply do not make sense from an economic point of view.

2.2 An SFC model with pensions and government

The modified version of the model was motivated by the aim of analysing in detail the possibility of intergenerational transfers in the context of sustainability assessment. For this purpose, a number of modifications were made to the model. First, it was assumed that consumers purchase two types of consumption goods: a normal consumption good that is acquired and consumed in each period and vanishes thereafter; and a durable consumption good (which we took as a proxy for environmental goods) that is accumulated over time and thus inherited by future generations.

Another important addition is the pension system we have introduced. This pension system has two main tasks and hence two components. Its first task is to provide pensions to retired workers, where these pensions are calculated as a proportion of wages. In addition, pensioners also receive compensation for the durable consumption good they bequeath to their children. This compensation can be partial or full; that is, pensioners are paid a certain percentage of the value of the accumulated environmental good, up to a maximum of 100 per cent of that value. Irrespective of the motivation for the pension, all pensions are paid out of the public budget and hence financed by taxes (and debt). The pension system is thus a pay-as-you-go system.

A more explicit treatment of the public budget is the final element we have added. Here, we assume that the government not only buys goods and pays pensions (both of which are financed by taxes and debt), but also provides a proportion of the capital stock. This assumption is rather straightforward considering that a considerable part of the infrastructure (road, rail, energy, etc.) is usually provided by the government. Almost equally uncontroversial should be the assumption that the level of investment in public infrastructure often depends on the budget surplus of the government. That is to say, infrastructure expenditures are more pro-cyclical than, say, transfers to which there exist statutory rights.

Making these assumptions not only renders the existing system of equations somewhat more complex, it also requires the introduction of a new equation giving the accumulation rate of the durable good (q_{t}). The modified system of difference equations can then be written in the following form:

In doing so it has been assumed, as described above, that pension payments consist of two parts: a wage-related part and an inheritance-related part. The former means that pensioners receive a certain proportion of wages as pensions, whereas the latter means that pensioners receive some compensation for the durable consumer good, which they leave to the next generation. Formally:

As noted above, pensions are paid by the government.

As regards the accumulation of the durable consumer good, it has simply been assumed that a certain share cd of consumption spending is devoted to the acquisition of the durable consumer good. This share is not endogenously determined by the model but is instead one of the ‘randomly’ drawn parameters. The only purely logical restriction we have imposed is that this share cannot exceed 1.

As before, the symbols are reported in Table 2 together with upper and lower bounds, and (ranges of) starting values.

Table 2

Input variables and parameters of the modified version of the SFC model

Note that in both versions of the model, in order to be independent of the size of the specific economy in the starting period, all analyses have been carried out for endogenous variables that have been normalized by the capital stock in the previous period. Moreover, the value of the capital stock at time t–1 (K_{t–1}) has been set to 1 and the same goes for the value of the price level at time t (p_{t}), which also equals 1.

3 EXPLORATION OF MODEL DOMAIN

3.1 The methodological approach

Once the model has been specified, it is extremely important to identify the regions in the domain of parameters and starting values which allow the models to produce meaningful outputs. In principle, such an exercise can be carried out analytically or numerically. However, in both cases, it is fairly difficult to identify well-defined sub-regions in the space of the model parameters and starting values, especially when the model dimensionality is high (in general more than 10–15 parameters depending on the non-linearity of the model). Although the model is available in explicit form in the present case, we adopted a numerical approach to examine the domain of the model. Our prime objective was to modulate the input ranges in order to exclude those regions where the outputs of the model resulted in clearly meaningless values.

For this purpose, we regarded results as meaningless whenever the model produced negative values that do not make any sense (for example, for capacity utilization) or whenever the model produced very high positive values (again for capacity utilization), which we considered implausible given the starting value for that variable. The reasoning was that, reverting to the example of capacity utilization, very high values within only a few periods would imply unrealistically high growth rates. Concomitantly, we have also excluded parameter values that would lead directly to unrealistically high growth rates. This requires of course that the meaning of each parameter and of the logic in the way the model as a whole operates is duly taken into account. Thus it is here that economic reasoning more than anything else comes into play. Against this background, as will be shown in the next section, those regions were then excluded where the analysis of two- and three-dimensional scatter plots showed that they were clearly unfeasible or meaningless.

In practice, a Monte-Carlo-based approach was adopted. Thus, for each numerical iteration, a point in the hypercube defined by the lower and upper bounds of the model parameters and starting values was randomly selected and the model was evaluated on it. In order to ensure a good coverage of the entire domain, we used a sequence of random numbers with low discrepancy, where discrepancy is a measure of the distance from uniformity; that is, from equally spaced numbers. This is a crucial property in order to achieve sufficient coverage of the input space with the lowest possible number of input combinations, as low discrepancy means that large unexplored ‘holes’ do not remain in the hypercube. In the present study, Sobol sequences of quasi-random numbers were selected. This family of sequences is widely used in Monte Carlo applications such as the calculation of integrals, numerical explorations, etc.,^{3} as it has been proven to allow very fast convergence in numerical applications.

The minimum discrepancy of a Sobol sequence is achieved whenever the number of points in the sequence equals a power of 2. In the present application, we evaluated the model for 2^{18} input combinations, as we found this number sufficiently high to reveal clear patterns in the model outputs.
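As an illustration of this sampling scheme, the following sketch draws a Sobol sequence over a parameter hypercube using SciPy's quasi-Monte Carlo module. The bounds shown are placeholders standing in for the ranges of Table 1, and 2^8 rather than 2^18 points are drawn to keep the example light.

```python
import numpy as np
from scipy.stats import qmc

# Placeholder bounds for three hypothetical inputs; the actual ranges
# are those reported in Table 1.
lower = np.array([0.0, 0.0, 0.0])
upper = np.array([1.0, 0.5, 0.1])

sampler = qmc.Sobol(d=len(lower), scramble=False)
unit_points = sampler.random_base2(m=8)         # 2**8 points in [0, 1)^d
samples = qmc.scale(unit_points, lower, upper)  # map onto the hypercube

# Each row is one input combination on which the model is evaluated
print(samples.shape)  # (256, 3)
```

Drawing a power-of-2 number of points via `random_base2` preserves the balance properties of the Sobol sequence, in line with the discrepancy remark above.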

3.2 Exploration of parameters and starting values domain

In the present section, the main results obtained from the analysis of the two models are shown. As already pointed out above, the exploration of the domain of the two models has been carried out by evaluating each on a large number (2^{18}) of quasi-random combinations of their parameters and starting values. The subsequent analysis has mainly been visual and has examined two- and three-dimensional scatter plots. Possible further reductions of the input space due to the interaction of more than two parameters and starting values have not been carried out due to the intrinsic complexity of this process and in order to avoid discarding possibly useful combinations of parameters and starting values because of a too-coarse identification of the feasibility regions.

Bearing these caveats in mind, we were able to reduce the hyperspace identified by ub_{1} and lb_{1} to that identified by ub_{2} and lb_{2} as given in Table 3.

Table 3

Domains in original and modified model

In Figure 1, we show four examples of how the range of variability of the parameters and starting values has been reduced.

Figure 1

Illustration of model exploration

Note: Scatter plots of four different input factors with respect to different outputs of the two versions of the models. The dashed line identifies the limit defined for the input range. Arrows identify unfeasible values in the model outputs.

For instance, in the top-left chart it is evident that, for the original version of the SFC model, a value of the parameter a higher than 0.5 may produce negative values for the households’ wealth to capital ratio vh_{t}. For the modified version of the model, it can be seen in the top-right chart that values of the parameter θ lower than 0.4 may produce unrealistically high and low (negative) values of the output–capital ratio u_{t}. In the bottom-left chart it emerges that, in the modified version of the model, negative values of the output–capital ratio u_{t} are only achieved for values of the input θ_{1} higher than 1. Finally, the bottom-right chart suggests a limitation of the value of the parameter a to 0.5 in order not to have too high values of the marginal growth in the stock of capital (g_{t}).

In Table 4, we report the reduction in output variability that results from narrowing the ranges of the parameters and starting values on the basis of the above analysis.

Table 4

Variation in the overall variability of the model outputs due to a reduction in the range of parameters and starting values

Table 4 highlights some outstanding issues. First of all, one can clearly see that combinations of economically meaningful parameters and starting values may still produce economically meaningless outputs for exactly the same model. This needs to be taken into consideration when using any model without a careful examination of both its parameters as such and the effects of possible interactions among them. In fact, while the identification of meaningful values for a certain input may be rather straightforward, the identification of the correlation structure that exists among the different inputs is far from simple. This is the reason why a preliminary quasi-Monte Carlo (QMC) exploration of the space in which the inputs are allowed to vary is a viable and effective way to limit the possibility that the model produces unexpected and unrealistic results despite meaningful parameters.

Second, the aforementioned phenomenon is less evident in the modified version of the model. In this case, it seems that the added complexity of the modified model does indeed confine the model within a more reliable region. But even then, the reduced bounds may still produce unrealistic results (both negative and very high values of the outputs). As already pointed out, this is because, in the exploration, we preferred not to reduce the variability of the inputs unless this was strictly warranted, in order not to lose many potentially useful input combinations. However, this strategy entails the need to calibrate the model carefully before applying it to the analysis of economic issues.

4 ANALYSIS OF MODEL DYNAMICS

4.1 The methodological approach

In dynamic modelling, it is also very important to understand how the model mimics the evolution of a system subject to sudden changes (shocks). This is the objective of stability analysis.

The analysis of the stability of a system of difference equations is not a trivial task and usually entails sophisticated mathematical analyses. A ‘simpler’ analysis is possible when dealing with the fixed points of a map. A map is defined as a function relating the state of a system ${x}_{t+1}$ at time $t+1$ to the state of the same system at time t, so that:

$${x}_{t+1}=f({x}_{t}).$$

A fixed point of a map is a point for which ${x}_{t+1}={x}_{t}$; that is, an equilibrium condition for the dynamic system. In this specific case, an analysis of the linear stability of the system can be carried out by evaluating the Jacobian matrix of the system and checking whether the absolute values of its elements are lower or greater than 1. If all the elements of the Jacobian matrix have absolute values lower than 1, then the fixed point is an asymptotically stable equilibrium point, whereas when at least one element has an absolute value greater than 1, the point is an unstable equilibrium point (for the proof of this theorem, see Elaydi (2005)).
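The element-wise criterion just described can be checked numerically. The sketch below is not the authors' code: it approximates the Jacobian of a map by central differences and applies the |J_ij| < 1 test from the text to a toy linear map; any resemblance to the SFC equations is purely illustrative.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Central-difference approximation of the Jacobian of the map f at x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    J = np.empty((n, n))
    for j in range(n):
        h = np.zeros(n)
        h[j] = eps
        J[:, j] = (f(x + h) - f(x - h)) / (2 * eps)
    return J

def element_wise_stable(J):
    """The criterion used in the text: every element of J below 1 in absolute value."""
    return bool(np.all(np.abs(J) < 1))

# Toy linear map with a fixed point at the origin (illustrative only)
f = lambda x: np.array([0.5 * x[0] + 0.1 * x[1],
                        0.2 * x[0] + 0.3 * x[1]])
print(element_wise_stable(jacobian(f, np.zeros(2))))  # True
```

For a system given only as code, such a finite-difference Jacobian is a practical substitute for symbolic differentiation.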

For the original version of the SFC model we have the maps

The values of the different elements of the two matrices identify different stability/instability regions. In particular, in the first case there is one stability region (when all four partial derivatives are lower than 1 in absolute value) and 15 instability regions (identified by the 15 different permutations in which any of the four derivatives can be higher than 1 in absolute value). In the second case, there are 512 regions, of which, again, only one is stable.

The further analysis of model dynamics is made a bit more cumbersome by the fact that our maps depend upon the value of all the other model parameters and starting values. Also in this case, therefore, the analysis requires a thorough exploration of the input space. As in the previous analyses, we used a sequence of 2^{18} Sobol quasi-random numbers to cover the hypercube of the input domain taking into account the restrictions on the range of parameters that had been identified in the previous step. For each parameter combination we then iteratively simulated the two systems of difference equations until a fixed point was reached (b_{t} = b_{t–1}, vh_{t} = vh_{t–1} and q_{t} = q_{t–1}). Finally, we numerically derived the elements of the two Jacobian matrices to determine the type of equilibrium that is associated with the fixed point.
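The fixed-point search described above can be sketched as plain iteration of the map until two successive states coincide. This is an illustrative stand-in, not the authors' implementation; a toy contraction mapping replaces the SFC difference equations.

```python
import numpy as np

def find_fixed_point(f, x0, tol=1e-10, max_iter=100_000):
    """Iterate x_{t+1} = f(x_t) until successive states coincide within tol.

    Returns the fixed point, or None if the trajectory diverges or fails
    to settle within max_iter iterations.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = f(x)
        if not np.all(np.isfinite(x_next)):
            return None          # trajectory has blown up
        if np.max(np.abs(x_next - x)) < tol:
            return x_next        # b_t = b_{t-1}, vh_t = vh_{t-1}, ...
        x = x_next
    return None

# Toy contraction mapping with fixed point x* = 2 (since x* = 0.5*x* + 1)
fp = find_fixed_point(lambda x: 0.5 * x + 1.0, np.array([0.0]))
print(fp)  # ~[2.]
```

The `None` return matters in this application: for unstable parameter configurations the iteration need not settle, and such samples have to be handled separately.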

At the end of the analysis we were able to identify the percentage of fixed points belonging to each linear stability/instability region and also to localize where higher instabilities arise.

4.2 Model stability in comparison

As already pointed out in Section 4.1, we are analysing in the present work the stability of the fixed points allowed by our models by calculating the value of the elements composing the Jacobian matrices of the two systems of difference equations. As also outlined, the original version of the model leads to the identification of 16 regions of stability/instability, while its modified version leads to 512. Just to make this point clearer, the 16 regions of the original version of the SFC model are depicted in Table 5.

Table 5

Identification of the 16 regions of model stability/instability on the basis of the value attained by the elements of the Jacobian matrix as of equation (21)

In Table 5 the occurrence of the different stability/instability regions among the 2^{18} samples is also reported. Quite interestingly, the finding reported in Dos Santos/Zezza (2008) concerning the value of $\frac{\partial {b}_{t}}{\partial v{h}_{t-1}}$ does not necessarily hold, and several cases of this quantity being higher than 1 have been found. By contrast, no occurrences of $\frac{\partial v{h}_{t}}{\partial {b}_{t-1}}>1$ have been found, which also contrasts with what is hypothesized in Dos Santos/Zezza (ibid.). However, given that our procedure may not have selected the exact parameter configuration for which the above inequality holds, our results do not imply that such a configuration does not exist at all. It may simply occur rather infrequently and thus not be captured by the specific Sobol sequence we have chosen.

In Figure 2, scatter plots of stable and unstable fixed points are reported. In the scatter plots, all the points outside the meaningful range have been removed – that is, points where vh_{t–1} takes on negative values or where b_{t–1} becomes very large. The reader can easily verify that the reported domain b_{t–1} – vh_{t–1} is much larger than the domain reported in Table 3. This is due to the fact that the fixed point connected to a sampled combination of the two parameters can be very far from the original sample values. In other words, short- and long-term equilibria may differ significantly.

Figure 2

Scatter plots, in the domain b_{t–1} – vh_{t–1}, of stable and unstable fixed points of the original version of the SFC model in the different regions identified in Table 5

Note: The size of the points in the charts is proportional to the value of the partial derivatives in the Jacobian matrix related to equation (12).

It is also quite interesting to note that almost all stable fixed points arise for values of b_{t–1} lower than 0 (that is, for negative government debt) and small values of vh_{t–1} (that is, small values of household wealth). By contrast, the vast majority of unstable points arise for values of b_{t–1} higher than 0. Although not clear from the figure, the magnitude of the instability (that is, the value of the partial derivative) is much higher when both $\frac{\partial {b}_{t}}{\partial {b}_{t-1}}$ and $\frac{\partial {b}_{t}}{\partial v{h}_{t-1}}$ are higher than 1. In these cases, it is expected that even a small perturbation of the system can produce quite a large deviation from the equilibrium state.

These results suggest that the original version of the SFC model is rather unstable for economically reasonable parameter configurations. If correct, this would surely cast some doubts on the applicability of the model.

In Figure 3, the results of the stability analysis for the modified version of the model are reported. In contrast to the previous case, stability plots would now have to be reported in a three-dimensional space. In order to improve readability, this space has been projected onto two-dimensional planes. The figure immediately shows that the proportion of stable solutions is much higher for the modified version of the model. In addition, most of the stable solutions are centred around zero for all the variables. Unlike in the original model, however, it is not possible to identify regions of instability unambiguously, as these overlap with stable ones. For example, in the [vh_{t–1} – q_{t–1}] domain of the figure, it is quite interesting to notice a combination of inputs and parameters for which the model proves highly unstable, while this is not necessarily the case for the combinations in the immediate surroundings. For this reason, the analyst should check the properties of the solution found on a case-by-case basis, as similar parameter configurations may lead to either stable or unstable results. This can be done by (numerically) simulating the evolution of the state of the system and checking visually whether this evolution leads to a stable or unstable condition for any given parameter configuration. This is the approach that was adopted in Rosenbaum/Ciuffo (2015).

Figure 3

Scatter plots, in the three domains [b_{t–1} – vh_{t–1}], [b_{t–1} – q_{t–1}] and [vh_{t–1} – q_{t–1}], of stable and unstable fixed points of the modified version of the SFC model

Note: The size of the points in the charts is proportional to the value of the partial derivatives in the Jacobian matrix related to equation (13).

The differentiation of the unstable solutions among the different regions, as carried out for the original version of the model, was not undertaken here owing to the high number of possible combinations of values for the elements of the Jacobian matrix. However, it can be argued that the modified version of the model considerably outperforms the original one with regard to its overall stability and the meaningfulness of the feasible regions identified for both parameters and outputs.

5 SENSITIVITY ANALYSIS OF THE TWO VERSIONS OF THE SFC MODEL

5.1 Sensitivity analysis of model outputs

Sensitivity analysis (SA) or importance ranking aims to understand ‘how uncertainties in the model outputs can be apportioned to different sources of uncertainties in the model inputs’ (Saltelli et al. 2008: 1). The objective of sensitivity analysis is to inform the modeller about the relative importance of the uncertain parameters in determining the variable of interest. Knowing this relative importance is crucial, for example in order to simplify the model or to identify those parameters whose estimation would require more attention.

Several methods can be fruitfully used to analyse the sensitivity of a model. In the present paper, we focus on variance-based methods, in which the variance of model outputs is considered to be a proxy of their uncertainty. This technique is usually considered to be particularly appropriate when a single model evaluation can be carried out relatively quickly (Saltelli et al. 2008).

The variance-based sensitivity analysis technique applied in the present study consists of the evaluation of two sensitivity indices (also referred to as the Sobol sensitivity indices): (i) first order or main effect; and (ii) total effect indices (Cukier et al. 1973; Homma/Saltelli 1996; Saltelli 2002).

To illustrate the approach, let us consider the general model of equation (11) with r input factors (Z) and one output (Y).

Both indices have been normalized by the total variance of the output and for this reason they can only take values in the range [0, 1]. For the first order index, $V\left(Y\right)$ represents the unconditional variance of the model output Y; ${E}_{{Z}_{\sim i}}\left(Y|{Z}_{i}\right)$ is the average of the output Y taken over all input factors other than ${Z}_{i}$, for a given value of ${Z}_{i}$; and ${V}_{{Z}_{i}}\left({E}_{{Z}_{\sim i}}\left(Y|{Z}_{i}\right)\right)$ is the variance of this average over the factor ${Z}_{i}$. Thus ${S}_{i}$ gives the fraction of the total variance which is due to the direct effect of an individual input variable (Archer et al. 1997). The first order sensitivity index therefore represents a measure that helps us to understand how much the correct definition of an input may reduce the overall variance of the output. The sum of all the first order indices is always less than or equal to 1, where ‘1’ is achieved only for perfectly additive models; that is, models for which the input variables present no mutual interactions.

In keeping with the above notation, ${V}_{{Z}_{i}}\left(Y|{Z}_{\sim i}\right)$ is the variance of the output Y over the factor Z_{i}, given the values of all the other factors, and ${E}_{{Z}_{\sim i}}\left({V}_{{Z}_{i}}\left(Y|{Z}_{\sim i}\right)\right)$ is the average of this variance over all the factors but Z_{i}. ${S}_{{T}_{i}}$ is thus the total contribution of each variable to the output variation, so that higher-order (indirect) effects are included. The reader may refer to Saltelli et al. (2008) for the derivation of these two indices. The total effect index represents the sum of the variance contributions of every input combination in which the input Z_{i} is included. As a result, when the total index is ${S}_{{T}_{i}}=0,$ the ith factor can be fixed without affecting the output's variance. The sum of all the total order indices is always equal to or greater than 1, where ‘1’, again, is only achieved for perfectly additive models.
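The first order and total effect indices described above can be written compactly in this notation as follows (a reconstruction consistent with the description above, following the standard variance-based formulation of Saltelli et al. 2008):

```latex
S_i = \frac{V_{Z_i}\left( E_{Z_{\sim i}}\left( Y \mid Z_i \right) \right)}{V(Y)},
\qquad
S_{T_i} = \frac{E_{Z_{\sim i}}\left( V_{Z_i}\left( Y \mid Z_{\sim i} \right) \right)}{V(Y)}.
```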

Coming back to our objectives, it is clear from this definition that, in order to simplify a model without artificially reducing its reliability, it is possible to remove all the factors with negligible total order indices. In addition, if the first order index for an important input approximates that of the total order for the same input, model archetypes can be created based on the range of this input, while, if this is not the case, archetypes should be created considering a multidimensional space composed of all the important interacting parameters and starting values.

5.2 Calculation of Sobol sensitivity indices

When a closed-form analytical formulation of the model is not available, the computation of the indices in equations (12) and (13) requires approximate methods. In the present work we use the methodology described in Saltelli et al. (2008; 2010), briefly summarized below:

First, two (N, r) matrices of quasi-random numbers in [0, 1] are generated (in this case, again from Sobol sequences). The distributions of the r input factors are then used to transform these into two matrices of input values (called A and B in the following).

Second, a set of r matrices ${C}_{i}$ is obtained by assembling r copies of A, in each of which the ith column (with i varying from 1 to r across the r matrices) is replaced by the corresponding column of B.

Finally, the model is evaluated for all the [N∙(r+2)] combinations of input values given by the matrices A, B and ${C}_{i}$, producing the output vectors ${y}_{A}=f\left(A\right)$, ${y}_{B}=f\left(B\right)$ and ${y}_{{C}_{i}}=f\left({C}_{i}\right)$ for i = 1 … r. These vectors are sufficient for the evaluation of all first-order and total-effect indices.

The above sensitivity indices can then be evaluated from these output vectors using the estimators proposed in Saltelli et al. (2010).
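As a sketch of the full procedure, the following minimal implementation generates the A, B and ${C}_{i}$ matrices and applies the first-order estimator of Saltelli et al. (2010) together with a Jansen-type total-effect estimator. Plain pseudo-random draws replace the Sobol sequence for brevity, and the function name, toy model and bounds are purely illustrative:

```python
import numpy as np

def sobol_indices(model, bounds, n=2**14, seed=0):
    """First-order (S_i) and total-effect (S_Ti) Sobol indices.

    Implements the A/B/C sampling scheme described in the text,
    using plain pseudo-random draws in place of a Sobol sequence
    for brevity.
    """
    rng = np.random.default_rng(seed)
    r = len(bounds)
    lo, hi = np.asarray(bounds, dtype=float).T
    # Two independent (n, r) matrices scaled to the input intervals.
    A = lo + (hi - lo) * rng.random((n, r))
    B = lo + (hi - lo) * rng.random((n, r))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(r)
    ST = np.empty(r)
    for i in range(r):
        # C_i equals A except for the ith column, which is taken from B.
        C = A.copy()
        C[:, i] = B[:, i]
        yC = model(C)
        # First-order estimator of Saltelli et al. (2010).
        S[i] = np.mean(yB * (yC - yA)) / var
        # Jansen-type total-effect estimator.
        ST[i] = 0.5 * np.mean((yA - yC) ** 2) / var
    return S, ST
```

For the additive toy model Y = Z₁ + 2Z₂ with both inputs uniform on [0, 1], the analytical indices are S₁ = S_{T₁} = 0.2 and S₂ = S_{T₂} = 0.8, which the estimators recover up to Monte Carlo error.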

The choices of N and of the distributions of the input variables are key elements of the methodology. Unfortunately, there is no universal recipe for either, but N should be large enough for the resulting sensitivity indices to stabilize. As we are again using Sobol sequences, a power of 2 was chosen; we thus took 2^{18} samples and, in order to check whether the resulting indices are stable, we numerically evaluated their upper and lower bounds by means of parametric bootstrapping.
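Such stability bounds can be sketched as follows. This minimal example computes a percentile interval for a single first-order index by bootstrapping the already-computed output vectors; note that it is a nonparametric variant of the parametric bootstrap used in the paper, and that the function and variable names are illustrative:

```python
import numpy as np

def first_order_ci(yA, yB, yC, n_boot=500, alpha=0.10, seed=1):
    """Bootstrap bounds for a single first-order Sobol index.

    Resamples the rows of the output vectors y_A, y_B, y_Ci with
    replacement and recomputes the first-order estimator of
    Saltelli et al. (2010) each time (a simple nonparametric
    variant of the parametric bootstrap used in the text).
    """
    rng = np.random.default_rng(seed)
    n = len(yA)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample the evaluation rows
        a, bb, c = yA[idx], yB[idx], yC[idx]
        var = np.var(np.concatenate([a, bb]))
        reps[b] = np.mean(bb * (c - a)) / var
    # Percentile interval at the requested confidence level.
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])
```

A 90 per cent interval corresponds to alpha = 0.10, the level used for the confidence intervals reported in the figures below.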

The distributions of the input variables can only be determined using a priori information (the economic meaning of the variables, previous studies, expert opinion, etc.). If such information is not available, some preliminary tests should be performed to find suitable settings. As already pointed out, in this study we used the previous phase of model exploration to define suitable intervals for the model parameters and starting values, and then assumed that the inputs are uniformly distributed over these intervals.

In the present section, the results of the sensitivity analysis for both versions of the model are reported in the form of bar plots, each referring to a specific output of the model. In each bar plot, first- and total-order sensitivity indices are reported together with their 90 per cent confidence intervals (evaluated by means of parametric bootstrapping on the same indices). The charts allow the identification of those inputs whose estimation requires particular attention and of those that might be fixed to any value without affecting the outputs of the model.

5.3 Sensitivity analysis of the original version of the SFC model

Results of the sensitivity analysis of the original version of the SFC model for the four outputs are reported in Figure 4. Results vary from output to output, and no parameter or starting value can be neglected completely for all the outputs. In general, the model tends to behave as an additive model (except for vh_{t}), since interactions between parameters and starting values (black bars) account for much less variability than first-order effects.

Figure 4: First- and total-order sensitivity indices and their confidence intervals for the four outputs of the original version of the SFC model (by Dos Santos/Zezza 2008)

More specifically, the variability in the value of u_{t} is explained, for more than 90 per cent, by 6 out of 14 parameters and starting values (θ, a, vh_{t–1}, β, g_{0}, G_{t}). g_{t} is mainly influenced by seven factors (β, g_{0}, θ, a, α, θ_{1}, ib_{t}). The model behaviour for b_{t} is by contrast totally different, with one starting value (b_{t–1}) accounting for 70 per cent of the output variability (and a and vh_{t–1} accounting for another 20 per cent). In plain English this means that, not surprisingly, current public debt is mainly determined by past public debt. vh_{t} is the output involving the most interactions (which is also to be expected from its formula). Even in this case, however, four model parameters and starting values account for almost 90 per cent of the output variability (vh_{t–1}, a, ib_{t}, ρ). As a result, three inputs account for only a very small share of the output variability (μ, τ_{b}, τ). These might therefore be fixed to any value within their feasibility range without greatly affecting the results of the analysis. If a specific value needs to be defined, one can use the mid-point of their bounds, or take the results of all the model evaluations and fix the non-important parameters to the values for which the model has produced stable and reasonable outputs.

5.4 Sensitivity analysis of the modified version of the SFC model

Results of the sensitivity analysis of the modified version of the SFC model for the five outputs are reported in Figure 5. In this case, too, results vary from output to output and no input can be neglected for all the outputs. In contrast to the previous case, interactions between parameters and starting values have a higher impact, which means that the model is more parsimonious, being able to capture more complex effects.

Figure 5: First- and total-order sensitivity indices and their confidence intervals for the five outputs of the modified version of the SFC model

More specifically, the variability in the value of u_{t} is explained, for more than 90 per cent, by 6 out of 19 parameters and starting values (θ, σ_{1}, a, G_{t}, β, τ). g_{t} is mainly influenced by five factors (g_{0}, β, α, θ, a), and q_{t} is likewise influenced by just a few factors (cd and q_{t–1} in particular, and θ and τ to a minor extent). Also in this case, the model behaviour for b_{t} is more asymmetric, with two parameters/starting values (b_{t–1} and σ_{1}) accounting for 80 per cent of the output variability. vh_{t} is the output involving the most interactions (again, as is to be expected from its formulation), which here account for approximately 20 per cent of the output variability. Furthermore, in this case five model parameters and starting values account for almost 80 per cent of the output variability (vh_{t–1}, a, ρ, g_{0} and ib_{t}).

Most of the new parameters introduced in the modified model have only a limited impact on its outputs. Overall, however, the model apparently shows a more robust behaviour, with only 5 out of 19 parameters exerting a very small effect on its outputs.

It is worth remembering that the results of the sensitivity analysis depend strongly on the ranges assigned to the parameters. This is why it was so important to narrow these ranges, so as not to introduce excessive artificial variability, but also why it was important not to overdo this reduction.

At the end of these two phases, the analyst is well informed about the domain in which the model inputs should be estimated and about the parameters and starting values to which more attention should be devoted for a reliable use of the model. It cannot be overstated that this information is crucial for a correct and effective calibration of the model and for achieving reliable results from its use.

6 CONCLUSIONS

In the present work, the behaviour of two stock-flow consistent post-Keynesian growth models has been thoroughly investigated and described. The two models (developed in Dos Santos/Zezza (2008) and by the authors in a related paper) are strongly interconnected, the latter being mainly an extension of the former. Three types of analysis were carried out, namely (i) an exploration of the multidimensional domain of inputs, (ii) an analysis of the linear stability of the model, and (iii) a sensitivity analysis. Their results shed light on many peculiarities of the two models. More importantly, they have allowed us to identify the operative space for these models, as they have defined (i) a region in the domain of the inputs that leads to numerically meaningful outputs, (ii) regions of model stability, and (iii) the inputs deserving more attention (and whose calibration therefore requires particular care from model users).

The results of the study have shown that the new formulation of the model proposed in Dos Santos/Zezza (2008), which entails the possibility of inheriting a durable consumption good from previous generations and the inclusion of a pension system, produces a model that is far more stable and apparently more likely to lead to meaningful results. Regarding the stability of the model in particular, the results showed that for its original version to be stable, it is necessary to impose a negative value for public debt. This implies that a linearly stable system is possible only when the government has significant savings, which is, to say the least, not necessarily true in reality. These limitations are overcome by the modified version of the model, which is stable for various possible combinations of public debt, household wealth and the accumulation rate of durable goods.

This improved behaviour comes, of course, at the cost of a more complex formulation. The increased complexity, however, is not reflected in a significantly larger calibration burden. Indeed, although the modified version has a considerably higher number of input factors (19 versus 14), the sensitivity analysis has shown that the difference in terms of 'important' parameters (that is, those accounting for at least 10 per cent of the variability of at least one of the outputs) is only one parameter (10 versus 9). This means that the calibration of just one additional parameter can provide a practitioner in the field with a significantly more reliable and stable tool.

The arguments presented throughout this paper have also shown, once again, that any new model development needs to be accompanied by a set of guidelines and information that simplify the task of those who need to apply the model to reproduce reality. Without them, any new modelling effort is of no practical use.

Note that we assume (as do Dos Santos/Zezza) a closed economy. Government expenditures can therefore not be affected by exports as discussed in Godley/Rowthorn (1994). The assumption of a closed economy can be justified in the present context by arguing that sustainability is a global issue and the globe surely is a closed economy.

The sequence was introduced for the first time in 1967 by I.M. Sobol. For further information, the interested reader may refer to Sobol (1967; 1976). The implementation used for the derivation of the quasi-random sequences can be found in European Commission (2012).

REFERENCES

G.E.B. Archer, A. Saltelli and I.M. Sobol, 'Sensitivity measures, Anova-like techniques and the use of bootstrap' (1997) 58 Journal of Statistical Computation and Simulation: 99–120.

R.I. Cukier, C.M. Fortuin, K.E. Shuler, A.G. Petschek and J.H. Schaibly, 'A study of the sensitivity of coupled reaction systems to uncertainties in rate coefficients' (1973) 59 Journal of Chemical Physics: 3873–3878.

W. Godley and B. Rowthorn, 'The dynamics of public sector deficits and debt', in J. Michie and J.G. Smith (eds), Unemployment in Europe (Academic Press, London and San Diego, 1994) 199–209.

T. Homma and A. Saltelli, 'Importance measures in global sensitivity analysis of nonlinear models' (1996) 52 Reliability Engineering and System Safety: 1–17.

E. Rosenbaum and B. Ciuffo, 'Sustainability and intergenerational transfers in a stock-flow-consistent model' (2015), unpublished. (A copy of the paper is available from the authors on request.)

A. Saltelli, 'Making best use of model evaluations to compute sensitivity indices' (2002) 145 Computer Physics Communications: 280–297.

A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana and S. Tarantola, Global Sensitivity Analysis: The Primer (John Wiley, Chichester, UK, 2008).

A. Saltelli, P. Annoni, I. Azzini, F. Campolongo, M. Ratto and S. Tarantola, 'Variance based sensitivity analysis of model output: design and estimator for the total sensitivity index' (2010) 181 Computer Physics Communications: 259–270.

I.M. Sobol, 'On the distribution of points in a cube and the approximate evaluation of integrals' (1967) 7 Computational Mathematics and Mathematical Physics: 86–112.