We propose a
heuristic to order the properties depending on the structure
of each component.
To do so, we analyze the dependencies of the variables that occur in the
global property. From this point on, we refer to the variables of the global
property as \emph{primary variables}.

We observed that
the closer a variable is to a primary
variable, the greater its influence on that variable.
Moreover, a global property often specifies the behavior at the interface of
components. Typically, a global property ensures that a sent message is
always acknowledged or that the message reaches the right target. This kind of
behavior relates the input-output behaviors of components.
We therefore allocate an extra weight to interface variables,
whereas variables that do not influence any primary variable are weighted 0.
We proceed as follows:
%\vspace*{-3mm}
\begin{enumerate}
\itemsep -0.3em
\item Build the structural dependency graph of each primary variable.
\item Compute the depth of every variable in each dependency graph.
A variable may belong to more than one dependency graph; in that case
we consider its minimum depth.
\item Assign a weight to each variable (see Algorithm~\ref{algo:weight}).
\item Compute the weight of each property of each component as the sum of the
weights of its variables.
\end{enumerate}

Algorithm~\ref{algo:weight} assigns a weight to each variable according to its
distance to the primary variables, with extra weight for interface variables
and primary variables.
\begin{algorithm}[ht]
\caption{Compute Weight}
\label{algo:weight}

\KwIn{$G$, the set of dependency graphs of the primary variables}
\hspace{3.5em}{$V$, the set of variables}

\KwOut{$W = \{(v,w) \mid v \in V, w \in \mathbb{N}\}$, the set of variables with their weights}

\Begin{
$W = \emptyset$\;
$p = \max_{g \in G}(\mathrm{depth}(g))$\;
\For{$v\in V$}{
        $d$ = depth($v$)\;
        $w = 2^{p-d} \times p$\;
        \If($v$ is a primary variable){$d == 0$}
        {
                $w = 5 * w$\;
        }
        \If($v$ is an interface variable){$v\in I\cup O$}
        {
                $w = 3 * w$\;
        }
        $W = W \cup \{(v,w)\}$\;
}
}
\end{algorithm}
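To illustrate the weighting, the following Python sketch mirrors
Algorithm~\ref{algo:weight}. It is only a sketch: it assumes that the
dependency graphs have already been reduced to a map \texttt{depths} from each
variable to its minimum depth and to a set \texttt{interface\_vars} of
interface variables; these names are ours and do not refer to an existing
implementation. Variables absent from \texttt{depths} simply keep the weight 0.
\begin{verbatim}
# Sketch of Algorithm 1 (illustrative only).
# depths: minimum depth of each variable in the dependency
#   graphs of the primary variables (0 = primary variable).
# interface_vars: variables belonging to I u O.
def compute_weights(depths, interface_vars):
    if not depths:
        return {}
    p = max(depths.values())       # maximum graph depth
    weights = {}
    for v, d in depths.items():
        w = 2 ** (p - d) * p       # closer to primary => heavier
        if d == 0:                 # primary variable
            w *= 5
        if v in interface_vars:    # interface variable
            w *= 3
        weights[v] = w
    return weights
\end{verbatim}
For instance, with a maximum graph depth $p = 3$, a non-interface variable at
depth $d = 1$ is weighted $2^{2} \times 3 = 12$, whereas a primary interface
variable is weighted $5 \times 3 \times 2^{3} \times 3 = 360$.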
The pertinence of each property is evaluated by adding the weights of the
variables it contains. This is certainly not an exact measure of a property's
pertinence, but it provides a good indicator of its possible impact on the
global property. After this pre-processing phase, we have a list of properties
ordered according to their pertinence with regard to the global property.
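Continuing the previous sketch, the ordering itself could be written as
follows, where \texttt{property\_vars} maps each already verified property to
the set of variables it mentions; again, these are hypothetical helpers, not
the actual tool code.
\begin{verbatim}
# Order properties by decreasing pertinence (sketch).
# property_vars: property name -> set of variables it contains.
# weights: per-variable weights from compute_weights().
def order_properties(property_vars, weights):
    def prop_weight(variables):
        # unrelated variables are weighted 0
        return sum(weights.get(v, 0) for v in variables)
    return sorted(property_vars,
                  key=lambda p: prop_weight(property_vars[p]),
                  reverse=True)    # heaviest first
\end{verbatim}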
\subsection{Filtering properties}
The refinement step consists in adding the AKS of new properties, selected
according to their pertinence.
As we want to ensure the elimination of the counterexample previously found,
we filter out the properties that have no impact on the counterexample
$\sigma$ and thus would not eliminate it.
To reach this objective, a Kripke structure $K(\sigma)$ is derived from the
counterexample $\sigma$: $K(\sigma)$ is the succession of states corresponding
to the counterexample path that violates the global property~$\Phi$.
Since the counterexample is a bounded path, a last state $s_T$ in which all
variables are free ({\it unknown}) is added; the tree starting from this state
represents all the possible futures of the counterexample.
\begin{definition}
Let $\sigma$ be a counterexample of length $n$ in $\widehat{M}_i$ such
that $ \sigma =  s_{0}\rightarrow  s_{1}\rightarrow \ldots \rightarrow
s_{n-1}$. The \emph{Kripke structure derived from $\sigma$} is the 6-tuple
$K(\sigma) = (AP_{\sigma}, S_{\sigma}, S_{0\sigma}, L_{\sigma},
R_{\sigma},F_{\sigma})$
such that:
\vspace*{-2mm}
\begin{itemize}
\itemsep -0.3em
\item $AP_{\sigma} = \widehat{AP}_i$: a finite set of atomic propositions corresponding to the variables of the abstract model
\item $S_{\sigma} = \{s_{k}\mid s_k\in \sigma\}\cup\{s_T\}$
\item $S_{0\sigma} = \{s_{0}\}$
\item $L_{\sigma}(s_k) = \widehat{L}_i(s_k)$ for all $s_k \in \sigma$, and
$L_{\sigma}(s_T)$ assigns $\top$ ({\it unknown}) to every $p \in AP_{\sigma}$
\item $R_{\sigma} =  \{(s_{k}, s_{k+1})\mid(s_{k}\rightarrow s_{k+1})\in
\sigma\}\cup\{(s_{n-1},s_T)\}\cup\{(s_T,s_T)\}$
\item $F_{\sigma} = \emptyset$
\end{itemize}
\end{definition}
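As an illustration, the sketch below builds $K(\sigma)$ from a counterexample
given as a list of state labelings. The encoding (dictionaries of valuations,
the string \texttt{unknown} standing for $\top$) is an assumption made for the
example only.
\begin{verbatim}
# Build K(sigma) from a counterexample trace (sketch).
# trace: states s_0 .. s_{n-1}; each is a dict {prop: value}.
# atomic_props: atomic propositions AP of the abstract model.
def kripke_from_counterexample(trace, atomic_props):
    n = len(trace)
    states = list(range(n)) + ["s_T"]
    labels = {k: dict(trace[k]) for k in range(n)}
    # s_T leaves every atomic proposition free (unknown)
    labels["s_T"] = {p: "unknown" for p in atomic_props}
    transitions = {(k, k + 1) for k in range(n - 1)}
    transitions |= {(n - 1, "s_T"), ("s_T", "s_T")}
    return {"AP": set(atomic_props), "S": states,
            "S0": {0}, "L": labels, "R": transitions,
            "F": set()}            # no fairness constraint
\end{verbatim}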
All the properties available for refinement are then model-checked on
$K(\sigma)$. If a property holds on $K(\sigma)$, it will not eliminate the
counterexample and is therefore not a good candidate for refinement. Hence,
the highest-weighted property not satisfied by $K(\sigma)$ is chosen to be
integrated in the next refinement step. This process is repeated at each
refinement iteration, until the abstraction satisfies the global property or
no property is left to be integrated.
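A minimal sketch of this selection loop is given below; \texttt{holds\_on}
stands for a call to an external model checker on $K(\sigma)$ and is purely
hypothetical.
\begin{verbatim}
# Pick the heaviest property that K(sigma) violates (sketch).
# ordered_props: properties sorted by decreasing weight.
# holds_on(kripke, prop): assumed model-checker wrapper;
#   returns True when prop is satisfied on K(sigma).
def select_refinement_property(ordered_props, kripke, holds_on):
    for prop in ordered_props:
        if not holds_on(kripke, prop):
            return prop   # this property eliminates sigma
    return None           # no property can eliminate sigma
\end{verbatim}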
\begin{property}{Counterexample eviction}
\vspace*{-2mm}
\begin{enumerate}
\itemsep -0.3em
\item If $K(\sigma) \vDash \varphi$, then $AKS(\varphi)$ will not eliminate
$\sigma$.
\item If $K(\sigma) \nvDash \varphi$, then $AKS(\varphi)$ will eliminate
$\sigma$.
\end{enumerate}
\end{property}
\begin{proof}
\begin{enumerate}
\item By construction, $AKS(\varphi)$ simulates all models that satisfy
$\varphi$. Thus the tree described by $K(\sigma)$ is simulated by
$AKS(\varphi)$, which implies that $\sigma$ is still a possible path in
$AKS(\varphi)$.
\item Since $\varphi$ does not hold on $K(\sigma)$, $K(\sigma)$ is not
simulated by $AKS(\varphi)$. Thus $\sigma$ is not a possible path in
$AKS(\varphi)$; otherwise we would have $AKS(\varphi)\not\models \varphi$,
which contradicts the definition of an AKS. Consequently, the composition of
$\widehat{M}_i$ with $AKS(\varphi)$ eliminates $\sigma$.
\end{enumerate}
\end{proof}
\vspace*{-2mm}
The proposed approach ensures that the refinement excludes the counterexample
and respects Definition~\ref{def:goodrefinement}.
Our experiments will show, first, that the time needed to build an AKS is
negligible and, second, that the refinement converges rapidly.